Discovery Networks

Great news! Thanks to Tan Lee (CEO of Emotiv Systems), the subConch will be featured in a series on BCIs (brain-computer interfaces) on Discovery Networks (I'm not yet sure which channel). This means I'll be making some new footage in the next couple of weeks showing off the new developments, like the control interface. Very exciting!

MAX/MSP

I’m currently working on sending all the emo-data and audio data to MAX/MSP as OSC messages over UDP. I’ve got a basic setup working with the emo-data, but I don’t yet know enough about MAX/MSP to do what I want. Basically, I want to do all the audio synthesis in MAX, since it’s easier to play around and get quick results there. For instance, I’m planning to generate sound waves from the raw EEG data. Opening the application up with OpenSoundControl also makes it more flexible for other kinds of projects.
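To give an idea of what "emo-data as OSC over UDP" looks like on the wire, here is a minimal sketch that hand-encodes an OSC 1.0 message and fires it at a local port. The address pattern `/emo/engagement` and port 7400 are just illustrative placeholders, not the actual subConch mapping; on the MAX side you'd pair them with a matching `udpreceive` object.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad a byte string to a 4-byte boundary, per OSC 1.0."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying 32-bit float arguments."""
    packet = osc_pad(address.encode("ascii"))               # address pattern
    packet += osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for value in floats:
        packet += struct.pack(">f", value)                  # big-endian float32
    return packet

# Address and port are illustrative; match them to your MAX patch.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/emo/engagement", 0.73), ("127.0.0.1", 7400))
```

Since UDP is connectionless, this sends happily whether or not MAX is listening yet, which makes it easy to iterate on the patch and the sender independently.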

Demo

I demonstrated the subConch to my class at the University of Oslo today, and it went very well. It was just a brief demo, but everything worked smoothly. The only problem was that my talking triggered my “down” action and sent the pitch down to 19 Hz every time I explained what was going on. My professor, Alexander Refsum Jensenius, wondered if I was interested in pursuing this further at the university, which is an enticing thought: a research project into how the brain conceives of musical notes, for example, using the EEG data from the headset.
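One common fix for that kind of false trigger (not necessarily what I'll end up using) is to debounce the action: only let it fire once its confidence has stayed above a threshold for several consecutive samples, so brief jaw-movement artifacts from talking get ignored. The threshold and hold length below are illustrative, not tuned values.

```python
class ActionGate:
    """Pass an action through only after its confidence has stayed above a
    threshold for `hold` consecutive samples -- a simple debounce that keeps
    brief artifacts (e.g. jaw movement while talking) from firing controls."""

    def __init__(self, threshold: float = 0.6, hold: int = 8):
        self.threshold = threshold  # minimum confidence to count a sample
        self.hold = hold            # consecutive samples required to fire
        self.streak = 0

    def update(self, confidence: float) -> bool:
        if confidence >= self.threshold:
            self.streak += 1
        else:
            self.streak = 0         # any dip resets the streak
        return self.streak >= self.hold

gate = ActionGate()
samples = [0.9, 0.8, 0.2, 0.9] + [0.9] * 8   # a blip, then a sustained push
fired = [gate.update(s) for s in samples]     # fires only near the end
```

The trade-off is added latency (here, eight samples' worth), so the hold length would need tuning against how responsive the instrument should feel.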

Gallery installation

The way the project is developing, I see the need to let the two paths diverge: the piece intended for galleries needs to be simplified, while the performance piece needs to be opened up and made more complex. After discussing the project with my brother last night, I got affirmation for some of the ideas I’ve had floating around regarding simplification.

The gallery piece should train only one cognitive action, such as “push”, and have some sort of gravity resetting it to neutral when you “let go”. The control should be over volume rather than pitch, but the sound will be a more complex harmony that is brought closer (less reverb) and louder as the visitor mentally “pushes” it. Possibly it will also be pushed from dissonance toward a harmonic major. Rather than a screen or a projector, I can use lights to give visual cues and add more atmosphere. I have toyed with the idea of servo-controlled lights for a while, but my brother favored the simpler approach of just adjusting the luminance of the room to give the spiritual and powerful effect I have been looking for.
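The "gravity" idea above can be sketched as a control value that the visitor's push drives upward while a constant decay pulls it back toward neutral. This is just one possible shape for the behavior, with made-up decay and gain constants; the real mapping to volume and reverb would live in the patch itself.

```python
def step(level: float, push: float, decay: float = 0.95, gain: float = 0.1) -> float:
    """One control-rate update of the gallery 'push' level (a sketch, not the
    actual patch): pushing raises the level, and a gravity-like decay drags it
    back toward neutral (0.0) when the visitor lets go. Clamped to [0, 1]."""
    level = level * decay + push * gain
    return max(0.0, min(1.0, level))

level = 0.0
for _ in range(50):      # visitor pushes hard: level climbs and saturates
    level = step(level, 1.0)
for _ in range(100):     # visitor lets go: gravity returns it to neutral
    level = step(level, 0.0)
```

Because the decay is exponential, letting go produces a smooth fade back to silence rather than an abrupt cutoff, which fits the atmosphere the piece is going for.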

Interface

Here is the interface I’ve been working on; it’s almost completely implemented now. Each of the geometric shapes controls a different parameter of the sound. By mentally “rotating” these controls clockwise and counter-clockwise, you can alter the desired aspect of the sound, such as LFO depth and speed, frequency modulation, reverb, and wave shape. Smirking left and right changes which control is selected, and you can still force the pitch up and down as well.
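The selection scheme boils down to a small piece of state: a ring of parameters that smirks step through, and a rotation amount that nudges whichever one is selected. The parameter names and the unit range below are illustrative, not the actual ones in the interface.

```python
class ControlRing:
    """Sketch of the selection scheme: smirking left/right steps through the
    controls, and mental 'rotation' nudges the selected one. Parameter names
    and the [0, 1] range are illustrative placeholders."""

    def __init__(self):
        self.params = {"lfo_depth": 0.5, "lfo_speed": 0.5,
                       "fm_amount": 0.5, "reverb": 0.5, "wave_shape": 0.5}
        self.names = list(self.params)
        self.index = 0

    def smirk(self, direction: int):
        """direction: -1 for a left smirk, +1 for a right smirk; wraps around."""
        self.index = (self.index + direction) % len(self.names)

    def rotate(self, amount: float):
        """Positive = clockwise; clamp the selected parameter to [0, 1]."""
        name = self.names[self.index]
        self.params[name] = max(0.0, min(1.0, self.params[name] + amount))

ring = ControlRing()
ring.smirk(+1)      # right smirk: select the next control
ring.rotate(0.2)    # clockwise mental rotation: nudge it up
```

Keeping selection and adjustment as two separate gestures means a noisy rotation signal can never jump between controls on its own, which should make the interface feel more stable.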