subConch

Interface

My ambition has always been to have an audio-only interface for this piece, i.e. no screens. I wanted to build both the PC and the amplifier into the conch sculpture and have the headset as the only external hardware, which in turn works via Bluetooth. In a gallery setting this means that the user puts the headset on with the aid of the gallery assistant and commences the interaction through audio messages from the conch.

The interaction goes like this:
1) You put the headset on.
2) The conch tells you if your sensors are properly fitted. There are 16 sensors. The assistant will help you place them correctly.
3) When all the sensors are fitted properly, the training session begins. This is a 3 minute session where the conch will create a profile for your specific brain.
4) The training steps are:
a) Conch records neutral. You do nothing for 8 seconds.
b) Fade sound wave in.
c) Conch tells the user to try and “lift” the sound wave (in pitch). Records 8 seconds of “lift” action.
d) Conch tells the user to try and “drop” the sound wave (in pitch). Records 8 seconds of “drop” action.
5) You can now interact with the sound. Removing the headset resets and erases the training data so that another user can try or so that you can retrain.
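For the curious, here is a minimal sketch of that training flow as a simple state machine, in plain C++. The functions sensorsFitted(), speak() and recordAction() are hypothetical stand-ins for the real headset and audio code, which I'm not showing here.

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-ins for the real headset and audio code.
bool sensorsFitted()               { return true; }   // all contacts OK?
void speak(const std::string& msg) { std::cout << "[conch] " << msg << "\n"; }
void recordAction(const std::string& label, int seconds)
{
    std::cout << "recording '" << label << "' for " << seconds << " s\n";
}

enum class Phase { Fitting, Neutral, Lift, Drop, Play };

int main()
{
    Phase phase = Phase::Fitting;

    while (phase != Phase::Play) {
        switch (phase) {
        case Phase::Fitting:
            speak("Adjust the sensors.");
            if (sensorsFitted()) phase = Phase::Neutral;
            break;
        case Phase::Neutral:
            speak("Relax and do nothing.");
            recordAction("neutral", 8);
            phase = Phase::Lift;           // the sound wave fades in here
            break;
        case Phase::Lift:
            speak("Try to lift the sound.");
            recordAction("lift", 8);
            phase = Phase::Drop;
            break;
        case Phase::Drop:
            speak("Try to drop the sound.");
            recordAction("drop", 8);
            phase = Phase::Play;
            break;
        case Phase::Play:
            break;
        }
    }
    speak("You can now play with the sound.");
}
```

The real application does the timing and recording against the headset, but the phases are the same, and removing the headset simply throws the state machine back to the start.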

This interface works fine for the training. The biggest challenge for this audio-only approach is the fitting of the sensors. It would be easier to fit all the sensors properly if you had a schematic of them on a screen. This is the approach Emotiv uses in their own application, where it's easy to see which sensor isn't fitted well enough. A button to reset the training would also be nice, since the average user might not get the hang of it the first time and will want to retrain and try again. In that case it would be stupid to have to remove the headset entirely just to clear the training data and then go through the sensor fitting all over again. I'm thinking a small screen with a schematic of the head and sensors, plus a reset button, might be a good alternative to the audio-only interface, but I will have to do some more testing first. The best solution would have been for each sensor on the headset to have an LED that lights up when it is fitted properly. That way the gallery assistant could easily help you get it running. The problem is that I have little faith in my hardware-hacking skills (which are non-existent) and would have to depend on Emotiv to develop such a headset.

QtConch

I’m making some serious headway with the Qt version of the application. It’s more or less done now, in its basic version, with a few more glitches to iron out. I must say that working with Qt has made me optimistic about developing in C++. Kudos to André Brynhildsen at my former employer’s for recommending it. It’s open source and cross-platform and takes care of memory cleanup, event handling, file handling, graphics, multi-threading and more. Also, I finally figured out why my application had problems communicating with the headset directly: the DLL files from Emotiv I was using were from the 2008 lite version of the SDK, and apparently these older library files won’t work with the headset directly.
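To give an idea of how little glue code Qt needs, here is a minimal sketch of the kind of polling loop the console version did by hand, now driven by Qt's event loop. It uses the lambda-based connect syntax from newer Qt versions, and pollCognitivePower() is a hypothetical stand-in rather than an actual Emotiv SDK call.

```cpp
#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

// Hypothetical stand-in for a headset poll; the real call would go
// through the Emotiv SDK and is not shown here.
double pollCognitivePower() { return 0.5; }

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);

    // Parent-child ownership: 'app' deletes the timer on exit, which is
    // the kind of memory cleanup mentioned above.
    auto* poller = new QTimer(&app);

    // Signals and slots replace the hand-rolled polling loop of the
    // console version: read the headset roughly every 50 ms.
    QObject::connect(poller, &QTimer::timeout, [] {
        qDebug() << "cognitive power:" << pollCognitivePower();
    });
    poller->start(50);

    return app.exec();
}
```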

Raw EEG

Emotiv Systems, the developer of the technology my application is based on, has announced that raw EEG access will be released to independent developers and researchers. This will let me develop the installation in new directions, like basing sine wave patterns on brain waves. I’m also curious to see whether I can find new and interesting ways to use this in conjunction with the cognitive interface.
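Since the raw data isn't out yet I can only speculate, but the sort of mapping I have in mind looks roughly like this: take some crude measure of power from a frame of samples and squash it onto a pitch range. The numbers and the bandPower() helper below are made up for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Very crude "power" estimate of one frame of samples: mean square.
double bandPower(const std::vector<double>& samples)
{
    double sum = 0.0;
    for (double s : samples) sum += s * s;
    return samples.empty() ? 0.0 : sum / samples.size();
}

int main()
{
    std::vector<double> frame = { 4.2, -3.1, 5.0, -4.8 };   // fake data

    double p    = bandPower(frame);
    double norm = p / (p + 20.0);                    // squash into 0..1
    double freq = 110.0 * std::pow(2.0, 3.0 * norm); // up to three octaves above 110 Hz

    std::printf("power %.2f -> sine frequency %.1f Hz\n", p, freq);
}
```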

Guthman Musical Instrument Competition

The subConch has been admitted into the Georgia Tech Guthman Musical Instrument Competition. It’s an event where inventors, composers, artists and designers compete for a total of $10,000 in cash prizes. The competition is hosted by the Georgia Tech Center for Music Technology and will take place in late February. I’m pretty sure I won’t have the sculpture finished by then and will have to show the piece running on my laptop hooked up to a PA. Anyway, pretty exciting. See last year’s submissions.

GUI application

I’m closing in on finishing the GUI version of the subConch. I had some initial problems since I’m still a novice in C++, but it all seems to be working now. I still haven’t implemented the wave visualization; I’ll post some screenshots soon. Basically there is a grid for routing the emo outputs to the synthesizer’s inputs. I’ve got an idea for a node-based interface for the synthesizer, where I could hook different outputs from the headset to various inputs on the waves and chain them in sequence to modulate each other, but this involves developing quite a few new classes in Qt and might be a bit too advanced for me…
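The grid itself is conceptually very simple. A sketch of the idea, with made-up names for the outputs and inputs (the real enums and classes in the application differ):

```cpp
#include <map>
#include <cstdio>

// Hypothetical names for illustration only.
enum class EmoOutput  { CognitiveLift, CognitiveDrop, Excitement };
enum class SynthInput { Pitch, Volume, Vibrato };

int main()
{
    // The routing grid: which headset output drives which synth parameter.
    std::map<EmoOutput, SynthInput> grid = {
        { EmoOutput::CognitiveLift, SynthInput::Pitch  },
        { EmoOutput::CognitiveDrop, SynthInput::Pitch  },
        { EmoOutput::Excitement,    SynthInput::Volume },
    };

    // Each frame, the values read from the headset are pushed through
    // the grid to the mapped synth parameter.
    std::map<EmoOutput, double> frame = {
        { EmoOutput::CognitiveLift, 0.7 },
        { EmoOutput::Excitement,    0.3 },
    };

    for (const auto& kv : frame) {
        auto it = grid.find(kv.first);
        if (it != grid.end())
            std::printf("output %d -> synth input %d, value %.2f\n",
                        static_cast<int>(kv.first),
                        static_cast<int>(it->second),
                        kv.second);
    }
}
```

The node-based version would essentially replace this one-to-one map with a graph, which is where the extra Qt classes come in.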

Feedback

I’ve been getting some feedback from people on the video. Jørgen Larsson from The Sound Gallery in Bergen made me aware of the rich history of EEG music and supplied this great link. I was not familiar with all of these, especially not the ones from the ’70s. My friend Andrew Lloyd Goodman at Brown also tipped me off about the BrainGate interface under development there. Based on my research so far, my impression is that most EEG music interfaces translate brain waves into sound waves, while most cognitive interfaces, like BrainGate, focus on disability and on interacting with visuals (on screens). I have found less about cognitive control over audio, though that’s not saying it isn’t out there. Emotiv’s innovative work on the cognitive interface is a recent big leap forward: it offers training for up to four different cognitive concepts simultaneously, all trained within minutes. I’m very excited about the next phase of my project, where I’ll be fine-tuning the application and building the actual installation.

Qt

I’ve started migrating the application to Qt (an open source C++ framework for developing applications with a graphical user interface). The application has so far been developed as a console application, which offers only a text-based interface. In a setting where gallery assistants might have to reset, restart and troubleshoot the application, a proper windowed user interface is probably preferable.
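As a first step, the migration doesn't need to be more than a window and a reset button around the existing code. Something along these lines, where resetTraining() is a hypothetical stub and the connect syntax is the lambda-based style from newer Qt versions:

```cpp
#include <QApplication>
#include <QMainWindow>
#include <QPushButton>
#include <QDebug>

// Hypothetical: in the real application this would clear the trained profile.
void resetTraining() { qDebug() << "training data cleared"; }

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QMainWindow window;
    window.setWindowTitle("subConch");

    // One button already covers the gallery-assistant use case:
    // restart the training without touching the headset.
    auto* reset = new QPushButton("Reset training");
    QObject::connect(reset, &QPushButton::clicked, [] { resetTraining(); });
    window.setCentralWidget(reset);   // the window takes ownership

    window.show();
    return app.exec();
}
```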

Demonstration


Finally a demonstration! I’m not doing it very well yet, but it’s still early days. I feel pretty good about using this in an exhibition setting now. One of my early concerns was of course that it would be too hard for the public to interact with the piece themselves. I sort of envision two separate paths from now on. The first is the piece as planned, with the conch-shaped sculpture used by me in performances. The second is three cones (directional sound domes) with headsets hanging from the ceiling over small chairs. An assistant will be there to help people moisten the sensor pads and put the gear on. The application will initially run through a short one-minute training session (I’ve already programmed this :-) so that the user’s brain can be profiled correctly. Afterwards it should be pretty straightforward to try and control the sound.