Here are the videos of the other contestants. I had to leave 10 minutes before the winner was announced and Georgia Tech still hasn’t updated their website, so I don’t know who won.
Baxter, Keith
Beck, Andrew
Bucksbarg, Andrew
Driscoll, Scott
Evans, Kyle*
Feather, Neil
Henriques, Tomas* (1st prize)
Janicek, Martin
Landman, Yuri
Leonardson, Eric
Lerew, Todd
Lieberman, David
Litt, Steven* (3rd place)
Mainstone, Di
McPherson, Andrew*
Neill, Ben
McMillen, Keith* (2nd place)
Oliver, Jaime*
Parviainen, Pessi
Plamondon, Jim
Raisanen, Juhani
Schlessinger, Daniel*
Sivertsen, Mats
Snyder, Jeff
Zoran, Amit
Loud Objects* (3rd place)
* denotes finalists.
(Update: winners)
Guthman
I flew in to Atlanta on Thursday night, checked into my hotel and crashed, pretty jet lagged. I got up at 5:30, wide awake, ready to prepare for my performance. Because of the snow storms in the northeastern states the schedule had been changed, and I was going at 4:00 PM rather than 9:30 AM as originally planned. I would have preferred to go in the morning of course, getting it out of the way, but there was nothing I could do about that now. I showered, had an apple and got to work. I had to rehearse one final time before the demonstration. I had spent way too much time on development and too little time on planning and training for the performance. Now I had to make sure that everything would go smoothly. I got the headset out of the box, put the saline solution on the felt pads and put it on. Then I fired up the subConch and stared at the screen, trying to push the sound up. Nothing. When I do this in my new application I am basically staring at a hertz frequency number on the screen. It wouldn’t budge. I rubbed my eyes and took a deep breath. Relax. I tried again. And now it moved a little. A few fractions. The issue was that I was too stressed, and I could feel the tension preventing my brain signals from coming through properly. This is one of the problems with the technology: the more tense, stressed or frustrated you are, the more confused it gets about which action you are trying to perform. Knowing this, I got even more stressed, and it took me about an hour to feel comfortable and get full control of the sound. By 9:00 I was finally able to control it fully. I would have liked another half hour to feel comfortable, but it was time, so I saved my settings, packed everything up and crossed my fingers. I was heading over to Georgia Tech.
This being America, the hotel of course had a shuttle that took me over there. The driver had never heard of the building I was going to and started calling people on the phone to figure it out. In the car I stared out the window, and for a moment I thought it was Georgia Tech and not Virginia Tech that was the site of the terrible school shooting. My mind was drifting. Why was I here again?
The car dropped me off at the Couch building, which turned out to be the wrong site. Because of space issues they had divided the group in half, and I was to perform at the Architecture building on the other side of campus. I wasn’t too stressed out though, since I wasn’t performing until late in the afternoon anyway. I got there in time and set up my PC on the table with my name on it. It took me two minutes. I looked around the room, trying to figure out the crowd. A lot of university types, musicians, but also your standard 45-year-old white male who has built something eccentric in his shed. I talked to a guy from Baltimore who had built some sort of guitar. He’d finished 5th last year. The performances were starting, and the two judges, already in place with their black coats and European accents, were asking tricky questions and writing notes. There was a lunch break. I ate with the guy from Baltimore and a man who used a cello bow to play music on a piece of wood with parts from the hardware store attached. Afterwards I got talking to this kid from Brooklyn, Steven Litt, who’d programmed a beat sequencer using Arduino and some doorbells. Pretty cool stuff. I was getting more and more concerned about my piece. Everyone had prepared some sort of musical performance on their pieces, and most of them had a long history of performing. All I had was a demonstration. I toyed with the idea of letting the judges try it as part of the demonstration, since that is after all the intent of the piece, that anyone can do it, but I was unsure. I hadn’t prepared for that.
Finally it was my turn. The guy before me had had a few issues, since the actual inventor of the thing hadn’t shown up and we had to call him up on Skype to demonstrate it. The guy who was presenting it had borrowed my laptop hoping to be able to run the device on a PC, but it hadn’t worked. So while they were all chatting to the guy on the video call, I sat down, prepared the headset and put it on. I launched the subConch, loaded my profile from the morning, and I was ready. All eyes were on me. I stated that I didn’t have much of a musical performance, but that I would primarily demonstrate this mind-controlled instrument, where I would control pitch and other parameters using only the force of my mind. I turned up the volume and stared at the hertz frequency. Up, up, up, up. Nothing. Deep breath. Up, up, up. Please. Nothing. Then a little. And a little more. I tried going down. It worked. I went down. Very low. Almost inaudible. I clicked a couple of buttons. Tried again. I went up a little. All this time the LFO was beating like hell, since it was mapped to frustration levels. This was a disaster. After about 5 minutes I turned the volume down and spoke a little about how it was all set up. I was surprised at the interest, and the judges seemed amazed that it was doing what I claimed it was doing. This is of course where I need to tell everyone that I did not invent the EEG-sensor hardware and software, Emotiv Systems did, and that my contribution is the mapping of these properties to synthesized sound. One of the judges had apparently heard of this technology before. He was of course not impressed with my demonstration and wanted me to talk him through a few steps so he could indeed see that I was controlling it. I turned up the volume again and said, “OK, now I’ll attempt to push the pitch of the sound up”. I took a deep breath and stared at the hertz frequency. Up, up, up. And surprise, it went up, up, up. To a pretty high pitch. The judge seemed relieved. I continued, “and likewise I can bring it down”, and I slowly brought the frequency down. I explained that stress could easily interfere with the reading, and they had some understanding for that. Then we talked about the non-cognitive aspects of the sound and how I had mapped the LFO speed and depth to frustration and excitement, reverb to boredom, etc. The other judge was intrigued by this and wanted me to demonstrate. I told him it was hard to demonstrate something that was non-cognitive, but I was willing to try. I fired up the conch again, since I had turned it off while talking. I tried adjusting my frustration levels, but nothing seemed to work. I was not sure what was happening and apologised for my lack of emotions. Anyway, we were out of time. Afterwards I realized I had also turned the headset off while I was talking, and that this is why the conch wasn’t reading any emotions at the end there. Kind of a disaster, but at least it was all over.
Of course all the Americans were excited and positive, telling me how wonderful it was. I sometimes wonder what it’s like to be American. Are they generally happier than Europeans? I think they might be. At this point I was pretty happy too. I had done my best, it just wasn’t good enough. Had I been able to demonstrate it more properly I might have gotten better feedback, who knows, maybe made it to the finals, but that was out of the question now. I had learned a thing or two though. I should have put more emphasis on doing a pure presentation, hooked the laptop up to the projector so everyone could have seen what I was doing on the screen while mind controlling. Visuals can help you understand what you are hearing. I should have explained more how it worked and taken them through it step by step. Perhaps I would have relaxed more myself. I think I was thrown off by the performance aspect of all the other contestants. Also, I think I was hoping to ease into it during the initial “performance”, not having to perform things on the spot like I had to in the end. Still, it was what it was. I showed the piece, and had some interesting conversations with the other contestants. Today the eight finalists are competing for 1st, 2nd and 3rd prize. There are some very interesting things going on. Jaime Oliver, last year’s winner, has a web camera reading hand configurations and motions. Steven, who I mentioned, is in the finals with his beat-box thing. His friends from NYC, Loud Objects, do a live performance soldering integrated circuits on the fly, hooking up wires and generating noise and music. It’s a performance where the first 5 minutes are completely silent, just watching them arrange bits and pieces on top of an overhead projector, wearing sunglasses.
Priorities
Ever since starting that course at the university I keep coming up with new ideas for my application. Integration with MAX/MSP and VST, using Phidgets and other sensors to control things like volume, RFID to select instrument patches… The list is long. I have to prioritize. In a way I have two separate paths for the conch. One is gallery life, which is the first priority. Then there is the afterlife: further development and making the software accessible to home users. Also, I have a demonstration coming up at the Guthman Musical Instrument Competition, and I still haven’t fixed all the things I need to fix for that. So here is a list of priorities:
Demonstration:
1. User profiles. Save and load. When I fix this I can rehearse in advance of the demo and just load my profile as I set up.
2. Bug fixing. I’m sure there are many I haven’t found yet, like what happens when training is interrupted.
Gallery:
1. Solve the sensor connectivity issue. How do you fit the headset properly if you only have an audio interface?
2. Visualization of pitch, a dot of light on the wall rising and falling perhaps. This can be done with a servo and Phidgets. Or a screen on the wall with graphics.
3. Funding and building the sculpture.
4. New voice audio (using speech synthesizer today).
Further development:
1. MIDI out.
2. UDP out (both raw and processed brain wave data).
3. Audio BUS.
4. Other sensor input.
5. Include detection of facial expressions.
6. Design a nicer GUI.
With MIDI and UDP out it can be used in conjunction with other software, like MAX, to make a more complex synthesizer than the one programmed into the conch. This opens up opportunities for using the technology in live performances.
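As a rough sketch of what the UDP output could look like, assuming a Qt implementation like the rest of the conch. The port, address and text format below are placeholders I made up for illustration, not a settled protocol:

```cpp
#include <QUdpSocket>
#include <QHostAddress>
#include <QByteArray>
#include <QString>

// Hypothetical sketch: push the current conch state to a listener
// (e.g. a MAX patch) over UDP. Port 7400 and the message layout
// are placeholders, not the actual subConch protocol.
void sendConchState(QUdpSocket &socket, double pitchHz, double frustration)
{
    QByteArray datagram = QString("pitch=%1;frustration=%2")
                              .arg(pitchHz)
                              .arg(frustration)
                              .toUtf8();
    socket.writeDatagram(datagram, QHostAddress(QHostAddress::LocalHost), 7400);
}
```

On the MAX side something like a udpreceive object could presumably pick this up, though a proper OSC format would probably make more sense in the long run.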
Audio Programming
Today I’m starting a course in audio programming at Blindern University. There I will have to delve into some pretty hardcore theory and math regarding music and sound. “The Computer Music Tutorial” from MIT Press is on the curriculum and it’s 1250 pages… The class is taught by Alexander Refsum Jensenius and seems like just the thing I need to get cracking on more of these sensor-controlled instruments.
Pentatonic
I’ve decided to add an option to map the frequencies to a pentatonic scale. This way it might be more of a “musical” experience for the average user. I think the pentatonic scale is more universally recognized as something musical, playfully corroborated by Bobby McFerrin in this great video.
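A minimal sketch of how that mapping could work: snapping an arbitrary target frequency to the nearest note of a minor pentatonic scale. The function name, the A-minor root and the octave range are just my illustration, not the final conch code:

```cpp
#include <cmath>

// Snap an arbitrary frequency (Hz) to the nearest note of an
// A-minor pentatonic scale. Illustrative helper, not subConch code.
double snapToPentatonic(double freqHz)
{
    if (freqHz <= 0.0)
        return freqHz;

    static const int pentatonic[] = {0, 3, 5, 7, 10}; // minor pentatonic semitone offsets
    const double a4 = 440.0;

    // Distance from A4 in (fractional) semitones.
    const double semitones = 12.0 * std::log2(freqHz / a4);

    // Find the closest scale degree across nearby octaves.
    double best = a4;
    double bestDist = 1e9;
    for (int octave = -4; octave <= 4; ++octave) {
        for (int deg : pentatonic) {
            const double candidate = octave * 12 + deg;
            const double dist = std::fabs(candidate - semitones);
            if (dist < bestDist) {
                bestDist = dist;
                best = a4 * std::pow(2.0, candidate / 12.0);
            }
        }
    }
    return best;
}
```

The idea would be to put this between the cognitive pitch control and the oscillator, so the raw hertz value the user pushes around gets quantized before it reaches the synth.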
Interface
My ambition has all along been to have an audio-only interface for this piece, i.e. no screens. I wanted to build both the PC and the amplifier into the conch sculpture and have the headset, which works via Bluetooth, as the only external hardware. In a gallery setting this means that the user will put the headset on with the aid of the gallery assistant and commence the interaction through audio messages from the conch.
The interaction goes like this:
1) You put the headset on.
2) The conch tells you if your sensors are properly fitted. There are 16 sensors. The assistant will help you place them correctly.
3) When all the sensors are fitted properly, the training session begins. This is a 3 minute session where the conch will create a profile for your specific brain.
4) The training steps are:
a) Conch records neutral. You do nothing for 8 seconds.
b) Fade sound wave in.
c) Conch tells the user to try and “lift” the sound wave (in pitch). Records 8 seconds of “lift” action.
d) Conch tells the user to try and “drop” the sound wave (in pitch). Records 8 seconds of “drop” action.
5) You can now interact with the sound. Removing the headset resets and erases the training data so that another user can try or so that you can retrain.
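For my own reference, the flow above boils down to a simple state sequence. This is just an illustrative outline, not the actual conch code, and the Emotiv recording calls are left as comments:

```cpp
// Illustrative outline of the interaction flow above (not the real subConch code).
enum class ConchState {
    FittingSensors,  // steps 1-2: wait until all 16 sensors report good contact
    RecordNeutral,   // step 4a: record 8 s of "neutral"
    FadeInSound,     // step 4b: fade the sound wave in
    RecordLift,      // step 4c: record 8 s of the "lift" action
    RecordDrop,      // step 4d: record 8 s of the "drop" action
    Interacting      // step 5: free interaction until the headset is removed
};

ConchState nextState(ConchState s)
{
    switch (s) {
    case ConchState::FittingSensors: return ConchState::RecordNeutral;
    case ConchState::RecordNeutral:  return ConchState::FadeInSound;
    case ConchState::FadeInSound:    return ConchState::RecordLift;
    case ConchState::RecordLift:     return ConchState::RecordDrop;
    case ConchState::RecordDrop:     return ConchState::Interacting;
    case ConchState::Interacting:    return ConchState::FittingSensors; // headset removed: erase training, start over
    }
    return ConchState::FittingSensors;
}
```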
This interface works fine for the training. The biggest challenge for the audio-only approach is the fitting of the sensors. It would be easier to fit all the sensors properly if you had a schematic of them on a screen. This is the approach Emotiv uses for their application, and it’s easy to see which sensor is not fitted well enough. Also, a button to reset the training would be nice, since the average user might not get the hang of it the first time and will want to retrain and try again. In that case it would be stupid to have to remove the headset entirely just to clear the training data and then go through the sensor fitting again. I’m thinking a small screen with a schematic of the head with sensors and a reset button might be a good alternative to the audio-only interface, but I will have to do some more testing first. The best solution would be to have each sensor on the headset light up a diode when it is fitted properly. That way the gallery assistant could easily help you get it running. The problem with this is that I have little faith in my hardware hacking skills (which are non-existent) and will have to depend on Emotiv to develop such a headset.
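If I go the small-screen route, something as simple as this could do as a first pass: a Qt widget that paints the 16 sensors as coloured dots. The contact quality values would have to come from the Emotiv SDK; here they are just an array filled in elsewhere, and the 4x4 grid stands in for a proper head schematic. A sketch, not working subConch code:

```cpp
#include <QWidget>
#include <QPainter>
#include <array>

// Sketch of a sensor-fitting screen: 16 dots coloured by contact quality.
// quality[] would be updated from the headset; 0 = no contact, 1 = poor, 2 = good.
class SensorFitWidget : public QWidget
{
public:
    std::array<int, 16> quality{};

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        for (int i = 0; i < 16; ++i) {
            const QColor c = quality[i] == 2 ? QColor(Qt::green)
                           : quality[i] == 1 ? QColor(Qt::yellow)
                                             : QColor(Qt::red);
            p.setBrush(c);
            // Simple 4x4 grid for now; a head schematic would replace these coordinates.
            p.drawEllipse(20 + (i % 4) * 40, 20 + (i / 4) * 40, 20, 20);
        }
    }
};
```

A reset button next to it would then just clear the stored training data without anyone having to remove the headset.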
QtConch
I’m making some serious headway with the Qt version of the application. It’s more or less done now, in its basic version, with a few more glitches to iron out. I must say that working with Qt has made me optimistic about developing in C++. Kudos to André Brynhildsen at my former employer’s for recommending this. It’s open source and cross platform and takes care of memory cleanup, event handling, file handling, graphics, multi-threading and more. Also, I finally figured out why my application had problems communicating with the headset directly: the DLL files from Emotiv I was using were from the 2008 lite version of the SDK. Apparently these older library files won’t work with the headset directly.
People playing with mind control and audio
Tron of Illuminated Sounds recently acquired an EPOC and posted this quick demo of the interface triggering beats in Ableton Live.
Raw EEG
Emotiv Systems, the developer of the technology I’m basing my application on, has announced that raw EEG access will be released to independent developers and researchers. This will enable me to develop my installation in new directions, like basing sine wave patterns on brain waves. I’m also curious to see if I can find new and interesting ways to use this in conjunction with the cognitive interface.
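A toy sketch of the kind of thing raw access would allow: rendering a sine wave whose frequency follows some band-power estimate derived from the raw EEG. The function name, the 0-1 power range and the 110-880 Hz mapping are all assumptions for illustration only:

```cpp
#include <cmath>
#include <vector>

// Hypothetical example: map a normalized band-power value (0..1) onto
// a sine wave frequency and render one buffer of samples.
std::vector<float> renderSine(double bandPower, int numSamples, double sampleRate)
{
    const double pi = 3.14159265358979323846;
    const double freq = 110.0 + bandPower * 770.0; // 110-880 Hz, an arbitrary range
    std::vector<float> buffer(numSamples);
    for (int i = 0; i < numSamples; ++i)
        buffer[i] = static_cast<float>(std::sin(2.0 * pi * freq * i / sampleRate));
    return buffer;
}
```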
Guthman Musical Instrument Competition
The subConch has been admitted into the Georgia Tech Guthman Musical Instrument Competition. It’s an event where inventors, composers, artists and designers compete for a total of $10,000 in cash prizes. The competition is hosted by the Georgia Tech Center for Music Technology and will take place in late February. I’m pretty sure I won’t have the sculpture finished by then and will have to show the piece running on my laptop hooked up to a PA. Anyway, pretty exciting. See last year’s submissions.