Category Archives: subConch

I demonstrated the subConch to my class at the University of Oslo today and it went very well. It was just a brief demo, but everything worked smoothly. The only problem was that my talking triggered my “down” action and sent the pitch down to 19 Hz every time I explained what was going on. My professor, Alexander Refsum Jensenius, wondered if I was interested in pursuing this further at the university, which is an enticing thought: a research project into how the brain conceives of musical notes, using the EEG data from the headset, for example.
Gallery installation
The way the project is developing now, I see the need to let the two paths diverge. On the one hand I need to simplify the piece intended for galleries, while the performance piece needs to be opened up and made more complex. After discussing the project with my brother last night I got affirmation for some of the ideas I have had floating around with regard to simplification. The gallery piece should train only one cognitive action, like “push”, and have some sort of gravity force resetting it to neutral when you “let go”. The control should be over volume rather than pitch, but the sound will be a more complex harmony that is brought closer (less reverb) and louder as the visitor mentally “pushes” the volume. Possibly it will also be pushed from dissonance into a harmonic major. Rather than a screen or a projector I can use lights to give visual cues and also add more atmosphere. I have toyed with the idea of servo-controlled lights for a while, but my brother argued for the simpler approach of just adjusting the luminance of the room to give that spiritual and powerful effect I have been looking for.
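To make the idea concrete, here is a minimal sketch of the control loop I have in mind, assuming the headset delivers a normalized “push” strength. The read_push_strength function is a stand-in for the real EEG reading, not the actual Emotiv call, and the constants are guesses.

```python
import time

# Stand-in for the headset reading: returns the strength of the trained
# "push" action as a value between 0.0 (idle) and 1.0 (full push).
def read_push_strength():
    return 0.0  # replace with the real EEG reading

GRAVITY = 0.02    # how fast the volume falls back toward neutral per tick
PUSH_GAIN = 0.05  # how much a full-strength push raises the volume per tick

volume = 0.0  # 0.0 = neutral/near-silent, 1.0 = fully "pushed"

while True:
    push = read_push_strength()
    # The push raises the volume; gravity constantly pulls it back down,
    # so the sound resets to neutral when the visitor mentally "lets go".
    volume += push * PUSH_GAIN - GRAVITY
    volume = max(0.0, min(1.0, volume))
    # Reverb shrinks as the sound is "brought closer": a simple inverse mapping.
    reverb = 1.0 - volume
    # ...send volume/reverb to the synth here...
    time.sleep(0.05)  # roughly 20 updates per second
```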
Interface
Here is the interface I’ve been working on. It’s almost completely implemented now. Each of the geometric shapes controls a different parameter of the sound. By mentally “rotating” these controls clockwise and counter-clockwise you will be able to alter the desired aspect of the sound: LFO depth and speed, frequency modulation, reverb, wave shape and so on. Smirking left or right changes which control is selected. You can still force the pitch up and down as well.
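The selection and adjustment logic is simple. Here is a rough sketch of it; the event names and parameter list below are assumptions for illustration, not the actual subConch source or Emotiv’s API.

```python
# Each geometric control adjusts one synth parameter. Mental "rotation"
# nudges the selected parameter; smirking left/right moves the selection.
# Event names are illustrative assumptions, not Emotiv's actual API.

parameters = ["lfo_depth", "lfo_speed", "fm_amount", "reverb", "wave_shape"]
values = {name: 0.5 for name in parameters}  # all normalized to 0..1
selected = 0
STEP = 0.02

def handle_event(event):
    global selected
    if event == "smirk_left":
        selected = (selected - 1) % len(parameters)
    elif event == "smirk_right":
        selected = (selected + 1) % len(parameters)
    elif event == "rotate_cw":
        name = parameters[selected]
        values[name] = min(1.0, values[name] + STEP)
    elif event == "rotate_ccw":
        name = parameters[selected]
        values[name] = max(0.0, values[name] - STEP)

# Example: select the next control and turn it up a notch.
handle_event("smirk_right")
handle_event("rotate_cw")
print(parameters[selected], values[parameters[selected]])
```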
Funny
A friend of mine found this online.
Usage
I have created a new page on the website explaining how the software works, including a screen shot. Check it out.
Guthman 2
Here are the videos of the other contestants. I had to leave 10 minutes before the winner was announced and Georgia Tech still hasn’t updated their website, so I don’t know who won.
Baxter, Keith
Beck, Andrew
Bucksbarg, Andrew
Driscoll, Scott
Evans, Kyle*
Feather, Neil
Henriques, Tomas* (1st prize)
Janicek, Martin
Landman, Yuri
Leonardson, Eric
Lerew, Todd
Lieberman, David
Litt, Steven* (3rd place)
Mainstone, Di
McPherson, Andrew*
Neill, Ben
McMillen, Keith* (2nd place)
Oliver, Jaime*
Parviainen, Pessi
Plamondon, Jim
Raisanen, Juhani
Schlessinger, Daniel*
Sivertsen, Mats
Snyder, Jeff
Zoran, Amit
Loud Objects* (3rd place)
* denotes finalists.
(Update: winners)
Guthman
I flew in to Atlanta on Thursday night, checked into my hotel and crashed, pretty jet-lagged. I got up at 5:30, wide awake, ready to prepare for my performance. Because of the snow storms in the northeastern states the schedule had been changed, and I was going at 4:00 PM rather than 9:30 AM as originally planned. I would have preferred to go in the morning of course, getting it out of the way, but there was nothing I could do about that now. I showered, had an apple and got to work. I had to rehearse one final time before the demonstration. I had spent way too much time on development and too little time on planning and training for the performance. Now I had to make sure that everything would go smoothly.

I got the headset out of the box, put the saline solution on the felt pads and put it on. Then I fired up the subConch and stared at the screen, trying to push the sound up. Nothing. When I do this in my new application I am basically staring at a hertz frequency number on the screen. It wouldn’t budge. I rubbed my eyes and took a deep breath. Relax. I tried again. And now it moved a little. A few fractions. The issue was that I was too stressed, and I could feel the tension preventing my brain signals from coming through properly. This is one of the problems with the technology: the more tense, stressed or frustrated you are, the more confused it will be about which action you are trying to perform. Since I knew this to be the case, I got even more stressed, and it took me about an hour to feel comfortable and gain full control of the sound. By 9:00 I finally had it. I would have liked half an hour more to feel comfortable, but it was time, so I saved my settings, packed everything to go and crossed my fingers. I was heading over to Georgia Tech.
This being America, the hotel of course had a shuttle that took me over there. The driver had never heard of the building I was going to and started calling people on the phone to figure it out. In the car I stared out the window, and for a moment I thought it was Georgia Tech and not Virginia Tech that was the site of the terrible campus shootings. My mind was drifting. Why was I here again?
The car dropped me off at the Couch building, which turned out to be the wrong site. Because of space issues they had divided the group in half, and I was to perform at the Architecture building on the other side of campus. I wasn’t too stressed out though, since I wasn’t performing until late in the afternoon anyway. I got there in time and set up my PC on the table with my name on it. It took me two minutes. I looked around the room, trying to figure out the crowd: a lot of university types, musicians, but also your standard 45-year-old white male who has built something eccentric in his shed. I talked to a guy from Baltimore who had built some sort of guitar. He’d finished 5th last year. The performances were starting, and the two judges, already in their black coats and with their European accents, were asking tricky questions and writing notes. There was a lunch break. I ate with the guy from Baltimore and a man who used a cello bow to play music on a piece of wood with stuff from the hardware store attached. Afterwards I got talking to this kid from Brooklyn, Steven Litt, who’d programmed a beat sequencer using Arduino and some door bells. Pretty cool stuff. I was getting more and more concerned about my piece. Everyone had prepared some sort of musical performance on their pieces, and most of them had a long history of performing. All I had was a demonstration. I toyed with the idea of letting the judges try it as part of the demonstration, since that is after all the intent of the piece, that anyone can do it, but I was unsure. I hadn’t prepared for that.
Finally it was my turn. The guy before me had had a few issues, since the actual inventor of the thing hadn’t shown up and we had to call him on Skype to demonstrate it. The guy who was presenting it had borrowed my laptop hoping to be able to run the device on a PC, but it hadn’t worked. So while they were all chatting with the guy on video phone I sat down, prepared the headset and put it on. I launched the subConch, loaded my profile from the morning, and I was ready. All eyes were on me. I stated that I didn’t have much of a musical performance, but that I would primarily demonstrate this mind-controlled instrument, where I would control pitch and other parameters using only the force of my mind. I turned up the volume and stared at the hertz frequency. Up, up, up, up. Nothing. Deep breath. Up, up, up. Please. Nothing. Then a little. And a little more. I tried down. It worked. I went down. Very low. Almost inaudible. I clicked a couple of buttons. Tried again. I went up a little. All this time the LFO was beating like hell, since it was mapped to frustration levels. This was a disaster.

After about 5 minutes I turned the volume down and spoke a little about how it was all set up. I was surprised at the interest, and the judges seemed amazed that it was doing what I claimed it was doing. This is of course where I need to tell everyone that I did not invent the EEG sensor hardware and software, Emotiv Systems did, but that the mapping of these properties to synthesized sound is my contribution. One of the judges had apparently heard of this technology before. He was of course not impressed with my demonstration and wanted me to talk him through a few steps so he could indeed see that I was controlling it. I turned up the volume again and said, “OK, now I’ll attempt to push the pitch of the sound up”. I took a deep breath and stared at the hertz frequency. Up, up, up. And surprise, it went up, up, up. To a pretty high pitch. The judge seemed relieved. I continued, “and likewise I can bring it down”, and I slowly brought the frequency down. I explained that stress could easily interfere with the reading, and they had some understanding for that.

Then we talked about the non-cognitive aspects of the sound and how I had mapped the LFO speed and depth to frustration and excitement, reverb to boredom and so on. The other judge was intrigued by this and wanted me to demonstrate. I told him it was hard to demonstrate something that was non-cognitive, but I was willing to try. I fired up the conch again, since I had turned it off while talking. I tried adjusting my frustration levels but nothing seemed to work. I was not sure what was happening and apologized for my lack of emotions. Anyway, we were out of time. Afterwards I realized I had turned the headset off as well while I was talking, and that this is why the conch wasn’t reading any emotions at the end there. Kind of a disaster, but at least it was all over.
Of course all the Americans were excited and positive, telling me how wonderful it was. I sometimes wonder what it’s like to be American. Are they generally happier than Europeans? I think they might be. At this point I was pretty happy too. I had done my best, it just wasn’t good enough. Had I been able to demonstrate it properly I might have gotten better feedback, who knows, maybe even made it to the finals, but that was out of the question now. I had learned a thing or two, though. I should have put more emphasis on doing a pure presentation and hooked the laptop up to the projector so everyone could have seen what I was doing on the screen while mind-controlling; visuals can help you understand what you are hearing. I should have explained more about how it worked and taken them through it step by step. Perhaps I would have relaxed more myself. I think I was thrown off by the performance aspect of all the other contestants. Also, I think I was hoping to ease into it during the initial “performance”, not having to perform things on the spot like I had to in the end. Still, it was what it was. I showed the piece and had some interesting conversations with the other contestants.

Today the eight finalists are competing for 1st, 2nd and 3rd prize. There are some very interesting things going on. Jaime Oliver, last year’s winner, has a web camera reading hand configurations and motions. Steven, whom I mentioned, is in the finals with his beat-box thing. His friends from NYC, Loud Objects, do a live performance soldering integrated circuits on the fly, hooking up wires and generating noise and music. It’s a performance where the first 5 minutes are completely silent, the audience just watching them arrange bits and pieces on top of an overhead projector, wearing sunglasses.
Priorities
Ever since starting that course at the university I keep coming up with new ideas for my application. Integration with Max/MSP and VST, using Phidgets and other sensors to control things like volume, RFID to select instrument patches… The list is long. I have to prioritize. In a way I have two separate paths for the conch. One is gallery life, which is the first priority. Then there is the afterlife: further development and making the software accessible to home users. Also, I have a demonstration coming up at the Guthman Musical Instrument Competition, and I still haven’t fixed all the things I need to fix for that. So here is a list of priorities:
Demonstration:
1. User profiles. Save and load. When I fix this I can rehearse in advance of the demo and just load my profile as I set up (see the sketch after this list).
2. Bug fixing. I’m sure there are many I have not found, like the one triggered by interrupting training.
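For the profile idea, here is a minimal sketch of the workflow, assuming the trained data can be pulled out and pushed back in. get_trained_profile and apply_profile are placeholders for whatever the Emotiv SDK actually exposes; here the “profile” is just a dict for illustration.

```python
import json

# Placeholders for the real trained-profile data, which lives in the
# Emotiv SDK; a plain dict stands in for it here.
def get_trained_profile():
    return {"user": "mats", "actions": ["push", "down"], "gain": 0.8}

def apply_profile(profile):
    print("profile loaded for", profile["user"])

def save_profile(path):
    with open(path, "w") as f:
        json.dump(get_trained_profile(), f)

def load_profile(path):
    with open(path) as f:
        apply_profile(json.load(f))

# Rehearse at the hotel, save, then just load on stage:
save_profile("mats.profile")
load_profile("mats.profile")
```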
Gallery:
1. Solve the sensor connectivity issue. How do you fit the headset properly if you only have an audio interface?
2. Visualization of pitch: a dot of light on the wall, rising and falling, perhaps. This can be done with a servo and Phidgets, or with a screen on the wall showing graphics (see the sketch after this list).
3. Funding and building the sculpture.
4. New voice audio (using speech synthesizer today).
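For the light-dot idea, here is a rough sketch of the pitch-to-servo mapping. set_servo_position stands in for the actual Phidgets call, and the servo travel and frequency range are my guesses.

```python
import math

SERVO_MIN, SERVO_MAX = 0.0, 180.0   # servo travel in degrees (assumed)
FREQ_MIN, FREQ_MAX = 20.0, 2000.0   # frequency range the conch uses (assumed)

def set_servo_position(degrees):
    # Stand-in for the actual Phidgets servo call.
    print("servo ->", round(degrees, 1))

def pitch_to_servo(freq_hz):
    # Map pitch logarithmically so equal musical intervals move the
    # light dot by equal distances on the wall.
    freq_hz = max(FREQ_MIN, min(FREQ_MAX, freq_hz))
    t = math.log(freq_hz / FREQ_MIN) / math.log(FREQ_MAX / FREQ_MIN)
    return SERVO_MIN + t * (SERVO_MAX - SERVO_MIN)

set_servo_position(pitch_to_servo(440.0))  # A4 lands mid-wall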
Further development:
1. MIDI out.
2. UDP out (both raw and processed brain wave data).
3. Audio bus.
4. Other sensor input.
5. Include detection of facial expressions.
6. Design a nicer GUI.
With MIDI out and UDP, the conch can be used in conjunction with other software, like Max/MSP, to build a more complex synthesizer than the one programmed into the conch. This opens up opportunities for using the technology in live performances.
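To sketch what the UDP part could look like on the sending side, here is a minimal OSC-over-UDP sender that Max can pick up with [udpreceive]. The /pitch address and port 9000 are just assumptions for illustration, not anything decided yet.

```python
import socket
import struct

def osc_message(address, value):
    # Minimal OSC encoding: null-padded address string, ",f" type tag,
    # then the value as a big-endian 32-bit float.
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
MAX_ADDR = ("127.0.0.1", 9000)  # port is an assumption; match it in Max

def send_pitch(freq_hz):
    # In Max, [udpreceive 9000] picks this up, routable on /pitch.
    sock.sendto(osc_message("/pitch", freq_hz), MAX_ADDR)

send_pitch(440.0)  # current pitch in Hz
```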
Audio Programming
Today I’m starting a course in audio programming at the University of Oslo’s Blindern campus. There I will have to delve into some pretty hard-core theory and math regarding music and sound. “The Computer Music Tutorial” from MIT Press is on the curriculum, and it’s 1250 pages… The class is taught by Alexander Refsum Jensenius and seems like just the thing I need to get cracking on more of these sensor-controlled instruments.
Pentatonic
I’ve decided to add an option to map the frequencies to a pentatonic scale. This way it might be more of a “musical” experience for the average user. I think the pentatonic scale is more universally recognized as something musical, as playfully corroborated by Bobby McFerrin in this great video.
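The mapping itself could be as simple as snapping the continuous frequency to the nearest note of a major pentatonic scale. A minimal sketch; the anchoring to A440 and the choice of C major pentatonic are my assumptions here.

```python
import math

PENTATONIC = [0, 2, 4, 7, 9]  # C major pentatonic degrees, in semitones

def quantize_to_pentatonic(freq_hz):
    # Convert frequency to a fractional MIDI note number (A4 = 440 Hz = 69).
    note = 69 + 12 * math.log2(freq_hz / 440.0)
    octave, degree = divmod(round(note), 12)
    # Snap the semitone degree to the nearest pentatonic degree.
    nearest = min(PENTATONIC, key=lambda d: abs(d - degree))
    snapped = octave * 12 + nearest
    return 440.0 * 2 ** ((snapped - 69) / 12)

print(quantize_to_pentatonic(450.0))  # -> 440.0, snapped to A
```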