GUI application

I’m closing in on finishing the GUI version of the subConch. I had some initial problems since I’m still a novice in C++, but it all seems to be working now. I still haven’t implemented the wave visualization. I’ll post some screenshots soon. Basically there is a grid for mapping the emo outputs to the synthesizer’s inputs. I’ve got an idea to develop a node-based interface for the synthesizer where I can hook different outputs from the headset to various inputs on waves and hook them up in sequence to modulate each other. But this involves developing quite a few new classes in Qt and might be a bit too advanced for me…
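To make the grid idea concrete, here’s a minimal sketch of the mapping as a gain matrix: each cell says how strongly one headset output drives one synthesizer input. The sizes and names are my own assumptions for illustration (Emotiv exposes up to four cognitive outputs), not the actual classes in the application.

```cpp
#include <array>
#include <cstddef>

// Assumed sizes for illustration: four cognitive outputs from the headset,
// three synthesizer inputs (e.g. pitch, amplitude, filter cutoff).
constexpr std::size_t kEmoOutputs = 4;
constexpr std::size_t kSynthInputs = 3;

// The grid: gains[i][j] is how strongly emo output i drives synth input j.
using Grid = std::array<std::array<float, kSynthInputs>, kEmoOutputs>;

// Apply the grid to one frame of headset values,
// producing one set of synth control values.
std::array<float, kSynthInputs> applyGrid(
        const Grid& gains,
        const std::array<float, kEmoOutputs>& emo) {
    std::array<float, kSynthInputs> out{};  // zero-initialised
    for (std::size_t i = 0; i < kEmoOutputs; ++i)
        for (std::size_t j = 0; j < kSynthInputs; ++j)
            out[j] += gains[i][j] * emo[i];
    return out;
}
```

A node-based interface would generalise this: instead of a fixed matrix, each connection becomes an edge in a patch graph, which is where the extra Qt classes would come in.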

Feedback

I’ve been getting some feedback from people on the video. Jørgen Larsson from The Sound Gallery in Bergen made me aware of the rich history of EEG music and supplied this great link. I was not familiar with all of these, especially not the ones from the ’70s. My friend Andrew Lloyd Goodman at Brown also tipped me off about the BrainGate interface under development there. Based on my research so far, the impression is that most EEG music interfaces translate brain waves into sound waves, and most cognitive interfaces, like the BrainGate, focus on disability and interacting with visuals (on screens). I have found less about cognitive control over audio, though that’s not to say it isn’t out there. Emotiv’s innovative technology on the cognitive interface is a recent big leap forward. It offers cognitive training for up to four different cognitive concepts simultaneously, all trained within minutes. I’m very excited about the next phase of my project, where I’ll be fine-tuning the application and building the actual installation.

Qt

I’ve started migrating the application to Qt (an open source C++ framework for developing applications with a graphical user interface). The application has so far been developed as a console application, which offers only a text-based interface. In a setting where gallery assistants might have to reset, restart and troubleshoot the application, a decent windows-type user interface is probably preferable.

Demonstration


Finally a demonstration! I’m not doing it very well yet, but it’s still early stages. I feel pretty good about using this in an exhibition setting now. One of my early concerns was of course that it would be too hard for the public to interact with the piece themselves. I sort of envision two separate paths from now on. The first is the piece as planned, with the conch-shaped sculpture used by me in performances. The second is three cones (directional sound domes) with headsets hanging from the ceiling over small chairs. An assistant will be there to help people moisten the sensor pads and put the gear on. The application will initially run through a short training session for 1 minute (I’ve already programmed this :-)) so that the user’s brain can be profiled correctly. Afterwards it should be pretty straightforward to try and control the sound.
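The one-minute training pass can be sketched as a simple timed loop. This is a hedged illustration only: `recordSample` is a stand-in callback for whatever the headset SDK actually provides, not a real Emotiv API call.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Sketch of a timed profiling pass: call a sampling callback repeatedly
// until the requested duration has elapsed. In the real application the
// callback would read from the headset; here it is a stand-in.
void runTrainingSession(const std::function<void()>& recordSample,
                        std::chrono::milliseconds duration) {
    const auto end = std::chrono::steady_clock::now() + duration;
    while (std::chrono::steady_clock::now() < end) {
        recordSample();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

In the installation this would run with a 60-second duration before handing control of the sound over to the user.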

Robin Fox

Last night I saw this fantastic show in Oslo. At club Blå, Robin Fox and Anthony Pateras (AUS) warmed up with a nerve-shaking audio piece where the whole crowd was forced to wear ear plugs. Afterwards Robin Fox used one of his laser lights for the most profound aesthetic experience of the year. This is what Fox himself says to describe it: “Enveloping the audience in synchronous sound and light information, the experience resembles a synaesthetic experience where what you hear is also what you see. The same electricity generated to move the speaker cones is sent simultaneously to high-speed motors that deflect the laser light on an x/y axis converting sonic vibration into light movement.” For me it was kind of like being in the end sequence of “2001: A Space Odyssey”. Rays engulfed me in a rhythmic séance where the green “eye” in the distance played the part of an alien entity, a higher consciousness out of Star Trek: The Next Generation, commanding my ears, my body and mind to become one with the light. It demanded submission. Simultaneously, ideas of how to expand my own piece kept flying by. This was the kind of profoundness that I needed my audience to experience. By controlling the sound themselves they too would feel all powerful…
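Fox’s description translates almost directly into code. Below is a minimal sketch, entirely my own illustration and not his actual rig: a stereo signal steers an x/y mirror pair, with the left channel driving horizontal deflection and the right channel vertical, so two pure tones trace a Lissajous figure.

```cpp
#include <cmath>
#include <utility>

constexpr double kPi = 3.14159265358979323846;

// Illustrative mapping of a stereo test signal onto x/y mirror deflection:
// left channel -> horizontal, right channel -> vertical. With two sine
// tones at different frequencies the beam traces a Lissajous figure.
std::pair<double, double> laserPosition(double t,
                                        double freqLeft,
                                        double freqRight) {
    const double x = std::sin(2.0 * kPi * freqLeft * t);   // left channel
    const double y = std::sin(2.0 * kPi * freqRight * t);  // right channel
    return {x, y};
}
```

The point of the design is that nothing is “visualised” after the fact: the very waveform the audience hears is the control voltage for what they see.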

Performance

Today my friend Mimi (Magnhild Fossum) called, all excited after reading about my project. She’s a dancer and choreographer whom I’ve sort of had in the back of my mind for the performance part of this project. As I’m still practising using the headset I didn’t want to excite her too much. Still, it seems like my movement triggers the various cognitive aspects of the Emotiv application, and this can probably be used to create synchronised motion and sound events in a performance. We’ll definitely look at collaborating when the time comes.

The headset has arrived!

I have been a bit under the weather the last couple of days, so when someone kept buzzing my door and I was not expecting anyone, I did not bother to get up. I figured it was someone trying to sell me something I didn’t want. Only after incessant buzzing did I finally comply. It turned out to be UPS with a package for me! It was the headset. As it turned out, the guy had tried my door yesterday too. I had frankly not expected the package yet, since typhoons in Asia kept creating delays and I had not received any shipping confirmation. Anyway, the headset was in the house and I was ready to try it with my application.
It is now a couple of hours later and I have of course encountered many issues. Firstly, my training application does not run as smoothly as I expected. Secondly, the amount of control I can have over the sounds is limited. This said, it is quite exciting just to be able to push the sound around with the force of your mind alone. With a little more training I think I might be able to at least have some fun, but I will probably not perform any musical feats of lasting impression. The main thing is still that audiences can have fun and try for themselves, which still seems to be within reach. The piece probably needs an assistant to help users put the headset on and keep the sensors moist with a saline solution.
All in all, not too far from what I expected. I was impressed with the signal quality I got even through a full head of hair. I still have some issues with user profile handling and loading and saving user data. I hope to iron these out as soon as I have patched the problems with the training.

Building costs

I’m beginning to hear back from manufacturers on the costs of building the conch. Prices are pretty much what I expected (approximately 100,000 NOK for the three editions), which means that building is out of the question without external funding. The good news is that most manufacturers will deliver the whole conch, in carbon fibre or plastic, painted with a slick finish. All I have to do is slide my electronics in through the main hole and wire it up. I also heard back from Eminence (one of the speaker manufacturers I contacted) about providing me with some free samples of their speaker drivers for my prototype.

Shipment

According to the Emotiv discussion forum, all overdue shipments of the headset will ship tomorrow, October 3rd. Exciting! This means that I’ll soon be able to test my application with the hardware. I have booked Atelier Nord’s audio/visual room for October 14th; hopefully I’ll have everything ready by then…