My ambition has always been to have an audio-only interface for this piece, i.e. no screens. I wanted to build both the PC and the amplifier into the conch sculpture and have the headset, which works via Bluetooth, as the only external hardware. In a gallery setting this means that the user puts the headset on with the aid of the gallery assistant and starts the interaction through audio messages from the conch.

The interaction goes like this:
1) You put the headset on.
2) The conch tells you whether the sensors are properly fitted. There are 16 sensors, and the assistant will help you place them correctly.
3) When all the sensors are fitted properly, the training session begins. This is a 3-minute session in which the conch creates a profile for your specific brain.
4) The training steps are:
a) Conch records neutral. You do nothing for 8 seconds.
b) Fade sound wave in.
c) The conch tells you to try to “lift” the sound wave (in pitch) and records 8 seconds of the “lift” action.
d) The conch tells you to try to “drop” the sound wave (in pitch) and records 8 seconds of the “drop” action.
5) You can now interact with the sound. Removing the headset resets the system and erases the training data, so that another user can try or you can retrain.
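The training sequence above can be sketched as a short script. Everything here is hypothetical: `speak` and `record_segment` are stand-ins for whatever text-to-speech and EEG-recording calls the piece actually uses, and the sample rate is an assumption, not a spec.

```python
SAMPLE_RATE = 128      # Hz; assumed headset sample rate, not confirmed
SEGMENT_SECONDS = 8    # each training segment records 8 seconds

def speak(message):
    """Stand-in for the conch's audio messages (text-to-speech)."""
    print(f"[conch] {message}")

def record_segment(seconds):
    """Stand-in for reading EEG samples; returns dummy samples here."""
    return [0.0] * (seconds * SAMPLE_RATE)

def run_training():
    """Run the three training steps and return a per-user profile."""
    profile = {}
    speak("Hold still. Recording your neutral state.")
    profile["neutral"] = record_segment(SEGMENT_SECONDS)
    speak("Try to lift the sound wave in pitch.")   # wave has faded in by now
    profile["lift"] = record_segment(SEGMENT_SECONDS)
    speak("Now try to drop the sound wave in pitch.")
    profile["drop"] = record_segment(SEGMENT_SECONDS)
    return profile

profile = run_training()
print(sorted(profile))  # → ['drop', 'lift', 'neutral']
```

Removing the headset would simply discard `profile`, which is what makes the reset-by-removal behaviour so cheap to implement.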

This interface works fine for the training. The biggest challenge for the audio-only approach is fitting the sensors. It would be easier to fit them all properly with a schematic on a screen; this is the approach Emotiv uses in their application, and there it's easy to see which sensor isn't fitted well enough.

A button to reset the training would also be nice, since the average user might not get the hang of it the first time and will want to retrain. In that case it would be silly to have to remove the headset entirely just to clear the training data and then go through the sensor fitting again. I'm thinking a small screen with a schematic of the head and sensors plus a reset button might be a good alternative to the audio-only interface, but I will have to do some more testing first.

The best solution would be for each sensor on the headset to have an LED that turns on when it is fitted properly. That way the gallery assistant could easily help you get it running. The problem is that I have little faith in my hardware-hacking skills (which are non-existent) and would have to depend on Emotiv to develop such a headset.
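An audio-only fitting check could still name the problem sensors out loud, which would help the assistant even without a screen. This is only a sketch under assumptions: the sensor names, the quality scale, and the threshold are all made up, standing in for whatever per-sensor contact-quality reading the Emotiv SDK actually exposes.

```python
SENSORS = [f"sensor {i}" for i in range(1, 17)]  # 16 sensors, named generically
GOOD = 3                                         # hypothetical quality threshold

def badly_fitted(readings):
    """Return the sensors whose quality reading is below the threshold."""
    return [name for name, quality in readings.items() if quality < GOOD]

def fitting_message(readings):
    """Compose the audio prompt the conch would speak during fitting."""
    bad = badly_fitted(readings)
    if not bad:
        return "All sensors fitted. Starting training."
    return "Please adjust " + ", ".join(bad) + "."

# Example: two sensors not yet seated properly.
readings = {name: 4 for name in SENSORS}
readings["sensor 3"] = 1
readings["sensor 12"] = 2
print(fitting_message(readings))  # → Please adjust sensor 3, sensor 12.
```

Looping this message every few seconds until `badly_fitted` comes back empty would give the assistant roughly the same feedback a screen schematic provides, just slower.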