6 thoughts on “New demonstration”

  1. Anonymous

    Hi, my name is Cristian and I’m from Colombia. I was wondering if I could get a subConch, and I would like to know how much it would cost.

    Thank you. I must say, this is awesome!

    Cristian

  2. Mats Sivertsen

    Hey Cristian,

    Thanks! I’m still developing parts of the software, but I hope to make it available sometime this fall. In addition to the software you’ll need the Emotiv EPOC headset, which will cost you around $299 US.

    Mats

  3. Olivier Preziosa

    Hello Mats,

    I was very pleased to find out about your project, and I wish you the best of luck with it.

    A friend and I (we’re both developers and musicians) are planning to work on a similar project (after I saw the Emotiv products I had the same idea, and I’m glad you did too :). Our goal, as fictional as it sounds, would be to create some kind of SubConch, where the mind controls software to create music. Drawing on your experience, do you have any advice?

    If I may ask, what are your future plans for the SubConch? I would be delighted to work with you on a shared vision of what could be accomplished with this technology.

    Thank you for your work… it makes all of us dream :)

    Olivier Preziosa

  4. Mats Sivertsen

    Hi Olivier,

    Thanks for your comments! In theory you can make music with the subConch as it is now, but controlling the pitch with accuracy is a challenge. Perhaps you are looking for something more complex, though. I suppose imagination is the limit here. It’s possible to map the data from the headset in many other ways than pitch and tone control. It could be hooked up to some kind of DJ program where you mix different pre-made segments, or a composition program where your mental data is used to compose a piece that is played live (the Norwegian composer Rolf Wallin has done some experiments with the latter). These alternatives offer more music, but less direct control. You could also feed the mental wave data into an amplifier so you can hear what your brain sounds like, though this has been done before, as early as the 1960s, I think.
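
    As an illustration of that pitch-control mapping, here is a minimal sketch in Python, assuming the headset has already been reduced to a single normalized intensity value; read_mental_intensity() is only a placeholder, not the actual Emotiv API.

    ```python
    # Minimal sketch: map one normalized mental-intensity value (0.0-1.0)
    # onto a continuous pitch range, mind-theremin style.
    import math

    def read_mental_intensity() -> float:
        """Placeholder for a headset reading; returns a value in [0, 1]."""
        return 0.5  # replace with real headset data

    def intensity_to_frequency(x: float, low_hz: float = 110.0,
                               high_hz: float = 880.0) -> float:
        """Map [0, 1] exponentially onto the pitch range, so equal changes
        in mental intensity give equal musical intervals, not equal Hz."""
        octaves = math.log2(high_hz / low_hz)
        return low_hz * 2 ** (x * octaves)

    print(intensity_to_frequency(read_mental_intensity()))  # ~311 Hz at 0.5
    ```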

    The plan for the subConch now is to finish the gallery installation. There you can use a simplified version of the conch that works as a type of horn: you mentally “blow air” into it to produce a sound that fades when you stop “blowing”.
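
    A rough sketch of that “mental horn” behaviour (not the installation’s actual code, and the attack/release constants are just assumptions): the amplitude rises toward the blowing strength while you “blow” and fades away when you stop.

    ```python
    # Rough sketch of the "mental horn": while the visitor "blows", the
    # amplitude rises toward the blowing strength; when they stop, it fades.
    def update_amplitude(current: float, blowing: float, dt: float,
                         attack: float = 4.0, release: float = 1.5) -> float:
        """One control-rate step. `blowing` in [0, 1] comes from the headset;
        attack/release are assumed time constants (per second)."""
        if blowing > current:
            return current + (blowing - current) * min(1.0, attack * dt)
        return current * max(0.0, 1.0 - release * dt)

    amp = 0.0
    for blowing in [0.8, 0.8, 0.8, 0.0, 0.0, 0.0]:   # fake headset frames
        amp = update_amplitude(amp, blowing, dt=0.1)
        print(round(amp, 3))
    ```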

    What are your ideas for mind-controlled music? I’d love to hear them.

  5. Olivier Preziosa

    Hello Mats,

    Thank you for your answer.

    As a musician who loves improvisation, I dream of an instrument I could simply use to express my musical creativity with no technical barriers. With software, I imagine it might be possible to mind-automate functions like “add a reggae beat”, “loop”, “add one instrument”, “make this a trumpet”, “accelerate” as we create a song, with the option of several people interacting with the same “super-powers”, each playing their part. But creating the music itself, the notes, the melody, poses what may be an impossible challenge: properly interpreting whatever the mind is imagining. First, the signal: do we have enough information/resolution to be able to say “the information is in the signal, let’s find it!”? In my opinion there is a real chance we don’t (meaning it is impossible from the start). However, for the sake of imagination, let’s say we have it all: how do we find the information in the signal/noise that the brain constantly produces?
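
    To make the idea a bit more concrete, something like the sketch below is what I have in mind: a (hypothetical) classifier turns headset data into one of a few trained mental commands, and each command triggers a high-level action in a very simple looper. All the names are placeholders I made up, nothing from an existing product.

    ```python
    # Hypothetical sketch: trained mental commands mapped to high-level
    # actions in a toy looper/sequencer. All names are placeholders.
    ACTIONS = {
        "loop":        lambda song: song.append("loop last 4 bars"),
        "add_beat":    lambda song: song.append("add reggae beat"),
        "add_trumpet": lambda song: song.append("add trumpet line"),
        "accelerate":  lambda song: song.append("tempo +5 bpm"),
    }

    def classify_command(headset_frame) -> str:
        """Placeholder: a real system would classify EEG features here."""
        return "loop"

    song = []
    for frame in range(3):                   # stand-in for headset frames
        ACTIONS[classify_command(frame)](song)
    print(song)
    ```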

    On this last point, I was thinking of the following study and would like your opinion. Say we take a musician who improvises a song and record the signal from his brain as he plays. The final result, the music, has a few aspects that we should be able to retrieve, mathematically, in the brain signal recorded while he was playing. The rhythm, for one, should be relatively easy: check whether the signal contains harmonics that follow it. But perhaps we can go further and trace more of the music in the signal from other angles (harmonics, notes, volume?). Is that science fiction?
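
    As a sketch of the kind of analysis I mean (with fake data standing in for a real recording, and none of the filtering and artifact rejection real EEG would need): take one EEG channel and check whether the spectrum of its amplitude envelope shows a peak at the music’s beat frequency.

    ```python
    # Sketch: does the EEG amplitude envelope contain the music's beat?
    # Fake data stands in for a recording of an improvising musician.
    import numpy as np

    fs = 128.0                        # EPOC-like sample rate (Hz)
    t = np.arange(0, 60, 1 / fs)      # one minute of data
    beat_hz = 2.0                     # 120 bpm

    # EEG-like noise whose amplitude is weakly modulated by the beat.
    modulation = 1.0 + 0.2 * np.sin(2 * np.pi * beat_hz * t)
    eeg = modulation * np.random.randn(t.size)

    # Estimate the envelope (rectify + smooth) and look at its spectrum.
    window = np.ones(int(fs / 4)) / (fs / 4)
    envelope = np.convolve(np.abs(eeg), window, mode="same")
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    print("strongest modulation at %.2f Hz" % freqs[spectrum.argmax()])
    ```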

    I was thinking that, of all our thoughts, music should be one of the easiest to dig out of the brain, as it is relatively simple in the way it is coded and usually involves a high level of concentration (and therefore, I would say, less noise in the signal).

    Thank you again for your answer (sorry I am slower to respond than you are); I would be pleased to read your thoughts on the topic.

    Sincerely,

    Olivier

  6. Mats Sivertsen

    Hi again,

    This time I am the one who is slow to reply :-)

    I think some of these things depend on our approach to consciousness – is it something that resides in the brain alone, complete and separate from the body (Cartesian), or is it something that arises out of our interactions with the world, dependent on the body, in a two-way fashion (phenomenological)?

    I myself had the idea of recording the brain-wave patterns of a person “singing quietly to himself” a single note, to see if it was possible to find some kind of correlating pattern and thus differentiate between several notes and have the software recognize and play them. This approach would be better than the current one – in which we only adjust the pitch up and down like a mind-theremin. If a pattern does arise that is common among several subjects, it should be possible to make more informed guesses and differentiate between more than four “thoughts” – say, several octaves of eight notes.
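
    As a sketch of how those recordings might be analysed – illustrative only, and not what the subConch currently does – one could compute ordinary band-power features per trial and feed them to a standard classifier:

    ```python
    # Sketch: classify which note a subject was quietly "singing", given
    # labelled EEG trials. Band-power features + linear discriminant analysis.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def band_power_features(trial: np.ndarray, fs: float = 128.0) -> np.ndarray:
        """trial: (channels, samples). Log power in the alpha and beta bands."""
        bands = [(8, 13), (13, 30)]
        spectrum = np.abs(np.fft.rfft(trial, axis=1)) ** 2
        freqs = np.fft.rfftfreq(trial.shape[1], 1 / fs)
        feats = [spectrum[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                 for lo, hi in bands]
        return np.log(np.concatenate(feats))

    # Fake data: 40 trials of 14 channels x 2 s, labelled with four notes.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 14, 256))
    labels = np.repeat(["C", "E", "G", "B"], 10)

    X = np.array([band_power_features(tr) for tr in trials])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.predict(X[:4]))     # real use: predict on new, unseen trials
    ```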

    I share your optimism that music should be an easy thing to dig out of the brain, since the brain is also musical and relies on harmonies and frequencies, but I am also afraid that it might be more difficult than we hope. Research shows that there is often less coherence than we might expect between subjects in how the brain processes information (for instance, mice whose optic nerves were tied to the auditory cortex instead of the visual cortex could still see). Also, all our interaction with the environment – be it an instrument or this headset – depends on feedback: once you hear that the instrument is doing what you want it to do, your brain locks in on this and tries to do more of it, in a kind of feedback loop. Still, it should be possible to see some patterns emerge, especially when working with only one subject, a trained musician, though the correlation to the music might be hard to spot at first. The 14-sensor headset is also unimaginably crude compared to the actual complexity of the brain, or of music in general.

    Mats
