<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments for mats-sivertsen.net</title>
	<atom:link href="http://mats-sivertsen.net/blog/?feed=comments-rss2" rel="self" type="application/rss+xml" />
	<link>http://mats-sivertsen.net/blog</link>
	<description>Mats Sivertsen</description>
	<lastBuildDate>Mon, 21 Nov 2016 13:33:54 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Comment on General release by Mats Sivertsen</title>
		<link>http://mats-sivertsen.net/blog/?p=58#comment-241</link>
		<dc:creator><![CDATA[Mats Sivertsen]]></dc:creator>
		<pubDate>Mon, 21 Nov 2016 13:33:54 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=58#comment-241</guid>
		<description><![CDATA[Hi! Thanks for your interest! I&#039;m sorry to say that a Mac-version is not available and probably never will be – I&#039;m not actively maintaining or updating this software. 

Best, Mats]]></description>
		<content:encoded><![CDATA[<p>Hi! Thanks for your interest! I&#8217;m sorry to say that a Mac-version is not available and probably never will be – I&#8217;m not actively maintaining or updating this software. </p>
<p>Best, Mats</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on General release by Joaquin farias</title>
		<link>http://mats-sivertsen.net/blog/?p=58#comment-240</link>
		<dc:creator><![CDATA[Joaquin farias]]></dc:creator>
		<pubDate>Wed, 06 Apr 2016 20:58:19 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=58#comment-240</guid>
		<description><![CDATA[Greetings,
I would love to use Subconch as a tool in my research. 
Would you please let me know where i can download the version for Mac OSX
Best wishes and congratulations on your work.
Joaquin]]></description>
		<content:encoded><![CDATA[<p>Greetings,<br />
I would love to use Subconch as a tool in my research.<br />
Would you please let me know where i can download the version for Mac OSX<br />
Best wishes and congratulations on your work.<br />
Joaquin</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Mats Sivertsen</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-221</link>
		<dc:creator><![CDATA[Mats Sivertsen]]></dc:creator>
		<pubDate>Mon, 08 Oct 2012 12:10:15 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-221</guid>
		<description><![CDATA[Hi again, &lt;br /&gt;
&lt;br /&gt;
This time I am the one who is slow to reply :-) &lt;br /&gt;
&lt;br /&gt;
I think some of these things depend on our approach to consciousness – is it something that resides in the brain alone, complete and separate from the body (Cartesian), or is it something that arises out of our interactions with the world, dependent on the body, in a two-way fashion (phenomenological)? &lt;br /&gt;
&lt;br /&gt;
I myself had the idea of recording the brain wave patterns of a person &quot;singing quietly to himself&quot; a single note, to see if it was possible to find some kind of correlating pattern and thus differentiate between several notes and have the software recognize and play them. This approach would be better than the current one – in which we only adjust the pitch up and down like a mind-theremin. If a pattern that is common among several subjects does arise, it should be possible to do more informed guesswork and differentiate between more than four &quot;thoughts&quot; – say several octaves of 8 notes. &lt;br /&gt;
&lt;br /&gt;
I share your optimism that music should be an easy thing to dig out of the brain, since the brain is also musical and relies on harmonies and frequencies, but I am also afraid that it might be more difficult than we hope. Research shows that there is often less coherence than we might expect between subjects in how the brain processes information (for instance, mice that had their optic nerves tied to the auditory cortex instead of the visual cortex could still see). Also, it is obvious that all our interaction with the environment – be it an instrument or this headset – is dependent on feedback: once you hear that the instrument is doing what you want it to do, your brain locks in on this and tries to do more of it – in a kind of feedback loop. However, it should be possible to see some patterns emerge, especially when working with only one subject, a trained musician. But the correlation to the music might be hard to spot at first. The 14-sensor headset is also unimaginably crude compared to the actual complexity of the brain or music in general.&lt;br /&gt;
&lt;br /&gt;
Mats
]]></description>
		<content:encoded><![CDATA[<p>Hi again, </p>
<p>This time I am the one who is slow to reply <img src="http://mats-sivertsen.net/blog/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" />  </p>
<p>I think some of these things depend on our approach to consciousness – is it something that resides in the brain alone, complete and separate from the body (Cartesian), or is it something that arises out of our interactions with the world, dependent on the body, in a two-way fashion (phenomenological)? </p>
<p>I myself had the idea of recording the brain wave patterns of a person &#8220;singing quietly to himself&#8221; a single note, to see if it was possible to find some kind of correlating pattern and thus differentiate between several notes and have the software recognize and play them. This approach would be better than the current one – in which we only adjust the pitch up and down like a mind-theremin. If a pattern that is common among several subjects does arise, it should be possible to do more informed guesswork and differentiate between more than four &#8220;thoughts&#8221; – say several octaves of 8 notes. </p>
<p>I share your optimism that music should be an easy thing to dig out of the brain, since the brain is also musical and relies on harmonies and frequencies, but I am also afraid that it might be more difficult than we hope. Research shows that there is often less coherence than we might expect between subjects in how the brain processes information (for instance, mice that had their optic nerves tied to the auditory cortex instead of the visual cortex could still see). Also, it is obvious that all our interaction with the environment – be it an instrument or this headset – is dependent on feedback: once you hear that the instrument is doing what you want it to do, your brain locks in on this and tries to do more of it – in a kind of feedback loop. However, it should be possible to see some patterns emerge, especially when working with only one subject, a trained musician. But the correlation to the music might be hard to spot at first. The 14-sensor headset is also unimaginably crude compared to the actual complexity of the brain or music in general.</p>
<p>Mats</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Olivier Preziosa</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-223</link>
		<dc:creator><![CDATA[Olivier Preziosa]]></dc:creator>
		<pubDate>Fri, 17 Aug 2012 11:45:20 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-223</guid>
		<description><![CDATA[Hello Mats,&lt;br /&gt;
&lt;br /&gt;
Thank you for your answer.&lt;br /&gt;
&lt;br /&gt;
As a musician myself, loving improvisation, I dream of an instrument that I could simply use to express my musical creativity with no technical barriers. With software, I imagine it is perhaps possible to mind-automate functionalities like &quot;add a reggae beat&quot;, &quot;loop&quot;, &quot;add one instrument&quot;, &quot;make this a trumpet&quot;, &quot;accelerate&quot; – as we create a song, with the option of involving more individuals with the same &quot;super-powers&quot;, playing their parts. But to create the music itself, the notes, the melody, there is the seemingly impossible challenge of properly interpreting whatever the mind is imagining. First, the signal: do we have enough information/resolution to be able to say &quot;the info is in the signal, let&#039;s find it!&quot;? In my opinion, there is already a strong chance that it isn&#039;t the case (meaning impossible from the start). However, for the sake of imagination, let&#039;s say we got it all – how do we find the info in the signal/noise that the brain permanently makes?&lt;br /&gt;
&lt;br /&gt;
For this last point, I was thinking of the following study and would like your opinion. Let&#039;s say we take a musician who improvises a song and record the signal sent by his brain as he&#039;s playing. The final result, the music, has a couple of aspects that we should be able to somehow retrieve, mathematically, in the signal the brain sent as he was playing. For one, the rhythm: it should be fairly easy to see if there are harmonics in the signal which follow it. But we can extend this and perhaps follow more of the music into the signal, from other angles (harmonics, notes, volume?). Is that science fiction?&lt;br /&gt;
&lt;br /&gt;
I was thinking that, out of all our thoughts, music should be one of the easiest to dig out of the brain, as it is relatively simple in the way it is coded and usually involves a high level of concentration (and as a consequence, less noise in the signal, I would say).&lt;br /&gt;
&lt;br /&gt;
Thank you again for your answer (sorry I am slower to respond than you); I would be pleased to read your thoughts on the topic.&lt;br /&gt;
&lt;br /&gt;
Sincerely,&lt;br /&gt;
&lt;br /&gt;
Olivier&lt;br /&gt;
]]></description>
		<content:encoded><![CDATA[<p>Hello Mats,</p>
<p>Thank you for your answer.</p>
<p>As a musician myself, loving improvisation, I dream of an instrument that I could simply use to express my musical creativity with no technical barriers. With software, I imagine it is perhaps possible to mind-automate functionalities like &#8220;add a reggae beat&#8221;, &#8220;loop&#8221;, &#8220;add one instrument&#8221;, &#8220;make this a trumpet&#8221;, &#8220;accelerate&#8221; – as we create a song, with the option of involving more individuals with the same &#8220;super-powers&#8221;, playing their parts. But to create the music itself, the notes, the melody, there is the seemingly impossible challenge of properly interpreting whatever the mind is imagining. First, the signal: do we have enough information/resolution to be able to say &#8220;the info is in the signal, let&#8217;s find it!&#8221;? In my opinion, there is already a strong chance that it isn&#8217;t the case (meaning impossible from the start). However, for the sake of imagination, let&#8217;s say we got it all – how do we find the info in the signal/noise that the brain permanently makes?</p>
<p>For this last point, I was thinking of the following study and would like your opinion. Let&#8217;s say we take a musician who improvises a song and record the signal sent by his brain as he&#8217;s playing. The final result, the music, has a couple of aspects that we should be able to somehow retrieve, mathematically, in the signal the brain sent as he was playing. For one, the rhythm: it should be fairly easy to see if there are harmonics in the signal which follow it. But we can extend this and perhaps follow more of the music into the signal, from other angles (harmonics, notes, volume?). Is that science fiction?</p>
<p>I was thinking that, out of all our thoughts, music should be one of the easiest to dig out of the brain, as it is relatively simple in the way it is coded and usually involves a high level of concentration (and as a consequence, less noise in the signal, I would say).</p>
<p>Thank you again for your answer (sorry I am slower to respond than you); I would be pleased to read your thoughts on the topic.</p>
<p>Sincerely,</p>
<p>Olivier</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Mats Sivertsen</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-222</link>
		<dc:creator><![CDATA[Mats Sivertsen]]></dc:creator>
		<pubDate>Fri, 10 Aug 2012 12:08:53 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-222</guid>
		<description><![CDATA[Hi Olivier,&lt;br /&gt;
&lt;br /&gt;
Thanks for your comments! In theory you can make music with the subConch as it is now, but controlling the pitch with accuracy is a challenge. Perhaps you are looking for something more complex, though. I suppose the imagination is the limit here. It&#039;s possible to map the data from the headset in many other ways than pitch and tone control. It could be hooked up to some kind of DJ program where you mix different pre-made segments, or a composition program where your mental data is used to compose a piece that is played live (Norwegian composer Rolf Wallin has done some experiments with the latter). These alternatives offer more music, but less direct control. You could also feed the mental wave-data into an amplifier so you can hear what your brain sounds like. This has been done before though, as early as the 60s, I think.&lt;br /&gt;
&lt;br /&gt;
The plan for the subConch now is finishing the gallery installation. Here you can use a simplified version of the conch that works as a type of horn. You mentally &quot;blow air&quot; into it to produce a sound that fades when you stop &quot;blowing&quot;.&lt;br /&gt;
&lt;br /&gt;
What are your ideas for mind-controlled music? I&#039;d love to hear them.
]]></description>
		<content:encoded><![CDATA[<p>Hi Olivier,</p>
<p>Thanks for your comments! In theory you can make music with the subConch as it is now, but controlling the pitch with accuracy is a challenge. Perhaps you are looking for something more complex, though. I suppose the imagination is the limit here. It&#8217;s possible to map the data from the headset in many other ways than pitch and tone control. It could be hooked up to some kind of DJ program where you mix different pre-made segments, or a composition program where your mental data is used to compose a piece that is played live (Norwegian composer Rolf Wallin has done some experiments with the latter). These alternatives offer more music, but less direct control. You could also feed the mental wave-data into an amplifier so you can hear what your brain sounds like. This has been done before though, as early as the 60s, I think.</p>
<p>The plan for the subConch now is finishing the gallery installation. Here you can use a simplified version of the conch that works as a type of horn. You mentally &#8220;blow air&#8221; into it to produce a sound that fades when you stop &#8220;blowing&#8221;.</p>
<p>What are your ideas for mind-controlled music? I&#8217;d love to hear them.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Olivier Preziosa</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-220</link>
		<dc:creator><![CDATA[Olivier Preziosa]]></dc:creator>
		<pubDate>Fri, 10 Aug 2012 10:52:17 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-220</guid>
		<description><![CDATA[Hello Mats,&lt;br /&gt;
&lt;br /&gt;
I was very pleased to find out about your project and wish you the best of luck with it.&lt;br /&gt;
&lt;br /&gt;
A friend of mine and I (we&#039;re both developers and musicians) are planning to work on a similar project (after I saw Emotiv products, I had the idea, and I&#039;m glad you did too :). Our goal, as fictional as it sounds, would be to create some kind of SubConch, where the mind can control software to create music. Coming from your experience, do you have any advice?&lt;br /&gt;
&lt;br /&gt;
If I may ask, what are your future plans with SubConch? I would be delighted to work with you on a shared vision on what could be accomplished with this technology.&lt;br /&gt;
&lt;br /&gt;
Thank you for your work... it makes all of us dream :)&lt;br /&gt;
&lt;br /&gt;
Olivier Preziosa
]]></description>
		<content:encoded><![CDATA[<p>Hello Mats,</p>
<p>I was very pleased to find out about your project and wish you the best of luck with it.</p>
<p>A friend of mine and I (we&#8217;re both developers and musicians) are planning to work on a similar project (after I saw Emotiv products, I had the idea, and I&#8217;m glad you did too :). Our goal, as fictional as it sounds, would be to create some kind of SubConch, where the mind can control software to create music. Coming from your experience, do you have any advice?</p>
<p>If I may ask, what are your future plans with SubConch? I would be delighted to work with you on a shared vision on what could be accomplished with this technology.</p>
<p>Thank you for your work&#8230; it makes all of us dream <img src="http://mats-sivertsen.net/blog/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </p>
<p>Olivier Preziosa</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Mattel! by Adrian Wetzel</title>
		<link>http://mats-sivertsen.net/blog/?p=43#comment-238</link>
		<dc:creator><![CDATA[Adrian Wetzel]]></dc:creator>
		<pubDate>Tue, 08 Mar 2011 07:46:06 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=43#comment-238</guid>
		<description><![CDATA[Don&#039;t sell yourself short....  This Mattel hack is cool in its own right, but the guy just used the existing Mattel fan voltage output that normally controls the levitating ball to control the Radio Shack MG1 Moog synth, which is already set up for this as sold. This is no more than a simple hardware (circuit) modification. Normally you would use a foot pedal or some other form of variable resistor for the MG1. No software involved.... Even though his video is edited, he is still not able to control pitch instantly. Now, if he built a MIDI controller that was in turn controlled by the Mattel – or better yet the Emotiv headset – and capable of more than pitch, like your software is, then I would say you have some competition.
]]></description>
		<content:encoded><![CDATA[<p>Don&#8217;t sell yourself short&#8230;.  This Mattel hack is cool in its own right, but the guy just used the existing Mattel fan voltage output that normally controls the levitating ball to control the Radio Shack MG1 Moog synth, which is already set up for this as sold. This is no more than a simple hardware (circuit) modification. Normally you would use a foot pedal or some other form of variable resistor for the MG1. No software involved&#8230;. Even though his video is edited, he is still not able to control pitch instantly. Now, if he built a MIDI controller that was in turn controlled by the Mattel – or better yet the Emotiv headset – and capable of more than pitch, like your software is, then I would say you have some competition.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Mats Sivertsen</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-218</link>
		<dc:creator><![CDATA[Mats Sivertsen]]></dc:creator>
		<pubDate>Tue, 27 Jul 2010 12:57:50 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-218</guid>
		<description><![CDATA[Hey Cristian,&lt;br /&gt;
&lt;br /&gt;
Thanks! I&#039;m still developing parts of the software, but hope to make it available sometime this fall. In addition to the software you&#039;ll need the Emotiv EPOC headset, which will cost you around $299 US.&lt;br /&gt;
&lt;br /&gt;
Mats
]]></description>
		<content:encoded><![CDATA[<p>Hey Cristian,</p>
<p>Thanks! I&#8217;m still developing parts of the software, but hope to make it available sometime this fall. In addition to the software you&#8217;ll need the Emotiv EPOC headset, which will cost you around $299 US.</p>
<p>Mats</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on New demonstration by Anonymous</title>
		<link>http://mats-sivertsen.net/blog/?p=41#comment-219</link>
		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Tue, 27 Jul 2010 04:48:26 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=41#comment-219</guid>
		<description><![CDATA[Hi, my name is Cristian and I&#039;m from Colombia. I was wondering if I could get a Subconch, and I would like to know how much it would cost.&lt;br /&gt;
&lt;br /&gt;
Thank you, I must say this is awesome!&lt;br /&gt;
&lt;br /&gt;
Cristian
]]></description>
		<content:encoded><![CDATA[<p>Hi, my name is Cristian and I&#8217;m from Colombia. I was wondering if I could get a Subconch, and I would like to know how much it would cost.</p>
<p>Thank you, I must say this is awesome!</p>
<p>Cristian</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on MAX/MSP by Mats Sivertsen</title>
		<link>http://mats-sivertsen.net/blog/?p=37#comment-213</link>
		<dc:creator><![CDATA[Mats Sivertsen]]></dc:creator>
		<pubDate>Tue, 11 May 2010 10:45:58 +0000</pubDate>
		<guid isPermaLink="false">http://mats-sivertsen.net/blog/?p=37#comment-213</guid>
		<description><![CDATA[Yes and no! I got ideas for the OSC implementation using his emoosc, but decided to do it a little differently (also, I think it worked only with EmoComposer). I didn&#039;t know he had updated it – I need to take a look. Thanks! The bulk of my OSC packet is the raw EEG and gyro data, but it would be cool if the implementations were sort of interchangeable.
]]></description>
		<content:encoded><![CDATA[<p>Yes and no! I got ideas for the OSC implementation using his emoosc, but decided to do it a little differently (also, I think it worked only with EmoComposer). I didn&#8217;t know he had updated it – I need to take a look. Thanks! The bulk of my OSC packet is the raw EEG and gyro data, but it would be cool if the implementations were sort of interchangeable.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
