Wednesday, May 22, 2013

Final Rendition

You can find the final version of the Visual Cortex Quartet here. Unlike the first rendition, it draws from 2 sets of 4 voxels each: one set for the first 3 run-throughs of the data, and a second for the final two. Daisy Morin built a visual movie of this so we can see how the task progresses as the music plays - unfortunately we couldn't sync the timing closely enough, so it ends up being a bit distracting from the piece.

For those who want to check out the R code at the end of the run-through, it's here in R format and here in plaintext format. Lastly, the class wrap-up slides can be found here. Thanks to everyone - especially Chris and Hongchan - for a wonderful class!



Sunday, May 19, 2013

Visual Cortex Quartet

Here's a new rendition of the visual data. Currently Daisy Morin, who helped with film editing on a student film I wrote and directed years ago, is helping to sync this to the task/brain imaging we saw in the last post. In this piece, I've created several movements that run through the data at different speeds. This is close to what I'll run with at the end; however, at present every run-through uses the exact same notes/data. For the final draft I'll make sure each pass moves through different voxels in the cortex. For example, this one hits voxels 1, 5, 9, and 13. I'll make sure to hit each of the 14 points we have at least once, and keep them as spaced apart as possible.

Wednesday, May 15, 2013

Visual System Beta

A couple more renditions of the visual system data - without gating or duration mods - can be found here:

Visual System Beta 1

Visual System Beta 2

The changes essentially come down to how many of the voxels we render into audio, and which sounds are played. In Beta 1 we render 7 of the 14 voxels (every other one); in Beta 2 we render 4 of the 14 (every fourth).
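The subsampling itself is trivial - here's a quick sketch in Python (my actual pipeline is in R, so this is just illustrative):

```python
voxels = list(range(1, 15))  # the 14 voxels, numbered 1-14

beta1 = voxels[::2]   # every other voxel -> 7 of the 14
beta2 = voxels[::4]   # every fourth voxel -> 4 of the 14
```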

Monday, May 13, 2013

Visual System Alpha


For today I built a very brief MIDI conversion of the data. I wanted to hear it without strong filtering, so there's no gating of the signal and no duration or threshold moderation. The result is pretty cacophonous, so I don't recommend it, but here it is. One interesting thing, though: in the MIDI data above you can see the wave propagating through the brain.

Also, you can see what the task looks like (originally from the Wandell lab) here:

video

Sunday, April 28, 2013

Gating

A few minor tweaks this week as I focus on getting the vision maps in Mr. Vista. The first is fixing the "play-every-note" syndrome we've had going. In the first interpretation this wasn't obvious, because anything outside the legal range of the instrument simply dropped out - a natural filter, if a sloppy one. It was really obvious in last week's rendition, though.

To solve this I added in a simple gate: drop the note if the signal is in the bottom 10% of the range. As it turns out, this drops a *ton* of notes, especially in what Jim had characterized as the default mode network. It may be interesting to sometimes reverse the threshold based on where we are in the brain - if we want to pick up a lot of busy humdrum noise then we may want to kill everything in the top 30% of the range, etc.
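The gate itself is just a threshold check. Here's a Python sketch of the logic (my pipeline is actually in R, and the function names are made up):

```python
def gate_notes(signal, floor_frac=0.10):
    """Drop (None out) samples in the bottom fraction of the signal's range."""
    lo, hi = min(signal), max(signal)
    threshold = lo + floor_frac * (hi - lo)
    return [x if x >= threshold else None for x in signal]

def reverse_gate(signal, ceil_frac=0.30):
    """The reversed idea: kill everything in the top fraction of the range,
    keeping only the busy humdrum noise."""
    lo, hi = min(signal), max(signal)
    threshold = hi - ceil_frac * (hi - lo)
    return [x if x <= threshold else None for x in signal]
```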

Also solved the duration problem in the Java file, thanks to Hongchan. At present it just sets duration to 128/(velocity+1), so softer notes are held longer.
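Spelled out (a Python sketch of what the Java is doing; I'm assuming integer division here):

```python
def note_duration(velocity):
    # velocity runs 0-127; softer notes get longer durations
    return 128 // (velocity + 1)
```

So a whisper-quiet note (velocity 0) holds for 128 ticks, while a maximally loud one (velocity 127) holds for just 1.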

The Latest Result: using the same instruments as last time (flute: attention, violins: visual, cello: default mode network)

Sunday, April 21, 2013

Getting Things in Tune

A flawed, if interesting, aspect of the last audio rendition was that it was playing outside the legal range for the instruments I mapped to it, resulting in dropped notes. This week I set up an R program that accomplishes several things to get around this:

1. A converter to build an array of the major scale of legal notes from any starting note. For instance, if you pick D#0 as your starting note, it will build the D# major scale. It's simple enough to generate one of these for minor and for the other scale types; as long as there is a fixed progression of intervals between notes in the scale, we're good to go.

2. A repository of legal MIDI note ranges for instruments. If I pick weirder, nonsymphonic instruments, I can just specify a few notes that I prefer.

3. A Scale-Instrument checker that creates a new array of what the legal notes are in a scale, taking into account the range of legal notes for that instrument.

4. A converter that then apportions the incoming data signal into this new legal range that the checker provides it.

5. The creation of an output file that gives us these new note values in columns.
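The five pieces above, sketched in Python for readability (the real implementation is in R; the instrument names and ranges below are invented for illustration):

```python
import csv

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # interval pattern of the major scale

def build_scale(root, steps=MAJOR_STEPS, high=127):
    """1. All MIDI notes of the scale starting from `root`."""
    notes, note, i = [], root, 0
    while note <= high:
        notes.append(note)
        note += steps[i % len(steps)]
        i += 1
    return notes

# 2. Legal MIDI ranges per instrument (illustrative values only).
INSTRUMENT_RANGES = {"flute": (60, 96), "cello": (36, 76)}

def legal_notes(scale, instrument):
    """3. Restrict a scale to the instrument's legal range."""
    lo, hi = INSTRUMENT_RANGES[instrument]
    return [n for n in scale if lo <= n <= hi]

def apportion(signal, notes):
    """4. Map each raw signal value onto the legal note array."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1
    return [notes[int((x - lo) / span * (len(notes) - 1))] for x in signal]

def write_columns(path, columns):
    """5. Dump the converted note columns to a CSV file."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(zip(*columns))
```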

I'm no programmer, so the code is fraught with bad style from the naming conventions on down, but it seemed to work well enough in D major. You can see the code in plain text or download the R file directly. Here's the output song:

New Rendition

You'll notice it's got that ceaseless quality again, worsened by the fact that every note is played. I tried playing around with the Java file that does the final output, but nothing I did altered the note duration. I'm open to suggestions in class.

This marks a switchover in my focus for the next 1-1.5 weeks. I'll need to learn Mr. Vista and the visual field maps in order to get the timecourses out of the sample data set. Since that will take some time and we have a pipeline set up for creating the music from the data, I'll be doing less coding of the pipeline. However, there are a few things I'll want to tinker with in the coming week on the music side:
  • Discovering a way to increase note duration
  • Putting a gating mechanism into the code (only translate note if signal threshold above a certain value)
  • Hardcoding other scale pipelines than just major
The first two points should ease up the mechanical nature of things a fair bit. It's coming along!

Monday, April 15, 2013

First Demo

So today I prepped a CSV by hand to think through what I want the signal-converter R file to do. You can find it here. In this first-pass test, I used the results of a PCA on my first study, in which people donate to preserve threatened park land. However, the PCA was done by a friend who misunderstood the timing increment (thinking we were capturing data every 1 second instead of every 2), so the results should be taken with a grain of salt. It captured 3 distinct neural networks, which roughly appeared to be attentional focus, visual processing, and the "default mode network" (what lights up when you've got the internal monologue going but aren't focusing on a task). I used the following equations to define pitch and velocity for each of the 3 networks:

Xi = signal at timepoint i
Xh = max signal
Xl = min signal
Pi = a given network's proportional contribution to the total signal strength of all 3 networks at timepoint i

Pitch: 128*[(Xi - Xl)/(Xh-Xl)]
Velocity: 128*Pi
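As a Python sketch (the clamp to 127 is my addition, not part of the equations above - a normalized value of exactly 1.0 would otherwise yield 128, one past MIDI's legal maximum):

```python
def pitch(x_i, x_lo, x_hi):
    # 128 * normalized signal, clamped into MIDI's 0-127 range
    return min(127, int(128 * (x_i - x_lo) / (x_hi - x_lo)))

def velocity(p_i):
    # p_i: this network's share of total signal strength (0-1)
    return min(127, int(128 * p_i))
```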

These values (converted to integers) were dumped into a separate prepped CSV file to fit the CSV-to-MIDI converter. This created a .mid file, which I used Audacity to convert to a .midi file and then imported into Reaper. Reaper split the tracks by channel, and I used EastWest QL Symphonic Orchestra and Absynth 5 for the VSTs. The resulting file, which tracks attention with strings, visual processing with woodwinds, and the default mode network with a synth, can be found here.

You can hear really abrupt notes. After going into the Java file for the CSV-to-MIDI converter, I realized this is because the note_on and note_off commands use the same Tick variable. So in future I'll mod their converter to take a duration variable, with note_off occurring at (note_on + dur). I also found a trove of MIDI messages which I should be able to invoke in the Java file. This gives me a concrete way to interpret the data as pitch bend, mod wheel, pedal, control change, etc., which I would otherwise not be able to get at using their vanilla program.
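The planned change, sketched in Python rather than the converter's Java (the event tuples here are illustrative, not the converter's real API):

```python
def note_pair(tick, pitch, velocity):
    """Current behavior: on and off share the same tick, so notes are abrupt."""
    return [("note_on", tick, pitch, velocity),
            ("note_off", tick, pitch, 0)]

def note_pair_with_duration(tick, pitch, velocity, dur):
    """Planned mod: note_off lands at (note_on + dur)."""
    return [("note_on", tick, pitch, velocity),
            ("note_off", tick + dur, pitch, 0)]
```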

Sunday, April 14, 2013

The Lay of the Land

The goal of this project will be, ultimately, to turn brain signals from the complex and data-rich visual cortex into a symphony, so that we can hear the visual cortex at work while subjects undergoing neuroimaging are viewing a visual stimulus.

My background is in studying decision-making processes on environmental issues using fMRI, for my PhD in E-IPER, The Interdisciplinary Program in Environment and Resources. However, analyzing the visual cortex has a number of idiosyncrasies and complexities compared to the regions I normally study, which are involved in emotional decision-making and reward/valuation analysis. The visual cortex is one of the most studied and complex areas of the brain, which allows the roles of its regions to be clearly and precisely articulated (as opposed to regions associated with more nebulous concepts like theory of mind, empathy, or cost-benefit analysis). To elicit a map of which bits of grey matter do what, researchers typically use geometric stimuli: checkerboard patterns of rotating wedges, and shrinking and expanding rings. The goal of this first piece will be to break down a data set recorded while someone views one of these retinotopic mapping stimulus sets, and form a piece that viewers can listen to as they watch the stimulus themselves. If this works, we can try it with more visually complex and engaging stimuli.

The process of creating this music will go roughly as follows:

  1. Obtain a retinotopic data set.
  2. Isolate regions of interest (ROIs) in the visual cortex that have distinct reactions to different aspects of the stimulus (e.g., the way the lines are oriented, the scale of the objects, the colors, etc.)
  3. Obtain time courses for each ROI. In other words, I will know what the strength of the fMRI signal was for each of these ROIs, and how it changed over the duration of the stimulus presentation.
  4. Convert the time courses into MIDI notes or control changes (CCs). There are a number of interpretive liberties at this stage which we'll go into later.
  5. Arrange instruments and effects within a DAW that will play these notes and respond to these CCs.
  6. Present the stimulus set to an audience visually, as they hear the arrangement.
As luck would have it, the other course I'm taking this quarter is Psych 204B, Computational Neuroimaging: Analysis Methods, taught by members of Stanford's Vision and Perception Lab. Last week I spent a little time meeting with them and getting an idea of what might be necessary in order to obtain a retinotopic data set. I was led to Mr. Vista, a MATLAB interface for obtaining functional and anatomical data from fMRI data sets. On this website there is also a very clean (i.e., not noisy) sample data set. However, in order to dump out time courses - i.e., what's lighting up when - I need to learn Mr. Vista, learn exactly what to look for in the neuroanatomy of the visual cortex, and then hand-draw the ROIs on the data set. If this were a well-studied area for my lab, e.g., the nucleus accumbens (reward center) or anterior cingulate cortex (detects errors between prediction/expectation and outcomes, cognitive dissonance), we would have predefined ROIs and a Python script that would dump out the time courses. However, since we don't - and the 204B group would like us to analyze a data set using Mr. Vista regardless - I'll be learning the program and the neuroanatomy in order to do this.

An interesting side story I learned is that we can actually derive more data than just signal strength. In my lab, only signal strength matters - however, there are at least 2 more dimensions that vision scientists care about in the data, and these can be derived for each ROI using Mr. Vista. This is very exciting for our purposes, as we can control multiple aspects of each instrument: for example, the pitch, velocity, and overlaid effects. We can also control other modulations of each instrument by equations involving these 3 dimensions, in addition to equations involving regions picked out in other analyses, like PCA (which can get signals not just of specific ROIs, but of distinct networks of neurons that are working together during the task, across the entire brain).

The other intriguing detail I learned - which Kevin Weiner of the vision lab had initially thought would be a disadvantage - is the presence of an echo signal. There are stratified regions of the visual cortex that pass information along different streams, becoming more and more specialized. Apparently some of these regions will have almost identical signal patterns, only delayed in time. Imagine a phrase being played by the cellos first, then followed a moment later by the violins, and you'll begin to grasp the kind of structures and impressions of intentionality that could fall out of this kind of data set when you render it to audio.

Ultimately we'll be converting the time courses we get in CSV form into MIDI notes. To do this I'll be using this CSV to MIDI converter. Tonight I tested it, and it worked with a little editing of the batch file and installation of the latest Java development kit. Over the coming week, I'll be building a Signal-to-CSV converter in the statistical program R that will turn a bland data set into a CSV ready for the MIDI converter. This will involve defining channels, note durations, tempo, and, most importantly, the ability to easily convert the raw signal numbers onto a MIDI scale. I'd like to put presets in place, so that if you want the output notes drawn from, say, the F blues scale rather than from all possible notes, we auto-allot the signal across just the notes that comprise F blues. Various command streams would output the initial CSV to a new file that specifies which conversion is used for each column. This doesn't just have to be used for defining scales: say I enjoy a specific VST with amazing percussive sounds on 17 widely and unevenly distributed notes, but don't much care for the rest. I would find out which MIDI notes these are and specify them in an array; the program reads the length of the array and then auto-splices and assigns all signal values among those 17. Similarly, if you want to cut the high end out of the possible values for a MIDI CC, you just end the array early - say, 0-95.
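That allotment idea boils down to a few lines. Here's a Python sketch (the converter itself will be in R, and the note numbers below are invented):

```python
def allot(signal, pool):
    """Splice raw signal values evenly across an arbitrary array of
    allowed MIDI values - a scale, a handful of favorite percussion
    notes, or a truncated CC range."""
    pool = list(pool)
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1
    return [pool[int((x - lo) / span * (len(pool) - 1))] for x in signal]

# e.g. only a few hand-picked percussion notes:
#   allot(signal, [35, 38, 42, 46, 49])
# ...or a CC with the high end cut off at 95:
#   allot(cc_signal, range(0, 96))
```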

Over the next week I'll not only be working on this converter but also reading up on the visual cortex and downloading Mr. Vista. Here's the first primer I'm tackling: Visual Field Maps in the Human Cortex. This should give me an idea of how to chop up the ROIs. Hopefully next week I'll be at a stage where I can actually define these ROIs in Mr. Vista and output a workable data set. Creating an underlying Signal-to-CSV converter that can quickly define scales and other presets for each column will take some time as well, so if Mr. Vista is a bit slow to pick up, I'll still have plenty to chew on. With the signal converter lined up, it should be straightforward to send time courses right to the arrangement stage.

Stay tuned!