Wednesday, May 22, 2013

Final Rendition

You can find the final version of the Visual Cortex Quartet here. As opposed to the first rendition, it draws from 2 sets of 4 voxels each: one set for the first 3 run-throughs of the data, and a second for the final two. Daisy Morin built a visual movie of this so we can see how the task progresses as the music plays - unfortunately we couldn't sync the timing closely enough, so the video ends up being a bit of a distraction from the piece.

For those who want to check out the R code at the end of the runthrough, it's here in R format and here in plaintext format. Lastly, the class wrap-up slides can be found here. Thanks to everyone - especially Chris and Hongchan - for a wonderful class!



Sunday, May 19, 2013

Visual Cortex Quartet

Here's a new rendition of the visual data. Currently Daisy Morin, who helped with film editing on a student film I wrote and directed years ago, is helping to sync this to the task/brain imaging we saw in the last post. In this piece, I've created several movements that run through the data at different speeds. This is close to what I'll run with at the end; however, at present each data run-through uses the exact same notes/data. For the final draft I'll make sure that each pass moves through different voxels in the cortex. For example, this one hits voxels 1, 5, 9, and 13. I'll make sure to hit each of the 14 voxels we have at least once, and keep the voxels within each pass as spaced apart as possible.
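For the curious, here's a minimal sketch in R of one way to do that split - take every nth voxel with a different offset per pass - where the number of passes is just a placeholder, not what the final piece will necessarily use:

    # Split 14 voxel indices into spaced-apart subsets, one subset per run-through.
    # Using every nth index with a different offset per pass keeps each subset
    # spread across the cortex and hits every voxel exactly once.
    voxels   <- 1:14
    n_passes <- 4   # hypothetical number of run-throughs

    subsets <- lapply(seq_len(n_passes), function(offset) {
      voxels[seq(offset, length(voxels), by = n_passes)]
    })

    subsets
    # [[1]] 1 5 9 13   [[2]] 2 6 10 14   [[3]] 3 7 11   [[4]] 4 8 12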

Wednesday, May 15, 2013

Visual System Beta

A couple more renditions of the visual system data - without gating or duration mods - can be found here:

Visual System Beta 1

Visual System Beta 2

The changes essentially come down to how many voxels we're rendering into audio and which sounds are played. In Beta 1 we render 7 of the 14 voxels (every other voxel); in Beta 2 we render 4 of the 14 (every fourth voxel).
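In index terms that's just the following, assuming the 14 voxel timecourses live in the columns of a data frame I'll call voxel_ts (a made-up name, not what the pipeline actually uses):

    beta1_cols <- seq(1, 14, by = 2)   # every other voxel: 1 3 5 7 9 11 13 -> 7 voxels
    beta2_cols <- seq(1, 14, by = 4)   # every fourth voxel: 1 5 9 13 -> 4 voxels

    beta1 <- voxel_ts[, beta1_cols]    # columns fed to the Beta 1 rendering
    beta2 <- voxel_ts[, beta2_cols]    # columns fed to the Beta 2 rendering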

Monday, May 13, 2013

Visual System Alpha


For today I built a very brief MIDI conversion of the data. I wanted to hear it without strong filtering, so there's no gating of the signal and no duration or threshold moderation. The result is pretty cacophonous, so I don't recommend it, but here it is. One interesting thing, though: you can see the wave propagating through the brain in the MIDI data above.

Also, you can see what the task looks like (originally from the Wandell lab) here:


Sunday, April 28, 2013

Gating

A few minor tweaks this week as I focus on getting the vision maps in MrVista. The first is fixing the "play-every-note" syndrome we've had going. In the first interpretation this wasn't obvious, because anything outside the legal range of the instrument simply dropped out, giving us a natural - if sloppy - filter. It was really obvious in last week's rendition, though.

To solve this I added a simple gate: drop the note if the signal is in the bottom 10% of the range. As it turns out, this drops a *ton* of notes, especially in what Jim had characterized as the default mode network. It may be interesting to sometimes reverse the threshold depending on where we are in the brain - if we want to pick up a lot of busy humdrum noise, we might instead kill everything in the top 30% of the range, and so on.
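For reference, here's roughly what that gate looks like in R - a sketch, with the function names and the use of NA to mark dropped notes being my own shorthand rather than the pipeline's actual code:

    # Gate: drop any note whose signal sits in the bottom 10% of that voxel's range.
    gate_notes <- function(signal, cutoff = 0.10) {
      lo <- min(signal)
      hi <- max(signal)
      keep <- (signal - lo) / (hi - lo) >= cutoff   # rescale to 0-1, test against cutoff
      ifelse(keep, signal, NA)                      # NA marks a dropped (silent) note
    }

    # The reversed version for "busy" regions: silence the top 30% instead.
    gate_top <- function(signal, cutoff = 0.30) {
      lo <- min(signal)
      hi <- max(signal)
      keep <- (signal - lo) / (hi - lo) <= (1 - cutoff)
      ifelse(keep, signal, NA)
    }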

Also solved the duration problem in the Java file, thanks to Hongchan. At present it just sets duration to 128/(velocity + 1), so softer notes are held longer.
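Sketched in R rather than the actual Java, the relationship is just:

    # Softer notes (low velocity) get longer durations, louder notes shorter ones.
    note_duration <- function(velocity) 128 / (velocity + 1)

    note_duration(c(0, 63, 127))
    # 128  2  1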

The Latest Result: using the same instruments as last time (flute: attention, violins: visual, cello: default mode network)

Sunday, April 21, 2013

Getting Things in Tune

A flawed, if interesting, aspect of the last audio rendition was that it played outside the legal range of the instruments I mapped it to, resulting in dropped notes. This week I set up an R program that does several things to get around this (a rough sketch of how the pieces fit together follows the list):

1. A converter that builds an array of legal notes in the major scale from any starting note. For instance, if you pick D#0 as your starting note, it will build the D# major scale. It's simple enough to generate one of these for minor and for the other scale types; as long as there is a fixed progression of intervals between the notes of the scale, we're good to go.

2. A repository of legal MIDI note ranges for instruments. If I pick weirder, non-symphonic instruments, I can just specify a few notes that I prefer.

3. A scale-instrument checker that builds a new array of the scale's notes, restricted to that instrument's legal range.

4. A converter that then apportions the incoming data signal across the new legal range the checker provides.

5. The creation of an output file that gives us these new note values in columns.
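Here's a minimal sketch of how those pieces fit together in R. The instrument ranges, root note, and function names below are stand-ins for illustration, not the real code (which you can grab below):

    # 1. All MIDI notes in the major scale built from a given root note (0-127).
    #    The major scale repeats the step pattern W-W-H-W-W-W-H (in semitones).
    build_major_scale <- function(root) {
      steps <- c(2, 2, 1, 2, 2, 2, 1)
      notes <- root + cumsum(c(0, rep(steps, 11)))   # more octaves than we need
      notes[notes >= 0 & notes <= 127]
    }

    # 2./3. Restrict the scale to an instrument's legal MIDI range
    #       (these ranges are placeholders, not the actual repository).
    legal_ranges <- list(flute = c(60, 96), cello = c(36, 76))
    scale_for_instrument <- function(root, instrument) {
      r <- legal_ranges[[instrument]]
      scale <- build_major_scale(root)
      scale[scale >= r[1] & scale <= r[2]]
    }

    # 4. Apportion the incoming signal onto the legal notes: rescale to 0-1,
    #    then index into the legal-note array.
    signal_to_notes <- function(x, legal_notes) {
      scaled <- (x - min(x)) / (max(x) - min(x))
      idx    <- pmax(1, ceiling(scaled * length(legal_notes)))
      legal_notes[idx]
    }

    # 5. Dump the converted column(s) out for the CSV-to-MIDI step, e.g.:
    # write.csv(data.frame(flute = signal_to_notes(x, scale_for_instrument(2, "flute"))),
    #           "notes.csv", row.names = FALSE)   # x = one voxel/network timecourse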

It seemed to work well enough in D major. You can see the code in plain text or download the R file directly. I'm no programmer, so it's fraught with bad style from the naming conventions on, but at least it works! Here's the output song:

New Rendition

You'll notice it's got that ceaseless quality again, which is worsened by the fact that every note is played. I tried playing around with the Java file that does the final output, but nothing I did altered the note duration. I'm open to suggestions in class.

This marks a switchover in my focus for the next 1-1.5 weeks. I'll need to learn MrVista and the visual field maps in order to get the timecourses out of the sample data set. Since that will take some time and we already have a pipeline set up for creating the music from the data, I'll be doing less coding of the pipeline. However, there are a few things I'll want to tinker with on the music side in the coming week:
  • Discovering a way to increase note duration
  • Putting a gating mechanism into the code (only translate a note if its signal is above a certain threshold)
  • Hard-coding scale pipelines other than just major
The first two points should ease up the mechanical nature of things a fair bit. It's coming along!

Monday, April 15, 2013

First Demo

So today I prepped a CSV by hand to think through what I want the signal-converter R file to do. You can find it here. In this first-pass test, I used the results of a PCA on my first study, which has people donating to preserve threatened park land. However, the PCA was done by a friend who misunderstood the timing increment (thinking that we were capturing data every 1 second instead of every 2), so the results should be taken with a grain of salt. It captured 3 distinct neural networks, which roughly appeared to be attentional focus, visual processing, and the "default mode network" (what's lighting up when you've got the internal monologue going but aren't focusing on a task). I used the following equations to define pitch and velocity for each of the 3 networks:

Xi = signal at timepoint i
Xh = max signal
Xl = min signal
Pi = a given network's proportional contribution to the total signal strength of all 3 networks at timepoint i

Pitch: 128*[(Xi - Xl)/(Xh-Xl)]
Velocity: 128*Pi
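
In R terms, assuming each network's timecourse is a numeric vector (the names below are mine, not the converter's), those mappings look something like this:

    # Pitch: rescale one network's signal onto the MIDI range.
    to_pitch <- function(x) {
      as.integer(128 * (x - min(x)) / (max(x) - min(x)))
    }

    # Velocity: a network's share of the summed signal across all three networks
    # at each timepoint, scaled up to MIDI velocity.
    to_velocity <- function(x, total) {
      as.integer(128 * x / total)
    }

    # total <- attention + visual + default_mode   # hypothetical network vectors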

These values (converted to integers) were dumped into a separate prepped CSV file to fit the CSV-to-MIDI converter. That created a .mid file, which I converted to a .midi file with Audacity and then imported into Reaper. Reaper split the tracks by channel, and I used Eastwest QL Symphonic Orchestra and Absynth 5 for the VSTs. The resulting file, which tracks attention with strings, visual processing with woodwinds, and the default mode network with a synth, can be found here.

You can hear really abrupt notes. After going into the Java file for the CSV-to-MIDI converter, I realized this is because the note_on and note_off commands use the same Tick variable. So in the future I'll mod their converter to take a duration variable, so that note_off occurs at (note_on + dur). I also found a trove of MIDI messages that I should be able to invoke in the Java file. That gives me a specific way to interpret data as pitch bend, mod wheel, pedal, control change, and so on, which I otherwise wouldn't be able to get at using their vanilla program.
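
Sketched in R for clarity (the actual change would live in the Java converter, and the tick and duration values here are made up), the planned fix amounts to carrying a duration column and offsetting the note_off tick by it:

    notes <- data.frame(
      tick_on  = c(0, 96, 192),   # hypothetical note_on ticks
      duration = c(48, 48, 96)    # hypothetical durations, in ticks
    )
    notes$tick_off <- notes$tick_on + notes$duration   # note_off = note_on + dur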