Wednesday, May 22, 2013

Final Rendition

You can find the final version of the Visual Cortex Quartet here. Unlike the first rendition, it draws from two sets of four voxels each: one set for the first three run-throughs of the data, and a second for the final two. Daisy Morin built a visual movie of this so we can see how the task progresses as the music plays - unfortunately we couldn't sync the timing closely enough, so it ends up being a bit distracting from the piece.
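In code, the structure of that final rendition boils down to something like the following R sketch; the voxel matrix and the particular column indices here are placeholders rather than the real ones (the actual script is linked in the next paragraph).

    # Minimal sketch of the final rendition's structure (all values simulated):
    # five passes over the data, the first three drawing on one set of four
    # voxels and the final two drawing on a second set.
    vox <- matrix(rnorm(140 * 14), ncol = 14)  # stand-in for the 14-voxel time series

    set1 <- c(1, 5, 9, 13)   # hypothetical voxel indices for run-throughs 1-3
    set2 <- c(3, 7, 11, 14)  # hypothetical voxel indices for run-throughs 4-5

    passes <- list(set1, set1, set1, set2, set2)
    piece  <- lapply(passes, function(cols) vox[, cols])  # one sub-matrix per pass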

For those who want to check out the R code at the end of the run-through, it's here in R format and here in plaintext format. Lastly, the class wrap-up slides can be found here. Thanks to everyone - especially Chris and Hongchan - for a wonderful class!



Sunday, May 19, 2013

Visual Cortex Quartet

Here's a new rendition of the visual data. Currently Daisy Morin, who helped with film editing on a student film I wrote and directed years ago, is helping to sync this to the task/brain imaging we saw in the last post. In this piece, I've created several movements that run through the data at different speeds. This is close to what I'll run with at the end; at present, however, each run-through uses the exact same notes/data. For the final draft I'll make sure each pass moves through different voxels in the cortex - this one, for example, hits voxels 1, 5, 9, and 13. I'll make sure each of the 14 points we have gets hit at least once, keeping them as spaced apart as possible.
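For a concrete sense of "spaced apart", here's one way to partition the 14 voxels into evenly spaced groups in R - purely an illustration, not necessarily how the final script will pick them.

    # Split 14 voxels into spaced-apart groups so every voxel is hit
    # at least once across the passes (illustrative only).
    n_vox  <- 14
    groups <- lapply(1:4, function(offset) seq(offset, n_vox, by = 4))
    # groups[[1]] is 1, 5, 9, 13 -- the pass described above;
    # the remaining groups cover 2,6,10,14 / 3,7,11 / 4,8,12.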

Wednesday, May 15, 2013

Visual System Beta

A couple more renditions of the visual system data - without gating or duration modifications - can be found here:

Visual System Beta 1

Visual System Beta 2

The changes essentially come down to how many of the voxels we render into audio and which sounds are played. In Beta 1 we render 7 of the 14 voxels (every other one); in Beta 2 we render 4 of the 14 (every fourth).
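In R terms, the voxel selection for the two betas is just a change of stride across the 14 columns. This small sketch assumes the time series sits in a matrix with one column per voxel, which isn't necessarily how the real script stores it.

    vox <- matrix(rnorm(140 * 14), ncol = 14)   # stand-in for the 14-voxel time series

    beta1 <- vox[, seq(1, 14, by = 2)]  # every other voxel  -> 7 of 14
    beta2 <- vox[, seq(1, 14, by = 4)]  # every fourth voxel -> 4 of 14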

Monday, May 13, 2013

Visual System Alpha


For today I built a very brief MIDI conversion of the data. I wanted to hear it without strong filtering, so there's no gating of the signal and no duration or threshold moderation. The result is pretty cacophonous, so I don't recommend it, but here it is. One interesting thing, though: in the MIDI data above you can see the wave propagating through the brain.
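The unfiltered conversion amounts to rescaling each voxel's signal onto a range of MIDI note numbers and keeping every sample - no gating, no threshold, fixed durations. A rough R sketch (the pitch range and data layout are made up for illustration):

    # Map every sample of every voxel straight to a MIDI note number,
    # with no gating, no threshold, and a fixed duration per note.
    vox <- matrix(rnorm(140 * 14), ncol = 14)   # stand-in for the voxel time series

    to_midi_note <- function(x, lo = 36, hi = 96) {
      # linearly rescale a signal onto MIDI notes lo..hi (C2..C7 here, arbitrary)
      round((x - min(x)) / (max(x) - min(x)) * (hi - lo) + lo)
    }

    notes <- apply(vox, 2, to_midi_note)  # one note sequence per voxel

Sounding all 14 of those streams at once with nothing filtered out is exactly what makes the result so cacophonous.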

Also, you can see what the task looks like (originally from the Wandell lab) here: