Tag Archives: Music

Code DSP

Spatial Upmixer

I sort of got obsessed with the idea of being able to take a recording of a soloist and make it sound like an ensemble. As I found out, it’s not an easy thing to do, but by building on spatialization software written by Ryan McGee, I was able to make something that sounds pretty decent (imho).

Technically speaking, this process is called “decorrelated upmixing,” and it’s much harder than just adding a bunch of small delays to a signal. In real life, no two instruments ever play perfectly in phase with one another, but when we duplicate a single signal, that’s basically what’s happening, so we often end up with a really strange “alien” phase-cancellation effect that sounds completely unnatural. My software gets around this with all sorts of filters and randomization techniques. There’s still much work to be done, but I was able to create some realistic ensembles as well as some interesting effects on a variety of files. If anything, it might be a quick way to sneak around copyright if you want to use someone’s recording…
[audio:http://amusesmile.com/old/projects/240B/violin.mp3|titles=Bach Chaconne Solo]
[audio:http://amusesmile.com/old/projects/240B/violin_multi.mp3|titles=Bach Chaconne Ensemble]
More examples on the full project page.
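
For the curious, here’s the rough shape of one decorrelation trick, sketched in Python: give each virtual “player” a small random delay, then partially randomize the phase spectrum of each copy while preserving its magnitude, so the copies no longer sum coherently. This is just a toy illustration of the general idea, not my actual software; the function name, the 70/30 phase blend, and the delay range are all arbitrary choices for this example.

```python
import numpy as np

def decorrelate(x, num_copies=4, seed=0, max_delay=0.03, sr=44100):
    """Naive decorrelated upmix: each copy gets a random short delay
    plus a partially randomized phase spectrum (magnitude preserved),
    so the copies stop summing in phase like a single doubled signal."""
    rng = np.random.default_rng(seed)
    n = len(x)
    copies = []
    for _ in range(num_copies):
        # random sub-30 ms delay, like players slightly out of sync
        d = int(rng.integers(0, int(max_delay * sr)))
        y = np.concatenate([np.zeros(d), x])[:n]
        # all-pass-style trick: keep |X|, scramble some of the phase
        X = np.fft.rfft(y)
        random_phase = rng.uniform(-np.pi, np.pi, X.shape)
        # blend original and random phase (arbitrary 70/30 mix here)
        # to decorrelate without totally smearing the transients
        X = np.abs(X) * np.exp(1j * (0.7 * np.angle(X) + 0.3 * random_phase))
        copies.append(np.fft.irfft(X, n))
    # sum the "players" back down to one ensemble track
    return np.mean(copies, axis=0)
```

Averaging the copies instead of concatenating channels keeps the example mono; a real spatializer would pan each copy to its own position instead.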

Music

Violin Experiment

So out of the blue yesterday, one of my friends sold me a violin for 75 bucks. Random, right? I didn’t ask where he picked it up…

Anyway, this is my first experiment with the instrument. Since I can’t play the violin to save my life, I’m using an improved version of my remix software to turn simple plucking into a coherent beat.
[audio:http://amusesmile.com/old/sound/violin_augment.mp3|titles=Violin Augment]

Music

Sonic Digitalization

Working in Max, I built a performance interface for analyzing and chopping live samples based on attack points, changes in frequency, etc. This allows you to “perform” the sample by playing it back through a series of sequencers while adding effects, changing playback speed, and generally tweaking all sorts of parameters on the fly.

I’ve performed this alone and as a duet with other performers. Usually it works by having them play ~10 seconds of material into the program, which I then use to improvise an accompaniment that they can play over. I enjoy the effect of “trapping” an acoustic instrument inside a digital medium and sound. When done right, it sounds more organic than an electronic piece, but less so than an actual human performer.
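
To give a rough idea of what “chopping based on attack points” means, here’s a toy energy-based attack detector in Python. It’s a crude stand-in for what the Max patch does, not the patch itself; the frame size and thresholds are arbitrary values I picked for the sketch.

```python
import numpy as np

def attack_points(x, sr=44100, frame=512, thresh=2.0):
    """Crude attack detector: compute frame-wise RMS energy and flag
    frames whose energy jumps by more than `thresh` times the previous
    frame's. Returns attack times in seconds, which a sequencer could
    then use as slice points for chopped playback."""
    n_frames = len(x) // frame
    rms = np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    onsets = []
    for i in range(1, n_frames):
        # the tiny epsilon keeps silence-to-sound jumps from dividing
        # by zero; the 0.01 floor ignores near-silent noise
        if rms[i] > thresh * (rms[i - 1] + 1e-9) and rms[i] > 0.01:
            onsets.append(i * frame / sr)
    return onsets
```

A real detector would also track spectral change (the “change in frequency” part), but the energy jump alone already finds plucks and drum hits in clean recordings.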

Flute:
[audio:http://amusesmile.com/old/sound/flute5.mp3,http://amusesmile.com/old/sound/fluteBassM.mp3,http://amusesmile.com/old/sound/fluteBreathM.mp3,http://amusesmile.com/old/sound/fluteDuetM.mp3,http://amusesmile.com/old/sound/fluteHalfM.mp3|titles=flute chop 5,sub two step,breathy stutter,duet,flute chop 4]
Saxophone:
[audio:http://amusesmile.com/old/sound/hyperSax1.mp3,http://amusesmile.com/old/sound/saxElectric.mp3,http://amusesmile.com/old/sound/saxTranceM.mp3,http://amusesmile.com/old/sound/saxTrance2M.mp3,http://amusesmile.com/old/sound/saxCom.mp3|titles=hyper,breathing machine, trance, trance 2, unison at the fourth]
Drums:
[audio:http://amusesmile.com/old/sound/alienDrumM.mp3|titles=stellar tabla]
Bass:
[audio:http://amusesmile.com/old/sound/mingusChop.mp3|titles=mingus chop]
Water:
[audio:http://amusesmile.com/old/sound/dripM.mp3|titles=drip]

Music

Folie à Deux

For MIDI piano and any melodic instrument
2009

Score

This piece uses “com_poser,” my MaxMSP-based automated performance/composition tool. For each movement, the basic tonal area and length are chosen to begin with. Parameters are also set to determine form: tempo, rhythmic type/sporadicity, harmonic depth, number of voices, number of rhythmic voices, volume range/sporadicity, and harmonic range/sporadicity are all graphed in time beforehand in order to control the MIDI player piano’s improvisation. This creates the skeleton of the piece, which the performer uses to improvise his/her own accompaniment.
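
Just to illustrate the idea of “graphing parameters in time” (a toy Python sketch, not the actual com_poser patch; the curve shapes, names, and scale logic are all invented for this example): each parameter becomes a function of normalized time, and the generator samples those functions on every beat to decide what the piano plays next.

```python
import random

def render_skeleton(length_beats=16, tonic=60, seed=1,
                    tempo_curve=lambda t: 90 + 30 * t,       # BPM ramps up
                    range_curve=lambda t: 5 + int(10 * t)):  # range widens
    """Toy parameter-curve improviser: tempo and pitch range are
    functions of normalized time t in [0, 1], sampled once per beat.
    Returns (onset_seconds, midi_pitch) events."""
    rng = random.Random(seed)
    events, clock = [], 0.0
    scale = [0, 2, 4, 5, 7, 9, 11]  # stay inside one major-scale tonal area
    for beat in range(length_beats):
        t = beat / length_beats
        span = range_curve(t)
        # pick a scale degree within the current (widening) range
        degree = rng.randrange(-span, span + 1)
        pitch = tonic + scale[degree % 7] + 12 * (degree // 7)
        events.append((round(clock, 3), pitch))
        clock += 60.0 / tempo_curve(t)  # seconds per beat at this tempo
    return events
```

The real tool graphs many more parameters (volume, rhythmic sporadicity, voice counts) the same way, but the mechanism is the same: curves drawn beforehand, sampled during performance.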

Philosophically, this plays on the idea of interactivity: since the parameters are chosen beforehand, often by the player him/herself, is there really any interaction, or is this just some sort of elaborate solo? It also works as a musical demonstration of the Turing test: many people unfamiliar with player pianos assume the recording is of an actual human player. Even those familiar with computer music can’t always tell whether the piano is responding to the instrumentalist, or vice versa, or both. What does it mean when we can no longer tell?

I. Contagion
[audio:http://amusesmile.com/old/sound/fol1.mp3|titles=contagion]
II. Folie imposée
[audio:http://amusesmile.com/old/sound/fol2.mp3|titles=folie imposée]
III. Afferent Feedback
[audio:http://amusesmile.com/old/sound/fol3.mp3|titles=afferent feedback]
IV. Paracusia
[audio:http://amusesmile.com/old/sound/fol4.mp3|titles=paracusia]
V. Folie simultanée
[audio:http://amusesmile.com/old/sound/fol5.mp3|titles=folie simultanée]