
Audio Programming with Cinder Part I

I created this tutorial because I was surprised at the lack of documentation and support for audio programming using the Cinder framework. The tutorial is split up into three parts:

Part one explains how to download Cinder, set up a basic application, add an audio callback to the project and use it to output a sine wave. Part two describes how to add the Gamma synthesis library to Cinder and demonstrates how it can be used to set up a simple FM synth. Part three shows how to build an interface for this synthesizer using the ciUI library.  It ends with the creation of a simple waveform visualizer, leaving us with a nice little audio/visual instrument app.  Here’s a preview of the final result:

Part I: Installing Cinder and Creating an Audio Callback

1. Download and install the latest version of Cinder from here.

2. Open up TinderBox inside the Cinder/tools folder. This tool helps us create a new Cinder project. Make sure you use these settings:

Target: Basic App (default)
Template: OpenGL (default)
Project Name: CinderGamma

Then decide where you want to put the project and choose Xcode as the development environment. Click “Create” and TinderBox will generate a folder called CinderGamma with everything the project needs.

3. Inside this folder, expand the xcode folder and double-click the Xcode project. Once Xcode opens, you should be able to build and run the project, which launches a simple Cinder application with a plain black screen. If this step isn’t working, check your build settings and make sure “Base SDK” is set to “Latest OS X…” rather than something unexpected; I’ve had problems with this in the past.

4. Now that the basic app is building and running, we have access to a huge set of graphics and interface tools. For instance, we can easily change the background color by altering the RGB values in the draw function of our main program file, CinderGammaApp.cpp:

void CinderGammaApp::draw()
{
    // clear out the window with blue instead of black
    gl::clear( Color( 0, 0, 0.7 ) );
}

But this tutorial is about sound, not graphics, so let’s get started. First we need to include two header files that provide audio output and the audio callback mechanism.

#include "cinder/audio/Output.h"
#include "cinder/audio/Callback.h"

Then we need to add a callback prototype and two floats to our CinderGammaApp class.

void myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, ci::audio::Buffer32f * ioBuffer );

float phaseIncrement;
float phase;

Then initialize the phase and start the callback in our setup function.

void CinderGammaApp::setup()
{
    phase = 0.0f; // start the oscillator at zero phase so the first samples are well defined
    audio::Output::play( audio::createCallback( this, &CinderGammaApp::myAudioCallback ) );
}

And finally we need to implement our actual audio callback.

void CinderGammaApp::myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, audio::Buffer32f * ioBuffer )
{
    // phase increment per sample = ( frequency / sample rate ) * 2 * pi
    phaseIncrement = ( 300.0f / 44100.0f ) * 2.0f * (float)M_PI;
    for ( uint32_t i = 0; i < ioSampleCount; i++ ) {
        phase += phaseIncrement;
        float tempVal = math<float>::sin( phase );

        // write the same value to the left and right channels
        ioBuffer->mData[ i * ioBuffer->mNumberChannels ] = tempVal;
        ioBuffer->mData[ i * ioBuffer->mNumberChannels + 1 ] = tempVal;

        // wrap the phase back into [0, 2*pi) to prevent overflow problems
        if ( phase >= 2.0f * (float)M_PI )
            phase -= 2.0f * (float)M_PI;
    }
}
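If you want to check the phase math outside of Cinder, here is a minimal standalone sketch of the same phase-accumulation oscillator (my own illustration, independent of the Cinder API). At 300 Hz and a 44.1 kHz sample rate the increment works out to roughly 0.0427 radians per sample.

#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const float sampleRate = 44100.0f;
    const float frequency  = 300.0f;
    // same formula as the callback: radians to advance per sample
    const float phaseIncrement = ( frequency / sampleRate ) * 2.0f * (float)M_PI;

    float phase = 0.0f;
    std::vector<float> buffer( 512 );
    for ( size_t i = 0; i < buffer.size(); i++ ) {
        buffer[i] = std::sin( phase );
        phase += phaseIncrement;
        // wrap the phase so the float never grows without bound
        if ( phase >= 2.0f * (float)M_PI )
            phase -= 2.0f * (float)M_PI;
    }
    printf( "phase increment: %f radians per sample\n", phaseIncrement );
    return 0;
}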

5. Now we have a simple app that opens and plays a 300 Hz sine tone. The complete code is posted below for your convenience. Our next step will be to add the Gamma synthesis library, which has a staggering number of signal processing tools that we can use to make awesome sounds.

Complete program:

#include "cinder/app/AppBasic.h"
#include "cinder/gl/gl.h"
#include "cinder/audio/Output.h"
#include "cinder/audio/Callback.h"

using namespace ci;
using namespace ci::app;
using namespace std;

class CinderGammaApp : public AppBasic {
public:
    void setup();
    void mouseDown( MouseEvent event );
    void update();
    void draw();
    void myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, ci::audio::Buffer32f * ioBuffer );
private:
    float phaseIncrement;
    float phase;
};

void CinderGammaApp::setup()
{
    phase = 0.0f; // start the oscillator at zero phase so the first samples are well defined
    audio::Output::play( audio::createCallback( this, &CinderGammaApp::myAudioCallback ) );
}

void CinderGammaApp::mouseDown( MouseEvent event )
{
}

void CinderGammaApp::update()
{
}

void CinderGammaApp::draw()
{
    // clear out the window with blue
    gl::clear( Color( 0, 0, 0.7 ) );
}

void CinderGammaApp::myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, audio::Buffer32f * ioBuffer )
{
    // phase increment per sample = ( frequency / sample rate ) * 2 * pi
    phaseIncrement = ( 300.0f / 44100.0f ) * 2.0f * (float)M_PI;
    for ( uint32_t i = 0; i < ioSampleCount; i++ ) {
        phase += phaseIncrement;
        float tempVal = math<float>::sin( phase );

        // write the same value to the left and right channels
        ioBuffer->mData[ i * ioBuffer->mNumberChannels ] = tempVal;
        ioBuffer->mData[ i * ioBuffer->mNumberChannels + 1 ] = tempVal;

        // wrap the phase back into [0, 2*pi) to prevent overflow problems
        if ( phase >= 2.0f * (float)M_PI )
            phase -= 2.0f * (float)M_PI;
    }
}

CINDER_APP_BASIC( CinderGammaApp, RendererGl )

Drip: a Water Powered Sound Installation

I created this piece in collaboration with the new media artist Muhammad Hafiz Wan Rosli this spring. It was featured in the UCSB Media Arts and Technology Program’s “Bits and Pieces” Exhibition back in May, and we’ll also be showing it on September 1st at the Soundwalk Festival in Long Beach.

This is a technical description from the Soundwalk proposal:

“Drip is an interactive sound sculpture consisting of 16 tuned metal bars hung from a 3” by 3” by 5” high (freestanding) iron frame. Attached to the frame above each bar are solenoid water valves that can be triggered by an Arduino microcontroller. As the valves are opened and closed, drops of water pass through them falling onto each of the sixteen bars. The resulting sound is acoustically amplified through attached piezoelectric microphones. This action of falling water produces rhythms and melodies which are sequenced in real time and which can be altered by the audience’s interaction via light sensors embedded in the piece. Since all sound is generated acoustically, viewers can also interact with the piece by directly tapping the bars or plucking the nylon wire that suspends them in the air. The resulting soundscape is something like a surrealist version of rain falling on a tin roof or a collection of gongs being struck in chaotic mathematical patterns.”
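The installation’s firmware isn’t included here, but to give a rough sense of the triggering scheme described above, here is a simplified Arduino-style sketch that pulses a single solenoid valve. The pin number and timing are made-up illustration values, not what the piece actually uses; in the real installation sixteen of these channels are sequenced in real time and altered by the light sensors.

// Illustrative sketch: pulse one solenoid valve to release a single drop.
// Pin number and timing are guesses for illustration, not the installation's firmware.
const int VALVE_PIN = 7;           // digital pin driving the valve's relay or transistor
const unsigned long OPEN_MS = 30;  // how long the valve stays open per drop

void setup()
{
    pinMode( VALVE_PIN, OUTPUT );
    digitalWrite( VALVE_PIN, LOW );
}

void loop()
{
    digitalWrite( VALVE_PIN, HIGH );   // open the valve: one drop falls onto the bar below
    delay( OPEN_MS );
    digitalWrite( VALVE_PIN, LOW );    // close the valve
    delay( 970 );                      // wait before the next drop, roughly one per second
}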


Standing Waves

“Standing Waves” is an interactive multimedia installation based on a 3D implementation of the universal wave equation. Motion-tracking controllers allow the audience to physically “drag” waves through the virtual pool. A system of oscillators hidden beneath these waves is used to sonify the amount of energy at each region of the system. Because each oscillator is tuned to a different frequency, participants can hear energy propagate throughout the system as they interact with it.

The piece was created with Processing using the glGraphics, PeasyCam, and minim libraries.
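The piece itself runs in Processing, but the core idea, a discretized wave equation updating a grid of displacement values, can be sketched in a few lines of C++. The grid size, wave speed, and damping below are placeholder numbers, not the values used in the installation.

#include <cstdio>
#include <vector>

int main()
{
    // Rough sketch of a discretized 2D wave equation, not the installation's actual code.
    const int N = 64;             // grid resolution (placeholder)
    const float c2 = 0.25f;       // (wave speed * dt / dx)^2, kept small for stability
    const float damping = 0.999f;

    std::vector< std::vector<float> > prev( N, std::vector<float>( N, 0.0f ) );
    std::vector< std::vector<float> > curr( N, std::vector<float>( N, 0.0f ) );
    std::vector< std::vector<float> > next( N, std::vector<float>( N, 0.0f ) );

    curr[N/2][N/2] = 1.0f;        // "drag" a disturbance into the middle of the pool

    for ( int step = 0; step < 100; step++ ) {
        for ( int i = 1; i < N - 1; i++ ) {
            for ( int j = 1; j < N - 1; j++ ) {
                float laplacian = curr[i+1][j] + curr[i-1][j] + curr[i][j+1] + curr[i][j-1] - 4.0f * curr[i][j];
                next[i][j] = damping * ( 2.0f * curr[i][j] - prev[i][j] + c2 * laplacian );
            }
        }
        prev.swap( curr );
        curr.swap( next );
    }

    // Something like the squared displacement in each region could then set that region's oscillator amplitude.
    printf( "center displacement after 100 steps: %f\n", curr[N/2][N/2] );
    return 0;
}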


Voice of Sisyphus Presented at ICAD

On June 19th I gave a talk with my colleague Ryan McGee at the 2012 International Conference on Auditory Display (ICAD), hosted by Georgia Tech. Our presentation was about image sonification (turning pixels into sound) and a piece we created using this technique called “Voice of Sisyphus.” Here’s a link to the white paper: Voice of Sisyphus: An Image Sonification Multimedia Installation

Voice of Sisyphus is a multimedia installation created in conjunction with Ryan McGee under the artistic direction of George Legrady. It opens at Nature Morte Gallery, Berlin on September 7th, 2012 and was displayed at the Edward Cella Gallery, Los Angeles November 2011 – February 2012.
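The white paper covers the actual technique in detail; as a bare-bones illustration of the general pixels-to-sound idea (a deliberate simplification, not the method used in the piece), you could scan a grayscale image row by row and map each pixel’s brightness to an audio sample:

#include <cstdio>
#include <vector>

// Simplified illustration of image sonification, not the piece's actual method:
// scan a grayscale image row by row and map each pixel's brightness to a sample.
std::vector<float> sonifyImage( const std::vector<unsigned char> &pixels, int width, int height )
{
    std::vector<float> samples;
    samples.reserve( width * height );
    for ( int y = 0; y < height; y++ ) {
        for ( int x = 0; x < width; x++ ) {
            unsigned char brightness = pixels[ y * width + x ];
            samples.push_back( ( brightness / 127.5f ) - 1.0f ); // 0..255 -> -1..1
        }
    }
    return samples;
}

int main()
{
    // tiny fake "image": a 4x4 horizontal gradient
    std::vector<unsigned char> img = { 0, 64, 128, 255,  0, 64, 128, 255,
                                       0, 64, 128, 255,  0, 64, 128, 255 };
    std::vector<float> out = sonifyImage( img, 4, 4 );
    printf( "%zu samples, first = %f, last = %f\n", out.size(), out.front(), out.back() );
    return 0;
}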


Oscilloscope Interface Studies

Over the past year I’ve been experimenting every so often with an audio-visual performance interface based on Lissajous displays and the 3D Oscilloscope series by Dan Iglesia. My version uses a glove-based motion capture interface that we have here at UCSB in the Allosphere, our 3-story multimedia environment. Here’s what they look like:

And here are a few screenshots of the performances. Basically the performer puts on these gloves, which control the frequencies of multiple oscillators. These are then visualized as an oscilloscope display, and the resulting pixels are translated into sound. A skilled performer can access many interesting shapes like the ones below by tuning the oscillators in harmonic ratios.
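For the curious: when one oscillator drives the x axis and another drives the y axis, the trace is a Lissajous figure, and simple harmonic ratios between the two frequencies produce closed, stable curves. Here is a tiny sketch of that idea; the 3:2 ratio and the phase offset are arbitrary illustration values, not settings from the actual performance system.

#include <cmath>
#include <cstdio>

int main()
{
    // Two oscillators in a 3:2 harmonic ratio, plotted against each other as (x, y),
    // trace out a closed Lissajous curve.
    const float twoPi = 2.0f * (float)M_PI;
    const int numPoints = 16;
    for ( int i = 0; i < numPoints; i++ ) {
        float t = (float)i / numPoints;
        float x = std::sin( 3.0f * twoPi * t );          // "x" oscillator
        float y = std::sin( 2.0f * twoPi * t + 0.5f );   // "y" oscillator with a phase offset
        printf( "point %2d: ( % f, % f )\n", i, x, y );
    }
    return 0;
}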


Nebulous

This is an animation of wave propagation mixed with reaction diffusion that I programmed in C++. I start by triggering a single red wave in the center, which sets the complex system in motion.
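The exact system behind Nebulous isn’t reproduced here, but for a rough sense of what a reaction-diffusion update step looks like, here is a simplified one-dimensional Gray-Scott-style sketch. The feed and kill rates are common textbook values, not the parameters used in the piece.

#include <cstdio>
#include <vector>

int main()
{
    // Simplified 1D Gray-Scott reaction-diffusion sketch; parameters are generic, not Nebulous's.
    const int N = 128;
    const float Du = 0.16f, Dv = 0.08f;       // diffusion rates for the two chemicals
    const float feed = 0.035f, kill = 0.065f;

    std::vector<float> u( N, 1.0f ), v( N, 0.0f );
    for ( int i = N/2 - 4; i < N/2 + 4; i++ )  // seed a small disturbance in the center
        v[i] = 0.5f;

    std::vector<float> uNext( N, 1.0f ), vNext( N, 0.0f );
    for ( int step = 0; step < 1000; step++ ) {
        for ( int i = 1; i < N - 1; i++ ) {
            float lapU = u[i-1] + u[i+1] - 2.0f * u[i];
            float lapV = v[i-1] + v[i+1] - 2.0f * v[i];
            float uvv  = u[i] * v[i] * v[i];   // the "reaction" term
            uNext[i] = u[i] + Du * lapU - uvv + feed * ( 1.0f - u[i] );
            vNext[i] = v[i] + Dv * lapV + uvv - ( feed + kill ) * v[i];
        }
        u.swap( uNext );
        v.swap( vNext );
    }
    printf( "center concentrations after 1000 steps: u = %f, v = %f\n", u[N/2], v[N/2] );
    return 0;
}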

It works well as a metaphor for the creation of the universe because slight irregularities in the starting conditions allow unstable patterns to emerge (in this case, the trigger wave was ever so slightly off-center). By measuring the cosmic microwave background radiation (CMB), physicists have discovered that the early universe was slightly irregular. This irregularity is what allows galaxies, stars, and planets to form. If instead it were completely uniform, it would have remained in a stable state and nothing would have emerged. Similarly, this animation starts out with very symmetrical, slow-moving patterns but as time goes on these become more chaotic and complex.

Drawing inspiration from animators such as Oskar Fischinger, I set this to a classical score that I think fits quite nicely. The piece is movement 15 of “Vingt regards sur l’enfant-Jésus” (Twenty Contemplations on the Infant Jesus) by Olivier Messiaen.


Algorhythmic Dubstep Competition


The British algorithmic dubstep artist Kiti le Step recently released the source code for their latest track, and the SuperCollider Symposium is sponsoring a competition to see who can create the best remix, either through traditional means or by hacking the code to create a new self-generative composition. The winner will get a Novation Launchpad ( !! ) and, as an added twist, the competition will be judged entirely by computers. I decided to take a stab at modifying the code and to use it as an excuse to learn SuperCollider. Here’s the result that I entered into the competition. Hopefully the A.I. judges find my beat sufficiently dirty.

[audio:http://amusesmile.com/old/sound/algostepMaster.mp3|titles=Golliwog’s Cakestep]