Category Archives: Code

Art Code Music

Virtual Reality Music Visualization


Under the name “Echobit,” Brian Hansen and I have been performing immersive VJ sets and audio-visual experiments using the Oculus Rift. We apply audio feature extraction and MIR techniques to create rich, interactive visuals. Users can explore visual worlds that react to musical material in real time. The visuals are also projected on the wall so that the entire audience can take part in the experience.

I believe it’s the first application of Virtual Reality technology to VJing, and to music visualization in general.

It is built in openFrameworks using a custom system for generating audio-reactive geometry and GLSL shaders.


Art Code

Interactive Installation for SB Museum


I created an interactive touchscreen application for the Santa Barbara Museum of Art that enabled museum patrons to construct their own camera-less images (photograms) in the style of Laszlo Moholy-Nagy, then upload the results to social media. There were nearly a thousand finished submissions, which can be viewed in this Flickr album. Here are a few of the results:


It was very interesting trying to strike a balance between a simple, intuitive interface and enough depth for more subtle artistic expression (i.e. “low floor, high ceiling”). Based on how I observed people using it, it seems to have been relatively effective in this regard: some users would create a photogram from start to finish in only 20 seconds, while others would refine the placement of their objects for 15 minutes or more.


Art Code

Photogram Simulation for SBMA

Here are a few screenshots of an interactive app I’m installing for an upcoming exhibition at the Santa Barbara Museum of Art. The show is called The Paintings of Moholy-Nagy: The Shape of Things to Come. I’m trying to create a realistic simulation of a “photogram,” an image made by placing objects directly on light-sensitive paper. Users will use a touchscreen to place and rotate the various objects before “exposing” them to create the finished photogram. The results are starting to look more convincing, although there’s still some work to do on the layering and the simulated perspective shifts. Everything is done in Canvas/JavaScript.


Art Code Voice of Sisyphus

VOS going to Shanghai


Voice of Sisyphus will be shown at the Chronus Art Center in Shanghai starting in June. I developed the compositional software that controls the piece.

Art Code

Harold Cohen’s Coloring Algorithm in C++

Harold Cohen’s coloring formula is a heuristic method for probabilistically controlling randomly generated color saturation and lightness values. It has been used extensively by his AARON algorithmic painter, and is summarized in detail in his 2006 essay “Color, Simply.” I ported the algorithm to C++ as a class, which you can download here (it currently relies on AlloCore; a standalone version is coming soon).
The algorithm can be described as follows:

  1. Three normalized number ranges are chosen, corresponding to low, medium, and high values. For example: 0.15-0.35 (L), 0.4-0.65 (M), 0.8-1.0 (H).
  2. These are set up in a 3×3 matrix, each cell corresponding to a possible saturation-lightness pairing. For example, a low-low (LL) pairing would provide both saturation and lightness values chosen randomly from within the low range.
  3. During initialization, a probability value is assigned to each of the 9 pairing possibilities in the matrix. Cohen suggests only using 2-3 of them per composition, for example: 20%-LL, 40%-HL, and 40%-MH.
  4. When a new color is desired, one of these range pairs is selected based on its assigned probability, and then a specific saturation-lightness pair is chosen randomly from within the selected pair’s ranges.
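A minimal sketch of these four steps in C++ (my own illustration using the standard library, not Cohen’s code or the AlloCore class mentioned above):

```cpp
#include <random>
#include <utility>
#include <vector>

// A normalized value range to draw from (step 1).
struct Range { float lo, hi; };

// One saturation-lightness pairing from the 3x3 matrix,
// with its selection probability (steps 2-3).
struct Pairing { Range sat; Range light; float probability; };

struct CohenColorPicker {
    // The example low/medium/high ranges from step 1.
    Range L{0.15f, 0.35f}, M{0.40f, 0.65f}, H{0.80f, 1.00f};
    // Step 3: Cohen's suggested 20%-LL, 40%-HL, 40%-MH assignment.
    std::vector<Pairing> pairings{ {L, L, 0.2f}, {H, L, 0.4f}, {M, H, 0.4f} };
    std::mt19937 rng{42};

    float uniform(const Range& r) {
        return std::uniform_real_distribution<float>(r.lo, r.hi)(rng);
    }

    // Step 4: pick a range pair by its probability, then draw a concrete
    // {saturation, lightness} pair from within its ranges.
    std::pair<float, float> next() {
        float roll = std::uniform_real_distribution<float>(0.0f, 1.0f)(rng);
        for (const Pairing& p : pairings) {
            if (roll < p.probability)
                return { uniform(p.sat), uniform(p.light) };
            roll -= p.probability;
        }
        // Guard against floating-point rounding: fall back to the last pairing.
        const Pairing& last = pairings.back();
        return { uniform(last.sat), uniform(last.light) };
    }
};
```

Every color drawn this way lands in one of the two or three chosen cells, which is what gives a composition its consistent palette while remaining random in the details.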
Code DSP Tutorial

Linkwitz-Riley Filters

If you’re trying to split an audio signal into three frequency bands without using a costly FFT, the Linkwitz-Riley crossover technique is a great solution. A Linkwitz-Riley filter is just two identical Butterworth filters in series (which is why it’s sometimes called the “Butterworth squared” filter), and it has the useful property of a flat summed response at the crossover point. This means that if you pass a signal through a low-pass and a high-pass version and add the results, you’ll basically get back your original signal (all-passed).

This is a schematic of a 3-band splitter using this principle. Basically, one audio signal is split in half and then one half is split in half again, giving us three separate bands. The only odd thing in the diagram is the two filters in parallel on the low band. These are necessary to align the phase of the bottom band with the top two bands. Every time a signal passes through one of these LR filters its phase gets shifted, so if the bottom band only went through one filter while the top two bands go through two (one initial and one after the split), the top bands would be out of phase with the bottom and you’d get bad dips and peaks in the summed frequency response.
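As a quick illustration (my own sketch, not production crossover code), a 2nd-order Linkwitz-Riley section really is just two identical first-order Butterworth filters in series, designed here with the bilinear transform:

```cpp
#include <cmath>

// First-order Butterworth section designed with the bilinear transform.
struct OnePole {
    float b0 = 0, b1 = 0, a1 = 0;   // y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
    float x1 = 0, y1 = 0;           // one sample of filter state

    void setLowpass(float fc, float fs) {
        float k = std::tan(3.14159265f * fc / fs);  // prewarped frequency
        b0 = b1 = k / (k + 1.0f);
        a1 = (k - 1.0f) / (k + 1.0f);
    }
    void setHighpass(float fc, float fs) {
        float k = std::tan(3.14159265f * fc / fs);
        b0 = 1.0f / (k + 1.0f);
        b1 = -b0;
        a1 = (k - 1.0f) / (k + 1.0f);
    }
    float process(float x) {
        float y = b0 * x + b1 * x1 - a1 * y1;
        x1 = x;
        y1 = y;
        return y;
    }
};

// 2nd-order Linkwitz-Riley: the same Butterworth filter twice in series
// ("Butterworth squared").
struct LR2 {
    OnePole s1, s2;
    void setLowpass(float fc, float fs)  { s1.setLowpass(fc, fs);  s2.setLowpass(fc, fs); }
    void setHighpass(float fc, float fs) { s1.setHighpass(fc, fs); s2.setHighpass(fc, fs); }
    float process(float x) { return s2.process(s1.process(x)); }
};
```

To split a band, run the signal through an LR2 lowpass and an LR2 highpass set to the same crossover frequency. One caveat with this 2nd-order version: the two outputs sum to a notch unless the highpass is polarity-inverted first; the 4th-order version (two 2nd-order Butterworth sections in series per branch) sums flat without the inversion.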

Many, many thanks to Robin Schmidt, a.k.a. RS-MET for help with this tutorial.

Code DSP Tutorial

Audio Programming with Cinder Part III

Part III: Creating an interface with ciUI

At the end of this tutorial we’ll have built a complete FM synthesizer and visualizer as well as a graphical user interface to control it in real-time. Here’s a taste:

1.  In order to build our interface it’s best not to start completely from scratch. ciUI is a user interface library for Cinder built by a great designer, programmer, and friend, Reza Ali. It allows you to make customizable GUIs in a quick and relatively painless fashion, which is a perfect way to add control to the FM synth we built in part two. Let’s get it installed and working.

(These next steps are adapted from his tutorial, so if anything doesn’t work, refer to his setup instructions directly; something might have changed since I wrote this.)

2. First we need to download the library itself, which can be found here. After downloading and unzipping it, drag the src folder within ciUI (ciUI/src) into the Source folder of the CinderGammaApp Xcode project. When prompted “Choose options for adding these files,” make sure that you leave “Copy items into destination group’s folder (if needed)” unchecked, then press “Finish.” You can rename the resulting “src” folder to ciUI to keep things neat and organized.

3. Now we need to import a font resource that ciUI will use. Right-click the Resources folder in Xcode and select the option that starts with “Add files to ‘ciUITutorial’.” When the file dialog opens, navigate to where ciUI lives within your Blocks folder, then into samples/ciUISimpleExample/xcode, and select the “NewMedia Fett.ttf” font file. Make sure that you check “Copy items into destination group’s folder (if needed)”, then press “Finish.” This will copy the font file into your project.

4. The next step is to include ciUI in our program (at the top of CinderGammaApp.cpp):

#include "ciUI.h"

Then create a new ciUICanvas object inside of the CinderGammaApp class:

ciUICanvas *gui;

And add these two functions in the CinderGammaApp class declaration:

void shutdown();
void guiEvent(ciUIEvent *event);

Finally we should add a variable to store the maximum frequency of our synth:

float maxFrequency;

5. Next we need to write the function definitions below (still in the “CinderGammaApp.cpp” file). Adding “delete gui” inside shutdown() makes sure the interface is deleted when we close our application:

void CinderGammaApp::shutdown()
{
    delete gui;
}

void CinderGammaApp::guiEvent(ciUIEvent *event)
{
}

6. Within the setup() function we are going to initialize the gui object itself and add a bunch of widgets to it:

gui = new ciUICanvas(0,0,416,320);

Note: The arguments define the GUI’s top left corner position and width and height. If no arguments are passed then the GUI will position itself on top of the window and the back of the GUI won’t be drawn.

We are now going to initialize our synth’s maximum frequency and then add widgets (a label, and three sliders) to the GUI, in the setup function after creating a new ciUICanvas:

maxFrequency = 2000.0;
gui = new ciUICanvas(0,0,416,320); //ciUICanvas(float x, float y, float width, float height)
gui->addWidgetDown(new ciUILabel("FM SYNTH", CI_UI_FONT_LARGE));
gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.carrierFreq, "CARRIER FREQUENCY"));
gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.modFreq, "MODULATOR FREQUENCY"));
gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.depth, "DEPTH"));

Notice how we pass the address of whatever variable we want our slider to control (&myFM.carrierFreq). This automatically sets up a callback so that whenever the slider is moved, the variable is changed accordingly, and vice versa. The next few lines tightly fit the canvas to the widgets, register events with the CinderGammaApp class, and set the UI color theme.

gui->autoSizeToFitWidgets();
gui->registerUIEvents(this, &CinderGammaApp::guiEvent);
gui->setTheme(CI_UI_THEME_DEFAULT);

Note: The second-to-last line adds a listener/callback so the GUI knows what function to call when the user triggers or interacts with a widget. Don’t worry if it doesn’t make too much sense right now; you’ll get the hang of it.

7. Now we’ve initialized the GUI and placed a bunch of widgets inside it, but we need to tell our app to update and display it every frame. We do this by adding these lines to the update and draw functions within the CinderGammaApp class. We’re also going to color the background based on our synth’s current carrier frequency, modulator frequency, and depth, which will correspond to the amounts of red, green, and blue (RGB), respectively:

void CinderGammaApp::update()
{
    gui->update();
}

void CinderGammaApp::draw()
{
    gl::clear(Color(myFM.carrierFreq/maxFrequency, myFM.modFreq/maxFrequency, myFM.depth/maxFrequency));
    gui->draw();
}

At this point you can actually control the synthesizer in real time using the sliders we set up. Every time one of them is moved, the variable it points to is changed, which in turn changes the resulting sound. Have fun playing around with it for a minute; we’ve certainly come a long way from our sine wave in part one.

8. Ok, awesome: we’re creating these complex waveforms through frequency modulation synthesis. But now let’s visualize them. First, add a vector of floats inside the CinderGammaApp class to hold a buffer of sample values that we’ll use to draw our visualization. Also create an int to hold the vector’s offset (this is necessary for the audio loop to fill it up properly from frame to frame). Finally, we’re going to add a boolean (true/false) value to determine whether we draw our samples connected as a line or separated as little dots:

vector<float> visualBuffer;
int visualBufferOffset;
bool connectSamples;

9. Now in setup() we’re going to fill our vector with 4000 zeros and initialize our offset to 0. While we’re at it, we’re also going to add a toggle widget to set the boolean connectSamples variable to true/false:

void CinderGammaApp::setup()
{
    visualBuffer.resize(4000,0.0);
    visualBufferOffset = 0;
    ...
    gui->addWidgetDown(new ciUIToggle(16, 16, &connectSamples, "CONNECT SAMPLES"));
}

10. Now we add this to the draw() function. This is the code that actually draws the lines or dots of our visualization. If it’s confusing, just go through it step by step. First we create a constant float called padding, which is just the number of pixels we want to leave as free space below our wave. Then we set the height of the wave. Next we determine a width offset, which is the width of the window divided by the size of our buffer (we’re about to iterate over the entire buffer, drawing a point for each value, so we need to know how far to move over at each step). Then we create an OpenGL color which is the opposite of our background color. Next we check whether the connectSamples boolean is true or false: if it’s true we draw a connected line strip, and if it’s false we draw unconnected dots. Finally, we iterate through the entire vector, drawing an amplitude value at each bin, and end our shape using glEnd():

//draw the visual buffer every frame
const float padding = 35.0;
const float waveHeight = 100.0;
float widthOffset = (float)getWindowWidth()/visualBuffer.size();
gl::color(Color(1.0-myFM.carrierFreq/maxFrequency, 1.0-myFM.modFreq/maxFrequency, 1.0-myFM.depth/maxFrequency));
if(connectSamples == true){
    glBegin(GL_LINE_STRIP);
}
else{
    glBegin(GL_POINTS);
}
for(int i = 0;i<visualBuffer.size();i++) {
    gl::vertex(i*widthOffset, getWindowHeight()-(waveHeight+padding+(waveHeight*visualBuffer[i])));
}
glEnd();

11. But where do those amplitude values come from? We need to actually fill our visualBuffer vector inside the audio callback. We use visualBufferOffset to keep track of which sample we’re supposed to be filling, so that the audio callback doesn’t just refill the first 512 samples each frame:

void CinderGammaApp::myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, audio::Buffer32f * ioBuffer )
{
    for ( uint32_t i = 0; i < ioSampleCount; i++ ) {
        float tempVal = myFM();
        ioBuffer->mData[ i * ioBuffer->mNumberChannels ] = tempVal;
        ioBuffer->mData[ i * ioBuffer->mNumberChannels + 1 ] = tempVal;

        visualBuffer[visualBufferOffset%visualBuffer.size()] = tempVal;
        visualBufferOffset = (visualBufferOffset+1)%visualBuffer.size();
    }
}

Great, so now we have the final, finished program: an FM synthesizer and multi-mode visualizer that can be performed in real time. Not to mention that we did this all in just 121 lines of code! We’ve seen how to use the Gamma synthesis library inside Cinder, which gives us access to dozens of useful mathematical generators and functions for creating audio and visuals (honestly, we didn’t even scratch the surface). We’ve also seen how to use the ciUI library to create efficient, powerful, and well-designed user interfaces for our future Cinder applications.

Thanks for sticking around, I hope you found this helpful, and please send me any feedback or cool things that you make using these techniques.

Here’s the full program:

#include "cinder/app/AppBasic.h"
#include "cinder/gl/gl.h"
#include "cinder/audio/Output.h"
#include "cinder/audio/Callback.h"
#include "examples.h"
#include "ciUI.h"

using namespace ci;
using namespace ci::app;
using namespace std;

struct FM {
    float carrierFreq;
    float modFreq;
    float depth;
    Sine<> carrier;
    Sine<> mod;
    FM(){
    }
    float operator()(){
        mod.freq(modFreq);
        carrier.freq(carrierFreq+(mod()*depth));
        return carrier();
    }
};

class CinderGammaApp : public AppBasic {
public:
    void setup();
    void mouseDown( MouseEvent event );
    void update();
    void draw();
    void myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, ci::audio::Buffer32f * ioBuffer );
    void shutdown();
    void guiEvent(ciUIEvent *event);
private:
    FM myFM;
    ciUICanvas *gui;
    float maxFrequency;
    bool connectSamples;
    vector<float> visualBuffer;
    int visualBufferOffset;
};

void CinderGammaApp::setup()
{
    visualBuffer.resize(4000,0.0);
    visualBufferOffset = 0;

    audio::Output::play( audio::createCallback( this, & CinderGammaApp::myAudioCallback ) );
    Sync::master().spu(44100.0);

    myFM.carrierFreq = 300.0;
    myFM.modFreq = 20.0;
    myFM.depth = 20.0;
    maxFrequency = 2000.0;

    gui = new ciUICanvas(0,0,416,320); //ciUICanvas(float x, float y, float width, float height)
    gui->addWidgetDown(new ciUILabel("FM SYNTH", CI_UI_FONT_LARGE));
    gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.carrierFreq, "CARRIER FREQUENCY"));
    gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.modFreq, "MODULATOR FREQUENCY"));
    gui->addWidgetDown(new ciUISlider(400, 16, 0.0, maxFrequency, &myFM.depth, "DEPTH"));
    gui->addWidgetDown(new ciUIToggle(16, 16, &connectSamples, "CONNECT SAMPLES"));
    gui->autoSizeToFitWidgets();
    gui->registerUIEvents(this, &CinderGammaApp::guiEvent);
    gui->setTheme(CI_UI_THEME_DEFAULT);
}

void CinderGammaApp::mouseDown( MouseEvent event )
{
}

void CinderGammaApp::update()
{
    gui->update();
}

void CinderGammaApp::draw()
{
    gl::clear(Color(myFM.carrierFreq/maxFrequency, myFM.modFreq/maxFrequency, myFM.depth/maxFrequency));
    gui->draw();

    //draw the visual buffer every frame
    const float padding = 35.0;
    const float waveHeight = 100.0;
    float widthOffset = (float)getWindowWidth()/visualBuffer.size();
    gl::color(Color(1.0-myFM.carrierFreq/maxFrequency, 1.0-myFM.modFreq/maxFrequency, 1.0-myFM.depth/maxFrequency));
    if(connectSamples == true){
        glBegin(GL_LINE_STRIP);
    }
    else{
        glBegin(GL_POINTS);
    }
    for(int i = 0;i<visualBuffer.size();i++) {
        gl::vertex(i*widthOffset, getWindowHeight()-(waveHeight+padding+(waveHeight*visualBuffer[i])));
    }
    glEnd();
}

void CinderGammaApp::shutdown()
{
    delete gui;
}

void CinderGammaApp::guiEvent(ciUIEvent *event)
{
}

void CinderGammaApp::myAudioCallback( uint64_t inSampleOffset, uint32_t ioSampleCount, audio::Buffer32f * ioBuffer )
{
    for ( uint32_t i = 0; i < ioSampleCount; i++ ) {
        float tempVal = myFM();
        ioBuffer->mData[ i * ioBuffer->mNumberChannels ] = tempVal;
        ioBuffer->mData[ i * ioBuffer->mNumberChannels + 1 ] = tempVal;

        visualBuffer[visualBufferOffset%visualBuffer.size()] = tempVal;
        visualBufferOffset = (visualBufferOffset+1)%visualBuffer.size();
    }
}

CINDER_APP_BASIC( CinderGammaApp, RendererGl )