Thursday, 31 May 2012

Reflective Report


The 'Kinect : You : Music' project was a success, and the process I took to get there was simple. By discovering the Minim library, the Xbox Kinect and their integration possibilities, I was able to effectively augment space visually and phonically. Although it had some syncing troubles, it still provided an interesting interactive product.

I experimented with the effects, looping and AudioPlayer features of the Minim library. I discovered that the band-pass filter was the most appropriate effect for the project. I also realised that the best way to loop the audio was to detect when a track reached its end, then reload and replay it. This resolved all the syncing issues with looping and helped the program run more smoothly.
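The end-of-track check behind that loop trick can be sketched in plain Java. In the real sketch the position and length values would come from Minim's AudioPlayer.position() and AudioPlayer.length(), and the restart would call rewind() and play(); the class name and the 50 ms margin here are my own illustrative choices:

```java
// Pure-logic sketch of the loop-reset idea. LoopHelper and
// END_MARGIN_MS are illustrative names, not part of Minim.
public class LoopHelper {
    // how close to the end (in ms) before we treat the track as finished
    static final int END_MARGIN_MS = 50;

    // returns true when the play head is near enough to the end
    // that the track should be rewound and replayed
    public static boolean shouldRestart(int positionMs, int lengthMs) {
        return positionMs >= lengthMs - END_MARGIN_MS;
    }

    public static void main(String[] args) {
        System.out.println(shouldRestart(9980, 10000)); // near the end -> true
        System.out.println(shouldRestart(5000, 10000)); // mid-track -> false
    }
}
```

Checking each frame in draw() and restarting a little before the true end is what avoids the audible gap between loops.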

Then I got an Xbox Kinect and experimented with the way it tracks movement, and found that the scene function was the most effective for the project. It initialised the current scene and recorded it all as black; anything that then moved between the sensor and the background became coloured. This change in colour was what could be measured, and it was the key to providing interaction in the project.
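The measurement itself reduces to counting coloured pixels. A rough pure-Java sketch of the idea (in the actual sketch the pixel array would come from the SimpleOpenNI scene image; the class and method names here are illustrative):

```java
// Illustrative version of the density measurement: the Kinect scene
// image colours moving objects, so counting non-black pixels gives a
// rough occupancy value between 0.0 (empty) and 1.0 (full).
public class PixelDensity {
    // fraction of pixels that are not black, in ARGB packing
    public static float coveredFraction(int[] pixels) {
        int count = 0;
        for (int p : pixels) {
            if ((p & 0x00FFFFFF) != 0) count++; // any colour channel set
        }
        return (float) count / pixels.length;
    }

    public static void main(String[] args) {
        int[] frame = { 0xFF000000, 0xFFFF0000, 0xFF000000, 0xFF00FF00 };
        System.out.println(coveredFraction(frame)); // 0.5
    }
}
```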

Once the audio ran smoothly and the Kinect was able to measure the density of people in the spaces, I worked on a small visual display and the audio tracks. The visual display was three rectangles, each with the frequency wave of an individual track through its middle. On site the rectangles were resized to fit the metal panels on the wall, and the waveforms were moved to align with the grooves of the panels. These visualisations were essential to springing the trap of interaction: once the rectangles flashed on the walls, people realised they were in an interactive musical zone where they could control not only the rectangles but also the music that augmented the space they were in.

The strengths of this project are: the music had a dramatic effect on the vibrancy of the space, the massive light display was a lure to the project, the program was straightforward to manipulate and debug, and it stimulates both the visual and audio senses.

The weaknesses of the project are: it has a slight sync issue and needs to run on fast computers; the audio tracks are short and become repetitive; I only programmed three zones and instruments; and the scale of the effects was not drawn out enough, so each effect felt like it was either on or off rather than changing gradually.
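That on/off feeling could be softened by remapping the occupancy reading linearly across the effect's whole range, the same way the sketch already uses Processing's map() for gain. A pure-Java sketch of the idea (the 200-2000 Hz band-pass range is an illustrative assumption, not the values used in the project):

```java
public class EffectScale {
    // Processing-style map(): linearly remap value from one range to another
    public static float map(float value, float start1, float stop1,
                            float start2, float stop2) {
        return start2 + (stop2 - start2) * (value - start1) / (stop1 - start1);
    }

    // remap occupancy (0..1) to a band-pass centre frequency in Hz,
    // so the effect eases in as the space fills rather than snapping
    public static float centreFreq(float occupancy) {
        return map(occupancy, 0f, 1f, 2000f, 200f);
    }

    public static void main(String[] args) {
        System.out.println(centreFreq(0f)); // empty space -> 2000.0
        System.out.println(centreFreq(1f)); // full space  -> 200.0
    }
}
```

Feeding the remapped value to the filter every frame, instead of switching at a threshold, is what makes the effect feel gradual.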

I learnt a great deal about Minim and using the Xbox Kinect. The two most important things I discovered in this whole process were: music is an amazing tool for livening a space, and interaction has to be sprung like a trap to be effective. This realisation only dawned on me whilst watching people walk through the project. Sometimes people would stop in surprise, look, then keep walking, but more often than not people would start interacting with the design as if their purpose had always been to play with the project.

Overall I learnt a lot from going through the whole process of concept - program - hardware - present - display. Arguably it was too much fun to be an assignment, or perhaps it was an example of a well-delivered teaching method with hands-on learning. This project was a good stepping stone towards interesting possibilities in future Digital Design courses.

Monday, 7 May 2012

Kinect - The Xbox device of crazy happy fun times.


I have finally got myself an Xbox Kinect and suddenly the world is not so dull and my project is looking all the more probable.

To get the Xbox Kinect working on a Windows computer you will need to download and install either the SimpleOpenNI or the Windows SDK drivers. Daniel Shiffman, who works very closely with Processing, has developed OpenKinect; however, it only runs on Mac. http://www.shiffman.net/p5/kinect/

He recommends SimpleOpenNI for Windows users, so this is the library I downloaded. It is a straightforward process: you go to the website http://code.google.com/p/simple-openni/ , download a bundle of drivers, and install each one IN THE ORDER SPECIFIED. The 32-bit version works very smoothly, whilst the 64-bit version was a pain and was discarded. Once installed, I plugged in the Kinect and started playing with the examples - much fun. SimpleOpenNI can track a skeleton, which requires a calibration pose, whilst tracking movement is done automatically. With depth and scene detection the initial background can be mapped (white is close, black is far away) or it can be left black. In either case, when something moves in the scene it is coloured and tracked.

The Processing sketch created to test for movement was based on the Scene example found in the SimpleOpenNI library. It colours the initial scene black, and then when something moves it is coloured. This is useful because I can test how much of the sketch window is not black and use that to affect the output sound of the sketch (Minim AudioPlayer) http://code.compartmental.net/tools/minim/ . At the moment I am using a band-pass filter, which hollows out the track and pulls the pitch up; as people move into the scene it thickens the sound out. The high-pass and low-pass filters seem to cut out a lot of the song's frequencies, and I do not think they are as effective as the band-pass.

Lastly, I've been using a program called Reaper http://www.reaper.fm/ , an iPhone app by Native Instruments http://www.native-instruments.com/ called iMaschine, and a bass guitar to create the drum/treble/bass tracks for the project. iMaschine has an amazing library of drum and synth sounds which automatically loop. It is the greatest audio sketchpad for under $10 and has been a pleasure to use during this project. Reaper is the simplest recording software I've ever used; I am still evaluating it at the moment, but a recreational licence is around $60 - AWESOME. Reaper allows me to record 30-second sketches, copy them out into 10-minute loops, and then "render" or export each individual track to mp3, which is perfect for importing into and working with the Minim library.

From here on in I am focusing on getting the sounds I want and getting them all into the sketch. I will lastly work on a visualisation to go with the audio sketch, probably a beat detection which augments the Xbox Kinect data. This is mainly to let the users of the space know that the sound is connected to the exhibition and that they can interact with it.

Tuesday, 20 March 2012

More on Minim

I have been trying to find a way to control sounds using a mouse, because hopefully the Kinect can substitute people's coordinates in place of mouseX/mouseY. The first issue I ran into was how to control the volume of the AudioPlayer's output. volume() and setVolume() were not working on the Processing platform I am using, and after some time researching I discovered that I was not alone. The answer to this problem was given to me by Zachary Seldess on this forum:

http://processing.org/discourse/yabb2/YaBB.pl?num=1224778221

He suggested altering the gain of the audio track using setGain(). This worked a treat.
The next step was to figure out how to use the mouse x/y coordinates to manipulate the gain amount. I borrowed the principles of the Color Transparencies with Sound sketch, which used the map() function to remap the mouseY position into something useful for the gain value.

http://www.openprocessing.org/sketch/15427

The next task was to split the area (a 400 x 400 sketch) into zones. As the concept is still to get people to trigger different instruments, I need to use mouseX/mouseY to achieve this initially. Using the principles from the Color Transparencies with Sound code again, I discovered that -80 dB is inaudible - so the easy thing to do was to tell the program to make a track inaudible when the mouse was in the wrong zone, i.e. if you are in Track 1's zone, Track 2's gain is set to -80, etc. The issue that arose from this was an annoying clipping sound, like a scratched CD. I went back to the forum where Zachary Seldess helped us out and searched until I found a solution to this problem - mute() and unmute(), how simple. Muting a track allows me to keep the tracks coherent whilst stopping the sound from emitting.

The code to date is as follows:

// import minim libraries
import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;

//------------------------------------------------------------------------

//set up audio functions
AudioPlayer player1;
AudioPlayer player2;
Minim minim;


// set up variables
int x = 10;
int y = 0;
int rectWidth = 400;
int rectHeight = 400;

//------------------------------------------------------------------------

void setup() {

  size(400, 400);

  minim = new Minim(this); // pass the sketch to Minim so it can load files

// these are the two tracks being loaded into the players
  player1 = minim.loadFile("08 - Samba Tranquille.mp3", 2048); // Thievery Corporation
// feel free to add your favourite songs in place - as you can see they can be mp3 or wav
  player2 = minim.loadFile("13 Aqueous Transmission.wav", 2048); // Incubus
}

//------------------------------------------------------------------------

void draw () {

// playing the Audio
  if (mousePressed == true) {
    player1.play();
    player2.play();
  }


// using the mouseY to set the gain of the tracks, map() helps re - map so to fit with
// the db range
  float vol =  map(((y+rectHeight)-mouseY), y, (y+rectHeight), -40, 0); //variable vol is created with the map() as its value
  player1.setGain(vol); // set the gain of the player1 function
  player2.setGain(vol); // set the gain of the player2 function

  println(player1.gain());// have a look to see how everything is (or isn't) working
  println(player2.gain());

// mute both tracks when the mouse goes outside the window;
// note the ||s - the earlier && version never fired, because the mouse
// is rarely outside the window on both axes at once
  if (mouseX < 0 || mouseX > rectWidth || mouseY < 0 || mouseY > rectHeight) {
    player1.mute(); // stop the sound of the audio but keep the player running
    player2.mute();
  }

// zoning the window
  else if (mouseX > 200) { // right-hand side of the window
    player1.mute();   // silence Thievery Corporation
    player2.unmute(); // unsilence Incubus
  }
  else { // left-hand side of the window
    player2.mute();   // silence Incubus
    player1.unmute(); // unsilence Thievery Corporation
  }


}

//------------------------------------------------------------------------

// stop() is called when the sketch shuts down - close the audio here
  void stop() {
    // always close Minim audio classes when you are done with them
    player1.close();
    player2.close();
    minim.stop();
    super.stop();
  }

Please comment - I would like to hear of possible improvements.

My USB interface and home recording software are all set up and ready to start laying down some tracks. I am using the Reaper recording software, which is incredibly intuitive, with an M-Audio Fast Track as my interface for input into Reaper. The next step is to get some dummy tracks set up and add a swarm to the sketch window to test the responsiveness of the sketch.

Thursday, 1 March 2012

Proposal

Human Sketch Pad

Synopsis -

The lobby space is to be turned into a series of zones. Each zone will represent an element of an audio track: for example, zone one is a bass track, zone two a drum track and zone three a treble track. The tracks will play simultaneously but will have zero volume until activated. Each zone is activated by a person walking into it, and the tone of the track will be controlled by the position of the people in the zone. This will create a full piece of music when the space is completely occupied. The project creates an interactive piece of digital design using audio which responds to the environmental, social and ontological aspects of the brief.


Background and Context -

People have been making sounds collaboratively for thousands of years. An individual starts to play a beat in a street somewhere, then another brings a guitar, then someone starts to sing, and suddenly everyone on the street is creating music. Digitally this can be achieved with triggers to initiate sounds and speakers to play back the audio. Interactive audio-based digital design has already been achieved in the forms of the "Reactable" by Reactable Systems and "Maschine" by Native Instruments.

  • The "Maschine"  technology relies on physical triggers in the form of impact pads to initiate and audio clip stored either on a computer or it's own internal memory. The onboard memory also allows the user to loop the sounds they create, which is one of the most powerful parts of this type of equipment. Maschine has the ability to put effects on the audio clips through a series of switches and dials.

  • "Reactable" is more complex system and behaves more like an instrument than a machine. It relies on images on the bottom of a tile to be read by a camera underneath the table. The images proximity to one another, location, rotation and order to the centre of the table allow different sounds/effects to occur. It is a revolutionary digital instrument.

These two examples are great for individuals but are too complex for a mass of people, because they rely on people working together precisely and have no flexibility for errors to occur (especially the Maschine). "Piano Stairs" and "The World's Deepest Bin" by funtheory.com (Volkswagen) are a great way to create simple sounds using a large mass of people.

  • The Piano Stairs are an initiative to get people using the stairs. They use a series of impact sensors to trigger the corresponding notes of a piano. This is a neat concept which also responds to our brief quite well: the idea is that it is more fun to walk up the interactive stairs, and potentially to coordinate with other people to make music.

  • The World's Deepest Bin uses a motion sensor to activate a long falling sound. This is an initiative to get people using the bin more often instead of littering. This technique is closer to the type of setup I would like to use for my project - a motion sensor which initiates a noise.

It is clear that there are many types of audio systems controlled by digital mediums; however, as demonstrated in "the world's greatest puppet show", the Xbox Kinect is probably going to be the most effective way of achieving the desired outcome for this project. The Kinect is able to track movement in a 3D environment and use the movement of people as an input to control the output - image or sound.

References I have studied, and will continue to study, that are helping me generate ideas and concepts are listed below:

Processing+minim - Video sonification

Sekvenser - Processing+minim
also good

Vanessa Wang - Mumble Bubble

How does this address the brief? -

There are four implied parts to the brief:
  1. Site Specific
  2. Sustainable
  3. Social
  4. Ontological

1. The project will meet this criterion by having to use the specific volume (m³ - area times height) of the exhibition space. The volume's dimensions will need to be accurately calculated to efficiently control the sound initiated by the people in the space. The music will also be a response to the demographic, sophistication and history of the site; this means that the tracks will be a modern jazz fusion with a focus on drum and bass. The music will also be tailored to enhance the atmosphere of the site as the day turns into night.

2. Unfortunately in the digital world we need power, but the sustainable factor of this project is that it can potentially create a fun atmosphere for free. This is economically sustainable, as it means that people can create music without having to pay for instruments or lessons. It is also socially sustainable, as it creates interaction between individuals.

3. As stated previously, the ability to connect people is the social component of this project. The people who are individually enjoying the visual exhibition must then coordinate in order to enjoy an audible exhibition. This subtle interaction may be the ice-breaker people need in this setting to verbally interact.

4. The project will allow people to redefine themselves as an instrument, or an augmentor of sound. As one person initiates a bass sound and another a drum, they redefine themselves and each other. This abstraction of self is one of the strengths of this project as it allows people to immerse themselves in the exhibition space.




Description -

The initial step of the project is to understand the tools needed for the project:                  
·         motion sensor (Xbox Kinect)
·         speakers (provided by Dave)
·         computer (with program)
·         audio tracks (provided by Maurice Timbers, Payton Giles, myself and others)
·         exhibition space (Lobby space of the newActon "east" building)

Next is to understand and investigate the different ways of dealing with sound in Processing. Minim is an easy way of handling sound through Processing: it allows for multiple inputs, volume control and a band-pass filter, which can alter the sound depending on X and Y positions. It comes with many more options, which open up an array of opportunities to expand the project. This will be readjusted and experimented with until the day we present the project.

A week will be set aside for creating a 2-4 hour song suitable for the program. The song, whilst having all the parts recorded together, will be split into separate tracks. This allows the tracks to be independently controlled by the different zones. This will be done in a week, as it should not be the focus of the project, but it must still be of a high standard.

The next thing to do will be figuring out how to incorporate the motion sensor, which will be the Xbox Kinect. Sketches which react to position are easily made in 2D; however, a lot of investigation time will be spent figuring out how to create a 3D position-sensitive sketch and then how to incorporate the Kinect as the input. The range of the Kinect will also need to be determined to inform the scope of the work. This will take up the first 8 weeks of the semester, as it will be challenging.

I expect there to be a lot of trickery in getting the tracks to play in time with each other and in getting the Kinect to do its job properly. I expect the Minim library to provide the control over the audio output, and I expect the Kinect to handle the program's 3D space input. The logic behind the sketch is quite simple; however, the coding involved could be rather complex.

Monday, 27 February 2012

Pseudocode

Here is my first attempt at the pseudocode for the sketch. I am also working on a name...


Start Program
______________________________________________________________

input space    // through the Kinect into the computer

    variable (int) occupants
        // the Kinect needs to keep track of how many objects are
        // in each zone (1 - 10), as this affects the volume

    variable (float || int) depth
        // the depth of people within a space is recorded also,
        // as this will control the tone of the track

initiate sound (track1, track2, track3) // drum, bass, treble
    // the tracks may have to be functions
    // and they must start at the same time

track function -
    setup -
        track volume = occupants
            // number of occupants controls the volume
        track tone = depth
            // proximity to the Kinect controls the tone

controlling the tracks -
    if there is motion in zone 1
        play track 1
    if there is motion in zone 2
        play track 2
    if there is motion in zone 3
        play track 3
______________________________________________________________

End program
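The pseudocode above can also be sketched as a pure-Java stub. The zone-to-track assignment and the 1-10 occupant range follow the pseudocode; the -40 to 0 dB gain range is my own illustrative assumption:

```java
// Plain-Java sketch of the pseudocode: occupants drive volume,
// and motion in a zone picks the active track.
public class ZoneControl {
    // map 0-10 occupants to a gain between -40 dB (empty) and 0 dB (full)
    public static float gainFor(int occupants) {
        return -40f + 40f * Math.min(occupants, 10) / 10f;
    }

    // motion in zone 1/2/3 selects the drum, bass or treble track
    public static String trackFor(int zone) {
        switch (zone) {
            case 1: return "drum";
            case 2: return "bass";
            case 3: return "treble";
            default: return "none";
        }
    }

    public static void main(String[] args) {
        System.out.println(gainFor(0));  // empty zone -> -40.0
        System.out.println(gainFor(10)); // full zone  -> 0.0
        System.out.println(trackFor(2)); // bass
    }
}
```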

Next Step

Context and Back Ground - who has done this and why?

Trawling through the internet, I was able to find many recipes for groovy Halloween gadgets which involved sensors setting off audio clips and flashing LEDs. Whilst these fulfilled some components of the brief, they did not come close to the quality and elegance I was hoping to find.

Throughout YouTube there are very elegant sketches which react to people.

Interactive wall installation.
http://www.youtube.com/watch?v=OGoZktCzMS4&feature=related

Kinect Projection Mapping with Box2D physics
http://www.youtube.com/watch?v=4V11V9Peqpc&feature=fvwrel

These were great for proving that you can use some sort of sensor to input human motion into a computer, but none of them dealt with sound. I did find a clip where the person created their own synthesizer using SuperCollider and Processing, which was neat,

Generative music in SuperCollider & Processing
http://www.youtube.com/watch?v=rMbcqv8rxnA

but it was not inputting data from a human activated sensor.

A really clever sketch I found did both.

Kinect hacks create world's greatest Puppet show.
http://www.youtube.com/watch?v=CeQwhujiWVk&feature=related

SO

It doesn't look like anybody has done this before. I know that sounds like an arrogant statement, but I cannot find a close enough example - frustrating.

This means I really need to sit down and pseudocode my idea so that I can figure out and lock down this idea I have. This is daunting, as I have no clue how big a piece to bite out of this idea. The more I use Processing, the more I understand that it is a matter of precisely defining what you want to achieve; only then do you try to construct your sketch.

Monday, 13 February 2012

My First ever blog post. Here goes the transformation of my thoughts into the ether we call the internet. 


In Digital Design Project Urban we are required to document the progress of our work.


Our work will be a digital design display at the new Acton precinct. So far a lot of thought and demonstrations have been about the visual aspects of digital design - I will focus on the Audible.


Coming from an architectural and musical background, I can see the opportunity for digital design to bring the two together, and this is how I'm planning on achieving it.


What I want to achieve is for people to walk through a space and have a direct impact on the music they hear (in effect altering the feel of the space). The next step, when there are multiple people in the space, is for these people to work together to create a song (initially broken down into treble/bass/drum elements). Further, their position in different zones will affect the hi-fi/lo-fi/volume of the sounds created.


Simple. 


I am currently looking for examples of this, and have found the principles in use on a motion board at a festival outside of Canberra called Corinbank. A man had devised a perspex sheet with a camera underneath; people worked together to move different symbols around the perspex to create different music. Instead of symbols I'll use people, and instead of a camera I'll use some sort of sensor.


Once again, Simple.