Monday, 27 February 2012

Next Step

Context and Background - who has done this and why?

Trawling through the internet, I was able to find many recipes for groovy Halloween gadgets which involved sensors triggering audio clips and flashing LEDs. While this fulfilled some components of the brief, it did not come close to the quality and elegance I was hoping to find.

Throughout YouTube, though, there are some very elegant sketches that react to people.

Interactive wall installation.
http://www.youtube.com/watch?v=OGoZktCzMS4&feature=related

Kinect Projection Mapping with Box2D physics
http://www.youtube.com/watch?v=4V11V9Peqpc&feature=fvwrel

These were great for proving that you can use some sort of sensor to input human motion into a computer, but none of them dealt with sound. I did find a clip where the person created their own synthesizer using SuperCollider and Processing, which was neat,

Generative music in SuperCollider & Processing
http://www.youtube.com/watch?v=rMbcqv8rxnA

but it was not inputting data from a human-activated sensor.

A really clever sketch I found did both.

Kinect hacks create world's greatest Puppet show.
http://www.youtube.com/watch?v=CeQwhujiWVk&feature=related

SO

It doesn't look like anybody has done this before. I know that sounds like an arrogant statement, but I cannot find a close enough example - frustrating.

This means I really need to sit down and pseudo-code my idea so that I can figure out and lock down this idea I have. This is daunting, as I have no clue how big a piece of this idea to bite off. The more I use Processing, the more I understand that it is a matter of precisely defining what you want to achieve; only at that point can you try to construct your sketch.
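As a very first pass at that pseudo-code, here is a minimal sketch of the core idea in plain Java (standing in for a Processing sketch). Everything here is an assumption of mine, not taken from any of the clips above: the sensor reading is imagined as a normalised 0.0 to 1.0 value (say, a hand height from a Kinect), and the frequency range is an arbitrary placeholder. The point is only to show the shape of "human-activated sensor in, sound parameter out":

```java
public class SensorToPitch {
    // Hypothetical frequency range for the generated sound (an assumption,
    // not a value taken from any of the projects linked above).
    static final double MIN_FREQ = 110.0;  // A2
    static final double MAX_FREQ = 880.0;  // A5

    // Map a normalised sensor reading (0.0 - 1.0), e.g. a hand height
    // reported by a depth sensor, onto a frequency in the range above.
    static double sensorToFrequency(double sensorValue) {
        // Clamp the reading so noisy sensor data cannot leave the range.
        double v = Math.max(0.0, Math.min(1.0, sensorValue));
        return MIN_FREQ + v * (MAX_FREQ - MIN_FREQ);
    }

    public static void main(String[] args) {
        // A hand at mid-height maps to the middle of the range.
        System.out.println(sensorToFrequency(0.5)); // 495.0
        // Out-of-range readings are clamped to the edges.
        System.out.println(sensorToFrequency(1.5)); // 880.0
    }
}
```

In a real version, the frequency would be sent on to a synthesis engine such as SuperCollider rather than printed, but even this tiny mapping forces the kind of precise decision-making the paragraph above is talking about.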
