deQuencher does (as he says):
The idea behind it is that, instead of having separate and isolated layers for musical events, you have your sound generators and gates/triggers/parameter changes share the same canvas (and layer), and you interact with your “objects” on that canvas to express your musical ideas on time domain.
What I find interesting is that he divides the sound creation into two parts, one graphical and one musical, using a separate program for each. He uses SuperCollider for manipulating sound and openFrameworks for building the GUI, and the two environments communicate over the OSC protocol.
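His actual setup uses openFrameworks' OSC facilities in C++ on one side and SuperCollider's built-in OSC listener on the other; as a language-neutral sketch of what travels between the two, here is a minimal hand-rolled OSC 1.0 message encoder in Python. The address pattern `/dequencher/trigger` is hypothetical (not taken from his code), and 57120 is simply sclang's default UDP port.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message (address pattern + typed arguments), OSC 1.0 style."""
    typetags = ","           # type tag string always starts with a comma
    payload = b""
    for a in args:
        if isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)   # int32, big-endian
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)   # float32, big-endian
        elif isinstance(a, str):
            typetags += "s"
            payload += osc_pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return osc_pad(address.encode()) + osc_pad(typetags.encode()) + payload

# Fire-and-forget over UDP, the transport both environments use here;
# sclang listens on 57120 by default, so a datagram like this would reach it.
msg = osc_message("/dequencher/trigger", 1, 440.0)  # hypothetical address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
```

On the SuperCollider side, an `OSCdef` registered for the same address pattern would receive the integer and float arguments and map them onto synth parameters.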
The way he interacts with the GUI, and how it represents the sound, is also interesting.
See the good-quality video (you must be a Vimeo user).
I am using fragments of his code to set up the communication between openFrameworks and SuperCollider. It is also really interesting how he uses the Freesound database.
Linked from the openframeworks forum: deQuencher – an interface for SuperCollider