Facial Noise _demo uses facial expressions to generate and manipulate sound. It explores the idea of the face as an interface, a remote control, which together with the sounds it generates becomes a tool of performance.

Performed by the Butoh dancer Azumi Oe. In Butoh, the performer uses the face as a mask, freed from social constraints. Made of muscle, the facial expressions the performer presents are emptied of any subjective emotion. It is the audience that creates the relationship between the expression and the performer who delivers it, forming an emotional bond between the two.

Sound/Video tech flow:

Kyle McDonald’s FaceOSC (based on the work of Jason Saragih), the openFrameworks ofxFaceTracker addon, and Max Jitter for tracking.
Max/MSP and Ableton Live for manipulating and creating new sound compositions (by the performer).
Mobiola on an iPhone for wireless video input and a monitor.
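FaceOSC streams its tracking data as OSC messages, which Max/MSP receives and maps onto sound parameters. As a rough illustration of that mapping step, here is a minimal Python sketch that scales a FaceOSC-style gesture value into a 0–127 MIDI-like control range. The OSC addresses and input ranges below are assumptions for illustration, not values taken from the actual performance patch:

```python
# Sketch: map FaceOSC-style gesture values to MIDI-range control values.
# Addresses and input ranges are assumptions based on FaceOSC's typical
# output, not the patch used in the performance.

def scale(value, in_min, in_max, out_min=0, out_max=127):
    """Linearly map value from [in_min, in_max] to [out_min, out_max], clamped."""
    value = max(in_min, min(in_max, value))
    span = in_max - in_min
    return round(out_min + (value - in_min) * (out_max - out_min) / span)

# Hypothetical gesture addresses with rough input ranges.
GESTURE_RANGES = {
    "/gesture/mouth/height": (0.0, 10.0),   # mouth opening
    "/gesture/eyebrow/left": (6.0, 10.0),   # brow raise
}

def map_gesture(address, value):
    """Turn one incoming gesture value into a 0-127 control value."""
    lo, hi = GESTURE_RANGES[address]
    return scale(value, lo, hi)

if __name__ == "__main__":
    print(map_gesture("/gesture/mouth/height", 5.0))  # half-open mouth -> 64
```

In the actual piece this mapping lives inside Max/MSP rather than Python; the sketch only shows the shape of the transformation from face data to sound control.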

The task was mainly to connect all of the above and to create a meaningful facial choreography that produces meaningful sounds.


Sound design by Khen Price.
Performed by Azumi Oe.


Thanks to The HIVE NYC and Juan Patino.