An interactive installation in which a user performs a song while the system learns their voice, so that it can synthesize the voices of all users together in various combinations and play the combined voice back to the user. Users control these combinations through a visual interface inspired by the song’s music video. The concept was originally conceived in class with Cesar Mocan and later developed by me.
The original piece
Laurie Anderson’s “O Superman” consists of a constant rhythmic vocal base singing “ha-ha” in staccato, on top of which Anderson sings and speaks. Her voice intermittently passes through a vocoder, giving it a robotic, alien quality, in what has become an iconic ’80s sci-fi/electronic music sound.
step 1 – voice sample and setting base line
The user puts on headphones while the base vocal line from “O Superman” plays. The text “ha ha ha ha ha ha ha ha…” is projected. The user sings into the microphone. As soon as the system has captured a good sample, it switches from the voice in the original song to the user’s voice: they will hear the base part looping in their own voice throughout. The text fades out.
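The switch from the original recording to the user’s voice hinges on deciding when a “good sample” has been captured. A minimal sketch of one way to gate that decision, assuming the system sees raw audio samples and uses duration plus RMS loudness as its quality test (the function names and thresholds here are hypothetical, not from the original project):

```python
import math

def has_good_sample(samples, sample_rate, min_seconds=2.0, min_rms=0.05):
    """Hypothetical quality gate: accept the user's voice once enough
    audio has been captured (duration) and it is loud enough (RMS)."""
    if len(samples) < min_seconds * sample_rate:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms >= min_rms

def loop_source(samples, sample_rate, original_loop, user_loop):
    """Each pass of the playback loop picks its source: the original
    "ha-ha" base line until the user's sample passes the gate."""
    if has_good_sample(samples, sample_rate):
        return user_loop
    return original_loop
```

A real installation would likely run this check on a sliding window of microphone input and crossfade between the two loops rather than switching abruptly.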
step 2 – voice synthesis and visuals
The lyrics appear, karaoke style, for the user to sing. As they are singing, the system synthesizes their voice with the voices of all previous users and plays that back through the headphones.
A camera captures the user’s left hand and projects it onto a circle. The multi-voice synthesis is controlled by the shape the hand makes within the circle: the percentage of the circle covered by the hand determines how many additional voices the system combines. Other parameters, such as the hand’s angle and its distance from the center of the circle, determine which voices dominate the mix.
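The mapping described above can be sketched as a small function: coverage sets the voice count, while angle and distance bias the mix. Everything here is an assumed, illustrative mapping (the cosine-closeness weighting and all parameter names are mine, not the project’s actual algorithm):

```python
import math

def mix_weights(coverage, angle, distance, n_available):
    """Hypothetical hand-to-mix mapping.

    coverage    - fraction of the circle covered by the hand (0..1);
                  determines how many stored voices join the synthesis.
    angle       - hand angle in radians around the circle.
    distance    - hand's distance from the circle's center (0..1);
                  sharpens or flattens the angular bias.
    n_available - number of previously recorded voices.
    Returns normalized weights, one per active voice.
    """
    n_voices = max(1, round(coverage * n_available))
    weights = []
    for i in range(n_voices):
        # Place the stored voices evenly around the circle; voices
        # nearer the hand's angle get more weight.
        voice_angle = 2 * math.pi * i / n_voices
        closeness = (1 + math.cos(voice_angle - angle)) / 2  # 0..1
        w = (1 - distance) + distance * closeness
        weights.append(max(w, 1e-6))  # keep every active voice audible
    total = sum(weights)
    return [w / total for w in weights]
```

With the hand near the center (`distance` close to 0) all active voices blend evenly; pushing toward the rim makes the voices aligned with the hand’s angle dominate.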
A second camera, mounted on the microphone, films the user’s mouth. As different voices are added to and removed from the synthesis, images of those users’ mouths are projected around the circle.
Posted in Spring 20 - Music Interaction Design