soundSense: Where Movement and Data Are Turned Into Music

March 1, 2005

“The children really get it,” said Steve Feller. The adults walk in, hear the music, look around, stick their hands in their pockets and think about it. They act like adults. The children, however, realize that they control the music, the data and the lines appearing on the screens.

“We had two kids come in yesterday, and immediately they started running around, making music, holding hands as they danced around,” Feller said. “They figured out which sounds occur when they go to certain areas of the room.”

Feller is one of the researchers operating soundSense, an unusual collaboration of the Fitzpatrick Center for Photonics and Communications Systems, the Department of Music and ISIS (Information Sciences + Information Studies).

SoundSense is located in a room in the new Fitzpatrick Center for Interdisciplinary Engineering, Medicine and Applied Sciences. The purpose, according to David Brady, is to map the dynamics of people and space onto sound. Brady is director of the Fitzpatrick photonics center and Addy Family Professor of Electrical and Computer Engineering.

Twenty sensors hanging around the room translate the movement of people into data and sound. The data can be read on four monitors hanging in the middle of the room. Other monitors placed around the room display parts of a poem written by Joseph Donahue of the Department of English.

The music was written by Department of Music chair Scott Lindroth, an expert in electronic music. Although triggered by the movement of people, the music plays in one of four “states,” depending on the level of activity in the room. When a few people are milling about, the music in State #1 is contemplative, with sparse chiming. As more people enter and movement increases, the music rises to State #4, characterized by a more frenetic sound.
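The activity-to-state mapping could be sketched as a simple threshold function. This is a minimal illustration, not soundSense's actual implementation: the function name, the thresholds, and the single aggregate activity measure are all assumptions.

```python
# Hypothetical sketch: map an aggregate activity measure to one of
# four musical "states." Thresholds are assumed, not soundSense's own.

def select_state(activity: float) -> int:
    """Map activity (0.0 = empty room, 1.0 = maximum motion)
    to a state number from 1 to 4."""
    thresholds = [0.25, 0.5, 0.75]  # assumed state boundaries
    state = 1
    for t in thresholds:
        if activity >= t:
            state += 1
    return state

# A quiet room stays contemplative; a busy one turns frenetic.
print(select_state(0.1))  # state 1: sparse chiming
print(select_state(0.9))  # state 4: frenetic sound
```

In a real installation, the activity measure would itself be derived from the twenty sensors, for example by counting motion events per second, before being fed to a selector like this one.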

Behind this unusual collaboration rests a significant scientific effort to represent human identity and motion through sound. The fundamental goal is to understand how computers recognize and communicate with people. For children, it’s a lot of fun. For researchers, it may also open new avenues of study, from how businessmen might access stock market data to how sound might be used to identify cancer cells.

“It is not intended to be a single purpose production space,” Brady said. “It is intended to be a platform on which multiple student projects and teams may be integrated to explore human-machine interaction and multimodal human-machine communications.” Beginning next fall, the room will be used in several courses as a sensor network, sensor space and computer-based musical and visual display space, Brady said.

He said research in the studio will focus on three questions:

    • What are the most efficient and information-rich mappings between human spatial configurations and machine understanding?
    • What is the most efficient means of communicating complex multimedia to groups of individuals?
    • How can geometric and spatial information be sensed and represented, i.e., how can machines become aware of the space in which they are embedded?