Auditory Augmentation

Auditory Augmentation supports the design of data representation and monitoring tools that unobtrusively alter an object's auditory gestalt. The object's sound is shaped by data-driven parameters, creating a subtle display for ambient data streams.

Take the keyboard as an example. Typing causes the keyboard to emit sound, and that sound can be altered by Auditory Augmentation. Several benefits emerge from this. The keyboard's sound is generally not annoying (at least not to the person typing). When the monitored data stream changes, the sound of the keyboard changes accordingly and catches the user's attention. In a peripheral monitoring situation, the data stays out of the user's way when she wants to concentrate on other things.

Wetterreim utilises this principle for a dedicated scenario: the day-to-day work on a computer, as it is common at almost any office workplace. As the source for the auditory augmentation, we chose the keyboard, one of the main interfaces for daily computer work. Typing on it produces a characteristic sound that is shaped by the design of the keyboard and its interplay with the writer's fingers. A contact microphone captures the keyboard's structure-borne sound, on which we base a Sonification of weather-indicating measurements. Filtering the captured sound with data-driven filter parameters creates an audio stream that stays close to the characteristics of the original but additionally carries characteristics of the integrated data. The resulting filter output is superimposed on the original sound so that both are perceived as one coherent auditory gestalt. The filter parameterisation developed for the weather data allows people to perceive a drop in pressure or an approaching cloud front as a change in the object's auditory characteristics. An example of Wetterreim in use is given in the corresponding video below.
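To make the signal flow concrete, the following is a minimal sketch of the idea in Python, assuming the sounddevice and SciPy libraries and a contact microphone connected as the default audio input. The pressure range, the mapping to a band-pass centre frequency, and the read_pressure_hpa() stub are illustrative assumptions, not the parameterisation used in Wetterreim; the actual system is described in the IJACI article listed under References.

import numpy as np
import sounddevice as sd
from scipy.signal import butter, lfilter, lfilter_zi

SR = 44100          # sample rate
BLOCK = 256         # audio block size

def read_pressure_hpa():
    """Hypothetical stand-in for the weather data source; returns a fixed value here."""
    return 1003.0

def pressure_to_centre_hz(pressure_hpa):
    """Hypothetical mapping: lower barometric pressure lowers the resonance frequency."""
    return float(np.interp(pressure_hpa, [980.0, 1040.0], [600.0, 4000.0]))

class DataDrivenFilter:
    """Band-pass filter whose coefficients are re-derived whenever the data changes."""
    def __init__(self, centre_hz, bandwidth_hz=400.0):
        self.bandwidth_hz = bandwidth_hz
        self.set_centre(centre_hz)

    def set_centre(self, centre_hz):
        lo = max(centre_hz - self.bandwidth_hz / 2.0, 20.0)
        hi = min(centre_hz + self.bandwidth_hz / 2.0, SR / 2.0 - 1.0)
        self.b, self.a = butter(2, [lo, hi], btype="bandpass", fs=SR)
        self.zi = lfilter_zi(self.b, self.a) * 0.0   # filter state, reset on update

    def process(self, block):
        out, self.zi = lfilter(self.b, self.a, block, zi=self.zi)
        return out

filt = DataDrivenFilter(pressure_to_centre_hz(read_pressure_hpa()))

def callback(indata, outdata, frames, time, status):
    dry = indata[:, 0]                 # structure-borne sound from the contact microphone
    wet = filt.process(dry)            # data-shaped resonance
    outdata[:, 0] = dry + 0.5 * wet    # superimpose the filtered sound on the original

with sd.Stream(samplerate=SR, blocksize=BLOCK, channels=1, callback=callback):
    while True:
        # Poll the weather source and update the filter so the keyboard's timbre
        # slowly tracks the data (once per minute is plenty for ambient data).
        filt.set_centre(pressure_to_centre_hz(read_pressure_hpa()))
        sd.sleep(60_000)

In this sketch the filter coefficients change only when new weather data arrives, so the augmentation remains unchanged as long as the data stream is stable and becomes audible only through the keystrokes themselves, which is what keeps the display peripheral.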

Auditory Augmentation – Demo


References
  • T. Bovermann, R. Tünnermann, and T. Hermann (2010). Auditory Augmentation. International Journal of Ambient Computing and Intelligence (IJACI), 2(2):27–41.
  • T. Bovermann, R. Tünnermann, and T. Hermann (2011). Tutorial on Auditory Augmentation of Everyday Objects with Near Real-time Data. NIME 2011, Oslo.
  • R. Tünnermann, T. Bovermann, and T. Hermann (2011). Auditory Augmentation at your Fingertips. Sonic Interaction Design Exhibition, NIME 2011, Oslo.


Images taken at the SID exhibition in Oslo (NIME 2011).

This is joint work with Till Bovermann and Thomas Hermann.