What is sonification?

From Soundscapes

When we make a sound to inform about something, we are applying a sonification system: we represent data in the auditory domain. We turn data into sounds, and this data can represent anything expressible in numbers: a physical measurement, a notion, an action, or a sequence of values tracked from a sensor. Many definitions have been proposed for this process called sonification: from “a subtype of auditory displays that use non-speech audio to represent information”, to “the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” (Kramer et al., 1999), and, more precisely, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible” (Hermann et al., 2011), and finally “the technique of transforming non-audible data into sound that can be perceived by human hearing” (Wikipedia, 9 April 2024). To keep it simple in the context of this manual, we can state briefly that “sonification is the process of generating sound from any sort of data to represent its information as audio”. In even simpler terms, we can tell a student that sonification describes data with sound the way visualization does with graphs, flow charts, histograms, etc.

In essence, we want to combine data (input) and sounds (output), and decide how the two are related (the mapping, or protocol). A sonification system is therefore defined by these 3 parts:

1. Input data
2. Output sounds
3. Mapping or protocol
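The three parts above can be sketched in code. The following is a minimal illustrative example, not taken from this manual: the input is a numeric series, the output is a sequence of sine tones, and the mapping is a hypothetical linear function from data values to pitch. All function names and parameter choices (frequency range, note duration) are assumptions for the sake of the sketch.

```python
import math
import struct
import wave

def map_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
    """Mapping: linearly map a value in [lo, hi] to a pitch in [f_min, f_max] Hz."""
    if hi == lo:
        return f_min
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

def sonify(data, rate=8000, note_dur=0.25):
    """Output: render each data point (input) as a short sine tone.

    Returns a flat list of float samples in [-1.0, 1.0].
    """
    lo, hi = min(data), max(data)
    samples = []
    for v in data:
        freq = map_to_frequency(v, lo, hi)
        for i in range(int(rate * note_dur)):
            samples.append(math.sin(2 * math.pi * freq * i / rate))
    return samples

# Example input: a small series of sensor readings (e.g. temperatures).
data = [12.0, 14.5, 13.2, 18.9, 21.4]
samples = sonify(data)

# Write the result as a mono 16-bit WAV file using the standard library.
with wave.open("sonification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

Here the lowest value in the series becomes the lowest pitch and the highest value the highest pitch; choosing a different mapping (to loudness, tempo, or timbre instead of pitch) would produce a different sonification of the same data.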

Types of data

Real-time sonification vs 'a posteriori'

Acoustic ecology

State of the art examples