Sonification in practice

Sonification for educational purposes is, in practice, a process of exploring every possibility that answers the question: “How can I use sound to highlight or demonstrate one or more pieces of information, or conclusions arising from a movement, measurement, or phenomenon that exists, has occurred, or is unfolding over time?” The data at our disposal, the conditions and methods of their collection, and the educational purpose for which the sonification is intended are the determining factors for its effective use.

The aspects below shape the relationship between educational needs and the concept of sound, along with its structured arrangement over time: that is, the concept of music.


Aspects of teaching with Sonification as a musical practice

The indisputable connection between sound and numbers, specifically the decomposition of sound into frequencies or harmonics, provides a sufficiently structured framework for interdisciplinary teaching with sound, within which all aspects of STEAM can be addressed. Since the concept of time defines the sound phenomenon, the representational act of a sound effect cannot but be at the center of any pedagogical approach. Consequently, the organized, harmonious arrangement of sound elements in time (in terms of rhythm, intensity, timbre, pitch, and their placement on a musical scale, whether diatonic or not) constitutes a musical result. This rational organization can serve as a field for experimentation in musical composition, while the parameterization of all the above concepts can enrich any educational objective that depends on the evolution of a phenomenon over time or on the conversion of data into sound.
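As a minimal illustration of this connection between sound and numbers, the sketch below (plain Python, standard library only; the fundamental frequency and amplitude weights are arbitrary illustration values) builds a two-second tone by summing the first four harmonics of a fundamental, where the n-th harmonic has frequency n times the fundamental, and writes it to a WAV file.

<syntaxhighlight lang="python">
import math, wave, struct

RATE = 44100                        # samples per second
F0 = 220.0                          # fundamental frequency in Hz (A3), arbitrary choice
HARMONICS = [1.0, 0.5, 0.33, 0.25]  # relative amplitudes of harmonics 1..4

def sample(t):
    """Sum of the first four harmonics: the n-th harmonic has frequency n * F0."""
    s = sum(a * math.sin(2 * math.pi * (n + 1) * F0 * t)
            for n, a in enumerate(HARMONICS))
    return s / sum(HARMONICS)       # normalize to the range [-1, 1]

with wave.open("harmonics.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)               # 16-bit samples
    w.setframerate(RATE)
    for i in range(RATE * 2):       # two seconds of audio
        w.writeframes(struct.pack("<h", int(32767 * sample(i / RATE))))
</syntaxhighlight>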

Thus, we can reasonably distinguish the concept of sonification for educational purposes into three basic approaches:


• The symbolic

• The mathematical

• The adaptive


Symbolic Sonification

The reproduction of sound characteristics, namely pitch, intensity, timbre, repetition rate (if any), and duration, linked to scientific concepts, terms, and quantities without being logically mapped to a data set (data mapping), constitutes the subject of symbolic sonification.

A simple example would be to “sound-paint” a gray cloud using low-frequency noise and a white cloud using high-frequency noise. Another example would be a class of students representing the sound of rain by randomly tapping their fingernails on their desks. A further example that relates composition to musical representation is the leitmotif: a short melodic theme consisting of a few specific notes which, as a unique motif (pattern), is associated with a character in an opera and played by the orchestra, particularly in Wagner’s operas. A character’s leitmotif brings the character to mind throughout the entire work, whether the character is on stage or not! Translated to a data series, such a leitmotif could replace the expected sound of a prominent low or high value (or a specific value, or even a range of values), without being derived from, or coherent with, the neighboring data.
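A minimal sketch of the cloud example, in plain Python with the standard library only: broadband white noise stands in for the white cloud, and the same noise passed through a simple one-pole low-pass filter stands in for the low-frequency gray cloud. The filter coefficient, durations, and gain are arbitrary illustration values.

<syntaxhighlight lang="python">
import random, wave, struct

RATE = 22050
ALPHA = 0.05   # one-pole low-pass coefficient: smaller = darker, more "gray"

def noise(n):
    """n samples of white noise in [-1, 1]: the bright 'white cloud'."""
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def low_pass(samples, alpha=ALPHA):
    """Simple one-pole low-pass filter: muffles the noise into a 'gray cloud'."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

white = noise(RATE)                              # one second of white noise
gray = [4.0 * s for s in low_pass(noise(RATE))]  # gain offsets the filtered loss

with wave.open("clouds.wav", "w") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
    for s in white + gray:
        w.writeframes(struct.pack("<h", int(32767 * max(-1.0, min(1.0, s)))))
</syntaxhighlight>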


Mathematical Sonification

When pitch, intensity, timbre, rhythm (if any), and duration, as sound characteristics, run through a series of data measurements connected to a physical term or a scientific concept, they form a logical map of one or more parts of that series (direct data mapping). The sounding result of this match is mathematical sonification.

An example that perfectly illustrates the above distinction, primarily by exploiting the characteristic of rhythm, is the mechanism, found in many cars, that audibly indicates the distance to a nearby obstacle while parking. The repetition frequency of this momentary acoustic signal forms a repeating pattern whose rhythm varies (slow-fast) depending on the proximity to the obstacle, which is detected with high precision by a sensor.
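A minimal sketch of the parking example, assuming a hypothetical series of distance readings (plain Python, standard library only): the pitch of each beep stays constant, and only the shrinking gap between beeps, i.e. the rhythm, carries the data.

<syntaxhighlight lang="python">
import math, wave, struct

RATE = 22050
BEEP_HZ = 880.0   # pitch of each beep (constant; only the rhythm carries data)
BEEP_LEN = 0.05   # beep duration in seconds

def beep_then_gap(distance_m):
    """One beep followed by a silence that shrinks as the obstacle gets closer."""
    gap = max(0.02, 0.3 * distance_m)   # arbitrary scaling: 0.3 s of gap per meter
    tone = [math.sin(2 * math.pi * BEEP_HZ * i / RATE)
            for i in range(int(RATE * BEEP_LEN))]
    return tone + [0.0] * int(RATE * gap)

# Hypothetical readings from a proximity sensor: the car approaches the obstacle.
distances = [2.0, 1.5, 1.0, 0.7, 0.5, 0.3, 0.2, 0.1]

with wave.open("parking.wav", "w") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
    for d in distances:
        for s in beep_then_gap(d):
            w.writeframes(struct.pack("<h", int(32767 * 0.8 * s)))
</syntaxhighlight>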

To understand the difference between symbolic and mathematical representation, we can adapt the previous examples as “unplugged activities” in a classroom. In the cloud example, mathematical sonification would occur if we defined a color threshold for white or gray and represented the droplets of which the clouds are composed with millions of frequency particles of minimal duration (sound nebulae). In the example of rain, we would have a mathematical sonification if students represented with absolute precision, one by one, every raindrop over a specific time and surface area. Finally, in the “parking” example, we would have a symbolic representation if the students’ eyes took on the role of the sensor, with the data estimated visually rather than measured mathematically.


Adaptive Sonification

Adaptive sonification is sound design or musical composition (expanding the notion) that results from mathematical sonification in which methods of aesthetic sound rendering are creatively utilized to meet teaching objectives in describing learning concepts.

Furthermore, the analysis of data-mapping methods in conjunction with the diatonic scale opens up a fruitful field for exploring teaching tools that allow sound to be processed in terms of musical composition. The use of MIDI for sound processing, or the highlighting of musical motifs that can serve as a starting point for creating musical compositions, naturally extends adaptive sonification. In fact, the graphical representation of data (a graphical display) can be creatively transformed into sound by treating the display as a two-dimensional scheme, or even a photograph as a three-dimensional image. The result is referred to as “schematic sonification”.
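As one possible sketch of such data mapping onto the diatonic scale (plain Python; the tonic, range, and sample data are arbitrary choices), the function below scales a data series into MIDI note numbers drawn from a C major scale:

<syntaxhighlight lang="python">
# C major scale degrees as semitone offsets from the tonic
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def to_diatonic_midi(values, low=60, octaves=2):
    """Map each value onto the nearest note of a diatonic scale.

    `low` is the MIDI number of the tonic (60 = middle C) and `octaves`
    sets the range of the mapping; both are arbitrary choices here.
    """
    vmin, vmax = min(values), max(values)
    steps = [low + 12 * o + s for o in range(octaves) for s in MAJOR]
    notes = []
    for v in values:
        frac = (v - vmin) / (vmax - vmin) if vmax > vmin else 0.0
        notes.append(steps[round(frac * (len(steps) - 1))])
    return notes

# A hypothetical data series (e.g. daily temperatures) becomes a melody in C major:
print(to_diatonic_midi([12.1, 14.5, 13.0, 18.2, 21.7, 19.9]))
</syntaxhighlight>

The resulting note numbers can then be rendered through any MIDI instrument or editor, which is where the compositional work of adaptive sonification begins.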

This adaptable approach broadens access to the auditory outcome of data sonification across a wide range of age groups and grade levels, inviting educators from other disciplines, such as Art, Theater, and Music, to actively participate in interdisciplinary teaching. An example of this approach has been implemented in the "Sounds of the Stars" scenario [1] in collaboration with the National Observatory of Athens (community: Ήχοι των Άστρων [2]). The scenario is part of the SoundScapes learning scenarios repository [3].

In the following pages, practical ways to implement the above approaches, with or without handling data sets coming either from measurements or from sensors, are presented under the titles: Unplugged activities, Real-time sonification and ''a posteriori'' sonification.

The MIDI protocol: why is it useful for sonification in school?

MIDI stands for Musical Instrument Digital Interface and was introduced in the early ’80s as a machine language allowing analog, and later digital, instruments to be interconnected. This language encodes several aspects of music performance and notation in an electronic format.

MIDI enables the user to receive, transmit, store, and edit electronically produced signals that correspond to several aspects of music. The main parameters of these aspects include note-on, note-off, velocity, timbre, and pitch. All these parameters can be stored as code, in timeline fashion, within a MIDI file. A MIDI file resembles the “program” in the form of a revolving cylinder or perforated paper used in late 18th-century music boxes or early 20th-century “pianolas”, which are musical automata. It is this characteristic that can prove enormously useful for educational purposes, as numerous MIDI applications, sensors, and programs are widely available on the internet. However, it is the ability to edit the output as a musical score, or as a part of a polyphonic composition, that makes MIDI an exceptionally powerful educational tool. Within the present wiki pages, sensors using MIDI in particular are widely featured.
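As a concrete sketch of how note-on, note-off, and velocity live on a MIDI file's timeline, the snippet below, assuming the third-party mido package is installed (pip install mido), writes a three-note file that any MIDI-aware score editor can open and edit:

<syntaxhighlight lang="python">
from mido import Message, MidiFile, MidiTrack

mid = MidiFile()                 # default resolution: 480 ticks per beat
track = MidiTrack()
mid.tracks.append(track)

for note in (60, 64, 67):        # C, E, G: a simple triad, one note per beat
    # 'time' is the delay in ticks since the previous message on the track
    track.append(Message('note_on', note=note, velocity=80, time=0))
    track.append(Message('note_off', note=note, velocity=0, time=480))

mid.save('triad.mid')            # open the file in any MIDI-aware editor
</syntaxhighlight>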



Sonification components

A sonification activity consists of the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: 1) input data; 2) mapping protocol; 3) audio output.
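These three components can be sketched as a minimal pipeline; the Python below is a placeholder design, with the function names, the sample data, and the linear value-to-frequency map all invented for illustration:

<syntaxhighlight lang="python">
def input_data():
    """INPUT DATA: read or receive the values to sonify (a placeholder series)."""
    return [0.1, 0.4, 0.9, 0.6, 0.2]

def mapping_protocol(value):
    """MAPPING PROTOCOL: decide which sound a value triggers (here: value -> Hz)."""
    return 220.0 + value * 660.0   # arbitrary linear map into 220-880 Hz

def audio_output(freq):
    """AUDIO OUTPUT: play or store the resulting sound (here: just report it)."""
    print(f"play {freq:.0f} Hz")

for v in input_data():
    audio_output(mapping_protocol(v))
</syntaxhighlight>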

Data Input

In a sonification system, which is our final product, the data feed the sound engine, and particular sounds are the output. The inputs and outputs are mapped onto each other following a protocol that establishes which sounds are played according to which data. So first we need to know and understand the data we want to sonify. We must know what we want to say with our system: what we will talk about. We must know how the data change (usually we have time-based data, but there can also be spatially referenced data, like maps) and which characteristics of their behavior we want to represent. For example, if you have a single value (like the luminosity of a star, the linear position of a car, the number of likes on a YouTube channel, the number of new posts on Wikipedia, etc.) you can choose to play a sound when this value exceeds a certain threshold, play a sound that gets louder as the value increases, or play a sound when the values are rising or falling in time. In some cases it is useful to determine the highest and the lowest value within the whole range of values available: in terms of output, this can help define a “container” of initial values that bounds the range of deviations in the output. We can highlight certain features of the data. There are many types of data; the most common are listed below (a short sketch after the list illustrates each type in code):

'''Single data:''' indicating an ON-OFF state (Boolean data).

'''A single data value covering a range of values:''' usually mapped to a single sound or sound feature, like pitch, BPM (beats per minute), or an effect, but it can control more than one feature or sound at once.

'''Multiple data:''' more than one data stream of the previous types. Usually many types of data are collected at the same time, so these data sets consist of several layers of synchronized data.
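A short sketch of the three data types in plain Python (the value ranges, BPM bounds, and stream names are invented for illustration):

<syntaxhighlight lang="python">
def sonify_boolean(state):
    """Single data: an ON-OFF state triggers a sound or silence."""
    return "alarm tone" if state else None

def sonify_range(value, vmin, vmax, low_bpm=60, high_bpm=180):
    """A single value in a known range mapped to a tempo in BPM."""
    frac = (value - vmin) / (vmax - vmin)
    return low_bpm + frac * (high_bpm - low_bpm)

def sonify_layers(readings):
    """Multiple data: one synchronized sound layer per data stream."""
    return {name: sonify_range(v, 0.0, 1.0) for name, v in readings.items()}

print(sonify_boolean(True))                      # -> 'alarm tone'
print(sonify_range(25.0, vmin=0.0, vmax=40.0))   # e.g. a temperature -> BPM
print(sonify_layers({"light": 0.8, "humidity": 0.3}))
</syntaxhighlight>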

Sound has an advantage over visual perception in that more layers of data can be perceived at the same time. Changes in patterns are more easily detected by listening than by looking, especially if the amount of data is very large. So, in sum, we need to consider the data we have, how they evolve in time, how they are arranged, and which salient parts we want to use to feed our sonification system. We have to ask ourselves: “what will the sound mean?” We need to understand that the data are not the message! We must metabolize the data and their behavior and find what message will be triggering sound.

And therefore, before all this, we need to ask ourselves: what is the purpose of the sonification? Will it be applied continuously, perhaps in the background, or only after some time of collecting data, or both?

Real-Time Sonification vs “A Posteriori”

According to the use of the sonification system (to analyze or to monitor a certain phenomenon) we distinguish two “modes”:

'''Real-time (to monitor)''' - a stream of data is sonified instantly, and a sound is produced to display the value and behavior of the data at that particular moment;

'''“A posteriori” (to analyze)''' - time-series sonification, in which a set of pre-recorded data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series.

These two methods are not mutually exclusive and can eventually display the same sounds. The difference is that in an “a posteriori” sonification, because the sound is produced after the events that originated the data, the parameters of the final piece can be adapted, e.g. the total duration. In a real-time case, you can control the time resolution: that is, the time interval at which the sound can change and is played.
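A minimal sketch of an “a posteriori” sonification in plain Python (standard library only; the data series, the 220-880 Hz pitch map, and the file name are invented for illustration). Because the series is pre-recorded, the total duration is a free parameter:

<syntaxhighlight lang="python">
import math, wave, struct

RATE = 22050

def a_posteriori(series, total_seconds, out="series.wav"):
    """Render a pre-recorded series as audio of any chosen total duration.

    Each data point becomes one tone whose pitch follows the value. Because
    the data already exist, `total_seconds` can be adapted freely.
    """
    vmin, vmax = min(series), max(series)
    per_point = total_seconds / len(series)
    with wave.open(out, "w") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
        for v in series:
            frac = (v - vmin) / (vmax - vmin) if vmax > vmin else 0.0
            freq = 220.0 + frac * 660.0   # arbitrary 220-880 Hz pitch map
            for i in range(int(RATE * per_point)):
                s = math.sin(2 * math.pi * freq * i / RATE)
                w.writeframes(struct.pack("<h", int(32767 * 0.8 * s)))

# The same ten readings could be rendered as a 5-second or a 20-second piece:
a_posteriori([3, 5, 4, 8, 9, 7, 6, 8, 10, 4], total_seconds=5)
</syntaxhighlight>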

Mapping Protocol

Audio Output

Unplugged activities

Real-time sonification

''A posteriori'' sonification

References