<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.soundscapes.nuclio.org:443/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mick</id>
	<title>Soundscapes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.soundscapes.nuclio.org:443/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mick"/>
	<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/wiki/Special:Contributions/Mick"/>
	<updated>2026-04-11T18:28:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=192</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=192"/>
		<updated>2024-10-07T14:55:44Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Technical analysis of existing solutions for the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.&lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to review the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds, for example data from the classroom environment or the school grounds. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify the data on the microcontrollers themselves.&lt;br /&gt;
&lt;br /&gt;
=SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software when possible, especially for short-duration courses, because it usually saves the time of installing additional software for each student.&lt;br /&gt;
In the vast field of online audio software, some tools are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms &amp;lt;ref&amp;gt;https://musicalgorithms.org/3.2/&amp;lt;/ref&amp;gt; lets you upload your own data. The drawback is that it assumes your data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format for exchanging musical information between audio platforms. Although some great tools are available, such as libraries for programming languages like Python (for example Sonecules&amp;lt;ref&amp;gt; https://github.com/interactive-sonification/sonecules/&amp;lt;/ref&amp;gt; or MIDItime &amp;lt;ref&amp;gt; https://github.com/mikejcorey/miditime&amp;lt;/ref&amp;gt;) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone &amp;lt;ref&amp;gt; https://twotone.io/ &amp;lt;/ref&amp;gt; software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with limited programming skills and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
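To make the idea of data-to-MIDI mapping concrete, here is a minimal sketch in plain Python of the kind of linear rescaling that libraries such as MIDItime perform; the function name and the default note range are our own illustrative choices, not part of any of the tools named above.

```python
def map_to_midi_notes(values, low_note=48, high_note=84):
    """Linearly rescale numeric data points onto MIDI note numbers
    between low_note and high_note (C3..C6 by default)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# A week of temperature readings becomes a short melody:
# the hottest day maps to the highest note, the coldest to the lowest.
notes = map_to_midi_notes([12.5, 14.0, 11.0, 18.5, 21.0, 19.5, 16.0])
```

The resulting note numbers can then be written to a MIDI file or sent to any synthesizer or audio platform that speaks the protocol.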
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, while giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers available on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many versions and clones thanks to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
the device is programmable with a graphical user interface accessible through an internet browser for free, without the need to create an account;&lt;br /&gt;
the board already includes several sensors, including environmental sensors for light, temperature, magnetism, acceleration, and sound, as well as a small piezoelectric speaker, allowing students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
there is an official data-logging library available for the micro:bit &amp;lt;ref&amp;gt; https://makecode.microbit.org/reference/datalogger &amp;lt;/ref&amp;gt; that allows students to record data over time very easily.&lt;br /&gt;
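To illustrate how logged readings travel from the board to a sonification tool, the sketch below parses an invented extract of the CSV table that a micro:bit data-logging session produces; the column names depend entirely on how the log was configured, so treat them as assumptions.

```python
import csv
import io

# Hypothetical extract of a micro:bit data log (the columns here are
# made up; real logs contain whatever the program chose to record).
raw_log = """Time (seconds),Light,Temperature
0,112,21
10,96,21
20,143,22
30,201,23
"""

# Parse the log into numeric rows, ready to be re-saved as a .csv
# file and uploaded to a tool such as TwoTone.
reader = csv.DictReader(io.StringIO(raw_log))
rows = [{key: float(value) for key, value in row.items()} for row in reader]
```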
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode &amp;lt;ref&amp;gt; https://www.microsoft.com/en-us/makecode &amp;lt;/ref&amp;gt;, is easily accessible through any internet browser and supports the simple Blockly &amp;lt;ref&amp;gt; https://developers.google.com/blockly &amp;lt;/ref&amp;gt; visual code editor as well as JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing students to give existing devices a second life and reduce waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters such as scale and tempo. These are the main reasons that made both practical choices for schools. Both can work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while always keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=191</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=191"/>
		<updated>2024-10-07T14:52:07Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Technical analysis of existing solutions for the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.&lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to review the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds, for example data from the classroom environment or the school grounds. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify the data on the microcontrollers themselves.&lt;br /&gt;
&lt;br /&gt;
=SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software when possible, especially for short-duration courses, because it usually saves the time of installing additional software for each student.&lt;br /&gt;
In the vast field of online audio software, some tools are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms &amp;lt;ref&amp;gt;https://musicalgorithms.org/3.2/&amp;lt;/ref&amp;gt; lets you upload your own data. The drawback is that it assumes your data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format for exchanging musical information between audio platforms. Although some great tools are available, such as libraries for programming languages like Python (for example Sonecules&amp;lt;ref&amp;gt; https://github.com/interactive-sonification/sonecules/&amp;lt;/ref&amp;gt; or MIDItime &amp;lt;ref&amp;gt; https://github.com/mikejcorey/miditime&amp;lt;/ref&amp;gt;) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone &amp;lt;ref&amp;gt; https://twotone.io/ &amp;lt;/ref&amp;gt; software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with limited programming skills and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, while giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers available on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many versions and clones thanks to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
the device is programmable with a graphical user interface accessible through an internet browser for free, without the need to create an account;&lt;br /&gt;
the board already includes several sensors, including environmental sensors for light, temperature, magnetism, acceleration, and sound, as well as a small piezoelectric speaker, allowing students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
there is an official data-logging library available for the micro:bit [1] that allows students to record data over time very easily.&lt;br /&gt;
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser and supports the simple Blockly [3] visual code editor as well as JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing students to give existing devices a second life and reduce waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters such as scale and tempo. These are the main reasons that made both practical choices for schools. Both can work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while always keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=190</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=190"/>
		<updated>2024-10-07T14:49:40Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Technical analysis of existing solutions for the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.&lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to review the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds, for example data from the classroom environment or the school grounds. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify the data on the microcontrollers themselves.&lt;br /&gt;
&lt;br /&gt;
=SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software when possible, especially for short-duration courses, because it usually saves the time of installing additional software for each student.&lt;br /&gt;
In the vast field of online audio software, some tools are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms [1] lets you upload your own data. The drawback is that it assumes your data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format for exchanging musical information between audio platforms. Although some great tools are available, such as libraries for programming languages like Python (for example Sonecules [2] or MIDItime [3]) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone [4] software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with limited programming skills and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
[1] https://musicalgorithms.org/3.2/&lt;br /&gt;
[2] https://github.com/interactive-sonification/sonecules/&lt;br /&gt;
[3] https://github.com/mikejcorey/miditime&lt;br /&gt;
[4] https://twotone.io/&lt;br /&gt;
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, while giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers available on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many versions and clones thanks to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
the device is programmable with a graphical user interface accessible through an internet browser for free, without the need to create an account;&lt;br /&gt;
the board already includes several sensors, including environmental sensors for light, temperature, magnetism, acceleration, and sound, as well as a small piezoelectric speaker, allowing students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
there is an official data-logging library available for the micro:bit [1] that allows students to record data over time very easily.&lt;br /&gt;
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser and supports the simple Blockly [3] visual code editor as well as JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing students to give existing devices a second life and reduce waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters such as scale and tempo. These are the main reasons that made both practical choices for schools. Both can work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while always keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Unplugged_activities&amp;diff=181</id>
		<title>Unplugged activities</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Unplugged_activities&amp;diff=181"/>
		<updated>2024-09-26T14:08:28Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As a first approach for a class of students or another audience new to the sonification process, and to digital tools, it is important to introduce some games and activities based on personal, human-to-human communication. This will help break the ice within the group, help the students understand the fundamental concepts of the sonification workflow, and create a positive, relaxed atmosphere that will help in the following steps. It also allows us to explain and apply the core concepts of a sonification system (it must be composed of input data, an output sound, and a relation (protocol/mapping) between them). We can use clapping as the output sound, since its rhythm can easily be modulated. Real instruments would be welcome.&lt;br /&gt;
&lt;br /&gt;
== Example 1: sonifying a person&#039;s position in a trajectory ==&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes pilot 1.jpg|thumb|right|alt=Person walking before student class between two points A and B |Unplugged Sonification]]&lt;br /&gt;
&lt;br /&gt;
A person positions herself in the space in front of an audience, between two points A and B. The audience starts clapping like the beeping of a car parking sensor, with the clapping frequency proportional to the position of the person between the two points.&lt;br /&gt;
&lt;br /&gt;
This is a good moment to introduce students to different types of variables:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary variables&#039;&#039;&#039;: the simplest variables, which basically give ON/OFF information. &lt;br /&gt;
&lt;br /&gt;
Example: in a scenario where someone is in front of a crowd, people clap to indicate that the person is &#039;on&#039; (present). Conversely, if the person is not on stage, nobody claps.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Single variable with a range&#039;&#039;&#039;: it represents a quantity with values from a minimum to a maximum. &lt;br /&gt;
&lt;br /&gt;
Example: sonify the position of a student moving along a line, between points A and B. &lt;br /&gt;
&lt;br /&gt;
# Divide the class into two groups: (1) “active sound generators” and (2) “listeners”.&lt;br /&gt;
# The students of group 1 look at the moving student and clap “more or less” (let them decide what that means) depending on the proximity of the student to point A or point B.&lt;br /&gt;
# The students of group 2 evaluate the result of the clapping without visual contact with the moving student. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The purpose is to make the students think about and discuss all the details that a sonification system has to take into account.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The experiment can be developed further. For example:&lt;br /&gt;
&lt;br /&gt;
# A student from group 2 can reproduce the movement of the moving student according to the clapping coming from group 1 (thus evaluating the sonification information). &lt;br /&gt;
# His/her classmates can try to help him/her, through verbal contact, to refine the position using only their ears. &lt;br /&gt;
# The teacher can ask the class to freeze at a particular moment and ask the two groups to evaluate their positions. &lt;br /&gt;
# The groups then exchange roles.&lt;br /&gt;
&lt;br /&gt;
Other modifications to this example can be made; for example, the clapping can be performed by a single person or by many, to emphasize two concepts: 1) the protocol is subjective and 2) many sounds together generate confusion (“noise”).&lt;br /&gt;
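If the class wants to make the “parking sensor” protocol explicit, it can be written down as a tiny rule. The sketch below uses our own illustrative numbers, which are not part of the activity itself: it maps a position between A (0.0) and B (1.0) to the number of seconds between claps.

```python
def clap_interval(position, slow=2.0, fast=0.2):
    """Map a position between point A (0.0) and point B (1.0) to the
    seconds between claps: the closer to B, the faster the clapping,
    like a car parking sensor."""
    position = min(max(position, 0.0), 1.0)  # clamp onto the segment
    return slow - position * (slow - fast)
```

With these numbers, standing at A gives one clap every 2 seconds, while standing at B gives five claps per second; writing the rule down like this is exactly what "defining the mapping protocol" means.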
&lt;br /&gt;
== Example 2 ==&lt;br /&gt;
&lt;br /&gt;
[[File:4seasons data ss.png |thumb|right|alt=Four seasons data sheet example |Four seasons data sheet example. Source: https://datosclima.es/]]&lt;br /&gt;
&lt;br /&gt;
Start by asking your students what spring, summer, autumn, or winter sounds like. Brainstorm sounds that could characterize a season: perhaps precipitation, temperature, or wind speed? &lt;br /&gt;
&lt;br /&gt;
Provide the students with a table of data on temperature, precipitation, atmospheric pressure, wind speed, and any other data that you consider relevant for sonifying a season. You can provide the daily data for a specific year in your region. &lt;br /&gt;
&lt;br /&gt;
Students will have to analyze this data and reflect on the following questions: how can this data be represented? The idea may come up to represent this information visually, with graphs; but how can this data be encoded with sound?&lt;br /&gt;
&lt;br /&gt;
Having done the introduction and thought about how to sonify this data, let&#039;s get down to work. &lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Data collection and analysis&#039;&#039;&#039;. Students should select the most important features and data to represent the seasons. They can choose only the data for a particular season, or be encouraged to sonify the data for a whole year, thus showing all four seasons. They can sonify each value in the table (one per day) or compute monthly or fortnightly averages. Let them experiment and see what is most representative for interpreting the data. They should also choose the variables used to sonify the season, e.g. temperature and atmospheric pressure.&lt;br /&gt;
# &#039;&#039;&#039;Organize the data&#039;&#039;&#039;. Students must organize the selected data, identify patterns, and define how to represent them with different sounds and instruments. They can choose to play a variable only when its value crosses a certain threshold, or play a sound that gets louder as the values get higher. They can define the maximum and minimum values for each feature, make monthly averages, or create scales of values that map to different sounds. They can play with volume, pitch, and timbre to sonify the range of values. Encourage them to use their bodies (clapping, voice) or items around them (a table, a pen and a bottle) as instruments. Remember that this is an unplugged activity: no digital device will be used for data collection, analysis, or reproduction.&lt;br /&gt;
# &#039;&#039;&#039;Create the algorithm to sonify a season&#039;&#039;&#039;. Students have to create the composition based on the selected data. At this point, they must define the mapping protocol: the algorithm that associates certain sounds with given data, i.e. the set of rules by which the output sounds correspond to the input data, assigning to each value or range of values an instrument, pitch, loudness, timbre, and rhythm.&lt;br /&gt;
# &#039;&#039;&#039;Time to perform!&#039;&#039;&#039; Once the composition has been created and rehearsed, it is time to perform it and share it with the rest of the class. Can the others guess which season it is and which variables have been sonified? Is it a season with a lot of rain and low temperatures, or are temperatures on the rise?&lt;br /&gt;
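The mapping algorithm of step 3 can be written down as explicit rules before the performance. Here is a minimal sketch with made-up thresholds, in which each daily weather record becomes a list of sound labels the class can perform with claps, voice, or objects; the rules themselves are for the students to invent.

```python
def sonify_day(temperature_c, precipitation_mm, rain_threshold=1.0):
    """Turn one day's weather record into performable sound labels
    (an illustrative mapping protocol, not a prescribed one)."""
    sounds = []
    # a pitch band chosen from the temperature
    if temperature_c < 10:
        sounds.append("low note")
    elif temperature_c < 25:
        sounds.append("mid note")
    else:
        sounds.append("high note")
    # percussion whenever there was measurable rain
    if precipitation_mm >= rain_threshold:
        sounds.append("rain tap")
    return sounds

# Three sample days: a wet cold day, a mild dry day, a hot dry day.
score = [sonify_day(t, p) for t, p in [(8, 4.0), (15, 0.0), (29, 0.0)]]
```

Reading the resulting "score" aloud, day by day, is the unplugged performance; changing a threshold changes the whole piece, which is a good discussion point.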
&lt;br /&gt;
Enjoy the results, go back to the previous steps to improve the sonification if necessary and experiment with new variables and ways of representing them. &lt;br /&gt;
&lt;br /&gt;
Finally, you can bring to the classroom The Four Seasons, a group of four violin concerti by the Italian composer Antonio Vivaldi, each of which gives musical expression to a season of the year. Will you be able to guess which season each concerto represents? What data is represented in this composition? The composition includes accompanying poems describing what Vivaldi wanted to represent for each of the seasons; find out more and discover it for yourself!&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=180</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=180"/>
		<updated>2024-09-26T14:08:21Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists of the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: &lt;br /&gt;
1) INPUT DATA; &lt;br /&gt;
2) MAPPING PROTOCOL; &lt;br /&gt;
3) AUDIO OUTPUT; &lt;br /&gt;
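The three components can be sketched as a tiny pipeline. Everything in this sketch (the function names, the value-to-frequency rule, the list standing in for a speaker) is illustrative only:

```python
def sonify(data, mapping, output):
    """Run a sonification system: feed each input datum through the
    mapping protocol and hand the result to the audio output."""
    for value in data:
        output(mapping(value))

played = []
sonify(
    data=[3, 7, 1],                  # 1) INPUT DATA
    mapping=lambda v: 220 + v * 20,  # 2) MAPPING PROTOCOL (value -> Hz)
    output=played.append,            # 3) AUDIO OUTPUT (a list stands in
)                                    #    for a real speaker here)
```

Swapping any one component (a sensor stream for the data, a different rule for the mapping, a synthesizer call for the output) changes the system without touching the other two.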
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[&#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=179</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=179"/>
		<updated>2024-09-26T14:08:00Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists of the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: &lt;br /&gt;
1) INPUT DATA; &lt;br /&gt;
2) MAPPING PROTOCOL; &lt;br /&gt;
3) AUDIO OUTPUT; &lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[&#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Real-time_sonification&amp;diff=178</id>
		<title>Real-time sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Real-time_sonification&amp;diff=178"/>
		<updated>2024-09-26T14:06:40Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-time sonification is an exciting technique that can strongly promote students&#039; engagement in STEAM fields. Real-time sonification means that the process is so fast that we cannot perceive the time interval between the acquisition of the data and the corresponding sound produced by our sonification device. Moreover, the methods for creating sound representations of the data are defined simultaneously with data collection (in &amp;quot;real-time&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Before starting, we want to emphasize that the quality of the sound, which is subjective and therefore depends on the user&#039;s taste, must at the very least not disturb the user; better still, it should be appealing enough to attract their attention. On the other hand, when trying to make something &amp;quot;pleasant&amp;quot; there is a risk of generating sound results that do not fulfill the objective of describing the behavior of the input data well. It is therefore necessary to find a compromise: the sound must be sufficiently pleasant as well as exhaustively informative.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification devices ==&lt;br /&gt;
&lt;br /&gt;
[[File:Microbit.jpg|thumb|right|alt=View of the micro:bit on both sides |The BBC micro:bit microcontroller]]&lt;br /&gt;
&lt;br /&gt;
To create a real-time sonification device it is useful to use a microcontroller. A microcontroller is like a &amp;quot;small and simple computer&amp;quot; with a single processing unit. It is not a computer, though: its architecture is much simpler and it cannot run an operating system. Still, it can be programmed to execute a single program at a time, which can perform multiple tasks, but only sequentially, following the order of the instructions in the program. &lt;br /&gt;
There are several types of microcontrollers, the [https://www.arduino.cc/ Arduino (arduino.cc)] being the most popular.&lt;br /&gt;
&lt;br /&gt;
To begin with, the SoundScapes project suggests using the [https://microbit.org/ BBC micro:bit] microcontroller. This tool is very simple to use, versatile, and includes several embedded sensors that are ready to use, eliminating the need to build a dedicated electrical circuit. The micro:bit can be programmed online with [https://makecode.microbit.org/ Makecode] (using the [https://www.google.com/chrome/  Chrome browser] for better compatibility) in Python, JavaScript, or blocks.&lt;br /&gt;
&lt;br /&gt;
== Sonification with micro:bit ==&lt;br /&gt;
&lt;br /&gt;
Before diving into sonification with the micro:bit you must first get familiar with the [https://makecode.microbit.org/ Makecode] programming environment. On the main page, there are various tutorials, such as &amp;quot;Flashing Heart&amp;quot; and &amp;quot;Name Tag&amp;quot;, from which you can choose to get started. If you sign up on the platform, your projects will be saved to your account and you can access them from any device as long as you sign in. Otherwise, they are saved as browser cookies, and you can lose them if you clear your browser cache.&lt;br /&gt;
&lt;br /&gt;
=== Sound notions in micro:bit ===&lt;br /&gt;
&lt;br /&gt;
In the [https://makecode.microbit.org/#editor Makecode editor], there is a useful and attractive library dedicated to music, especially for young students. This [https://makecode.microbit.org/reference/music music library] offers several commands/blocks that facilitate the generation of sounds and the creation of melodies. There are many blocks and combinations of blocks you can use to generate different kinds of sounds. Here we introduce the most basic blocks and progress to more complex examples. It is a good exercise to play with the different blocks and hear what happens to get familiar with them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode music.png |centre|800 px|alt=Makecode editor music category]]&lt;br /&gt;
&lt;br /&gt;
==== Generate a single tone ====&lt;br /&gt;
&lt;br /&gt;
The following code generates a single tone with a pre-specified frequency (Middle C) and duration (1 beat) when button A is pressed, or a continuous Middle E ring when button B is pressed. It is possible to change the frequency of the tones by clicking the white input fields with the values &amp;quot;Middle C&amp;quot; and &amp;quot;Middle E&amp;quot;. From the drop-down menu arrows, it is also possible to change the beat duration of the &amp;quot;Middle C&amp;quot; tone and whether the sound is played sequentially with other command blocks, in the background, or in a loop &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;&amp;gt; Click the button &amp;quot;Simulator&amp;quot; on the top bar to interact with a virtual micro:bit and test the code. You can edit the code by clicking &amp;quot;Edit&amp;quot; on the top-right corner.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_3PbcX84vRRuJ&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Play a melody ====&lt;br /&gt;
&lt;br /&gt;
To play a melody use the following block and click on it to create the melody:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode melody.png  |centre|500 px|alt=Play melody block]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example code plays two melodies with different bpm values for buttons A and B and stops all sounds when A and B are pressed simultaneously. It is possible to change the melodies by clicking the white input fields with the colorful music notes. As in the previous example, it is also possible to change the beat duration and whether the sound is played sequentially with other command blocks, in the background, or in a loop &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_YoTh7YLWvFbm&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manipulate frequency change, waveform, volume and duration ====&lt;br /&gt;
&lt;br /&gt;
It is also possible to generate more complex sounds by manipulating frequency change, waveform, volume, and duration with the following block:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode complex sounds.png |centre|500 px|alt=Complex sounds block]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example plays two complex sounds sequentially forever &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_gE8UsCAhe2dR&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sonification of a Boolean ===&lt;br /&gt;
&lt;br /&gt;
In computer science, a Boolean, or logical, data type is a fundamental primitive that can hold one of two possible values: true or false, often represented as 1 or 0. To illustrate this concept, we will sonify the simplest data type, the Boolean. Common examples of sensors that produce Boolean data include presence sensors, contact sensors, switches, and buttons.&lt;br /&gt;
&lt;br /&gt;
The following code implements the sonification of a Boolean sensor using the micro:bit, specifically focusing on button A. When the button is pressed, we will hear the note C, and when it is released, the note will change to E. This auditory feedback provides a clear representation of the button&#039;s state, enhancing our understanding of Boolean data in a practical context &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:50%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_4LULCW5kwiPi&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks are evaluated sequentially from top to bottom within the loop block &#039;&#039;&#039;forever&#039;&#039;&#039;, which repeats the following evaluation sequence until something stops the program:&lt;br /&gt;
&lt;br /&gt;
# Set the variable &#039;&#039;&#039;X&#039;&#039;&#039; to the button state (&#039;&#039;&#039;true&#039;&#039;&#039; or &#039;&#039;&#039;false&#039;&#039;&#039;, depending on whether the button is pressed at the time the pink block &#039;&#039;&#039;button A is pressed&#039;&#039;&#039; is evaluated)&lt;br /&gt;
# &#039;&#039;&#039;If&#039;&#039;&#039; the variable/condition &#039;&#039;&#039;X&#039;&#039;&#039; holds &#039;&#039;&#039;true&#039;&#039;&#039; (the button was pressed), &#039;&#039;&#039;ring tone (Hz) Middle C&#039;&#039;&#039;, else, &#039;&#039;&#039;ring tone (Hz) Middle E&#039;&#039;&#039;&lt;br /&gt;
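The if/else logic above can be sketched in ordinary Python (the real program is made of MakeCode blocks running on the micro:bit; the 262 Hz and 330 Hz values for Middle C and Middle E follow standard tuning and are our assumption here, for illustration only):&lt;br /&gt;

```python
# Minimal sketch of the Boolean-to-tone mapping (illustration only;
# the actual program runs as MakeCode blocks on the micro:bit).
MIDDLE_C_HZ = 262  # assumed frequency for MakeCode's "Middle C"
MIDDLE_E_HZ = 330  # assumed frequency for MakeCode's "Middle E"

def boolean_to_tone(button_pressed: bool) -> int:
    """Map a Boolean sensor state to a tone frequency in Hz."""
    return MIDDLE_C_HZ if button_pressed else MIDDLE_E_HZ
```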
&lt;br /&gt;
=== Sonification of a range of values (using input sensors) ===&lt;br /&gt;
&lt;br /&gt;
Most sensors provide a range of values, not just 0 or 1, in which case we must first find out what the lowest and highest possible values are before defining the mapping for sonification. This variable input from the sensor can originate from the light level sensor, the accelerometer, the magnetometer, the intensity of the sound captured by the microphone, or other sensors connected to the micro:bit through the pins. This data can easily be collected by the microcontroller. &lt;br /&gt;
&lt;br /&gt;
==== Change pitch with fixed rhythm ====&lt;br /&gt;
&lt;br /&gt;
In this example, we show how to map the &#039;&#039;&#039;light level&#039;&#039;&#039; to a frequency range. The internal light sensor of the micro:bit provides a value between 0 (dark) and 255 (very bright). We store this input value in the variable &#039;&#039;&#039;x&#039;&#039;&#039;. We also define the variables &#039;&#039;&#039;x-Min&#039;&#039;&#039; and &#039;&#039;&#039;x-Max&#039;&#039;&#039; with the minimum and maximum values of our sensor. To sonify the measured light level, we map its value to a pitch between 200 Hz (minimum value) and 2000 Hz (maximum value), played at a fixed rhythm &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:70%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:S29417-89547-25165-22076&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks within the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Min&#039;&#039;&#039; variable  to the light level lowest possible measured value &#039;&#039;&#039;0&#039;&#039;&#039;.&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Max&#039;&#039;&#039; variable to the light level highest possible measured value &#039;&#039;&#039;255&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the  &#039;&#039;&#039;x&#039;&#039;&#039; variable to the measured &#039;&#039;&#039;light  level&#039;&#039;&#039;&lt;br /&gt;
# Play a 1-beat tone with a frequency resulting from mapping the &#039;&#039;&#039;x&#039;&#039;&#039; value (in the &#039;&#039;&#039;x-Min&#039;&#039;&#039; to &#039;&#039;&#039;x-Max&#039;&#039;&#039; range) to the chosen frequency range in the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
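The arithmetic performed by the &#039;&#039;&#039;map&#039;&#039;&#039; block is a simple linear rescaling; a Python sketch of the math (for illustration only, not code that runs on the micro:bit):&lt;br /&gt;

```python
def linear_map(x, in_min, in_max, out_min, out_max):
    """Linearly rescale x from [in_min, in_max] to [out_min, out_max],
    the same arithmetic the MakeCode 'map' block performs."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

# Light level 0 (dark) -> 200 Hz, 255 (very bright) -> 2000 Hz
frequency = linear_map(128, 0, 255, 200, 2000)  # roughly mid-range
```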
&lt;br /&gt;
==== Change rhythm with fixed pitch ====&lt;br /&gt;
&lt;br /&gt;
Another option is to maintain a fixed pitch while varying the rhythm based on the light level. We can achieve this by playing a short-duration note and introducing pauses that vary in length, ranging from 1000 ms (for dark conditions) to 20 ms (for very bright conditions). This approach allows for a dynamic auditory representation of the changing light levels &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:70%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_F4g6Y9Fd6WRW&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks within the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Min&#039;&#039;&#039; variable  to the light level lowest possible measured value &#039;&#039;&#039;0&#039;&#039;&#039;.&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Max&#039;&#039;&#039; variable to the light level highest possible measured value &#039;&#039;&#039;255&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the  &#039;&#039;&#039;x&#039;&#039;&#039; variable to the measured &#039;&#039;&#039;light  level&#039;&#039;&#039;&lt;br /&gt;
# Play a 1-beat High D tone.&lt;br /&gt;
# Pause for a period calculated from mapping the &#039;&#039;&#039;x&#039;&#039;&#039; value (in the &#039;&#039;&#039;x-Min&#039;&#039;&#039; to &#039;&#039;&#039;x-Max&#039;&#039;&#039; range) to the chosen time range in the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
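The same linear mapping also works with an inverted output range (here, darker means a longer pause); a Python sketch of the arithmetic behind the &#039;&#039;&#039;map&#039;&#039;&#039; block, for illustration only:&lt;br /&gt;

```python
def linear_map(x, in_min, in_max, out_min, out_max):
    # A decreasing mapping simply has out_min > out_max;
    # the linear formula handles this without any special case.
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

pause_dark = linear_map(0, 0, 255, 1000, 20)      # 1000 ms pause in darkness
pause_bright = linear_map(255, 0, 255, 1000, 20)  # 20 ms pause in bright light
```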
&lt;br /&gt;
&#039;&#039;&#039;Reminder:&#039;&#039;&#039; You can replace the &#039;&#039;&#039;light level&#039;&#039;&#039; input block with any other micro:bit sensor [https://makecode.microbit.org/reference/input input block] (or any other sensor connected to the micro:bit through the pins) that provides a range of values. Just be sure to redefine the &#039;&#039;&#039;x-Min&#039;&#039;&#039; and &#039;&#039;&#039;x-Max&#039;&#039;&#039; values accordingly, as the [https://makecode.microbit.org/reference/input/acceleration accelerometer] and the [https://makecode.microbit.org/reference/input/compass-heading compass heading], for instance, work on different ranges.&lt;br /&gt;
&lt;br /&gt;
==== Using external input sensors ====&lt;br /&gt;
&lt;br /&gt;
To use an external digital/analog sensor on a micro:bit pin, or to use, for instance, the I2C protocol (all of these blocks can be found under the advanced categories), you can use the same programs and simply replace the &#039;&#039;&#039;light level&#039;&#039;&#039; input block with the corresponding block as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes realtime digitalread.png|350 px|center|Digital read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes realtime analogread.png|350 px|center|Analog read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soudnscapes realtime i2c.png|700 px|center|i2c]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pay attention to the pin number or the I2C address!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Multiple inputs mapped to a single sound ===&lt;br /&gt;
&lt;br /&gt;
Sonification systems often serve to convey more than one piece of information. We can map as many variables as there are sound parameters we can control, as long as the sound does not become confusing due to multiple sound layers playing simultaneously. Considering that a philharmonic orchestra can have over one hundred musicians, we have some room for overlaying several sounds; this contrasts with visual stimuli, where the number we can follow simultaneously is usually lower than for audio stimuli. Finally, as in an orchestra, when the number of layers is large the sounds have to be carefully arranged together.&lt;br /&gt;
&lt;br /&gt;
The following example sonifies the &#039;&#039;&#039;light level&#039;&#039;&#039;, mapped to pitch, with a pause determined by the &#039;&#039;&#039;compass heading&#039;&#039;&#039;, mapped to milliseconds &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_4w40bdb7LTjV&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sonification via MIDI (The micro:bit as a MIDI instrument) ===&lt;br /&gt;
&lt;br /&gt;
The sound produced by the speaker (buzzer) of the micro:bit has little power and does not reproduce low frequencies. The micro:bit is also very limited in its capacity to generate multiple sounds simultaneously or sounds with more complex timbres. In the last example, we used a &amp;quot;trick&amp;quot; to sonify the values of multiple inputs: we used the pause (the duration of silence between consecutive sounds) as a sonification output. Clever, but what we would really like is several sounds playing simultaneously, each expressing a layer of data. We can obtain better sound quality and play more instruments at the same time using the MIDI protocol.&lt;br /&gt;
&lt;br /&gt;
MIDI is a protocol that facilitates real-time communication between electronic musical instruments. MIDI stands for Musical Instrument Digital Interface and it was developed in the early ’80s for storing, editing, processing, and reproducing sequences of digital events connected to sound-producing electronic instruments, especially those using the 88-note chromatic compass of a piano-keyboard. &lt;br /&gt;
We can roughly, but easily, understand MIDI as the advanced successor of the “piano rolls”, which, more than a century ago, were perforated papers or pinned cylinders in which music performances were either recorded (in real time) or notated (in step time). These paper rolls were then played automatically by specially designed mechanical instruments, the mechanical pianos (pianolas) or music machines, which used them as their “program”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Setup the MIDI ====&lt;br /&gt;
&lt;br /&gt;
The following video explains in detail how to connect the micro:bit to your DAW (Digital Audio Workstation) or digital synthesizer through MIDI on Windows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;iframe width=&amp;quot;560&amp;quot; height=&amp;quot;315&amp;quot; src=&amp;quot;https://www.youtube.com/embed/Gfp9Ve_YUhg?si=jllM2VKnhaePNBS2&amp;amp;amp;start=24&amp;quot; title=&amp;quot;YouTube video player&amp;quot; frameborder=&amp;quot;0&amp;quot; allow=&amp;quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&amp;quot; referrerpolicy=&amp;quot;strict-origin-when-cross-origin&amp;quot; allowfullscreen&amp;gt;&amp;lt;/iframe&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step-by-step instructions (see the video):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Install the [https://makecode.microbit.org/pkg/microsoft/pxt-midi MIDI Extension] for Makecode.&lt;br /&gt;
# Create a [https://makecode.microbit.org/_RKp9zi8Jw11L very basic program using the MIDI extension] to test your setup.&lt;br /&gt;
# Install [https://projectgus.github.io/hairless-midiserial/ Hairless MIDI], open it, and from the serial port drop-down menu select the COM port (USB port) to which the micro:bit is connected.&lt;br /&gt;
# Install [https://www.tobias-erichsen.de/software/loopmidi.html loopMIDI], open it, and click the &#039;&#039;&#039;+&#039;&#039;&#039; button at the bottom-left corner to create a new virtual port.&lt;br /&gt;
# Go back to the Hairless MIDI window and in the MIDI out drop-down menu select &#039;&#039;&#039;loopMIDI port&#039;&#039;&#039;.&lt;br /&gt;
# You might need to unplug and plug in the micro:bit again for it to work.&lt;br /&gt;
# You are ready to play!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How it works:&#039;&#039;&#039; The micro:bit sends MIDI messages through serial communication. These messages are then received by Hairless MIDI, which forwards them to LoopMIDI. Acting as a virtual MIDI port, LoopMIDI makes the MIDI messages accessible to computer software/web apps (like DAWs or digital synthesizers) that receive these messages and generate the corresponding sounds, completing the connection.&lt;br /&gt;
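At the byte level, the MIDI messages traveling over this serial link are tiny. A Python sketch of standard MIDI note-on/note-off messages (the exact bytes emitted by the Makecode MIDI extension may differ; this only illustrates the standard wire format):&lt;br /&gt;

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Standard MIDI note-on: status byte 0x90 | channel (0-15),
    followed by the note number and velocity (both 0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Standard MIDI note-off: status byte 0x80 | channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

msg = note_on(0, 60, 100)  # Middle C on channel 1 (channels are 0-based on the wire)
```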
&lt;br /&gt;
There are plenty of free (and some open-source, cross-platform) DAWs like [https://lmms.io/ LMMS] that you can download and configure to play MIDI input. The easiest method is to play directly from the browser through a web app such as [https://midi.city/ midi.city], the [https://onlinesequencer.net/ Online Sequencer], and many others to discover online. In principle, web apps such as midi.city will readily detect your MIDI instrument (the micro:bit in this case) and you are ready to play after giving the browser permission to access your device (which you will be asked to do).&lt;br /&gt;
&lt;br /&gt;
MIDI is a powerful tool for sonification because it allows you to control a wide range of sound parameters, such as pitch, volume, and timbre. This setup allows multiple micro:bits to send MIDI data to a single synthesizer, enabling synchronized sonification of multiple data streams. It also allows a single micro:bit to send MIDI data over multiple MIDI channels.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; On Linux, install [http://www.varal.org/ttymidi/ ttymidi] instead of Hairless MIDI and loopMIDI.&lt;br /&gt;
&lt;br /&gt;
==== Sensor data over MIDI ====&lt;br /&gt;
&lt;br /&gt;
Previous examples using sensor data can be adapted to send data over MIDI with the Makecode MIDI extension, meaning that the sounds are played not by the micro:bit but by a properly configured computer application or web app. The following example maps the &#039;&#039;&#039;light level&#039;&#039;&#039; to MIDI notes and sends them through MIDI channel 1 &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:60%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_gdURLxbmvCqo&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks inside the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Show a fancy musical note icon on the LED screen just to make it nicer.&lt;br /&gt;
# Set the &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; variable to &#039;&#039;&#039;midi channel 1&#039;&#039;&#039;. Thus any changes to the variable &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; are actions on the MIDI channel 1. &lt;br /&gt;
# &#039;&#039;&#039;midi use raw serial&#039;&#039;&#039; is what will get the micro:bit to &amp;quot;talk&amp;quot; to the MIDI output device.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;Note&#039;&#039;&#039; variable to a MIDI note by mapping the &#039;&#039;&#039;light level&#039;&#039;&#039; range of possible values to the chosen MIDI range 40 to 85 (within 0 to 127) using the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
# Set the sound volume of &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; (on MIDI channel 1) to 100.&lt;br /&gt;
# Play MIDI note &#039;&#039;&#039;Note&#039;&#039;&#039; (measured light level mapped to MIDI) with &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; (on MIDI channel 1).&lt;br /&gt;
# Pause for 250 ms.&lt;br /&gt;
# Stop playing the MIDI note &#039;&#039;&#039;Note&#039;&#039;&#039;.&lt;br /&gt;
# Pause for 100 ms.&lt;br /&gt;
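Since MIDI note numbers are integers from 0 to 127, the mapping step needs rounding to whole notes; a Python sketch of the light-to-note mapping used above (for illustration only, not code that runs on the micro:bit):&lt;br /&gt;

```python
def light_to_midi_note(light_level: int) -> int:
    """Map the micro:bit light level (0-255) to the chosen
    MIDI note range 40-85, rounded to a whole note number."""
    note = 40 + light_level * (85 - 40) / 255
    return round(note)
```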
&lt;br /&gt;
==== Using multiple MIDI channels ====&lt;br /&gt;
&lt;br /&gt;
This example maps the &#039;&#039;&#039;light level&#039;&#039;&#039; to MIDI and uses multiple MIDI channels allowing one to choose to play the notes either with a button or by shaking the micro:bit &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:75%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_it6bszWsMeyq&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The logic behind this example is very similar to the previous one. However, an extra MIDI channel, 10 (it could have been any other number between 1 and 16), is set &#039;&#039;&#039;on start&#039;&#039;&#039; as the variable &#039;&#039;&#039;Instrument_2&#039;&#039;&#039;. Thus, any changes to this variable are actions on MIDI channel 10. The mapping of the light level to MIDI is still set within the loop, but the &#039;&#039;&#039;Instrument_1&#039;&#039;&#039;-related blocks and &#039;&#039;&#039;pauses&#039;&#039;&#039; were moved to the input block &#039;&#039;&#039;on button B pressed&#039;&#039;&#039;. The input block &#039;&#039;&#039;on shake&#039;&#039;&#039; simply repeats the same code for &#039;&#039;&#039;Instrument_2&#039;&#039;&#039;. Note that when you play a note, irrespective of the instrument chosen, a musical note appears on and disappears from the LED screen.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references group=&amp;quot;Note&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=177</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=177"/>
		<updated>2024-09-26T14:05:41Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists of the design and building of a sonification system. A sonification system can be built in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT; &lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[3.2 Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[&#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=176</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=176"/>
		<updated>2024-09-26T14:04:53Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists of the design and building of a sonification system. A sonification system can be built in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT; &lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[3.2 Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[&#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.3_%27%27a_posteriori%27%27_sonification&amp;diff=175</id>
		<title>3.3 &#039;&#039;a posteriori&#039;&#039; sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.3_%27%27a_posteriori%27%27_sonification&amp;diff=175"/>
		<updated>2024-09-26T14:02:59Z</updated>

		<summary type="html">&lt;p&gt;Mick: Created page with &amp;quot;The great majority of sonification examples available on the web are audio files that represent a sequence of several data layers of a certain phenomena (physical, astronomical but also metadata, web statistics, economics, health parameters) during a certain period of time. These are stored data converted to an audio file.   We call &amp;#039;&amp;#039;a posteriori&amp;#039;&amp;#039; when the data is sonified after being collected and stored. &amp;#039;&amp;#039;a posteriori&amp;#039;&amp;#039; comes from Latin and means &amp;quot;from the latter&amp;quot; o...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The great majority of sonification examples available on the web are audio files that represent a sequence of several data layers of a certain phenomenon (physical or astronomical, but also metadata, web statistics, economics, or health parameters) over a certain period of time. These are stored data converted to an audio file. &lt;br /&gt;
&lt;br /&gt;
We call sonification &#039;&#039;a posteriori&#039;&#039; when the data are sonified after being collected and stored. &#039;&#039;a posteriori&#039;&#039; comes from Latin and means &amp;quot;from the latter&amp;quot; or &amp;quot;from the one behind&amp;quot;. It is usually used in philosophy to refer to a statement that comes after experience. While in real-time sonification we do not know exactly what the next data input will be, in &#039;&#039;a posteriori&#039;&#039; sonification we can take our time to analyze the stored sequence of data, adjust our output sounds, and test them. The data set is translated into a sound piece as a whole.&lt;br /&gt;
&lt;br /&gt;
== TwoTone ==&lt;br /&gt;
&lt;br /&gt;
There are many &#039;&#039;a posteriori&#039;&#039; sonification applications that you can find online. [https://twotone.io/ TwoTone] is a free web application, developed with support from Google, that allows you to generate sounds from data. It comes with its own regularly updated database of example data sets, which shows how popular sonification has become.&lt;br /&gt;
&lt;br /&gt;
TwoTone is a flexible tool that allows one to add multiple data tracks from a data source and map the data to a chosen scale, choosing the instrument, the octave range, the starting octave, the tempo, etc. The user can also add soundtracks from the database, upload their own soundtracks, and record sound from the microphone.&lt;br /&gt;
&lt;br /&gt;
=== Using TwoTone ===&lt;br /&gt;
&lt;br /&gt;
TwoTone comes with a built-in database that you can explore to test the software&#039;s features. Once you feel comfortable with it, you can import your own data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes twotone  db example.webm|center|&#039;&#039;a posteriori&#039;&#039; sonification example with TwoTone.]]&lt;br /&gt;
&lt;br /&gt;
=== Prepare and import data ===&lt;br /&gt;
&lt;br /&gt;
After collecting and storing the data you want to sonify (either with a device like a microcontroller, a computer, or a smartphone, from a web source, or by hand), you have to prepare the data in the right format. For instance, imagine you have collected air pollution data from an air quality station:&lt;br /&gt;
&lt;br /&gt;
# Open  Excel (or another equivalent spreadsheet software) and write the first row as headers. Use simple headers like &amp;quot;Timestamp&amp;quot; and &amp;quot;CO2&amp;quot;. This row will be treated as the names of the data fields.&lt;br /&gt;
# The rows below the headers should contain the actual data.&lt;br /&gt;
# Save your file in .csv format.&lt;br /&gt;
&lt;br /&gt;
In this example, the data file should look like the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes twotone file example.png|300 px|center|TwoTone file format.]]&lt;br /&gt;
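&lt;br /&gt;
If you prefer, the same file can be generated with a short script. The following is a minimal sketch (the file name and the CO2 readings are hypothetical) using Python&#039;s standard csv module:&lt;br /&gt;

```python
import csv

# Hypothetical CO2 readings collected by hand or exported from a logger:
# (timestamp, ppm) pairs.
rows = [
    ("2024-09-26 10:00", 415),
    ("2024-09-26 10:10", 428),
    ("2024-09-26 10:20", 440),
]

# The first row holds the headers; TwoTone treats it as the field names.
with open("air_quality.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Timestamp", "CO2"])
    writer.writerows(rows)
```

The resulting air_quality.csv is ready to upload to TwoTone as a data source.&lt;br /&gt;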
&lt;br /&gt;
&lt;br /&gt;
When selecting a data source, upload your file in the rectangular box, either by clicking it and browsing your local folders or by dragging the file into it:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes twotone upload.png |500 px|center|TwoTone upload file]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data from the Web ==&lt;br /&gt;
&lt;br /&gt;
There are many certified/credited sources of data online that you can use in your sonification projects. Here is a list of suggestions:&lt;br /&gt;
&lt;br /&gt;
* [https://ourworldindata.org/ Our World in Data] -  comprehensive online resource that provides accessible data and research on global development, covering topics such as health, education, and the environment.&lt;br /&gt;
* [https://www.pordata.pt/ Pordata] - a Portuguese online database that offers statistical information on various aspects of Portugal&#039;s society, economy, and demographics, facilitating access to data for research and analysis.&lt;br /&gt;
* [https://datosclima.es/ Datos Clima]- Spanish platform that provides access to climate data and information, focusing on the impacts of climate change and promoting awareness and research on environmental issues.&lt;br /&gt;
&lt;br /&gt;
== Collect and store data with micro:bit ==&lt;br /&gt;
&lt;br /&gt;
There are many different ways to collect and store data. Using a microcontroller can be very helpful if you are designing your own data collection device, and the micro:bit is a great choice, as it is flexible and easy to use. If you are not yet familiar with the micro:bit microcontroller, we recommend starting with the [https://wiki.soundscapes.nuclio.org/wiki/Real-time_sonification real-time sonification] SoundScapes wiki page, where we introduce quick tutorials and examples.&lt;br /&gt;
&lt;br /&gt;
To store data on the micro:bit you first need to install the Makecode extension [https://makecode.microbit.org/reference/datalogger datalogger]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode datalogger.gif |600 px|center|Install datalogger extension]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Using internal sensors ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is an example of how you can program the micro:bit to collect and log data on the board. The example logs the &#039;&#039;&#039;acceleration strength&#039;&#039;&#039; input, but another internal sensor (sound level, light level, compass heading, temperature) or external sensor can be used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:75%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_2id6ata7gKa7&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To access the data, open the micro:bit in your file explorer/manager and open the file &#039;&#039;&#039;MY_DATA.HTM&#039;&#039;&#039;. Note that you can also copy the data, save it in .csv format (ready to import into TwoTone), or visualize it.&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes mb datalog.png|600 px|center|Access logged data]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Using external sensors ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If, instead of using a micro:bit sensor, you want to collect data from an external digital/analog sensor on a micro:bit pin, or for instance via the I2C protocol (all of these blocks can be found under the advanced categories), you can use the same program and simply replace the &#039;&#039;&#039;acceleration&#039;&#039;&#039; input block with the corresponding block as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes digitalreadpin.png|350 px|center|Digital read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes analogreadpin4.png|350 px|center|Analog read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes i2c.png|700 px|center|i2c pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pay attention to the pin number or the I2C address!&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Real-time_sonification&amp;diff=174</id>
		<title>Real-time sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Real-time_sonification&amp;diff=174"/>
		<updated>2024-09-26T14:01:47Z</updated>

		<summary type="html">&lt;p&gt;Mick: Replaced content with &amp;quot;this page is accessible from the main page&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;this page is accessible from the main page&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.2_Real-time_sonification&amp;diff=173</id>
		<title>3.2 Real-time sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.2_Real-time_sonification&amp;diff=173"/>
		<updated>2024-09-26T14:01:30Z</updated>

		<summary type="html">&lt;p&gt;Mick: Created page with &amp;quot;Real-time sonification is an exciting technique that can strongly promote students&amp;#039; engagement in STEAM fields. Real-time sonification means that we are not able to perceive the time interval between the acquisition of the data and the respective sound produced by our sonification device because of the speed of the process. Moreover, the methods for creating sound representations of the data are defined simultaneously with data collection (in &amp;quot;real-time&amp;quot;).  Before starti...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-time sonification is an exciting technique that can strongly promote students&#039; engagement in STEAM fields. Real-time sonification means that, because the process is so fast, we cannot perceive the time interval between the acquisition of the data and the corresponding sound produced by our sonification device. Moreover, the methods for creating sound representations of the data are defined simultaneously with data collection (in &amp;quot;real-time&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Before starting, we want to emphasize that the quality of the sound, which is subjective and therefore depends on the user&#039;s taste, must at the very least not disturb the user; better still, it should be appealing enough to attract their attention. On the other hand, when trying to make something &amp;quot;pleasant&amp;quot; there is a risk of generating sound results that do not fulfill the objective of describing the behavior of the input data well. It is therefore necessary to find a compromise: the sound must be sufficiently pleasant as well as exhaustively informative.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification devices ==&lt;br /&gt;
&lt;br /&gt;
[[File:Microbit.jpg|thumb|right|alt=View of the micro:bit on the both sides |The BBC micro:bit microcontroller]]&lt;br /&gt;
&lt;br /&gt;
To create a real-time sonification device it is useful to use a microcontroller. These are like &amp;quot;small and simple computers&amp;quot; with a single processor unit. They are not full computers, though: their architecture is much simpler and they cannot run an operating system. Still, they can be programmed to execute a single program at a time, which can perform multiple tasks, but sequentially, according to the order of the instructions listed in the program. &lt;br /&gt;
There are several types of microcontrollers, the [https://www.arduino.cc/ Arduino (arduino.cc)] being the most popular.&lt;br /&gt;
&lt;br /&gt;
To begin with, the SoundScapes project suggests using the [https://microbit.org/ BBC micro:bit] microcontroller. This tool is very simple to use, versatile, and includes several embedded sensors readily available to use, eliminating the need to build a specific electrical circuit for operation. The micro:bit can be programmed online with [https://makecode.microbit.org/ Makecode] (using the [https://www.google.com/chrome/  Chrome browser] for better compatibility) in Python, JavaScript, or blocks.&lt;br /&gt;
&lt;br /&gt;
== Sonification with micro:bit ==&lt;br /&gt;
&lt;br /&gt;
Before diving into sonification with the micro:bit you must first familiarize yourself with the [https://makecode.microbit.org/ Makecode] programming environment. On the main page, there are various tutorials, such as &amp;quot;Flashing Heart&amp;quot; and &amp;quot;Name Tag&amp;quot;, from which you can choose to get started. If you sign up on the platform, your projects will be saved to your account and you can access them from any device as long as you sign in. Otherwise, they are only saved locally in your browser, and you can lose them if you clear your browser data.&lt;br /&gt;
&lt;br /&gt;
=== Sound notions in micro:bit ===&lt;br /&gt;
&lt;br /&gt;
In the [https://makecode.microbit.org/#editor Makecode editor], there is a useful and attractive library dedicated to music, especially for young students. This [https://makecode.microbit.org/reference/music music library] offers several commands/blocks that facilitate the generation of sounds and the creation of melodies. There are many blocks and combinations of blocks you can use to generate different kinds of sounds. Here we introduce the most basic blocks and progress to more complex examples. It is a good exercise to play with the different blocks and listen to what happens in order to get familiar with them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode music.png |centre|800 px|alt=Makecode editor music category]]&lt;br /&gt;
&lt;br /&gt;
==== Generate a single tone ====&lt;br /&gt;
&lt;br /&gt;
The following code generates a single tone with a pre-specified frequency (Middle C) and duration (1 beat) when button A is pressed, or a continuous Middle E ring when button B is pressed. It is possible to change the frequency of the tones by clicking the white input fields with the values &amp;quot;Middle C&amp;quot; and &amp;quot;Middle E&amp;quot;. From the drop-down menu arrows, it is also possible to change the beat duration of the &amp;quot;Middle C&amp;quot; tone and whether the sound is played sequentially with other command blocks, in the background, or in a loop &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;&amp;gt; Click the button &amp;quot;Simulator&amp;quot; on the top bar to interact with a virtual micro:bit and test the code. You can edit the code by clicking &amp;quot;Edit&amp;quot; on the top-right corner.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_3PbcX84vRRuJ&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
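&lt;br /&gt;
Note that named tones such as Middle C correspond to fixed frequencies: in equal temperament, each semitone multiplies the frequency by 2^(1/12), anchored at A4 = 440 Hz. As an illustration (this is not the Makecode code itself, just the underlying arithmetic), the rounded values that the editor shows can be computed from MIDI note numbers:&lt;br /&gt;

```python
# Equal-tempered note frequency: each semitone multiplies the frequency
# by 2^(1/12), anchored at A4 (MIDI note 69) = 440 Hz.
def note_frequency(midi_note: int) -> float:
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Middle C is MIDI note 60, Middle E is MIDI note 64.
print(round(note_frequency(60)))  # 262 Hz (Middle C)
print(round(note_frequency(64)))  # 330 Hz (Middle E)
```

These rounded values (262 Hz and 330 Hz) match the ones shown in the white input fields of the tone blocks.&lt;br /&gt;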
&lt;br /&gt;
==== Play a melody ====&lt;br /&gt;
&lt;br /&gt;
To play a melody use the following block and click on it to create the melody:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode melody.png  |centre|500 px|alt=Play melody block]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example code plays two melodies with different bpm values for buttons A and B and stops all sounds when A and B are pressed simultaneously. It is possible to change the melodies by clicking the white input fields with the colorful music notes. As in the previous example, it is also possible to change the beat duration and whether the sound is played sequentially with other command blocks, in the background, or in a loop &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_YoTh7YLWvFbm&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manipulate frequency change, waveform, volume and duration ====&lt;br /&gt;
&lt;br /&gt;
It is also possible to generate more complex sounds by manipulating frequency change, waveform, volume, and duration with the following block:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes makecode complex sounds.png |centre|500 px|alt=Complex sounds block]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example plays two complex sounds sequentially forever &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_gE8UsCAhe2dR&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sonification of a Boolean ===&lt;br /&gt;
&lt;br /&gt;
In computer science, a Boolean, or logical, data type is a fundamental primitive that can hold one of two possible values: true or false, often represented as 1 or 0. To illustrate this concept, we will sonify the simplest data type, the Boolean. Common examples of sensors that produce Boolean data include presence sensors, contact sensors, switches, and buttons.&lt;br /&gt;
&lt;br /&gt;
The following implements the sonification of a Boolean sensor using the micro:bit, specifically focusing on button A. When the button is pressed, we will hear the note C, and when it is released, the note will change to E. This auditory feedback provides a clear representation of the button&#039;s state, enhancing our understanding of Boolean data in a practical context &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:50%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_4LULCW5kwiPi&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks are evaluated sequentially from the top to the bottom within the loop block &#039;&#039;&#039;forever&#039;&#039;&#039; which repeats the following evaluation sequence until something stops the program:&lt;br /&gt;
&lt;br /&gt;
# Set the variable &#039;&#039;&#039;X&#039;&#039;&#039; to the button state (&#039;&#039;&#039;true&#039;&#039;&#039; or &#039;&#039;&#039;false&#039;&#039;&#039;, depending on whether the button is pressed at the time the pink block &#039;&#039;&#039;button A is pressed&#039;&#039;&#039; is evaluated)&lt;br /&gt;
# &#039;&#039;&#039;If&#039;&#039;&#039; the variable/condition &#039;&#039;&#039;X&#039;&#039;&#039; holds &#039;&#039;&#039;true&#039;&#039;&#039; (the button was pressed), &#039;&#039;&#039;ring tone (Hz) Middle C&#039;&#039;&#039;, else, &#039;&#039;&#039;ring tone (Hz) Middle E&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Sonification of a range of values (using input sensors) ===&lt;br /&gt;
&lt;br /&gt;
Most sensors provide a range of values, not just 0 or 1, in which case we must first find out what the lowest and highest possible values are before defining the mapping for sonification. This variable input from the sensor can originate from the light level sensor, the accelerometer, the magnetometer, the intensity of the sound captured by the microphone, or other sensors connected to the micro:bit through the pins. This data can easily be collected by the microcontroller. &lt;br /&gt;
&lt;br /&gt;
==== Change pitch with fixed rhythm ====&lt;br /&gt;
&lt;br /&gt;
In this example, we show how to map the &#039;&#039;&#039;light level&#039;&#039;&#039; to a frequency range. The internal light sensor of the micro:bit provides a value between 0 (dark) and 255 (very bright). We call this input value variable &#039;&#039;&#039;x&#039;&#039;&#039;. We also define the variables &#039;&#039;&#039;x-Min&#039;&#039;&#039; and &#039;&#039;&#039;x-Max&#039;&#039;&#039; with the minimum and maximum values of our sensor. For the purpose of sonifying the measured light level, we will map the value of the light level to a pitch between 200 Hz (minimum value) and 2000 Hz (maximum value), played at a fixed rhythm &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:70%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:S29417-89547-25165-22076&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks within the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Min&#039;&#039;&#039; variable to the lowest possible measured light level, &#039;&#039;&#039;0&#039;&#039;&#039;.&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Max&#039;&#039;&#039; variable to the highest possible measured light level, &#039;&#039;&#039;255&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the  &#039;&#039;&#039;x&#039;&#039;&#039; variable to the measured &#039;&#039;&#039;light  level&#039;&#039;&#039;&lt;br /&gt;
# Play a 1-beat tone with a frequency resulting from mapping the &#039;&#039;&#039;x&#039;&#039;&#039; value (in the &#039;&#039;&#039;x-Min&#039;&#039;&#039; to &#039;&#039;&#039;x-Max&#039;&#039;&#039; range) to the chosen frequency range in the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
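&lt;br /&gt;
The &#039;&#039;&#039;map&#039;&#039;&#039; block performs a simple linear rescaling. As an illustration (this is plain Python, not the Makecode code itself), the arithmetic behind it can be sketched as:&lt;br /&gt;

```python
def linear_map(x, in_min, in_max, out_min, out_max):
    """Rescale x from [in_min, in_max] to [out_min, out_max]
    (the arithmetic behind the 'map' block)."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

# Light level 0..255 mapped to the 200..2000 Hz pitch range:
print(linear_map(0, 0, 255, 200, 2000))    # 200.0 Hz (dark)
print(linear_map(255, 0, 255, 200, 2000))  # 2000.0 Hz (very bright)
```

A light level halfway up the sensor range lands halfway up the frequency range, at 1100 Hz.&lt;br /&gt;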
&lt;br /&gt;
==== Change rhythm with fixed pitch ====&lt;br /&gt;
&lt;br /&gt;
Another option is to maintain a fixed pitch while varying the rhythm based on the light level. We can achieve this by playing a short-duration note and introducing pauses that vary in length, ranging from 1000 ms (for dark conditions) to 20 ms (for very bright conditions). This approach allows for a dynamic auditory representation of the changing light levels &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:70%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_F4g6Y9Fd6WRW&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks within the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Min&#039;&#039;&#039; variable  to the light level lowest possible measured value &#039;&#039;&#039;0&#039;&#039;&#039;.&lt;br /&gt;
# Set the &#039;&#039;&#039;x-Max&#039;&#039;&#039; variable to the light level highest possible measured value &#039;&#039;&#039;255&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the  &#039;&#039;&#039;x&#039;&#039;&#039; variable to the measured &#039;&#039;&#039;light  level&#039;&#039;&#039;&lt;br /&gt;
# Play a 1-beat High D tone.&lt;br /&gt;
# Pause for a period calculated from mapping the &#039;&#039;&#039;x&#039;&#039;&#039; value (in the &#039;&#039;&#039;x-Min&#039;&#039;&#039; to &#039;&#039;&#039;x-Max&#039;&#039;&#039; range) to the chosen time range in the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
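&lt;br /&gt;
Note that here the output range of the &#039;&#039;&#039;map&#039;&#039;&#039; block is reversed: the darker the room, the longer the pause. The same linear rescaling formula handles this without modification; a minimal Python sketch of the arithmetic (not the Makecode code itself):&lt;br /&gt;

```python
def linear_map(x, in_min, in_max, out_min, out_max):
    # The usual linear rescaling; a reversed output range
    # (out_min greater than out_max) simply inverts the slope.
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

# Dark (0) -> 1000 ms pause, very bright (255) -> 20 ms pause:
print(linear_map(0, 0, 255, 1000, 20))    # 1000.0 ms
print(linear_map(255, 0, 255, 1000, 20))  # 20.0 ms
```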
&lt;br /&gt;
&#039;&#039;&#039;Reminder:&#039;&#039;&#039; You can replace the &#039;&#039;&#039;light level&#039;&#039;&#039; input block with any other micro:bit sensor [https://makecode.microbit.org/reference/input input block] (or any other sensor connected to the micro:bit through the pins) that provides a range of values. Just be sure to redefine the &#039;&#039;&#039;x-Min&#039;&#039;&#039; and &#039;&#039;&#039;x-Max&#039;&#039;&#039; values accordingly, as the [https://makecode.microbit.org/reference/input/acceleration accelerometer] and the [https://makecode.microbit.org/reference/input/compass-heading compass heading], for instance, work on different ranges.&lt;br /&gt;
&lt;br /&gt;
==== Using external input sensors ====&lt;br /&gt;
&lt;br /&gt;
To use an external digital/analog sensor on a micro:bit pin, or for instance via the I2C protocol (all of these blocks can be found under the advanced categories), you can use the same programs and simply replace the &#039;&#039;&#039;light level&#039;&#039;&#039; input block with the corresponding block as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes realtime digitalread.png|350 px|center|Digital read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes realtime analogread.png|350 px|center|Analog read pin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Soudnscapes realtime i2c.png|700 px|center|i2c]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pay attention to the pin number or the I2C address!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Multiple inputs mapped to a single sound ===&lt;br /&gt;
&lt;br /&gt;
Sonification systems often serve to provide more than one piece of information. We can map as many variables as there are sound parameters we can control, as long as the sound does not become confusing due to multiple sound layers playing simultaneously. Considering that a philharmonic orchestra can have over one hundred musicians, we have some room for overlaying several sounds; this contrasts with visual stimuli, where the number we can follow at once is usually lower than for audio stimuli. Finally, as in an orchestra, the sounds have to be carefully arranged together when their number is large.&lt;br /&gt;
&lt;br /&gt;
The following sonifies the &#039;&#039;&#039;light level&#039;&#039;&#039; mapped to pitch, with a pause defined by the &#039;&#039;&#039;compass heading&#039;&#039;&#039; mapped to milliseconds &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:40%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_4w40bdb7LTjV&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sonification via MIDI (The micro:bit as a MIDI instrument) ===&lt;br /&gt;
&lt;br /&gt;
The sound produced by the speaker (buzzer) of the micro:bit has little power and does not play low frequencies. The micro:bit is also very limited in its capacity to generate multiple sounds simultaneously or sounds with more complex timbres. In the last example, we used a &amp;quot;trick&amp;quot; to sonify the values of multiple inputs: we used the pause (the duration of silence between consecutive sounds) as a sonification output. Smart, but what we would really like is several sounds playing simultaneously, each expressing a different layer of data. We can obtain better sound quality and play more instruments at the same time using the MIDI protocol.&lt;br /&gt;
&lt;br /&gt;
MIDI is a protocol that facilitates real-time communication between electronic musical instruments. MIDI stands for Musical Instrument Digital Interface and it was developed in the early ’80s for storing, editing, processing, and reproducing sequences of digital events connected to sound-producing electronic instruments, especially those using the 88-note chromatic compass of a piano-keyboard. &lt;br /&gt;
We can roughly, but easily, understand MIDI as the advanced successor of the “piano rolls”, which, more than a century ago, were perforated papers or pinned cylinders, in which music performances were either recorded (in real-time) or notated (in step time). These paper-rolls were then played automatically by specially designed mechanical instruments, the mechanical pianos (pianolas) or music machines, using them as their “program”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Setup the MIDI ====&lt;br /&gt;
&lt;br /&gt;
The following video explains in detail how to connect the micro:bit to your DAW (Digital Audio Workstation) or digital synthesizer through MIDI on Windows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;iframe width=&amp;quot;560&amp;quot; height=&amp;quot;315&amp;quot; src=&amp;quot;https://www.youtube.com/embed/Gfp9Ve_YUhg?si=jllM2VKnhaePNBS2&amp;amp;amp;start=24&amp;quot; title=&amp;quot;YouTube video player&amp;quot; frameborder=&amp;quot;0&amp;quot; allow=&amp;quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&amp;quot; referrerpolicy=&amp;quot;strict-origin-when-cross-origin&amp;quot; allowfullscreen&amp;gt;&amp;lt;/iframe&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step-by-step instructions (see the video):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Install the [https://makecode.microbit.org/pkg/microsoft/pxt-midi MIDI Extension] for Makecode.&lt;br /&gt;
# Create a [https://makecode.microbit.org/_RKp9zi8Jw11L very basic program using the MIDI extension] to test your setup.&lt;br /&gt;
# Install [https://projectgus.github.io/hairless-midiserial/ Hairless MIDI], open it, and from the serial port drop-down menu select the COM port (USB port) to which the micro:bit is connected.&lt;br /&gt;
# Install [https://www.tobias-erichsen.de/software/loopmidi.html loopMIDI], open it, and click the &#039;&#039;&#039;+&#039;&#039;&#039; button at the bottom-left corner to create a new virtual port.&lt;br /&gt;
# Go back to the Hairless MIDI window and, from the MIDI Out drop-down menu, select &#039;&#039;&#039;loopMIDI port&#039;&#039;&#039;.&lt;br /&gt;
# You might need to unplug and plug in the micro:bit again for it to work.&lt;br /&gt;
# You are ready to play!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How it works:&#039;&#039;&#039; The micro:bit sends MIDI messages through serial communication. These messages are then received by Hairless MIDI, which forwards them to LoopMIDI. Acting as a virtual MIDI port, LoopMIDI makes the MIDI messages accessible to computer software/web apps (like DAWs or digital synthesizers) that receive these messages and generate the corresponding sounds, completing the connection.&lt;br /&gt;
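&lt;br /&gt;
For the curious, the MIDI messages travelling along this chain are tiny: a &amp;quot;note on&amp;quot; is just three bytes (status, note number, velocity). The following plain-Python sketch (illustrative only, not part of the MakeCode extension) builds such messages:&lt;br /&gt;

```python
def note_on(channel, note, velocity):
    """Raw 3-byte MIDI Note On message.
    channel: 1-16 as humans count it; note and velocity: 0-127."""
    if not (1 <= channel <= 16 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("value out of MIDI range")
    # Status byte: high nibble 0x9 means Note On, low nibble holds the
    # zero-based channel number.
    return bytes([0x90 | (channel - 1), note, velocity])

def note_off(channel, note):
    """Raw Note Off message (status nibble 0x8)."""
    return bytes([0x80 | (channel - 1), note, 0])
```

On the micro:bit, bytes like these go out over the USB serial port; Hairless MIDI reads them and hands them to the virtual loopMIDI port.&lt;br /&gt;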
&lt;br /&gt;
There are plenty of free (some open-source and cross-platform) DAWs, such as [https://lmms.io/ LMMS], that you can download and configure to play&lt;br /&gt;
MIDI input. The easiest method is to play directly from the browser through a web app such as [https://midi.city/ midi.city], the [https://onlinesequencer.net/ Online Sequencer], and many others to discover online. In principle, web apps such as midi.city will readily detect your MIDI instrument (the micro:bit in this case), and you are ready to play after giving the browser permission to access your device (which you will be asked to do).&lt;br /&gt;
&lt;br /&gt;
MIDI is a powerful tool for sonification because it allows you to control a wide range of sound parameters, such as pitch, volume, and timbre. This setup allows multiple micro:bits to send MIDI data to a single synthesizer, enabling synchronized sonification of multiple data streams. It also allows a single micro:bit to send MIDI data over multiple MIDI channels.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; On Linux, install [http://www.varal.org/ttymidi/ ttymidi] instead of Hairless MIDI and loopMIDI.&lt;br /&gt;
&lt;br /&gt;
==== Sensor data over MIDI ====&lt;br /&gt;
&lt;br /&gt;
Previous examples using sensor data can be adapted to send their data over MIDI with the MakeCode MIDI extension, meaning that the sounds play not on the micro:bit itself but through a properly configured computer application or web app. The following example maps the &#039;&#039;&#039;light level&#039;&#039;&#039; to MIDI notes and sends them through MIDI channel 1 &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:60%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_gdURLxbmvCqo&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The blocks inside the &#039;&#039;&#039;on start&#039;&#039;&#039; block are evaluated sequentially before anything else in the program when the micro:bit is turned on.&lt;br /&gt;
&lt;br /&gt;
# Show a fancy musical note icon on the LED screen just to make it nicer.&lt;br /&gt;
# Set the &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; variable to &#039;&#039;&#039;midi channel 1&#039;&#039;&#039;. Thus, any changes to the variable &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; act on MIDI channel 1. &lt;br /&gt;
# &#039;&#039;&#039;midi use raw serial&#039;&#039;&#039; is what will get the micro:bit to &amp;quot;talk&amp;quot; to the MIDI output device.&lt;br /&gt;
&lt;br /&gt;
The blocks within the block &#039;&#039;&#039;forever&#039;&#039;&#039; are evaluated sequentially in a loop from top to bottom after the &#039;&#039;&#039;on start&#039;&#039;&#039; sequence:&lt;br /&gt;
&lt;br /&gt;
# Set the &#039;&#039;&#039;Note&#039;&#039;&#039; variable to a MIDI note by mapping the &#039;&#039;&#039;light level&#039;&#039;&#039; range of possible values to the chosen MIDI note range 40 to 85 (MIDI notes run from 0 to 127) using the &#039;&#039;&#039;map&#039;&#039;&#039; block.&lt;br /&gt;
# Set the sound volume of &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; (on MIDI channel 1) to 100.&lt;br /&gt;
# Play MIDI note &#039;&#039;&#039;Note&#039;&#039;&#039; (measured light level mapped to MIDI) with &#039;&#039;&#039;Instrument_1&#039;&#039;&#039; (on MIDI channel 1).&lt;br /&gt;
# Pause for 250 ms.&lt;br /&gt;
# Stop playing the MIDI note &#039;&#039;&#039;Note&#039;&#039;&#039;.&lt;br /&gt;
# Pause for 100 ms.&lt;br /&gt;
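&lt;br /&gt;
The arithmetic performed by the &#039;&#039;&#039;map&#039;&#039;&#039; block in step 1 is a simple linear rescaling, which can be reproduced in plain Python (a sketch for clarity; on the micro:bit the MakeCode block does this for you):&lt;br /&gt;

```python
def map_range(value, from_low, from_high, to_low, to_high):
    """Linear rescaling, like the MakeCode map block."""
    return to_low + (value - from_low) * (to_high - to_low) / (from_high - from_low)

# Light level 0..255 mapped onto the chosen MIDI note range 40..85:
lowest = map_range(0, 0, 255, 40, 85)     # darkest reading -> note 40
highest = map_range(255, 0, 255, 40, 85)  # brightest reading -> note 85
```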
&lt;br /&gt;
==== Using multiple MIDI channels ====&lt;br /&gt;
&lt;br /&gt;
This example maps the &#039;&#039;&#039;light level&#039;&#039;&#039; to MIDI and uses multiple MIDI channels allowing one to choose to play the notes either with a button or by shaking the micro:bit &amp;lt;ref name=&amp;quot;code&amp;quot; group=&amp;quot;Note&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;position:relative;height:0;padding-bottom:75%;overflow:hidden;&amp;quot;&amp;gt;&amp;lt;iframe style=&amp;quot;position:absolute;top:0;left:0;width:100%;height:100%;&amp;quot; src=&amp;quot;https://makecode.microbit.org/#pub:_it6bszWsMeyq&amp;quot; frameborder=&amp;quot;0&amp;quot; sandbox=&amp;quot;allow-popups allow-forms allow-scripts allow-same-origin&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Detailed explanation of the code:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The logic behind this example is very similar to the previous one. However, an extra MIDI channel, 10 (it could have been any other number between 1 and 16), is set &#039;&#039;&#039;on start&#039;&#039;&#039; as the variable &#039;&#039;&#039;Instrument_2&#039;&#039;&#039;. Thus, any changes to this variable act on MIDI channel 10. The mapping of the light level to MIDI is still set within the loop, but the &#039;&#039;&#039;Instrument_1&#039;&#039;&#039;-related blocks and &#039;&#039;&#039;pauses&#039;&#039;&#039; were moved to the input block &#039;&#039;&#039;on button B pressed&#039;&#039;&#039;. The input block &#039;&#039;&#039;on shake&#039;&#039;&#039; repeats the same code for &#039;&#039;&#039;Instrument_2&#039;&#039;&#039;. Note that when you play a note, irrespective of the instrument chosen, a musical note appears and disappears from the LED screen.&lt;br /&gt;
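&lt;br /&gt;
In raw MIDI terms, the only difference between the two instruments is the channel number encoded in each message&#039;s status byte. A plain-Python sketch of the dispatch (the event names and channel numbers simply mirror this example; this is not MakeCode code):&lt;br /&gt;

```python
# Each input event plays the current note on its own MIDI channel,
# mirroring the example program: button B -> channel 1, shake -> channel 10.
EVENT_CHANNELS = {"button_B": 1, "shake": 10}

def play_event(event, note, velocity=100):
    """Raw Note On bytes for the channel assigned to this input event."""
    channel = EVENT_CHANNELS[event]            # 1-16, human numbering
    # Note On status 0x90 plus the zero-based channel in the low nibble.
    return bytes([0x90 | (channel - 1), note, velocity])
```

A synthesizer listening on the virtual port can then assign a different instrument to each channel, so the two events sound distinct.&lt;br /&gt;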
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references group=&amp;quot;Note&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Unplugged_activities&amp;diff=172</id>
		<title>Unplugged activities</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Unplugged_activities&amp;diff=172"/>
		<updated>2024-09-26T14:01:06Z</updated>

		<summary type="html">&lt;p&gt;Mick: Replaced content with &amp;quot;this page is accessible from the main page&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;this page is accessible from the main page&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.1_Unplugged_activities&amp;diff=171</id>
		<title>3.1 Unplugged activities</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=3.1_Unplugged_activities&amp;diff=171"/>
		<updated>2024-09-26T13:59:39Z</updated>

		<summary type="html">&lt;p&gt;Mick: Created page with &amp;quot;As a first approach to a class of students or another audience who are new to the sonification process, and also to digital tools, it is important to introduce some games and activities based on personal human-to-human communication. This will help break the ice within a group of students for them to understand the fundamental concepts of the sonification workflow, and create a positive and relaxed atmosphere that will help in the following steps. It also allows us to ex...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As a first approach for a class of students, or another audience new to the sonification process and to digital tools, it is important to introduce some games and activities based on personal human-to-human communication. This will help break the ice within a group of students so they understand the fundamental concepts of the sonification workflow, and create a positive and relaxed atmosphere that will help in the following steps. It also allows us to explain and apply the core concepts of a sonification system (it must be composed of input data, an output sound, and a relation (protocol/mapping) between them). We can use clapping as the output sound: its rhythm can easily be modulated. Real instruments are welcome too.&lt;br /&gt;
&lt;br /&gt;
== Example 1: sonifying a person&#039;s position in a trajectory ==&lt;br /&gt;
&lt;br /&gt;
[[File:Soundscapes pilot 1.jpg|thumb|right|alt=Person walking before student class between two points A and B |Unplugged Sonification]]&lt;br /&gt;
&lt;br /&gt;
A person positions herself in the space before an audience, between two points A and B. The audience starts clapping like the beeping of a car parking sensor, with the clapping frequency proportional to the position of the person between the two points.&lt;br /&gt;
&lt;br /&gt;
This is a good moment to introduce students to different types of variables:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary variables&#039;&#039;&#039;: These are the simplest variables, basically carrying ON/OFF information. &lt;br /&gt;
&lt;br /&gt;
Example: In a scenario where someone is in front of a crowd, people clap to indicate that the person is &#039;on&#039; or present. Conversely, if someone is not on stage, they are not clapping.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Single variable with a range&#039;&#039;&#039;: It represents a quantity with values from a minimum to a maximum. &lt;br /&gt;
&lt;br /&gt;
Example: sonify the position of a student moving along a line, between points A and B. &lt;br /&gt;
&lt;br /&gt;
# Divide the class into two groups: (1) “active sound generators” and (2) “listeners”&lt;br /&gt;
# The students of group (1) look at the moving student and clap “more or less” (let them decide what that means) depending on the proximity of the student to point A or point B.&lt;br /&gt;
# The students of group (2) evaluate the result of the clapping without visual contact with the moving student. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The purpose is to make the students think about and discuss all the details that a sonification system has to take into account.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The experiment can be extended. For example:&lt;br /&gt;
&lt;br /&gt;
# A student from group 2 can reproduce the movement of the moving student according to the clapping coming from group 1 (thus evaluating the sonification information). &lt;br /&gt;
# His/her classmates can try to help him/her, through verbal contact, to refine the position using only their ears. &lt;br /&gt;
# The teacher can ask the class to freeze at a particular moment and ask the two groups to evaluate the positions. &lt;br /&gt;
# The groups then swap roles.&lt;br /&gt;
&lt;br /&gt;
Other modifications to this example can be made, e.g. the clapping can be performed by a single person, or by many, to emphasize two concepts: 1) the protocol is subjective and 2) many sounds together generate confusion (“noise”).&lt;br /&gt;
&lt;br /&gt;
== Example 2 ==&lt;br /&gt;
&lt;br /&gt;
[[File:4seasons data ss.png |thumb|right|alt=Four seasons data sheet example |Four seasons data sheet example. Source: https://datosclima.es/]]&lt;br /&gt;
&lt;br /&gt;
Start by asking your students what spring, summer, autumn, or winter sounds like. Brainstorm sounds that could characterize a season: perhaps precipitation, temperature, or wind speed? &lt;br /&gt;
&lt;br /&gt;
Provide the students with a table with data on temperature, precipitation, atmospheric pressure, wind speed, and other data that you consider relevant for sonifying a season. You can provide the daily data for a specific year in your region. &lt;br /&gt;
&lt;br /&gt;
Students will have to analyze this data and reflect on the following questions: how can this data be represented? The idea may come up to represent this information visually, with graphs, but how can it be encoded with sound?&lt;br /&gt;
&lt;br /&gt;
Having done the introduction and thought about how to sonify this data, let&#039;s get down to work. &lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Data collection and analysis&#039;&#039;&#039;. Students should take the most important features and data to represent the seasons. They should choose only the data for a particular season, or perhaps be encouraged to sonify the data for a whole year, thus showing all four seasons. They can choose to sonify each value of the table (one per day) or make monthly averages or by fortnights. Let them experiment and see what is most representative for interpreting the data. They should also choose the variables to sonify the season, e.g. temperature and atmospheric pressure.&lt;br /&gt;
# &#039;&#039;&#039;Organize the data&#039;&#039;&#039;. Students must organize the selected data, identify patterns, and define how to represent them with different sounds and instruments. Students can choose to play a variable when its value is higher or lower than a certain threshold value or play a sound that gets louder when the values are higher. Students can define the maximum and minimum values for each feature, make monthly averages, or create scales of values that will then represent different sounds. They can play with volume, pitch, and timbre to sonify the range of values. Encourage them to use their body (clapping, voice) or use items around them (a table, a pen with a bottle) as instruments. Remember that this is an unplugged activity, no digital device will be used for data collection, analysis, or reproduction.&lt;br /&gt;
# &#039;&#039;&#039;Create the algorithm to sonify a season&#039;&#039;&#039;. Students have to create the composition based on the selected data. At this point, students must define the mapping protocol. To do so, they must create the algorithm that associates certain sounds to defined data, i.e. create the set of rules by which the output sounds correspond to the input data assigning to each value or range of values an instrument, pitch, loudness, timbre, and rhythm.&lt;br /&gt;
# &#039;&#039;&#039;Time to perform!&#039;&#039;&#039; Once the composition has been created and rehearsed, it is time to perform it and share it with the rest of the class. Can you guess what season it is, what variables you have sounded, is it a season with a lot of rain and low temperatures or are temperatures on the rise?&lt;br /&gt;
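&lt;br /&gt;
Although the activity itself is unplugged, the mapping protocol the students write down in step 3 is an algorithm, and it can help the teacher to see one written out precisely. A sketch of one possible protocol, with all thresholds invented purely for illustration:&lt;br /&gt;

```python
def sounds_for_day(temperature_c, precipitation_mm):
    """One possible mapping protocol (all thresholds invented):
    return the performance instructions for one day of weather data."""
    sounds = []
    if precipitation_mm > 5:          # rainy day -> drum on the table
        sounds.append("drum on table")
    if temperature_c >= 25:           # hot day  -> fast clapping
        sounds.append("fast clapping")
    elif temperature_c >= 10:         # mild day -> slow clapping
        sounds.append("slow clapping")
    else:                             # cold day -> a single clap
        sounds.append("single clap")
    return sounds
```

The students&#039; own protocol need not be code at all: a table of rules on paper plays exactly the same role.&lt;br /&gt;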
&lt;br /&gt;
Enjoy the results, go back to the previous steps to improve the sonification if necessary and experiment with new variables and ways of representing them. &lt;br /&gt;
&lt;br /&gt;
Finally, you can bring to the classroom The Four Seasons composition, a group of four violin concerti by Italian composer Antonio Vivaldi, each of which gives musical expression to a season of the year. Will you be able to guess which one it is? What data is represented in this composition? The composition includes accompanying poems describing what Vivaldi wanted to represent in relation to each of the seasons, find out more information and discover it for yourself!&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=170</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=170"/>
		<updated>2024-09-26T13:59:09Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists in the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT. &lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[3.2 Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[3.3 &#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=169</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=169"/>
		<updated>2024-09-26T13:58:51Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists in the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
&lt;br /&gt;
[[3.2 Real-time sonification]]&lt;br /&gt;
&lt;br /&gt;
[[3.3 &#039;&#039;a posteriori&#039;&#039; sonification]]&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=168</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=168"/>
		<updated>2024-09-26T13:55:43Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists in the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.1 Unplugged activities]]&lt;br /&gt;
3.2 Real-time sonification&lt;br /&gt;
3.3 &#039;&#039;a posteriori&#039;&#039; sonification&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=167</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=167"/>
		<updated>2024-09-26T13:55:25Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A sonification activity consists in the design and building of a sonification system. A sonification system can be accomplished in many different ways, but three components must always be considered: 1) INPUT DATA; 2) MAPPING PROTOCOL; 3) AUDIO OUTPUT. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Input ==&lt;br /&gt;
&lt;br /&gt;
== Mapping Protocol ==&lt;br /&gt;
&lt;br /&gt;
== Audio Output ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.1 Unplugged activities&lt;br /&gt;
3.2 Real-time sonification&lt;br /&gt;
3.3 &#039;&#039;a posteriori&#039;&#039; sonification&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=166</id>
		<title>Sonification in practice</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Sonification_in_practice&amp;diff=166"/>
		<updated>2024-09-26T13:52:54Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
&lt;br /&gt;
== Mapping or protocol ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.1 Unplugged activities&lt;br /&gt;
3.2 Real-time sonification&lt;br /&gt;
3.3 &#039;&#039;a posteriori&#039;&#039; sonification&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=165</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=165"/>
		<updated>2024-09-25T19:17:56Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:SoundscapesLogo.png|centre|500 px|alt=SoundScapes logo]]&lt;br /&gt;
&lt;br /&gt;
= The SoundScapes Project =&lt;br /&gt;
&lt;br /&gt;
[[File:Pillars of creation.mp4|thumb|right|Sonification example of astronomy data: Hubble’s telescope photograph of the Eagle nebula – The pillars of creation.]]&lt;br /&gt;
&lt;br /&gt;
[https://soundscapes.nuclio.org/ SoundScapes] is a groundbreaking student and STEAM-centered approach focusing on the arts, empowering students to design their sonification systems for project-based learning of school curricula. This innovative method enhances student engagement and motivation, promoting inclusion, diversity, and competence development. Through exploration of the auditory sense, students will learn to communicate and connect using the universal language of music.&lt;br /&gt;
&lt;br /&gt;
Sound conveys information that is readily accessible to you through mere observation of your body sensations. But did you know it is also possible to “hear the stars”, the brain waves of someone thinking, or even plants “talking to each other”? The process of translating data, like voltage fluctuations, color brightness, light frequencies, or any kind of data, into sound is called sonification. Auditory representations of data offer intuitive understanding and can reveal patterns not easily discernible visually. But besides its aesthetic appeal to students in general, data sonification enhances accessibility for the visually impaired, fostering inclusion in the classroom.&lt;br /&gt;
&lt;br /&gt;
The project offers training opportunities and workshops for teachers and students to implement and support the design, building, and application of sonification environments in school curricula. The goal is to increase their competence profile and digital readiness, develop skills such as programming, electronics, sensors, microcontrollers, etc., and increase their motivation and interest in school and STEAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;The SoundScapes project wiki aims to support the project implementation in schools and go beyond it to reach the general audience, teaching everyone what sonification is and how to use it in STEAM, science communication and arts. The [https://soundscapes.nuclio.org/index.php/about/ team] invites you to know the [https://soundscapes.nuclio.org/ project], explore the wiki, and get involved in the [https://soundscapes.nuclio.org/index.php/community/ community].&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
🎶 Explore the world of sound with us as we delve into the fascinating realm of sonification, where data comes alive through auditory sensations. From “hearing the stars” to interpreting brain waves, we’re unlocking a universe of knowledge through sound. Join us on this journey of discovery and learning STEAM through sonification.&lt;br /&gt;
&lt;br /&gt;
== Begin the journey ==&lt;br /&gt;
&lt;br /&gt;
Start here to learn what sonification is and how to use it in unplugged exercises, real-time digital sonification, and retrospective analysis. By following this framework, you will develop skills in conceptualizing, implementing, and assessing sonification techniques. Learn how to use it in STEAM project-based and design thinking based creative and holistic activities. &lt;br /&gt;
&lt;br /&gt;
; 1. [[The SoundScapes approach to STEAM education]]&lt;br /&gt;
; 2. [[What is sonification]]&lt;br /&gt;
; 3. [[Sonification in practice]]&lt;br /&gt;
: 3.1 [[Unplugged activities]]&lt;br /&gt;
: 3.2 [[Real-time sonification]]&lt;br /&gt;
: 3.3 [[&#039;&#039;a  posteriori&#039;&#039; sonification]]&lt;br /&gt;
; 4. [[Inclusion, diversity and student assessment]]&lt;br /&gt;
; 5. [[Technical analysis of existing solutions for the creation of sonification tools]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
&lt;br /&gt;
* [https://soundscapes.nuclio.org/ SoundScapes project  website]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/about/ The team]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/community/ Community]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/news/ News]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/contact/ Contact]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/newsletter/ Subscribe to the Newsletter]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=164</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=164"/>
		<updated>2024-09-25T18:56:20Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* TECHNICAL ANALYSIS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Technical analysis of existing solutions for the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.  &lt;br /&gt;
For an effective and interactive class that will keep students interested and inspired, and for teachers to be able to guide a sonification class, we had to look into all the available hardware and software tools and their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real-time and build interactive digital systems that receive data through sensors and express them as particular sounds, for example data received from the classroom environment or the school area. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify them on the microcontroller itself.&lt;br /&gt;
&lt;br /&gt;
=SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real-time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend using online software when possible, especially for short-duration courses, because it saves the time of installing additional software for each student.&lt;br /&gt;
Among the many audio applications available online, some are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms [1] offers the possibility to upload your own data. The drawback is that it assumes your data will be mapped onto pitch and duration, allowing you to choose the type of scale but not other aspects like timbre (which instrument is going to play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when we need to control custom types of sound, and serves as a common format to exchange musical information between audio platforms. Although some great tools are available, such as libraries for programming languages like Python (for example Sonecules [2] or MIDItime [3]) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone [4] software. It is free to use and versatile, with a user-friendly interface that allows even beginners, with reduced programming skills and minimal expertise in music and audio, to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
[1] https://musicalgorithms.org/3.2/&lt;br /&gt;
[2] https://github.com/interactive-sonification/sonecules/&lt;br /&gt;
[3] https://github.com/mikejcorey/miditime&lt;br /&gt;
[4] https://twotone.io/&lt;br /&gt;
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, while giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers available on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many different versions and copies due to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
* the device is programmable, for free, with a graphical user interface accessible through an internet browser, without the need to create an account;&lt;br /&gt;
* the board already includes several sensors, among them environmental sensors for light, temperature, magnetism, acceleration and sound, as well as a small piezoelectric speaker, allowing the students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
* the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
* there is an official data logging library available for the micro:bit [1] that allows the students to record data over time very easily.&lt;br /&gt;
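The data-logging workflow (record labelled sensor readings over time, then export them as CSV) can be mimicked on a computer with a few lines of plain Python — a hypothetical stand-in for the on-device library, useful for preparing files that sonification tools can ingest:

```python
import csv
import io

# Desktop stand-in for the micro:bit data-logging workflow:
# append timestamped readings, then export them as CSV text
# (the on-device library produces a similar file on the micro:bit).
class DataLog:
    def __init__(self, *labels):
        self.labels = labels
        self.rows = []

    def add(self, time_s, *values):
        self.rows.append((time_s, *values))

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(("time (s)", *self.labels))
        writer.writerows(self.rows)
        return buf.getvalue()

log = DataLog("temperature", "light")
log.add(0, 21, 180)    # readings a micro:bit could supply
log.add(10, 22, 175)
print(log.to_csv())
```

A CSV produced this way (or downloaded from the board) can be uploaded directly to a tool such as TwoTone for sonification.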
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main basic technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser, and programs can be written with the simple Blockly [3] visual code editor as well as in JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing the students to give the devices a second life and reducing waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters, such as scale and tempo. These are the main reasons that made them practical choices for schools. Both work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=163</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=163"/>
		<updated>2024-09-25T18:55:48Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* 1 SOFTWARE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=TECHNICAL ANALYSIS= &lt;br /&gt;
&lt;br /&gt;
Technical analysis of existing solutions, leading to the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.  &lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to look into all the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds: for example, data received from the classroom environment or the school area. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify them on the microcontroller itself. &lt;br /&gt;
&lt;br /&gt;
=SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software whenever possible, especially for short courses, because it saves the time of installing additional software for each student.&lt;br /&gt;
Among the many online audio applications, some are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms [1] lets users upload their own data. The drawback is that it assumes the data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format to exchange musical information between audio platforms. Although some great tools are available, such as Python libraries (for example Sonecules [2] or MIDItime [3]) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone [4] software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with little programming skill and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
[1] https://musicalgorithms.org/3.2/&lt;br /&gt;
[2] https://github.com/interactive-sonification/sonecules/&lt;br /&gt;
[3] https://github.com/mikejcorey/miditime&lt;br /&gt;
[4] https://twotone.io/&lt;br /&gt;
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, and giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many different versions and clones due to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
* the device is programmable, for free, with a graphical user interface accessible through an internet browser, without the need to create an account;&lt;br /&gt;
* the board already includes several sensors, among them environmental sensors for light, temperature, magnetism, acceleration and sound, as well as a small piezoelectric speaker, allowing the students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
* the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
* there is an official data logging library available for the micro:bit [1] that allows the students to record data over time very easily.&lt;br /&gt;
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main basic technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser, and programs can be written with the simple Blockly [3] visual code editor as well as in JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing the students to give the devices a second life and reducing waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters, such as scale and tempo. These are the main reasons that made them practical choices for schools. Both work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=162</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=162"/>
		<updated>2024-09-25T18:55:36Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=TECHNICAL ANALYSIS= &lt;br /&gt;
&lt;br /&gt;
Technical analysis of existing solutions, leading to the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.  &lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to look into all the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds: for example, data received from the classroom environment or the school area. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify them on the microcontroller itself. &lt;br /&gt;
&lt;br /&gt;
=1 SOFTWARE=&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software whenever possible, especially for short courses, because it saves the time of installing additional software for each student.&lt;br /&gt;
Among the many online audio applications, some are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms [1] lets users upload their own data. The drawback is that it assumes the data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format to exchange musical information between audio platforms. Although some great tools are available, such as Python libraries (for example Sonecules [2] or MIDItime [3]) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone [4] software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with little programming skill and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
[1] https://musicalgorithms.org/3.2/&lt;br /&gt;
[2] https://github.com/interactive-sonification/sonecules/&lt;br /&gt;
[3] https://github.com/mikejcorey/miditime&lt;br /&gt;
[4] https://twotone.io/&lt;br /&gt;
&lt;br /&gt;
=HARDWARE=&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, and giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many different versions and clones due to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
* the device is programmable, for free, with a graphical user interface accessible through an internet browser, without the need to create an account;&lt;br /&gt;
* the board already includes several sensors, among them environmental sensors for light, temperature, magnetism, acceleration and sound, as well as a small piezoelectric speaker, allowing the students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
* the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
* there is an official data logging library available for the micro:bit [1] that allows the students to record data over time very easily.&lt;br /&gt;
&lt;br /&gt;
=CONCLUSION=&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main basic technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser, and programs can be written with the simple Blockly [3] visual code editor as well as in JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing the students to give the devices a second life and reducing waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters, such as scale and tempo. These are the main reasons that made them practical choices for schools. Both work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=161</id>
		<title>Technical analysis of existing solutions for the creation of sonification tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Technical_analysis_of_existing_solutions_for_the_creation_of_sonification_tools&amp;diff=161"/>
		<updated>2024-09-25T18:53:57Z</updated>

		<summary type="html">&lt;p&gt;Mick: Created page with &amp;quot;TECHNICAL ANALYSIS  Technical analysis of existing solutions to selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.   For an effective and interactive class that will allow students to be interested and inspired, and for teachers to be able to guide a sonification class, we had to look into all the hardware and software tools available and their effectiveness, availability, cost, and the ability to promote impo...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;TECHNICAL ANALYSIS &lt;br /&gt;
Technical analysis of existing solutions, leading to the selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools.  &lt;br /&gt;
For an effective and interactive class that interests and inspires students, and that teachers can confidently guide, we had to look into all the available hardware and software tools and assess their effectiveness, availability, cost, and ability to promote important digital and technological skills and enhance the digital readiness of the involved schools. Our project stands out from similar educational projects because we combine programming skills with electronics to produce data sets in real time and build interactive digital systems that receive data through sensors and express them as particular sounds: for example, data received from the classroom environment or the school area. This is made possible by using microcontrollers to control sensors and feed their data into software, or even to sonify them on the microcontroller itself. &lt;br /&gt;
1 SOFTWARE&lt;br /&gt;
There is a vast choice of software that can be used for the sonification of data, both in real time and “a posteriori”.&lt;br /&gt;
&lt;br /&gt;
Although teachers and students are free to choose which software to use, we recommend online software whenever possible, especially for short courses, because it saves the time of installing additional software for each student.&lt;br /&gt;
Among the many online audio applications, some are more sonification-oriented, and most have customizable parameters to some degree.&lt;br /&gt;
&lt;br /&gt;
For example, MusicAlgorithms [1] lets users upload their own data. The drawback is that it assumes the data will be mapped onto pitch and duration: you can choose the type of scale, but not other aspects such as the timbre (which instrument will play).&lt;br /&gt;
 &lt;br /&gt;
The common and universal MIDI protocol suffices when custom types of sound need to be controlled, and it serves as a common format to exchange musical information between audio platforms. Although some great tools are available, such as Python libraries (for example Sonecules [2] or MIDItime [3]) and dedicated software (e.g. Sonic Pi, Pure Data), we chose the TwoTone [4] software. It is free to use, versatile, and has a user-friendly interface that allows even beginners with little programming skill and minimal expertise in music and audio to generate a consistent sonification output.&lt;br /&gt;
&lt;br /&gt;
[1] https://musicalgorithms.org/3.2/&lt;br /&gt;
[2] https://github.com/interactive-sonification/sonecules/&lt;br /&gt;
[3] https://github.com/mikejcorey/miditime&lt;br /&gt;
[4] https://twotone.io/&lt;br /&gt;
HARDWARE&lt;br /&gt;
Apart from using computers, we also looked into microcontrollers to handle sensors and actuators (e.g. LEDs, motors), adding a hands-on approach to the generation of the data to be sonified, with a DIY attitude that has a greater impact on young students than theoretical books and manuals, and giving priority to low-cost, sustainable materials.&lt;br /&gt;
There are many low-cost microcontrollers on the market (Arduino, BBC micro:bit, Raspberry Pi Pico, ESP32, Teensy, Particle Argon/Boron, etc.).&lt;br /&gt;
&lt;br /&gt;
The most widely used microcontroller is likely the Arduino, which has many different versions and clones due to its open-source nature. Other options include the more complex Raspberry Pi and the more educationally accessible micro:bit.&lt;br /&gt;
&lt;br /&gt;
We chose the micro:bit because it has several advantages over the other microcontrollers available: &lt;br /&gt;
the device is programmable, for free, with a graphical user interface accessible through an internet browser, without the need to create an account;&lt;br /&gt;
the board already includes several sensors, among them environmental sensors for light, temperature, magnetism, acceleration and sound, as well as a small piezoelectric speaker, allowing the students to build interactive digital sonification systems that receive data through sensors in a very short time;&lt;br /&gt;
the micro:bit can also function as a gateway to more complex projects and act as an interface with other devices through MIDI, USB, Bluetooth, radio and other protocols;&lt;br /&gt;
there is an official data logging library available for the micro:bit [1] that allows the students to record data over time very easily.&lt;br /&gt;
CONCLUSION&lt;br /&gt;
In sum, the micro:bit microcontroller and the TwoTone software were chosen as the main basic technologies to start working with sonification, because both provide an interactive and hands-on learning experience for students and teachers alike. The micro:bit is affordable hardware with many embedded sensors, and it is designed for educational purposes. Its programming environment, MakeCode [2], is easily accessible through any internet browser, and programs can be written with the simple Blockly [3] visual code editor as well as in JavaScript and Python. &lt;br /&gt;
Also, the micro:bit is already available in most of the partner institutions and schools, allowing the students to give the devices a second life and reducing waste.  &lt;br /&gt;
The free software TwoTone is designed to allow users with little experience to upload data and generate an audio file with the corresponding sonification. Users can customise a variety of instruments and musical parameters, such as scale and tempo. These are the main reasons that made them practical choices for schools. Both work with various tools, data sets, sound outputs, and complementary devices (including other electronic components and MIDI), allowing students to create and manipulate sound creatively, while keeping in mind that other tools exist and can be explored for particular or more advanced projects.&lt;br /&gt;
&lt;br /&gt;
[1] https://makecode.microbit.org/reference/datalogger&lt;br /&gt;
[2] https://www.microsoft.com/en-us/makecode&lt;br /&gt;
[3] https://developers.google.com/blockly&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=160</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=160"/>
		<updated>2024-09-25T18:53:19Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* Begin the journey */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:SoundscapesLogo.png|centre|500 px|alt=SoundScapes logo]]&lt;br /&gt;
&lt;br /&gt;
= The SoundScapes Project =&lt;br /&gt;
&lt;br /&gt;
[[File:Pillars of creation.mp4|thumb|right|Sonification example of astronomy data: the Hubble Space Telescope’s photograph of the Eagle Nebula – the Pillars of Creation.]]&lt;br /&gt;
&lt;br /&gt;
[https://soundscapes.nuclio.org/ SoundScapes] is a groundbreaking student and STEAM-centered approach focusing on the arts, empowering students to design their sonification systems for project-based learning of school curricula. This innovative method enhances student engagement and motivation, promoting inclusion, diversity, and competence development. Through exploration of the auditory sense, students will learn to communicate and connect using the universal language of music.&lt;br /&gt;
&lt;br /&gt;
Sound conveys information that is readily accessible to you through mere observation of your body sensations. But did you know it is also possible to “hear the stars”, the brain waves of someone thinking, or even plants “talking to each other”? The process of translating data, like voltage fluctuations, color brightness, light frequencies, or any kind of data, into sound is called sonification. Auditory representations of data offer intuitive understanding and can reveal patterns not easily discernible visually. But besides its aesthetic appeal to students in general, data sonification enhances accessibility for the visually impaired, fostering inclusion in the classroom.&lt;br /&gt;
&lt;br /&gt;
The project offers training opportunities and workshops for teachers and students to implement and support the design, building, and application of sonification environments in school curricula. The goal is to increase their competence profile and digital readiness, develop skills such as programming, electronics, sensors, microcontrollers, etc., and increase their motivation and interest in school and STEAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;The SoundScapes project wiki aims to support the project implementation in schools and go beyond it to reach the general audience, teaching everyone what sonification is and how to use it in STEAM, science communication and arts. The [https://soundscapes.nuclio.org/index.php/about/ team] invites you to know the [https://soundscapes.nuclio.org/ project], explore the wiki, and get involved in the [https://soundscapes.nuclio.org/index.php/community/ community].&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
🎶 Explore the world of sound with us as we delve into the fascinating realm of sonification, where data comes alive through auditory sensations. From “hearing the stars” to interpreting brain waves, we’re unlocking a universe of knowledge through sound. Join us on this journey of discovery and learning STEAM through sonification.&lt;br /&gt;
&lt;br /&gt;
== Begin the journey ==&lt;br /&gt;
&lt;br /&gt;
Start here to learn what sonification is and how to use it in unplugged exercises, real-time digital sonification, and retrospective analysis. By following this framework, you will develop skills in conceptualizing, implementing, and assessing sonification techniques. Learn how to use it in STEAM project-based and design thinking based creative and holistic activities. &lt;br /&gt;
&lt;br /&gt;
; 1. [[The SoundScapes approach to STEAM education]]&lt;br /&gt;
; 2. [[What is sonification]]&lt;br /&gt;
; 3. Sonification in practice&lt;br /&gt;
: 3.1 [[Unplugged activities]]&lt;br /&gt;
: 3.2 [[Real-time sonification]]&lt;br /&gt;
: 3.3 [[&#039;&#039;a  posteriori&#039;&#039; sonification]]&lt;br /&gt;
; 4. [[Inclusion, diversity and student assessment]]&lt;br /&gt;
; 5. [[Technical analysis of existing solutions for the creation of sonification tools]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
&lt;br /&gt;
* [https://soundscapes.nuclio.org/ SoundScapes project website]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/about/ The team]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/community/ Community]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/news/ News]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/contact/ Contact]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/newsletter/ Subscribe to the Newsletter]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=159</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=Main_Page&amp;diff=159"/>
		<updated>2024-09-25T18:52:45Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* Begin the journey */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:SoundscapesLogo.png|centre|500 px|alt=SoundScapes logo]]&lt;br /&gt;
&lt;br /&gt;
= The SoundScapes Project =&lt;br /&gt;
&lt;br /&gt;
[[File:Pillars of creation.mp4|thumb|right|Sonification example of astronomy data: the Hubble Space Telescope’s photograph of the Eagle Nebula – the Pillars of Creation.]]&lt;br /&gt;
&lt;br /&gt;
[https://soundscapes.nuclio.org/ SoundScapes] is a groundbreaking student and STEAM-centered approach focusing on the arts, empowering students to design their sonification systems for project-based learning of school curricula. This innovative method enhances student engagement and motivation, promoting inclusion, diversity, and competence development. Through exploration of the auditory sense, students will learn to communicate and connect using the universal language of music.&lt;br /&gt;
&lt;br /&gt;
Sound conveys information that is readily accessible to you through mere observation of your body sensations. But did you know it is also possible to “hear the stars”, the brain waves of someone thinking, or even plants “talking to each other”? The process of translating data, like voltage fluctuations, color brightness, light frequencies, or any kind of data, into sound is called sonification. Auditory representations of data offer intuitive understanding and can reveal patterns not easily discernible visually. But besides its aesthetic appeal to students in general, data sonification enhances accessibility for the visually impaired, fostering inclusion in the classroom.&lt;br /&gt;
&lt;br /&gt;
The project offers training opportunities and workshops for teachers and students to implement and support the design, building, and application of sonification environments in school curricula. The goal is to increase their competence profile and digital readiness, develop skills such as programming, electronics, sensors, microcontrollers, etc., and increase their motivation and interest in school and STEAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;The SoundScapes project wiki aims to support the project implementation in schools and to go beyond it, reaching a general audience and teaching everyone what sonification is and how to use it in STEAM, science communication, and the arts. The [https://soundscapes.nuclio.org/index.php/about/ team] invites you to get to know the [https://soundscapes.nuclio.org/ project], explore the wiki, and get involved in the [https://soundscapes.nuclio.org/index.php/community/ community].&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
🎶 Explore the world of sound with us as we delve into the fascinating realm of sonification, where data comes alive through auditory sensations. From “hearing the stars” to interpreting brain waves, we’re unlocking a universe of knowledge through sound. Join us on this journey of discovery and learning STEAM through sonification.&lt;br /&gt;
&lt;br /&gt;
== Begin the journey ==&lt;br /&gt;
&lt;br /&gt;
Start here to learn what sonification is and how to use it in unplugged exercises, real-time digital sonification, and retrospective analysis. By following this framework, you will develop skills in conceptualizing, implementing, and assessing sonification techniques, and learn how to apply them in creative, holistic STEAM activities based on project-based learning and design thinking. &lt;br /&gt;
&lt;br /&gt;
; 1. [[The SoundScapes approach to STEAM education]]&lt;br /&gt;
; 2. [[What is sonification]]&lt;br /&gt;
; 3. Sonification in practice&lt;br /&gt;
: 3.1 [[Unplugged activities]]&lt;br /&gt;
: 3.2 [[Real-time sonification]]&lt;br /&gt;
: 3.3 [[&#039;&#039;a  posteriori&#039;&#039; sonification]]&lt;br /&gt;
; 4. [[Inclusion, diversity and student assessment]]&lt;br /&gt;
; 5. [[Technical analysis of existing solutions to selection of the most cost-effective and sustainable tools and materials for the creation of sonification tools]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
&lt;br /&gt;
* [https://soundscapes.nuclio.org/ SoundScapes project website]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/about/ The team]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/community/ Community]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/news/ News]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/contact/ Contact]&lt;br /&gt;
* [https://soundscapes.nuclio.org/index.php/newsletter/ Subscribe to the Newsletter]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=158</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=158"/>
		<updated>2024-09-23T16:03:25Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to convey information, we are applying a sonification system: we represent data in the auditory field. We turn data into sounds, and these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action, or a sequence of values tracked from a sensor. Several definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation”&amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt;, and, more precisely, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible”&amp;lt;ref&amp;gt;Hermann, T., Walker, B., &amp;amp; Cook, P. R. (2011). Sonification handbook. Springer.&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing”&amp;lt;ref&amp;gt;Wikipedia, as of 9 April 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
To keep it simple in the context of this manual, we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even simpler terms, we can tell a student that sonification describes data with sound just as visualization does with graphs, flow charts, histograms, etc. &lt;br /&gt;
&lt;br /&gt;
In essence, we want to combine data (input) and sounds (output), and decide how these two are related (the mapping, or protocol). &lt;br /&gt;
A sonification system is therefore defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
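The three parts above can be sketched in a few lines of code. The following is a minimal, hypothetical Python sketch (the readings, the frequency range, and the function name are illustrative assumptions, not part of any SoundScapes tool): it maps input data values linearly onto output pitches.&lt;br /&gt;

```python
def map_to_pitch(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Mapping/protocol: linearly map a data value in [vmin, vmax]
    onto a frequency in [fmin, fmax] (Hz), clamping out-of-range input."""
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)  # clamp to the unit interval
    return fmin + t * (fmax - fmin)

# Input data: a short series of hypothetical temperature readings.
readings = [12.0, 15.5, 19.0, 22.5, 26.0]
# Output sounds: one pitch per reading, ready for any synthesizer.
pitches = [map_to_pitch(r, 12.0, 26.0) for r in readings]
```

Swapping the mapping function (logarithmic, quantized to a musical scale, etc.) changes the character of the sonification without touching the input data or the synthesizer.&lt;br /&gt;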
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from many phenomena. It evolved especially within the astronomical community, owing to the large amounts of data produced by observing the cosmos, but it also serves as an artistic tool and as an educational complement to disciplines such as medicine, mathematics, physics, chemistry, geography, economics, and even literature. In medicine, for example, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (after the fact) from, say, the number of adjectives in a book or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio, because music and sound can themselves fundamentally be described with numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display, and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially to people who have difficulty interpreting visual representations, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification serves a variety of applications, such as analyzing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences, but also education, where it engages students with a scientific notion through audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: sonification can help analyze data that are too complex or abstract to represent visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms)&amp;lt;ref&amp;gt;&amp;quot;The sound of an atom has been captured&amp;quot; (K 2025 news article) - http://www.themindgap.nl/?p=245&amp;lt;/ref&amp;gt;, the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL)&amp;lt;ref&amp;gt;Argan Verrier, Vincent Goudard, Elim Hong, Hugues Genevois. Interactive software for the sonification of neuronal activity. Sound and Music Computing Conference, AIMI (Associazione Italiana di Informatica Musicale); Conservatorio “Giuseppe Verdi” di Torino, Università di Torino, Politecnico di Torino, Jun 2020, Torino (Virtual Conference), Italy. hal-04041917&amp;lt;/ref&amp;gt;, and the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html). Sonification can also be applied when data are recorded in too dense a sequence: time manipulation then allows audible up-scaling, or sound transformations into longer or shorter durations, such as when transforming the seismograph of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: sonification can monitor environmental conditions in real time, for example using the sound of the ocean to track changes in water temperature and pollution levels&amp;lt;ref&amp;gt;(Data Sonification: Acclaimed Musician Transforms Ocean Data into Music) https://www.hubocean.earth/blog/data-sonification as on 23rd September 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
Creating interactive multimedia experiences: sonification can create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, it has been used to create interactive maps&amp;lt;ref&amp;gt;Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles&amp;lt;/ref&amp;gt;, educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon), we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly, and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series. &lt;br /&gt;
These two modes are not mutually exclusive and can even produce the same sounds. The difference is that in “a posteriori” sonification, since the sound is produced after the events that originated the data, the parameters of the final piece can be adapted, e.g. its total duration. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
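The flexibility of the “a posteriori” mode can be made concrete with a small sketch (hypothetical Python; the function and variable names are illustrative): because the data already exist in full, we are free to choose the total duration of the piece and spread the values evenly across it.&lt;br /&gt;

```python
def schedule_a_posteriori(values, total_duration):
    """Spread a pre-recorded data series over a chosen total duration,
    returning (onset_time_seconds, value) pairs for later rendering."""
    step = total_duration / len(values)  # the time step follows from our choice
    return [(i * step, v) for i, v in enumerate(values)]

# The same 60 samples can become a 6-second or a 60-second piece.
events_short = schedule_a_posteriori(list(range(60)), 6.0)   # one event every 0.1 s
events_long = schedule_a_posteriori(list(range(60)), 60.0)   # one event every 1.0 s
```

In the real-time case no such rescheduling is possible: each value must be sonified as it arrives, and the only free parameter is how often a new sound is allowed to start.&lt;br /&gt;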
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter. A sound can be mapped very precisely and yet sound “awful” to the user. This is a defect that can limit the efficacy of the system, because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is, in a sense, artistic: it must take into consideration the type of audience and its taste. This does not mean we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to them. Even though taste is subjective, we would like to recall the work done in the field of acoustic ecology: psychology studies, as well as cultural models of “beauty”, point to some common factors. In the present project, as its name suggests, we reference the work and vision of the Canadian composer R. Murray Schafer, who popularized the term “soundscape” in his 1977 book “The Tuning of the World”&amp;lt;ref&amp;gt;Schafer, R. M. (1977). The Tuning of the World. New York: Knopf.&amp;lt;/ref&amp;gt;. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony, and biophony of a particular environment. Schafer argues that we have become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals, and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise awareness of the effects of noisy acoustic environments, which have been shown to increase stress levels in people immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
SonarX is software designed to transform images and video into meaningful sound, for blind individuals and everyone else&amp;lt;ref&amp;gt;S. Cavaco, J.T. Henrique, M. Mengucci, N. Correia, F. Medeiros, Color sonification for the visually impaired, in Procedia Technology, M. M. Cruz-Cunha, J. Varajão, H. Krcmar and R. Martinho (Eds.), Elsevier, volume 9, pages 1048-1057, 2013.&amp;lt;/ref&amp;gt;. It runs on Pure Data&amp;lt;ref&amp;gt;http://puredata.info/&amp;lt;/ref&amp;gt; and can be downloaded from its GitHub repository&amp;lt;ref&amp;gt;https://github.com/LabIO/Sonarx-45 as on 23rd September 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=157</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=157"/>
		<updated>2024-09-23T16:02:54Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to convey information, we are applying a sonification system: we represent data in the auditory field. We turn data into sounds, and these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action, or a sequence of values tracked from a sensor. Several definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation”&amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt;, and, more precisely, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible”&amp;lt;ref&amp;gt;Hermann, T., Walker, B., &amp;amp; Cook, P. R. (2011). Sonification handbook. Springer.&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing”&amp;lt;ref&amp;gt;Wikipedia, as of 9 April 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
To keep it simple in the context of this manual, we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even simpler terms, we can tell a student that sonification describes data with sound just as visualization does with graphs, flow charts, histograms, etc. &lt;br /&gt;
&lt;br /&gt;
In essence, we want to combine data (input) and sounds (output), and decide how these two are related (the mapping, or protocol). &lt;br /&gt;
A sonification system is therefore defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from many phenomena. It evolved especially within the astronomical community, owing to the large amounts of data produced by observing the cosmos, but it also serves as an artistic tool and as an educational complement to disciplines such as medicine, mathematics, physics, chemistry, geography, economics, and even literature. In medicine, for example, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (after the fact) from, say, the number of adjectives in a book or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio, because music and sound can themselves fundamentally be described with numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display, and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially to people who have difficulty interpreting visual representations, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification serves a variety of applications, such as analyzing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences, but also education, where it engages students with a scientific notion through audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: sonification can help analyze data that are too complex or abstract to represent visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms)&amp;lt;ref&amp;gt;&amp;quot;The sound of an atom has been captured&amp;quot; (K 2025 news article) - http://www.themindgap.nl/?p=245&amp;lt;/ref&amp;gt;, the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL)&amp;lt;ref&amp;gt;Argan Verrier, Vincent Goudard, Elim Hong, Hugues Genevois. Interactive software for the sonification of neuronal activity. Sound and Music Computing Conference, AIMI (Associazione Italiana di Informatica Musicale); Conservatorio “Giuseppe Verdi” di Torino, Università di Torino, Politecnico di Torino, Jun 2020, Torino (Virtual Conference), Italy. hal-04041917&amp;lt;/ref&amp;gt;, and the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html). Sonification can also be applied when data are recorded in too dense a sequence: time manipulation then allows audible up-scaling, or sound transformations into longer or shorter durations, such as when transforming the seismograph of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: sonification can monitor environmental conditions in real time, for example using the sound of the ocean to track changes in water temperature and pollution levels&amp;lt;ref&amp;gt;(Data Sonification: Acclaimed Musician Transforms Ocean Data into Music) https://www.hubocean.earth/blog/data-sonification as on 23rd September 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
Creating interactive multimedia experiences: sonification can create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, it has been used to create interactive maps&amp;lt;ref&amp;gt;Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles&amp;lt;/ref&amp;gt;, educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon), we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly, and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series. &lt;br /&gt;
These two modes are not mutually exclusive and can even produce the same sounds. The difference is that in “a posteriori” sonification, since the sound is produced after the events that originated the data, the parameters of the final piece can be adapted, e.g. its total duration. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter. A sound can be mapped very precisely and yet sound “awful” to the user. This is a defect that can limit the efficacy of the system, because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is, in a sense, artistic: it must take into consideration the type of audience and its taste. This does not mean we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to them. Even though taste is subjective, we would like to recall the work done in the field of acoustic ecology: psychology studies, as well as cultural models of “beauty”, point to some common factors. In the present project, as its name suggests, we reference the work and vision of the Canadian composer R. Murray Schafer, who popularized the term “soundscape” in his 1977 book “The Tuning of the World”&amp;lt;ref&amp;gt;Schafer, R. M. (1977). The Tuning of the World. New York: Knopf.&amp;lt;/ref&amp;gt;. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony, and biophony of a particular environment. Schafer argues that we have become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals, and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise awareness of the effects of noisy acoustic environments, which have been shown to increase stress levels in people immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
SonarX is software designed to transform images and video into meaningful sound, for blind individuals and everyone else&amp;lt;ref&amp;gt;S. Cavaco, J.T. Henrique, M. Mengucci, N. Correia, F. Medeiros, Color sonification for the visually impaired, in Procedia Technology, M. M. Cruz-Cunha, J. Varajão, H. Krcmar and R. Martinho (Eds.), Elsevier, volume 9, pages 1048-1057, 2013.&amp;lt;/ref&amp;gt;. It runs on Pure Data&amp;lt;ref&amp;gt;http://puredata.info/&amp;lt;/ref&amp;gt; and can be downloaded from its GitHub repository&amp;lt;ref&amp;gt;https://github.com/LabIO/Sonarx-45 as on 23rd September 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=156</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=156"/>
		<updated>2024-09-23T16:02:29Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to convey information, we are applying a sonification system: we represent data in the auditory field. We turn data into sounds, and these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action, or a sequence of values tracked from a sensor. Several definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation”&amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt;, and, more precisely, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible”&amp;lt;ref&amp;gt;Hermann, T., Walker, B., &amp;amp; Cook, P. R. (2011). Sonification handbook. Springer.&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing”&amp;lt;ref&amp;gt;Wikipedia, as of 9 April 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
To keep it simple in the context of this manual, we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even simpler terms, we can tell a student that sonification describes data with sound just as visualization does with graphs, flow charts, histograms, etc. &lt;br /&gt;
&lt;br /&gt;
In essence, we want to combine data (input) and sounds (output), and decide how these two are related (the mapping, or protocol). &lt;br /&gt;
A sonification system is therefore defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from many phenomena. It evolved especially within the astronomical community, owing to the large amounts of data produced by observing the cosmos, but it also serves as an artistic tool and as an educational complement to disciplines such as medicine, mathematics, physics, chemistry, geography, economics, and even literature. In medicine, for example, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (after the fact) from, say, the number of adjectives in a book or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio, because music and sound can themselves fundamentally be described with numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display, and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially to people who have difficulty interpreting visual representations, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification serves a variety of applications, such as analyzing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences, but also education, where it engages students with a scientific notion through audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: sonification can help analyze data that are too complex or abstract to represent visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms)&amp;lt;ref&amp;gt;&amp;quot;The sound of an atom has been captured&amp;quot; (K 2025 news article) - http://www.themindgap.nl/?p=245&amp;lt;/ref&amp;gt;, the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL)&amp;lt;ref&amp;gt;Argan Verrier, Vincent Goudard, Elim Hong, Hugues Genevois. Interactive software for the sonification of neuronal activity. Sound and Music Computing Conference, AIMI (Associazione Italiana di Informatica Musicale); Conservatorio “Giuseppe Verdi” di Torino, Università di Torino, Politecnico di Torino, Jun 2020, Torino (Virtual Conference), Italy. hal-04041917&amp;lt;/ref&amp;gt;, and the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html). Sonification can also be applied when data are recorded in too dense a sequence: time manipulation then allows audible up-scaling, or sound transformations into longer or shorter durations, such as when transforming the seismograph of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: sonification can monitor environmental conditions in real time, for example using the sound of the ocean to track changes in water temperature and pollution levels&amp;lt;ref&amp;gt;(Data Sonification: Acclaimed Musician Transforms Ocean Data into Music) https://www.hubocean.earth/blog/data-sonification as on 23rd September 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
Creating interactive multimedia experiences: sonification can create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, it has been used to create interactive maps&amp;lt;ref&amp;gt;Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles&amp;lt;/ref&amp;gt;, educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon), we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly, and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series. &lt;br /&gt;
These two modes are not mutually exclusive and can even produce the same sounds. The difference is that in “a posteriori” sonification, since the sound is produced after the events that originated the data, the parameters of the final piece can be adapted, e.g. its total duration. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
The aesthetic is important. A sound can be mapped very precisely but sounds “awful” to the user. This could be considered as a defect and therefore  it could limit the efficacy of the system because the user will not bear listening to it.  On the other side (i.e. in alarms) the sound can be intentionally noisy and aggressive. The choice of the output sound is in some way artistic in a sense that it must take into consideration the type of audience and its taste. It does not mean that we are obliged to play something that the user will like, but at least be aware of what type of sound is familiar to him/her. Even if the taste is subjective we would like to recall the work done in the field of acoustic ecology. There are some common factors indicated by psychology studies and also cultural models of “beauty”. In the present project, as the name of the project suggests, we reference the work and vision of the Canadian composer Murray Schafer, who popularized the term “soundscape” in the book “The Tuning Of The World” in 1977&amp;lt;ref&amp;gt;Schafer, R. M. (1977). The Tuning of the World. New York: Knopf. &amp;lt;/ref&amp;gt;. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony and biophony of a particular environment. The author argues that we&#039;ve become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise consciousness of the effects of noisy acoustic environments, which have been shown to increase stress levels in the individuals immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
SonarX is a software tool designed to transform images and video into meaningful sound for blind individuals and the general public&amp;lt;ref&amp;gt;S. Cavaco, J.T. Henrique, M. Mengucci, N. Correia, F. Medeiros, Color sonification for the visually impaired, in Procedia Technology, M. M. Cruz-Cunha, J. Varajão, H. Krcmar and R. Martinho (Eds.), Elsevier, volume 9, pages 1048-1057, 2013.&amp;lt;/ref&amp;gt;. It runs on Pure Data&amp;lt;ref&amp;gt;http://puredata.info/&amp;lt;/ref&amp;gt; and can be downloaded from this github repository&amp;lt;ref&amp;gt;https://github.com/LabIO/Sonarx-45 (as on 23rd September 2024)&amp;lt;/ref&amp;gt;.   &lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=155</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=155"/>
		<updated>2024-09-23T15:47:09Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to inform about something we are applying a sonification system: we represent data in the auditory field. We turn data into sounds; these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action or the vectorial tracking of a sequence of values from a sensor. Many definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” &amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt; and, in a more definitive and precise way, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible” &amp;lt;ref&amp;gt;Hermann, T., Walker, B., &amp;amp; Cook, P. R. (2011). Sonification handbook. Springer.&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing” &amp;lt;ref&amp;gt;Wikipedia, as on 9th of April 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
To make it simple in the context of this manual we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even simpler terms, we can tell a student that sonification describes data with sound as visualization does with graphs, flow charts, histograms, etc. &lt;br /&gt;
&lt;br /&gt;
So basically we want to combine data (Input) and sounds (Output), and decide how the two are related (the mapping, or protocol). &lt;br /&gt;
A sonification system is therefore defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
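As a minimal sketch of these three parts (illustrative only: the value-to-pitch mapping, the 220-880 Hz frequency range and the note duration are our own assumptions, not prescribed by the project), a simple sonifier in Python might look like this:&lt;br /&gt;

```python
import math
import wave

def sonify(data, out_path, note_dur=0.25, rate=44100, f_lo=220.0, f_hi=880.0):
    """Render a list of numbers (Input) as a mono WAV file (Output).

    Mapping (the protocol): the data range [min, max] is scaled linearly
    onto the pitch range [f_lo, f_hi]; each data point becomes one sine tone.
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0                       # flat data: avoid dividing by zero
    frames = bytearray()
    samples_per_note = int(note_dur * rate)
    for value in data:
        freq = f_lo + (value - lo) / span * (f_hi - f_lo)     # the mapping step
        for i in range(samples_per_note):
            amp = int(32000 * math.sin(2 * math.pi * freq * i / rate))
            frames += amp.to_bytes(2, 'little', signed=True)  # 16-bit PCM sample
    with wave.open(out_path, 'wb') as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# Example: three data points become a rising three-note audio file.
sonify([0.0, 0.5, 1.0], 'sonification_demo.wav')
```

The three parts stay separate, so any one of them can be swapped independently: different input data, a different mapping curve, or a richer output synthesis.&lt;br /&gt;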
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from several phenomena (it evolved especially in the astronomical community due to the large amounts of data produced by observing the cosmos), but also as an artistic tool and as an educational complement to other disciplines such as medicine, mathematics, physics and chemistry, but also geography, economics or even literature. For example, in medicine, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (in post-time) using the number of adjectives in a book, or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio because music and sound can fundamentally be reduced to numbers, in the sense that we can describe them using numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially people who have difficulty understanding visual representations of data, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification can be used in a variety of applications, such as visualizing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences but also in education when engaging students in the conception of a scientific notion using audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: Sonification can be used to analyze data that is too complex or abstract to be represented visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms)&amp;lt;ref&amp;gt;&amp;quot;The sound of an atom has been captured&amp;quot; (K 2025 news article) - http://www.themindgap.nl/?p=245&amp;lt;/ref&amp;gt;, the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL)&amp;lt;ref&amp;gt;Argan Verrier, Vincent Goudard, Elim Hong, Hugues Genevois. Interactive software for the sonification of neuronal activity. Sound and Music Computing Conference, AIMI (Associazione Italiana di Informatica Musicale); Conservatorio “Giuseppe Verdi” di Torino, Università di Torino, Politecnico di Torino, Jun 2020, Torino (Virtual Conference), Italy. hal-04041917&amp;lt;/ref&amp;gt;, the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html) and the collision of particles (http://quantizer.media.mit.edu/). Sonification can also be applied when data is recorded in too dense a sequence; time manipulation then allows audible up-scaling, transforming the sound over a longer or shorter duration, such as when transforming the seismogram of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: Sonification can be used to monitor environmental conditions in real time, for example by listening to the sound of the ocean to track changes in water temperature and pollution levels (Data Sonification: Acclaimed Musician Transforms Ocean Data into Music).&lt;br /&gt;
Creating interactive multimedia experiences: Sonification can be used to create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, sonification has been used to create interactive maps (Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles), educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon) we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time-series. &lt;br /&gt;
These two modes are not mutually exclusive and can eventually produce the same sounds. The difference is that in the “a posteriori” case, since the sound is produced after the events that originated the data, the parameters of the final piece (e.g. the total duration) can be adapted. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
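For the real-time mode, the tick-by-tick behavior can be sketched in a few lines of Python. This is a hedged illustration: the simulated sensor, its 22-28 value range and the 220-880 Hz pitch range are assumptions we introduce here, not part of any particular system.&lt;br /&gt;

```python
import itertools
import math
import time

def fake_sensor():
    """Simulated data stream standing in for a real sensor (hypothetical values)."""
    for t in itertools.count():
        yield 25.0 + 3.0 * math.sin(t / 10.0)   # temperature-like readings, 22 to 28

def monitor(stream, ticks, resolution=0.05):
    """Real-time mode: poll the stream once per tick and map each reading to a
    pitch; `resolution` is the interval at which the sound is allowed to change."""
    freqs = []
    for value, _ in zip(stream, range(ticks)):
        freq = 220.0 + (value - 22.0) / 6.0 * 660.0   # linear map onto 220-880 Hz
        freqs.append(freq)                            # here a synth would play freq
        time.sleep(resolution)                        # wait out the time resolution
    return freqs

# Example: five ticks of monitoring; in a real system each freq would sound.
print(monitor(fake_sensor(), ticks=5))
```

Shrinking `resolution` makes the sound track the data more closely, at the cost of polling the sensor more often; in the a posteriori mode this trade-off disappears because the whole series is already known.&lt;br /&gt;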
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter: a sound can be mapped very precisely and yet sound “awful” to the user. This can be considered a defect, and it can limit the efficacy of the system because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is in some way artistic, in the sense that it must take into consideration the type of audience and its taste. This does not mean that we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to him/her. Even if taste is subjective, we would like to recall the work done in the field of acoustic ecology: there are some common factors indicated by psychology studies, as well as cultural models of “beauty”. In the present project, as its name suggests, we reference the work and vision of the Canadian composer Murray Schafer, who popularized the term “soundscape” in the book “The Tuning Of The World” in 1977&amp;lt;ref&amp;gt;Schafer, R. M. (1977). The Tuning of the World. New York: Knopf.&amp;lt;/ref&amp;gt;. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony and biophony of a particular environment. The author argues that we&#039;ve become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise consciousness of the effects of noisy acoustic environments, which have been shown to increase stress levels in the individuals immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=154</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=154"/>
		<updated>2024-09-23T15:20:28Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to inform about something we are applying a sonification system: we represent data in the auditory field. We turn data into sounds; these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action or the vectorial tracking of a sequence of values from a sensor. Many definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” &amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt; and, in a more definitive and precise way, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible” &amp;lt;ref&amp;gt;Hermann, T., Walker, B., &amp;amp; Cook, P. R. (2011). Sonification handbook. Springer.&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing” &amp;lt;ref&amp;gt;Wikipedia, as on 9th of April 2024&amp;lt;/ref&amp;gt;.&lt;br /&gt;
To make it simple in the context of this manual we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even more simple terms we can say to a student that sonification describes data with sound as visualization does with graphs, flow charts, histograms etc. &lt;br /&gt;
&lt;br /&gt;
So basically we want to combine data (Input) and sounds (Output), and decide the way these two are related (mapping or protocol). &lt;br /&gt;
So a sonification system is defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from several phenomena (it evolved especially in the astronomical community due to the large amounts of data produced by observing the cosmos), but also as an artistic tool and as an educational complement to other disciplines such as medicine, mathematics, physics and chemistry, but also geography, economics or even literature. For example, in medicine, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (in post-time) using the number of adjectives in a book, or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio because music and sound can fundamentally be reduced to numbers, in the sense that we can describe them using numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially people who have difficulty understanding visual representations of data, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification can be used in a variety of applications, such as visualizing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences but also in education when engaging students in the conception of a scientific notion using audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: Sonification can be used to analyze data that is too complex or abstract to be represented visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms) and molecules (Molecular sonification for molecule to music information transfer - Digital Discovery (RSC Publishing)), the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL), the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html) and the collision of particles (http://quantizer.media.mit.edu/). Sonification can also be applied when data is recorded in too dense a sequence; time manipulation then allows audible up-scaling, transforming the sound over a longer or shorter duration, such as when transforming the seismogram of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: Sonification can be used to monitor environmental conditions in real time, for example by listening to the sound of the ocean to track changes in water temperature and pollution levels (Data Sonification: Acclaimed Musician Transforms Ocean Data into Music).&lt;br /&gt;
Creating interactive multimedia experiences: Sonification can be used to create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, sonification has been used to create interactive maps (Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles), educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon) we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time-series. &lt;br /&gt;
These two modes are not mutually exclusive and can eventually produce the same sounds. The difference is that in the “a posteriori” case, since the sound is produced after the events that originated the data, the parameters of the final piece (e.g. the total duration) can be adapted. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter: a sound can be mapped very precisely and yet sound “awful” to the user. This can be considered a defect, and it can limit the efficacy of the system because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is in some way artistic, in the sense that it must take into consideration the type of audience and its taste. This does not mean that we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to him/her. Even if taste is subjective, we would like to recall the work done in the field of acoustic ecology: there are some common factors indicated by psychology studies, as well as cultural models of “beauty”. In the present project, as its name suggests, we reference the work and vision of the Canadian composer Murray Schafer, who popularized the term “soundscape” in the book “The Tuning Of The World” in 1977. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony and biophony of a particular environment. The author argues that we&#039;ve become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise consciousness of the effects of noisy acoustic environments, which have been shown to increase stress levels in the individuals immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=153</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=153"/>
		<updated>2024-09-23T15:16:51Z</updated>

		<summary type="html">&lt;p&gt;Mick: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to inform about something we are applying a sonification system: we represent data in the auditory field. We turn data into sounds; these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action or the vectorial tracking of a sequence of values from a sensor. Many definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” &amp;lt;ref&amp;gt;&amp;quot;The Sonification Report: Status of the Field and Research Agenda&amp;quot;, Gregory Kramer, Bruce N. Walker, Terri Bonebright, Perry Cook, John Flowers, Nadine Miner, 1999, International Community for Auditory Display (ICAD)&amp;lt;/ref&amp;gt; and, in a more definitive and precise way, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible” &amp;lt;ref&amp;gt;Hermann et al., 2011&amp;lt;/ref&amp;gt;, and finally “technique of transforming non-audible data into sound that can be perceived by human hearing” (wikipedia on 9th of April 2024).&lt;br /&gt;
To make it simple in the context of this manual we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even more simple terms we can say to a student that sonification describes data with sound as visualization does with graphs, flow charts, histograms etc. &lt;br /&gt;
&lt;br /&gt;
So basically we want to combine data (Input) and sounds (Output), and decide the way these two are related (mapping or protocol). &lt;br /&gt;
So a sonification system is defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from several phenomena (it evolved especially in the astronomical community due to the large amounts of data produced by observing the cosmos), but also as an artistic tool and as an educational complement to other disciplines such as medicine, mathematics, physics and chemistry, but also geography, economics or even literature. For example, in medicine, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created &amp;quot;a posteriori&amp;quot; (in post-time) using the number of adjectives in a book, or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio because music and sound can fundamentally be reduced to numbers, in the sense that we can describe them using numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display and share data. Through the auditory field, data can become more accessible and understandable to as many users as possible, especially people who have difficulty understanding visual representations of data, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification can be used in a variety of applications, such as visualizing scientific data, monitoring environmental conditions, and creating interactive multimedia experiences but also in education when engaging students in the conception of a scientific notion using audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: Sonification can be used to analyze data that is too complex or abstract to be represented visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms) and molecules (Molecular sonification for molecule to music information transfer - Digital Discovery (RSC Publishing)), the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL), the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html) and the collision of particles (http://quantizer.media.mit.edu/). Sonification can also be applied when data is recorded in too dense a sequence; time manipulation then allows audible up-scaling, transforming the sound over a longer or shorter duration, such as when transforming the seismogram of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: Sonification can be used to monitor environmental conditions in real time, for example by listening to the sound of the ocean to track changes in water temperature and pollution levels (Data Sonification: Acclaimed Musician Transforms Ocean Data into Music).&lt;br /&gt;
Creating interactive multimedia experiences: Sonification can be used to create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, sonification has been used to create interactive maps (Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles), educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
According to the use of the sonification system (to analyze or to monitor a certain phenomenon) we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time-series. &lt;br /&gt;
These two modes are not mutually exclusive and can eventually produce the same sounds. The difference is that in the “a posteriori” case, since the sound is produced after the events that originated the data, the parameters of the final piece (e.g. the total duration) can be adapted. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
&lt;br /&gt;
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter: a sound can be mapped very precisely and yet sound “awful” to the user. This can be considered a defect, and it can limit the efficacy of the system because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is in some way artistic, in the sense that it must take into consideration the type of audience and its taste. This does not mean that we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to him/her. Even if taste is subjective, we would like to recall the work done in the field of acoustic ecology: there are some common factors indicated by psychology studies, as well as cultural models of “beauty”. In the present project, as its name suggests, we reference the work and vision of the Canadian composer Murray Schafer, who popularized the term “soundscape” in the book “The Tuning Of The World” in 1977. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony and biophony of a particular environment. The author argues that we&#039;ve become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;. This soundscape encompasses all the natural and human-made sounds that surround us, and Schafer believes we should learn to appreciate and manage it for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise consciousness of the effects of noisy acoustic environments, which have been shown to increase stress levels in the individuals immersed in them.&lt;br /&gt;
&lt;br /&gt;
== Examples  ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
	<entry>
		<id>https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=152</id>
		<title>What is sonification</title>
		<link rel="alternate" type="text/html" href="https://wiki.soundscapes.nuclio.org:443/w/index.php?title=What_is_sonification&amp;diff=152"/>
		<updated>2024-09-18T16:43:31Z</updated>

		<summary type="html">&lt;p&gt;Mick: /* Acoustic ecology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we make a sound to inform about something we are applying a sonification system: we represent data in the auditory field. We turn data into sounds; these data can represent anything that can be expressed in numbers: a physical measurement, a notion, an action or the vectorial tracking of a sequence of values from a sensor. Many definitions have been proposed for this process called sonification: from “subtype of auditory displays that use non-speech audio to represent information”, to “transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” (Kramer et al., 1999) and, in a more definitive and precise way, “data-dependent generation of sound, if the transformation is systematic, objective and reproducible” (Hermann et al., 2011), and finally “technique of transforming non-audible data into sound that can be perceived by human hearing” (wikipedia on 9th of April 2024).&lt;br /&gt;
To make it simple in the context of this manual we can state briefly that “sonification is the process of generating sound from any sort of data to represent their information as audio”.&lt;br /&gt;
In even more simple terms we can say to a student that sonification describes data with sound as visualization does with graphs, flow charts, histograms etc. &lt;br /&gt;
&lt;br /&gt;
So basically we want to combine data (Input) and sounds (Output), and decide the way these two are related (mapping or protocol). &lt;br /&gt;
So a sonification system is defined by these 3 parts:&lt;br /&gt;
&lt;br /&gt;
1 - Input data&lt;br /&gt;
2 - Output sounds&lt;br /&gt;
3 - Mapping or protocol&lt;br /&gt;
&lt;br /&gt;
== Type of Data and Sonification use ==&lt;br /&gt;
&lt;br /&gt;
Sonification is increasingly used as a scientific tool to analyze and monitor data from several phenomena (it evolved especially in the astronomical community due to the large amounts of data produced by observing the cosmos), but also as an artistic tool and as an educational complement to other disciplines such as medicine, mathematics, physics and chemistry, but also geography, economics or even literature. For example, in medicine, doctors can monitor patients’ biometric reactions in real time without having to look at a screen. In literature, an audio representation can be created a posteriori (in post-time) using the number of adjectives in a book, or the number of times a certain word appears in an article. Any kind of data is made of numbers, and numbers can trigger audio because music and sound can fundamentally be reduced to numbers, in the sense that we can describe them using numbers.&lt;br /&gt;
&lt;br /&gt;
== Sonification uses ==&lt;br /&gt;
&lt;br /&gt;
The purpose of sonification is to represent, display, and share data. Through the auditory field, data can become accessible and understandable to as many users as possible, especially people who have difficulty interpreting visual representations of data, and it can also make data more engaging and memorable for everyone.&lt;br /&gt;
Sonification can be used in a variety of applications, such as exploring scientific data, monitoring environmental conditions, and creating interactive multimedia experiences, but also in education, engaging students with a scientific notion through audio instead of visual stimuli. &lt;br /&gt;
Here are some examples of how sonification is used in the real world:&lt;br /&gt;
Analyzing scientific data: Sonification can be used to analyze data that is too complex or abstract to be represented visually. For example, scientists have used sonification to analyze the behavior of atoms (The Sounds of Atoms) and molecules (Molecular sonification for molecule to music information transfer - Digital Discovery (RSC Publishing)), the activity of neurons in the brain (Interactive software for the sonification of neuronal activity | HAL), the evolution of galaxies (https://chandra.si.edu/sound/gcenter.html), and the collision of particles (http://quantizer.media.mit.edu/). Sonification can also be applied when data is recorded in too dense a sequence to be heard directly: time manipulation then allows audible re-scaling, stretching or compressing the sound into longer or shorter durations, as when transforming the seismogram of an earthquake into sound.&lt;br /&gt;
Monitoring environmental conditions: Sonification can be used to monitor environmental conditions in real time, for example, to monitor the sound of the ocean to track changes in water temperature and pollution levels (Data Sonification: Acclaimed Musician Transforms Ocean Data into Music).&lt;br /&gt;
Creating interactive multimedia experiences: Sonification can be used to create interactive multimedia experiences that are more immersive and engaging than traditional visual interfaces. For example, sonification has been used to create interactive maps (Interactive 3D sonification for the exploration of city maps | Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles), educational games (CosmoBally - Sonokids), and virtual reality experiences.&lt;br /&gt;
&lt;br /&gt;
== Real-time sonification vs &#039;a posteriori&#039; ==&lt;br /&gt;
&lt;br /&gt;
Depending on how the sonification system is used (to analyze or to monitor a certain phenomenon), we distinguish two “modes”: 1) real-time (to monitor) - a stream of data is sonified instantly, and a sound is produced to display the value and behavior of the data at that particular moment; 2) “a posteriori” (to analyze) - a set of pre-recorded, time-series data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series. &lt;br /&gt;
These two methods are not mutually exclusive and can even produce the same sounds. The difference is that in the “a posteriori” case, since the sound is produced after the events that originated the data, the parameters of the final piece can be adapted, e.g. the total duration. In the real-time case, you can instead control the time resolution: the time interval at which the sound can change and is played.&lt;br /&gt;
&lt;br /&gt;
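The timing trade-off between the two modes can be made concrete with a small sketch (hypothetical numbers, not from the project): in “a posteriori” mode the total duration of the piece is a free parameter and each pre-recorded data point receives an equal slice of it, while in real-time mode the time resolution is fixed and the total duration is open-ended.

```python
def note_length(n_samples, total_seconds):
    """'A posteriori' mode: the total duration is chosen freely,
    so each data point gets an equal slice of the final piece."""
    return total_seconds / n_samples

# A day of hourly readings squeezed into a 12-second audio piece:
per_note = note_length(24, 12.0)   # each reading sounds for 0.5 s

# Real-time mode inverts the roles: the time resolution is fixed
# (e.g. one tone every 0.5 s as each sample arrives) and the total
# duration simply grows with the incoming data stream.
RESOLUTION_S = 0.5
```

The same 24 readings could thus be rendered either as a 12-second file after the fact, or as a live stream lasting the full 24 hours.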
== Acoustic ecology ==&lt;br /&gt;
&lt;br /&gt;
Aesthetics matter. A sound can be mapped very precisely yet sound “awful” to the user. This can be considered a defect, and it can limit the efficacy of the system, because the user will not bear listening to it. On the other hand (e.g. in alarms), the sound can be intentionally noisy and aggressive. The choice of the output sound is in some way artistic, in the sense that it must take into consideration the type of audience and its taste. This does not mean we are obliged to play something the user will like, but we should at least be aware of what type of sound is familiar to them. Even if taste is subjective, we would like to recall the work done in the field of acoustic ecology: psychology studies indicate some common factors, as do cultural models of “beauty”. In the present project, as its name suggests, we reference the work and vision of the Canadian composer Murray Schafer, who popularized the term “soundscape” in his 1977 book “The Tuning of the World”. &lt;br /&gt;
Soundscapes can simply be considered a composition of the anthrophony, geophony, and biophony of a particular environment. Schafer argues that we have become desensitized to the rich sounds of our environment, which he calls the &amp;quot;soundscape&amp;quot;: all the natural and human-made sounds that surround us, which he believes we should learn to appreciate and manage for a better world. His work generated the “acoustic ecology movement”, which studies the relationship between humans, animals, and nature in terms of sound and soundscapes. The Acoustic Ecology Institute was founded to raise awareness of the effects of noisy acoustic environments, which have been shown to raise stress levels in the individuals immersed in them.&lt;br /&gt;
&lt;br /&gt;
== State of the art examples ==&lt;/div&gt;</summary>
		<author><name>Mick</name></author>
	</entry>
</feed>