{{DISPLAYTITLE:Sonification in practice}}<languages/>

Sonification for educational purposes in practice is a process of exploring every possibility that essentially answers the question: "How can I use sound to highlight or demonstrate one or more pieces of information or conclusions arising from a movement, measurement, or phenomenon that either exists, has occurred, or is unfolding over time?" The data at our disposal, the conditions and methods of their collection, and the educational purpose for which the sonification is intended are the determining factors for its effective use.

The aspects below shape the relationship between educational needs and the concept of sound, along with its structured arrangement over time, that is, the concept of music.

==Aspects of teaching with Sonification as a musical practice==

The indisputable connection between sound and numbers, specifically the decomposition of sound into frequencies or harmonics, provides a sufficiently structured framework for interdisciplinary teaching with sound, within which all aspects of STEAM can be addressed. Since the concept of time defines the sound phenomenon, the representational act of a sound effect cannot but be at the center of any pedagogical approach. Consequently, the organized arrangement of sound elements in time in a harmonious manner, in terms of rhythm, intensity, timbre, pitch, and their placement on a musical scale (diatonic or not), constitutes a musical result. This rational organization can serve as a field for experimentation in musical composition, while the parameterization of all the above concepts can enrich any educational objective that depends on the evolution of a phenomenon over time or the conversion of data into sound.

Thus, we can reasonably distinguish three basic approaches to sonification for educational purposes:

* '''The symbolic'''
* '''The mathematical'''
* '''The adaptive'''

==Symbolic Sonification==

Symbolic sonification is the reproduction of sound characteristics (pitch, intensity, timbre, repetition rate if any, and duration) that are linked to scientific concepts, terms, and quantities without being logically mapped to a data set (that is, without data mapping).

A simple example would be to "sound-paint" a gray cloud using low-frequency noise and a white cloud using high-frequency noise. Another example would be a class of students representing the sound of rain by randomly tapping their fingernails on their desks. A further example that relates composition with musical representation is the leitmotif: a short melodic theme consisting of a few specific notes which, as a unique motif (pattern), is associated with a character in an opera and played by the orchestra, particularly in Wagner's operas. A character's leitmotif brings the character to mind throughout the entire work, whether the character is on stage or not. Translated to a data series, such a leitmotif could replace the expected sound of a prominent low or high value (or a specific value, or even a range of values) without having any coherence with, or arising from, the neighboring data.
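
To make the idea concrete, the sketch below is a minimal Python illustration with made-up temperature readings, an arbitrary threshold and an arbitrary four-note motif: a fixed "leitmotif" is triggered whenever a value crosses the threshold. The motif is a symbol attached to the event and is not computed from the data.

<syntaxhighlight lang="python">
# A minimal sketch of symbolic sonification: a fixed "leitmotif" is emitted
# whenever a reading crosses a chosen threshold. The motif itself is arbitrary
# (it acts as a symbol) and is not derived from the data values.

LEITMOTIF = ["C4", "E4", "G4", "E4"]   # any short, recognisable pattern will do
THRESHOLD = 30.0                        # e.g. temperature in degrees Celsius

def symbolic_sonification(readings):
    """Return one event per reading: silence for ordinary values,
    the full motif whenever the threshold is exceeded."""
    events = []
    for value in readings:
        if value > THRESHOLD:
            events.append(LEITMOTIF)    # play the motif (symbol for "heat wave")
        else:
            events.append([])           # stay silent
    return events

# Example: a week of made-up daily temperatures
print(symbolic_sonification([24.1, 27.5, 31.2, 29.8, 33.0, 26.4, 22.9]))
</syntaxhighlight>

In a classroom the printed note names can simply be sung or played on an instrument, so the same idea works equally well as an unplugged activity.
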
==Mathematical Sonification==

When pitch, intensity, timbre, rhythm (if any), and duration, as sound characteristics, run through a series of data measurements connected to a physical quantity or a scientific concept, they form a logical map of one or more parts of that series (direct data mapping). The sounding result of this matching is mathematical sonification.

An example that illustrates the distinction well, primarily by exploiting the characteristic of rhythm, is the mechanism found in many cars that audibly indicates the distance to a nearby obstacle while parking. The repetition rate of this short acoustic signal forms a repeating pattern whose rhythm varies (slow to fast) depending on the proximity to the obstacle, which is measured with high precision by a sensor.

To understand the difference between symbolic and mathematical representation, we can adapt the previous examples as "unplugged activities" in a classroom. Mathematical sonification in the cloud example would occur if we defined a color threshold for white or gray and represented the droplets that make up the clouds with millions of frequency particles of minimal duration (sound nebulae). In the example of rain, we would have a mathematical sonification if students represented with absolute precision, one by one, every raindrop falling at a specific time on a specific surface area. Finally, in the parking example, we would have symbolic representation if the students' eyes took on the role of the sensor, so that the distance would be estimated visually rather than measured mathematically.

==Adaptive Sonification==

Adaptive sonification is a sound design or, expanding the notion, a musical composition that results from mathematical sonification in which methods of aesthetic sound rendering are creatively employed to meet teaching objectives in describing learning concepts.

Furthermore, the analysis of data-mapping methods in conjunction with the diatonic scale opens up a fruitful field for exploring teaching tools that allow sound to be processed in terms of musical composition. The use of MIDI for sound processing, or the highlighting of musical motifs that can serve as a starting point for creating musical compositions, extends adaptive sonification naturally. In fact, the graphical representation of data (a graphical display) can be creatively transformed into sound by treating the display as a two-dimensional scheme, or even a photograph as a three-dimensional image. The result is referred to as "schematic sonification".
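
As a rough illustration of mapping data onto the diatonic scale, the sketch below uses a made-up data series and an arbitrary two-octave range: each value is converted into a MIDI note number of the C-major scale. The choice of scale, range and base note are assumptions made for the example.

<syntaxhighlight lang="python">
# A minimal sketch of data mapping onto a diatonic scale, assuming a made-up
# data series. Each value is scaled into a two-octave range of the C-major
# scale and expressed as a MIDI note number (60 = middle C).

C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]            # semitone offsets of the scale

def value_to_midi_note(value, lo, hi, base_note=60, octaves=2):
    """Map a value from [lo, hi] onto a diatonic scale above base_note."""
    degrees = len(C_MAJOR_STEPS) * octaves         # available scale degrees
    position = (value - lo) / (hi - lo)            # normalise to 0..1
    index = min(int(position * degrees), degrees - 1)
    octave, step = divmod(index, len(C_MAJOR_STEPS))
    return base_note + 12 * octave + C_MAJOR_STEPS[step]

data = [3.2, 4.8, 7.1, 9.9, 6.0, 2.5, 8.4]         # hypothetical measurements
notes = [value_to_midi_note(v, lo=min(data), hi=max(data)) for v in data]
print(notes)                                       # one MIDI note per measurement
</syntaxhighlight>

Quantising the output to a scale, rather than using raw frequencies, is exactly the kind of aesthetic choice that turns a mathematical sonification into an adaptive one.
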
This adaptable approach broadens access to the auditory outcome of data sonification across a wide range of age groups and grade levels, inviting educators from other disciplines, such as Art, Theater, and Music, to actively participate in interdisciplinary teaching. An example of this approach has been implemented in the "Sounds of the Stars" scenario<ref>https://soundscapes.nuclio.org/wp-content/uploads/2026/03/Sounds-of-the-Stars-A-SoundScapes-Scenario.pdf</ref> in collaboration with the National Observatory of Athens (community: Ήχοι των Άστρων<ref>https://www.schoolofthefuture.eu/en/community/oi-ihoi-ton-astron</ref>). The scenario is part of the SoundScapes learning scenarios repository.<ref>https://soundscapes.nuclio.org/index.php/344-2/</ref>

==The MIDI protocol. Why is it useful for sonification in school?==

MIDI stands for Musical Instrument Digital Interface; the protocol was introduced in the early 1980s as a machine language allowing analog and, later, digital instruments to be interconnected. This language represents several aspects of music performance and notation in an electronic format.

MIDI enables the user to receive, transmit, store and edit electronically produced signals that correspond to several aspects of music. The main parameters include note-on, note-off, velocity, timbre and pitch. All of these parameters can be stored as code, laid out along a timeline, in a MIDI file. A MIDI file resembles the "program" stored on the revolving cylinder or perforated paper roll of late-18th-century music boxes or early-20th-century pianolas, which are musical automata. It is this characteristic that can prove enormously useful for educational purposes, as numerous MIDI applications, sensors and programs are widely available on the internet. However, it is the ability to edit the output as a musical score, or as a part of a polyphonic composition, that makes MIDI an exceptionally powerful educational tool. Sensors that use MIDI in particular are presented throughout the present wiki pages.
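
To show how note-on and note-off messages are laid out along a timeline, here is a minimal sketch that writes a short MIDI file from a list of note numbers. It assumes the third-party mido package is installed; the note numbers, velocity and durations are arbitrary example values.

<syntaxhighlight lang="python">
# A rough sketch of assembling note-on / note-off messages on a timeline and
# saving them as a MIDI file. Requires the third-party "mido" package.

from mido import Message, MidiFile, MidiTrack

notes = [60, 62, 64, 67, 72]            # e.g. the output of a data mapping

mid = MidiFile()                        # default resolution: 480 ticks per beat
track = MidiTrack()
mid.tracks.append(track)

for note in notes:
    # note_on starts the note immediately (delta time 0);
    # note_off ends it 480 ticks (one beat) later
    track.append(Message('note_on', note=note, velocity=80, time=0))
    track.append(Message('note_off', note=note, velocity=0, time=480))

mid.save('sonification.mid')            # open in any DAW or score editor
</syntaxhighlight>

The resulting file can be opened in a DAW or a score editor and edited further as a musical score, which is precisely the property that makes MIDI attractive for classroom sonification.
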
==Sonification components==

A sonification activity consists of the design and building of a sonification system. A sonification system can be built in many different ways, but three components must always be considered:

# INPUT DATA
# MAPPING PROTOCOL
# AUDIO OUTPUT

== Input Data ==

In a sonification system, which is our final product, the data is the source that drives the sound engine, and particular sounds will be the output. The inputs and outputs are mapped onto each other following a protocol that establishes which sounds are played according to which data. So first we need to know and understand the data we want to sonify. We must know what we want to say with our system: what we will talk about.

We must know how the data change (usually we have time-based data, but there can also be spatially referenced data, like maps) and what characteristics of their behavior we want to represent. For example, if you have a single value (like the luminosity of a star, the linear position of a car, the number of likes on a YouTube channel, or the number of new posts on Wikipedia), you can choose to play a sound when this value exceeds a certain threshold, play a sound that gets louder as the value becomes higher, or play a sound when the value is rising or falling in time (a small sketch at the end of this section illustrates these options). In some cases it is useful to determine the highest and the lowest value within the whole range of values available. In terms of output, this helps define a "container" of input values that determines the range of variation in the output. We can then highlight certain features of the data. There are many types of data. The most common are:

'''Single data:''' indicating an ON-OFF state (Boolean data).

'''A single data value covering a range of values:''' usually mapped to a single sound or sound feature, like the pitch, the bpm (beats per minute) or an effect, but it can control more than one feature or sound at once.

'''Multiple data:''' more than one stream of the previous types. Usually several types of data are collected at the same time, so these data sets consist of several layers of synchronized data.

Sound has the advantage over visual perception that more layers of data can be perceived at the same time. Changes in patterns are more easily detected by listening than by looking, especially if the amount of data is very large. So, in sum, we need to consider the data we have, how they evolve in time, how they are arranged, and which salient parts we want to use to feed our sonification system. We have to ask ourselves: "What will the sound mean?" We need to understand that the data are not the message! We must digest the data and their behavior and find what message will be triggering sound.

Before all this, therefore, we need to ask ourselves: what is the purpose of the sonification? Will it be applied continuously, maybe in the background, or just after some time of collecting data, or both?
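
Here is the small sketch referred to above: a minimal illustration of the data-preparation step, with made-up readings and an arbitrary threshold. It finds the min-max "container", normalises each value, and flags threshold crossings and the direction of change; these flags are what the mapping protocol of the next section would turn into sound.

<syntaxhighlight lang="python">
# A minimal sketch of preparing a single stream of values for sonification,
# assuming a made-up list of readings: find the min/max "container", normalise
# each value to 0..1, and flag threshold crossings and the direction of change.

readings = [12.0, 14.5, 19.3, 25.1, 22.8, 18.0, 30.2]    # hypothetical data
THRESHOLD = 24.0

lo, hi = min(readings), max(readings)                    # the value container

events = []
for previous, current in zip(readings, readings[1:]):
    events.append({
        "level": (current - lo) / (hi - lo),             # 0..1, e.g. for loudness
        "above_threshold": current > THRESHOLD,          # e.g. trigger a sound
        "direction": "rising" if current > previous else "falling",
    })

for event in events:
    print(event)
</syntaxhighlight>
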
===Real-Time Sonification vs "A Posteriori"===

According to the use of the sonification system (to analyze or to monitor a certain phenomenon), we distinguish two "modes":

'''Real-time (to monitor)''' - a stream of data is sonified instantly and a sound is produced to display the value and behavior of the data at that particular moment;

'''"A posteriori" (to analyze)''' - time-series sonification, where a set of pre-recorded data is converted into an audio file that displays the values and behavior of the data over the period covered by the time series.

These two methods are not mutually exclusive and can eventually produce the same sounds. The difference is that in an "a posteriori" sonification, because the sound is produced after the events that originated the data, the parameters of the final piece, such as its total duration, can be adapted. In the real-time case, you can control the time resolution: that is, the time interval at which the sound can change and is played.

== Mapping Protocol ==

The mapping protocol is the core of the sonification system. This is where knowledge of the input data must be combined with creativity. According to their educational needs, the creators of the sonification system make choices based on their character and artistic taste when translating data sets into sound pieces. The mapping protocol is the process, algorithm or function that associates particular sounds with defined data. It is the set of rules by which output sounds correspond to input data. A simple mapping can consist, for example, of a direct one-to-one correspondence between each value of the input data and a parameter of the output sound, such as the pitch. This component of the system is key, because this is where the designer selects certain features of the data to be played in a particular manner, in order to highlight them, or not.

So this mapping consists of associating certain data aspects with different auditory parameters, such as pitch, loudness, timbre, and rhythm. For example, the amplitude of a sound can be mapped to the value of a light-dependent resistor, or the frequency of a sound can be mapped to the rate of change of the sea level (tides).

Usually the tendency is to map a single feature of the data to a single parameter of the output sound, but we humans are generally more capable of perceiving differences in sound if such differences manifest concurrently through different properties. So it is not a bad idea to map the same variable onto different psychoacoustic properties of a sound (pitch and volume being the most evident example) if we want to emphasize its change and dynamics.

Our sense of hearing is able to focus on a particular sound among many others (see the "cocktail party effect")<ref>Arons, B. (1992). A review of the cocktail party effect. Journal of the American Voice I/O Society, 12(7), 35-50.</ref> based on timbre. Our auditory system can also process information at a far higher rate than our visual system. For example, while video typically updates at 60 frames per second (60 Hz), standard audio is sampled 44,100 times per second (44.1 kHz). This means that even a single, brief spike in an audio signal, lasting just one sample, is instantly perceived as a distinct "click". As a result, hearing allows us to monitor multiple layers of information simultaneously, often more efficiently than through visual perception alone.<ref>Kramer, G., Walker, B. N., Bonebright, T., Cook, P., Flowers, J., Miner, N., et al. (1999). The Sonification Report: Status of the Field and Research Agenda. Report prepared for the National Science Foundation by members of the International Community for Auditory Display. Santa Fe, NM: International Community for Auditory Display (ICAD).</ref>
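
A minimal sketch of such a mapping rule follows, assuming a hypothetical series of sea-level readings and arbitrary frequency and loudness ranges: one value controls both pitch and volume at once, the redundant encoding discussed above.

<syntaxhighlight lang="python">
# A minimal sketch of a mapping protocol: one input value controls two
# psychoacoustic properties at once (pitch and loudness), which makes its
# changes easier to perceive. The ranges below are arbitrary choices.

LOW_FREQ, HIGH_FREQ = 220.0, 880.0     # pitch range in Hz (A3 to A5)
MIN_GAIN, MAX_GAIN = 0.2, 1.0          # loudness range (relative amplitude)

def map_value(value, lo, hi):
    """Translate one data value into a (frequency, gain) pair."""
    t = (value - lo) / (hi - lo)                       # normalise to 0..1
    frequency = LOW_FREQ + t * (HIGH_FREQ - LOW_FREQ)  # higher value -> higher pitch
    gain = MIN_GAIN + t * (MAX_GAIN - MIN_GAIN)        # higher value -> louder
    return frequency, gain

tide_levels = [0.4, 0.9, 1.6, 2.1, 1.3, 0.6]           # hypothetical sea levels (m)
for level in tide_levels:
    print(map_value(level, lo=min(tide_levels), hi=max(tide_levels)))
</syntaxhighlight>
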
== Audio Output ==

The output sound of the system will be the first characteristic perceived by a user. It is the system's signature, its flavor. It will interact with the user's taste, and we must be aware of that. It is the auditory wrapping perceived by an audience and, as studies on sound perception show, it will immediately and unconsciously provoke a good or bad sensation in the listener. We should therefore get used to producing "nice" sound outputs with the device that will be used, be it a microcontroller buzzer, a PC virtual synthesizer or a DAW (Digital Audio Workstation) connected to speakers. We should practice some music, or at least make some noise!

Considering that sound perception is time-based, sonification is by and large focused on rendering a continuous data stream over time: this means that the input data of a sonification system could also come from another domain, like the profile of a territory (geographical data), but all of it will be transferred onto a representation in time, which is sound. Sound exists only in time, as a variation of pressure detected by our eardrums and transformed into electrical signals in our brain or, more broadly, in our nervous system. Without going into the depths of such a fascinating subject, we need to clarify a couple of concepts before we move on. Even those who have never played or created music know some of the characteristics of sound described here.

===Music and Sound: Basic Concepts===

Sound is detected by our brain when a varying pressure stimulates our eardrums. The eardrum is a small membrane that, when moved by air pressure (or water, if you find yourself under water), generates electrical stimuli that the brain processes as "sound". If this varying pressure oscillates regularly at a certain frequency (a certain number of times per second), we hear a tone. That is why tones (or notes) are measured in hertz (Hz), or cycles per second.

Human hearing is able to sense tones between 20 Hz and 20,000 Hz (this range is unique to each person and usually gets smaller with age). Pressure vibrations with frequencies lower than 20 Hz or higher than 20,000 Hz are inaudible; they are called infrasound and ultrasound respectively. We do not hear them, but we can still sense them: through the sense of touch in the case of infrasound and through the sense of temperature in the case of ultrasound.

'''The main characteristics of sound are:'''

'''Volume, intensity or loudness:''' the power of a sound wave (louder means more power, softer means less power).

'''Frequency or pitch:''' the number of times per second the sound pressure moves the eardrum back and forth. According to music theory, some of these frequencies are called notes in the context of tuning systems.

'''Timbre:''' the spectral characteristic of a sound, its sound quality, its fingerprint, a sense of the "color" of the sound. This is what allows us to distinguish between a trumpet and a guitar when they play the same note at the same volume. It also allows us to distinguish between human voices.
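
To make frequency and volume tangible as output, here is a minimal sketch that synthesises a single sine tone and writes it to a WAV file using only the Python standard library; the frequency, amplitude and duration are arbitrary example values.

<syntaxhighlight lang="python">
import math
import struct
import wave

# A minimal sketch: synthesise one sine tone (frequency = pitch, amplitude =
# volume) and write it to a WAV file. All parameter values are arbitrary.

SAMPLE_RATE = 44100          # samples per second (44.1 kHz, as for CD audio)

def write_tone(filename, frequency=440.0, amplitude=0.5, seconds=2.0):
    n_samples = int(SAMPLE_RATE * seconds)
    with wave.open(filename, "w") as wav:
        wav.setnchannels(1)                  # mono
        wav.setsampwidth(2)                  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        for i in range(n_samples):
            sample = amplitude * math.sin(2 * math.pi * frequency * i / SAMPLE_RATE)
            wav.writeframes(struct.pack("<h", int(sample * 32767)))

write_tone("a440.wav", frequency=440.0, amplitude=0.5)   # the tuning note A4
</syntaxhighlight>
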
There are several other characteristics that define sound, but these are the main ones we can use in this context. Other characteristics that can easily be employed in a classroom for sonification are:

'''Duration:''' how long each sound lasts.

'''Rhythm:''' how frequently the sounds repeat and in what pattern. For example, a metronome is a device that produces short, evenly spaced sounds at a set number of beats per minute (BPM). Other devices, such as the Geiger counter ([https://en.wikipedia.org/wiki/Geiger_counter Geiger counter - Wikipedia]) and car parking sensors, use this characteristic as a sonification output.

'''3D positioning:''' the position of a sound source in space, for example whether the sound comes from the left or the right speaker in a stereo system. Far more complex, but based on the same concept, are 5.1 and 7.1 surround systems, up to ambisonic systems, where the position of the sound source can be rendered in even more detail by using multiple channels ([https://en.wikipedia.org/wiki/Ambisonic_reproduction_systems Ambisonic reproduction systems - Wikipedia]).

=== Context is important ===

When designing the output sounds, we need to consider who the audience of the system will be and in which settings they will listen to its sounds. It is impossible to be sure about this or to know the taste of our target listeners, but it is useful to think about it.

What is the profile of the listener? Are they young students? What type of sound would they be interested in hearing, and in what kind of sound-producing interaction could they be engaged, according to their skills and potential? Do they perceive changes in less evident sound features (e.g. timbre)?

The sounds we produce must be considered in the context where they will be played. They should be able to capture the listener's attention and emerge from the background noise and, if possible, not be perceived as noise or as annoying. For example, mapping all the values of a single variable to all the values of frequency in a certain range may sound unpleasant compared to mapping them onto a familiar musical scale, like the chromatic scale in the Western world. Likewise, manipulating the speed of a regular beat instead of playing random durations can be more effective (see the sketch below). It depends on the listener's attitude and taste, of course. Additionally, it is important to consider the sound designer's own taste. It is sensible to consider who the listener will be but, on the other hand, it is not mandatory to produce mainstream sounds in order to please a supposed "common taste".
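
Here is the sketch referred to above: a minimal illustration of manipulating the speed of a regular beat, in the spirit of a car parking sensor, with made-up distance readings and arbitrary BPM limits.

<syntaxhighlight lang="python">
# A minimal sketch of rhythm-based mapping: a distance reading controls the
# speed of a regular beat, as in a car parking sensor. Values are arbitrary.

MIN_BPM, MAX_BPM = 60, 600            # slow beat far away, fast beat up close
MAX_DISTANCE = 2.0                    # metres; beyond this, stay silent

def distance_to_beat(distance_m):
    """Return (beats_per_minute, seconds_between_beeps) for one reading."""
    if distance_m >= MAX_DISTANCE:
        return 0, None                              # no obstacle, no beeping
    closeness = 1.0 - distance_m / MAX_DISTANCE     # 0 (far) .. 1 (touching)
    bpm = MIN_BPM + closeness * (MAX_BPM - MIN_BPM)
    return bpm, 60.0 / bpm

for distance in [2.5, 1.8, 1.2, 0.6, 0.25, 0.1]:    # a hypothetical approach
    print(distance, distance_to_beat(distance))
</syntaxhighlight>
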
Apart from taste and aesthetic considerations, we need to consider practical conditions: in the case of a continuous background sound produced by a sonification that monitors some data stream, we should take into account potential listener fatigue with that type of sound. We can consider the difference between using familiar sounds (for example, even recorded samples of the target listeners' own voices and sentences) and new, special, digitally synthesized sounds. Designers of sonification systems should at least be aware of the variety of different impacts that their sounds can have upon the listener (synthesized sounds can surprise!).

We need to assemble a diverse toolkit of musical techniques and resources. Sonification designers should ensure their palette of sounds is as rich and varied as the data they aim to represent.

=== Quality of Output ===

Sonification should not only be comprehensible but also engaging, ideally offering information as effectively as, or even more clearly than, a visual graph. The quality of the sonification is equally important. This includes both the technical excellence of the audio and its "musical narrative": how well it describes the evolution of the data while remaining aesthetically pleasing. While "pleasantness" is subjective, an appealing sonification helps maintain the listener's attention and ensures the data are effectively communicated, as discussed in the "Context is important" section.

Sonifications can use either physical (natural) or digital sounds, depending on resources and approach. Physical sounds come from acoustic sources like the human body, percussion, or traditional instruments, performed through notation, gestures, or improvisation. Digital sounds, on the other hand, are generated or processed using computers, digital audio workstations (DAWs), or electronic devices. While technical details like compression, sample rate, or bit depth influence digital audio quality, the key point is the impact of the playback system: a high-quality sound system (e.g., computer speakers) will deliver a richer experience than a simple buzzer.

'''Musical Quality'''

The designer should consider what type of narrative they are inducing in the listener. That means, for example, using low and scary sounds to represent parameters of global warming ([https://youtu.be/5t08CLczdK4?si=dLNDaHfCRrG-5Y-6 A Song of Our Warming Planet] or [https://youtu.be/-V2Uc8Kax_g?si=YmgaJK3IlpmExpZm The sound of climate change from the Amazon to the Arctic]). Since we want to stimulate the user to pay attention to our system's output, it can also be useful to survey what type of music the listeners appreciate. A generally and initially acceptable musical sound, with the least possible chance of being rejected by the majority of recipients, would be one that obeys the fundamental principles of symmetry and proportion, as these have shaped our common perception of "music" in today's world.

However, the SoundScapes project encourages every approach to sonification, provided it satisfies the creator's inspiration or cultural demands as well as the aesthetic or informative needs of the audience or target group it is addressed to.
The following pages present practical ways to implement the above approaches, with or without handling data sets coming either from measurements or from sensors: '''[[Special:MyLanguage/Unplugged activities|Unplugged activities]]''', '''[[Special:MyLanguage/Real-time sonification|Real-time sonification]]''' and '''[[Special:MyLanguage/''a posteriori'' sonification|''a posteriori'' sonification]]'''.

==References==
<references />