Recently we visualized a month's worth of global earthquake data in a short, minute-and-a-half spatial animation. I was happy with the result, but felt it was missing something. The visualization felt clinical, and it didn't convey the unfolding drama I saw in the dataset. So we upgraded our visual with another layer of storytelling - a soundtrack.
The soundtrack - or sonification - was built using various data points relating to earthquakes and carefully arranged to synchronize with the animated visualization. The result: everything you hear is a piece of data reflected in the visualization. Take a listen below.
Here's a basic overview of the sonification.
Each bass note represents an hour, which keeps the tempo.
The volume of the hi-hat correlates to the combined magnitude of earthquakes happening during the hour.
The number of sounds you hear at any given time reflects the actual number of earthquakes happening during the hour.
Vocal chants indicate a high volume of earthquakes - more than six individual 'quakes in an hour.
The drone sound - most audible in the video intro and outro, though it persists through the entire video - is the audio profile of an actual earthquake passed through a synthesizer to add harmonic content.
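The per-hour mapping above can be sketched in a few lines of code. This is only an illustrative sketch, not the production pipeline: the function name, the returned field names, and the volume scaling are all assumptions.

```python
def sonify_hour(magnitudes, max_combined=60.0):
    """Map one hour of earthquake magnitudes to sound parameters.

    `magnitudes` is the list of quake magnitudes recorded that hour.
    `max_combined` is an assumed normalization ceiling for hi-hat volume.
    """
    combined = sum(magnitudes)
    return {
        "bass_note": True,                                  # one bass note per hour keeps the tempo
        "hihat_volume": min(combined / max_combined, 1.0),  # louder with combined magnitude
        "n_sounds": len(magnitudes),                        # one sound per quake that hour
        "chant": len(magnitudes) > 6,                       # vocal chant above six quakes an hour
    }
```

A busy hour with seven magnitude-5 quakes, for example, would trigger the chant and push the hi-hat volume up, while a quiet hour with a single small quake stays near silence.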
We gave the soundtrack an ominous, 80's-sounding tone to heighten the drama and intensify the data - a buddy described it as a John Carpenter-esque vibe, so we'll go with that.