I was distracted yesterday. I saw a tweet by Alberto Cairo (@albertocairo) about a new web-based tool to convert data to sound. I have thought a lot about 'visualization' in sound since my days doing electrophysiology of locust brains at Caltech (I was even acknowledged for 'helpful comments' in a Nature paper). The tool, 'TwoTone', was "made by Datavized Technologies with support from Google News Initiative".

My masterpiece is here: Chicago vs. Redwood City (temperatures, 2015, SoundCloud)

I wanted to try it, and my first thought was to use data from a fitness tracker (e.g. steps per day), but I don't have one. Next I thought, nerdily, of plotting my GitHub contributions over time. But for now I've done something a little more conventional: plotting the minimum daily temperature over a year in Chicago and in the Bay Area.

I started by getting data from NOAA's Global Historical Climatology Network (GHCN) via Google BigQuery's public datasets. After a little trial and error based on Google's existing examples, I figured out that the daily minimum temperatures in Chicago for 2015 can be found with:

SELECT
  wx.date,
  wx.value/10.0 AS min_temperature  -- GHCN stores temperatures in tenths of °C
FROM
  `bigquery-public-data.ghcn_d.ghcnd_2015` AS wx
WHERE
  id = 'USW00094846'    -- Chicago (O'Hare) station
  AND qflag IS NULL     -- drop quality-flagged observations
  AND element = 'TMIN'  -- daily minimum temperature
ORDER BY
  wx.date
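
In case it's useful, here's a minimal sketch of pulling those results down into a CSV that TwoTone can ingest. It assumes the google-cloud-bigquery Python client with default GCP credentials; the export_tmin function, the @station_id parameter, and the file names are my own choices, not anything prescribed by TwoTone or the GHCN dataset:

import csv
from google.cloud import bigquery

QUERY = """
SELECT
  wx.date,
  wx.value/10.0 AS min_temperature  -- tenths of °C -> °C
FROM
  `bigquery-public-data.ghcn_d.ghcnd_2015` AS wx
WHERE
  id = @station_id
  AND qflag IS NULL
  AND element = 'TMIN'
ORDER BY
  wx.date
"""

def export_tmin(station_id, out_path):
    # Run the GHCN query for one station and write a date,min_temperature CSV.
    client = bigquery.Client()  # picks up default project and credentials
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("station_id", "STRING", station_id)
        ]
    )
    rows = client.query(QUERY, job_config=job_config).result()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "min_temperature"])
        for row in rows:
            writer.writerow([row.date, row.min_temperature])

Parameterizing the station id keeps the query identical for any GHCN station, which pays off immediately below.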

To do the same for Redwood City, CA (the nearest station to me), I swapped in id = 'USC00047339' (found here).
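
With the hypothetical export_tmin sketch above, the two exports become one call each:

export_tmin("USW00094846", "chicago_tmin_2015.csv")       # Chicago (O'Hare)
export_tmin("USC00047339", "redwood_city_tmin_2015.csv")  # Redwood City, CA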

Lots of caveats: TwoTone probably normalizes each track's dynamic range, so we don't get a fair comparison of the two cities. I'd also like more control over features of the audio. The maximum tempo is 300 bpm, but the brain can process audio much faster than that. And I'd like to be able to combine tracks with an eye to meaningfully comparing the data; I did this in GarageBand, but by then the data had already been converted to audio waveforms. One of the great features of TwoTone is that you can manipulate the sound while still looking at the source data. I look forward to the evolution of this tool, and more generally to the field of audio 'visualization' of data.
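
A postscript on the first caveat: if TwoTone really does rescale each track to its own min and max (an assumption on my part; I haven't checked its internals), one crude workaround is to append the same two 'anchor' rows, the global minimum and maximum across both cities, to each CSV before uploading. Both tracks would then span an identical range and should get identical pitch mappings; the two extra notes at the end of each track are artifacts to trim or ignore. A sketch, reusing the hypothetical file names from above:

import csv

def read_series(path):
    # Read back the date,min_temperature CSVs written earlier.
    with open(path, newline="") as f:
        return [(r["date"], float(r["min_temperature"]))
                for r in csv.DictReader(f)]

chicago = read_series("chicago_tmin_2015.csv")
redwood = read_series("redwood_city_tmin_2015.csv")

# The same two anchor rows go on both tracks: the global min and max.
values = [v for _, v in chicago + redwood]
anchors = [("anchor_min", min(values)), ("anchor_max", max(values))]

for series, path in [(chicago, "chicago_anchored.csv"),
                     (redwood, "redwood_city_anchored.csv")]:
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "min_temperature"])
        writer.writerows(series + anchors)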