It’s the beginning of a new year and time for me to look back at what I learned last year. Rather than a long narrative, let’s focus on the data. The local newspaper did a “community profile” of me this year and it was focused on my curiosity about the world around us and how we can measure and analyze it to better understand our lives. This post is a brief summary of that sort of analysis for my small corner of the world in the year that was 2013.
2013 was the year I decided to, and did, run the Equinox Marathon, so I spent a lot of time running this year and a lot less time bicycling. Since the race, I’ve been having hip problems that have kept me from skiing or running much at all. The roads aren’t cleared well enough to bicycle on them in the winter so I got a fat bike to commute on the trails I’d normally ski.
Here are my totals in tabular form:
I spent just about the same amount of time running, bicycling and skiing this year, and much less time hiking around on the trails than in the past. Because of all the running, and my hip injury, I didn’t manage to commute to work with non-motorized transport quite as much this year (55% of work days instead of 63% in 2012), but the exercise totals are all higher.
One new addition this year is a heart rate monitor, which allows me to much more accurately estimate energy consumption than formulas based on the type of activity, speed, and time. Riding my fat bike, it’s pretty clear that this form of travel is so much less efficient than a road bike with smooth tires that it can barely be called “bicycling,” at least in terms of how much energy it takes to travel over a certain distance.
Here’s the equation from Keytel LR, Goedecke JH, Noakes TD, Hiiloskorpi H, Laukkanen R, van der Merwe L, Lambert EV. 2005. Prediction of energy expenditure from heart rate monitoring during submaximal exercise. J Sports Sci. 23(3):289–97. For men, converted from kJ/min to kcal, and with weight in pounds:

kcal = ((−55.0969 + 0.6309·hr + 0.0901·w + 0.2017·a) / 4.184) · 60 · t

- hr = Heart rate (in beats/minute)
- w = Weight (in pounds)
- a = Age (in years)
- t = Exercise duration time (in hours)
And a SQL function that implements the version for men (to use it, you’d replace nnn, nn, and yyyy-mm-dd with your own weight, resting heart rate, and birthday):
```sql
--- Kcalories burned based on average heart rate and number
--- of hours at that rate.
CREATE OR REPLACE FUNCTION kcal_from_hr(hr numeric, hours numeric)
RETURNS numeric
LANGUAGE plpgsql
AS $$
DECLARE
    weight_lb numeric := nnn;
    resting_hr numeric := nn;
    birthday date := 'yyyy-mm-dd';
    resting_kcal numeric;
    exercise_kcal numeric;
BEGIN
    resting_kcal := ((-55.0969+(0.6309*(resting_hr))+
                      (0.0901*weight_lb)+
                      (0.2017*(extract(epoch from now()-birthday)/
                      (365.242*24*60*60))))/4.184)*60*hours;
    exercise_kcal := ((-55.0969+(0.6309*(hr))+
                      (0.0901*weight_lb)+
                      (0.2017*(extract(epoch from now()-birthday)/
                      (365.242*24*60*60))))/4.184)*60*hours;
    RETURN exercise_kcal - resting_kcal;
END;
$$;
```
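For quick experiments outside the database, the same calculation is easy to sketch in Python. The weight, resting heart rate, and age defaults below are placeholder assumptions, just like the nnn and nn values in the SQL version:

```python
def kcal_from_hr(hr, hours, weight_lb=160.0, resting_hr=60.0, age_years=45.0):
    """Net kcal burned above resting, from average heart rate (men's equation).

    Keytel et al. 2005 give energy expenditure in kJ/min; dividing by
    4.184 converts to kcal/min, and multiplying by 60 * hours totals it.
    """
    def kcal(rate):
        return ((-55.0969 + 0.6309 * rate
                 + 0.0901 * weight_lb
                 + 0.2017 * age_years) / 4.184) * 60 * hours
    return kcal(hr) - kcal(resting_hr)

# Example: an hour of exercise at 140 bpm for the placeholder subject
print(round(kcal_from_hr(140, 1)))
```

Subtracting the resting expenditure, as the SQL function does, gives the calories attributable to the exercise itself rather than to just being alive for that hour.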
Here’s a graphical comparison of my exercise data over the past four years:
It was a pretty remarkable year, although the drop in exercise this fall is disappointing.
Another way to visualize the 2013 data is in the form of a heatmap, where each block represents a day on the calendar, and the color is how many calories I burned on that day. During the summer you can see my long runs on the weekends showing up in red. Equinox was on September 21st, the last deep red day of the year.
2013 was quite remarkable for the number of days where the daily temperature was dramatically different from the 30-year average. The heatmap below shows each day in 2013, and the color indicates how many standard deviations that day’s temperature was from the 30-year average. To put the numbers in perspective, approximately 95.5% of all observations will fall within two standard deviations from the mean, and 99.7% will be within three standard deviations. So the very dark red or dark blue squares on the plot below indicate temperature anomalies that happen less than 1% of the time. Of course, in a full year, you’d expect to see a few of these remarkable differences, but 2013 had a lot of remarkable differences.
2013 saw 45 days where the temperature was more than 2 standard deviations from the mean (19 colder than normal and 26 warmer), something that should only happen about 16 days in a normal year [365.25 × (1 − 0.9545)]. There were four days outside of 3 standard deviations from the mean; normally there’d be only a single day [365.25 × (1 − 0.9973)] with such a remarkably cold or warm temperature.
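Those expected counts come straight from the normal distribution, and can be checked with a couple of lines of Python:

```python
from math import erf, sqrt

def expected_days(k, days_in_year=365.25):
    """Expected days per year more than k standard deviations from
    the mean, assuming normally distributed daily anomalies."""
    fraction_within = erf(k / sqrt(2))  # ~0.9545 for k=2, ~0.9973 for k=3
    return days_in_year * (1 - fraction_within)

print(expected_days(2))  # about 16.6 days beyond 2 standard deviations
print(expected_days(3))  # about 1 day beyond 3 standard deviations
```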
April and most of May were remarkably cold, resulting in many people skiing long past what is normal in Fairbanks. On May first, Fairbanks still had 17 inches of snow on the ground. Late May, almost all of June and the month of October were abnormally warm, including what may be the warmest week on record in Alaska from June 23rd to the 29th. Although it wasn’t exceptional, you can see the brief cold snap preceding and including the Equinox Marathon on September 21st this year. The result was bitter cold temperatures on race day (my hands and feet didn’t get warm until I was climbing Ester Dome Road an hour into the race), as well as an inch or two of snow on most of the trail sections of the course above 1,000 feet.
Most memorable was the ice and wind storm on November 13th and 14th that dumped several inches of snow and rain that froze instantly, followed by record high winds that knocked out power for 14,000 residents of the area, and then a drop in temperatures to colder than −20°F. My office didn’t get power restored for four days.
I’m moving more and more of my work into git, which is a distributed revision control system (or put another way, it’s a system that stores stuff and keeps track of all the changes). Because it’s distributed, anything I have on my computer at home can be easily replicated to my computer at work or anywhere else, and any changes that I make to these files on any system, are easy to recover anywhere else. And it’s all backed up on the master repository, and all changes are recorded. If I decide I’ve made a mistake, it’s easy to go back to an earlier version.
Using this sort of system for software code is pretty common, but I’m also using it for normal text files (the docs repository below), and have started moving other things into git, such as all my eBooks.
The following figure shows the number of file changes made in three of my
repositories over the course of the year. I don’t know why April was such an
active month for Python, but I clearly did a lot of programming that month. The
large number of file changes during the summer in the
docs repository is
because I was keeping my running (and physical therapy) logs in that repository.
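I didn’t show the counting itself above, but tallying file changes per month is straightforward; here’s a hedged sketch that parses the output of something like `git log --pretty=format:%ad --date=format:%Y-%m --name-only` (the sample log text is made up for illustration):

```python
import re
from collections import Counter

def changes_by_month(log_text):
    """Count file changes per month from git log output where each
    commit prints a YYYY-MM date line followed by the files it touched."""
    counts = Counter()
    month = None
    for line in log_text.splitlines():
        line = line.strip()
        if re.fullmatch(r"\d{4}-\d{2}", line):
            month = line          # a new commit's date line
        elif line and month:
            counts[month] += 1    # a file changed in that commit
    return counts

# Made-up sample in the shape git would produce:
sample = """2013-04
models.py
utils.py

2013-04
models.py

2013-07
running-log.txt
"""
print(changes_by_month(sample))
```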
The dog barn was the big summer project. It’s a seven by eleven foot building with large dog boxes inside that we keep warm. When the temperatures are too cold for the dogs to stay outside, we put them into their boxes in the dog barn and turn the heat up to 40°F. I have a real-time visualization of the conditions inside and outside the barn, and because the whole thing is run with a small Linux computer and Arduino board, I’m able to collect a lot of data about how the barn is performing.
One such analysis will be to see how much heat the dogs produce when they are in the barn. To estimate that, we need a baseline of how much heat we’re adding at various temperatures in order to keep it at temperature. I haven’t collected enough cold temperature data to really see what the relationship looks like, but here’s the pattern so far.
The graph shows the relationship between the temperature differential between the outside and inside of the barn plotted against the percentage of time the heater is on in order to maintain that differential, for all 12-hour periods where the dogs weren’t in the barn and there’s less than four missing observations. I’ve also run a linear and quadratic regression in order to predict how much heat will be required at various temperature differentials.
The two r² values show how much of the variation in heating is explained by the temperature differential for the linear and quadratic regressions. I know that this isn’t really a linear relationship, but the linear model still fits the data better than the quadratic model does. It may be that it’s some other form of non-linear relationship that isn’t well expressed by a second-order polynomial.
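As a sketch of how those r² values are computed (with made-up numbers, not my actual barn data), here’s a plain least-squares linear fit and the explained-variance calculation:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def r_squared(xs, ys, predict):
    """Fraction of the variance in ys explained by the model."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))
    return 1 - ss_res / ss_tot

# Made-up example: temperature differential (degrees F) vs. percentage
# of time the heater runs to hold that differential
dT = [10, 20, 30, 40, 50]
duty = [8, 18, 27, 41, 52]
a, b = linear_fit(dT, duty)
print(r_squared(dT, duty, lambda x: a * x + b))
```

The same `r_squared` function works for the quadratic fit too; only the `predict` callable changes.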
Once we can predict how much heat it should take to keep the barn warm at a particular temperature differential, we can see how much less heat we’re using when the dogs are in the barn. One complication is that the dogs produce enough moisture when they are in the barn that we need to ventilate it when they are in there. So in addition to the additive heating from the dogs themselves, there will be increased heat losses because we have to keep it better ventilated.
It’ll be an interesting data set.
Power consumption is a concern now that we’ve set up the dog barn and are keeping it heated with an electric heater. It’s an oil-filled radiator-style heater, and uses around 1,100 Watts when it’s on.
This table shows our overall usage by year for the period we have data.
| year | average watts | total kWh |
|------|---------------|-----------|
Our overall energy use continues to go down, which is a little surprising to me, actually, since we eliminated most of the devices known to use a lot of electricity (incandescent light bulbs, halogen floodlights) years ago. Despite that, and bringing the dog barn on line in late November, we used less electricity in 2013 than in the prior three years.
Here’s the pattern by month and year.
The spike in usage in November is a bit concerning, since it’s the highest overall monthly consumption for the past four years. Hopefully this was primarily due to the heavy use of the heater during the final phases of the dog barn construction. December wasn’t a particularly cold month relative to years past, but it’s good to see that our consumption was actually quite low even with the barn heater being on the entire month.
That wraps it up. Have a happy and productive 2014!
At work, on my Linux box, I listen to music with the Music Player Daemon (MPD) and command-line mpc client. I have managed to build both MPD and mpc on my MacBook Pro, but I haven’t figured out how to configure MPD to play through AirTunes to my stereo. There are commercial solutions, but I don’t want to have to pay money just to listen to my music on my stereo wirelessly. DVD Jon figured out how the AirTunes encryption works, and even wrote a program that you can use at the end of a pipe to stream music to the AirPort. But I haven’t figured out how to glue the MPD audio output together with JustePort.exe so I can play music this way on my Mac.
So what’s wrong with just using iTunes? Well, it’s a snazzy program, but I can’t script it the way I want to, I can’t easily control it from the command line, and it is limited in what file types it can handle (like OGG for example: you can get a plugin that will play OGG, but you can’t burn CDs using those OGG files).
Here’s an example of why I like MPD. The Telarc label recently joined eMusic and I was pleased to discover that Simone Dinnerstein’s recent, well-reviewed interpretation of Bach’s Goldberg Variations appeared on the site. I downloaded it, and also ripped the Bach Edition version (by Pieter-Jan Belder on harpsichord) from the set of Bach CDs I’m working my way through. After listening to them all the way through several times, album by album, I decided I wanted to listen to them with the two albums interlaced so I’d hear the Belder version of a variation followed directly by the Dinnerstein version. There are 32 of them. Imagine how many mouse movements and clicks it would be to get iTunes to do this.
Here’s how I did it with MPD and mpc. It took about 30 seconds, mostly trying to figure out how to get bash to do math.
First, here’s what the playlist looked like initially. The Belder harpsichord version was first, occupying tracks 1–32 in the playlist. Dinnerstein’s 1903 Hamburg Steinway concert grand version occupied tracks 33–64. So I needed to move track 33 to position 2, track 34 to position 4 (because track 2 had been pushed down to position 3), track 35 to position 6, and so on. See the pattern? Each Dinnerstein track moves to position ([Dinnerstein track] − 32) × 2.
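The mapping is easy to sanity-check with a couple of lines of Python:

```python
# Each Dinnerstein track t (33..64) moves to position (t - 32) * 2,
# which slots it directly after the matching Belder track.
moves = [(t, (t - 32) * 2) for t in range(33, 65)]
print(moves[:3])  # first few moves: (33, 2), (34, 4), (35, 6)
```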
```
$ mpc playlist
1) Pieter-Jan Belder - Bach: Goldberg Variations - Aria
2) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 1
3) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 2
4) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 3, canone all'unisono
...
33) Simone Dinnerstein - Bach: Goldberg Variations: Aria
34) Simone Dinnerstein - Bach: Goldberg Variations: Variation 1
35) Simone Dinnerstein - Bach: Goldberg Variations: Variation 2
36) Simone Dinnerstein - Bach: Goldberg Variations: Variation 3 (Canone all'Unisono)
```
And here’s the command to reorder the playlist. The nested `$((...))` constructions are bash arithmetic expansion, which evaluates the expression inside as math rather than as text.
```
$ for i in `seq 33 64`; do mpc move $i $(($(($i-32))*2)); done
```
And the playlist afterwards:
```
$ mpc playlist
1) Pieter-Jan Belder - Bach: Goldberg Variations - Aria
2) Simone Dinnerstein - Bach: Goldberg Variations: Aria
3) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 1
4) Simone Dinnerstein - Bach: Goldberg Variations: Variation 1
5) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 2
6) Simone Dinnerstein - Bach: Goldberg Variations: Variation 2
7) Pieter-Jan Belder - Bach: Goldberg Variations - Variatio 3, canone all'unisono
8) Simone Dinnerstein - Bach: Goldberg Variations: Variation 3 (Canone all'Unisono)
...
```
```
$ mpc play
```
Yesterday I found myself with 43 eMusic downloads available and my refresh date approaching quickly. I’ve got quite a few records in my queue, and choosing among them to exactly consume my available downloads is difficult to do by hand. So I wrote a program to do it.
It’s a Python script, so it’ll run on any platform. Click this link to download it: choose_albums.py
To use it, you’ll need to create a separate file that contains a list of the albums you’re interested in and the number of tracks on each album. Here’s the file I was working with yesterday, called queue:
```
clientele 14
rosebuds 9
okkervil river 11
stravinsky 19
saint-saens violin 3 8
mapmaker 12
of montreal 5
long blondes 14
glass #4 7
widor #5&9 9
bonnie billy 13
```
Each line contains an album name, followed by a space and the number of tracks on the album.
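Since album names can themselves contain spaces and digits (like “saint-saens violin 3”), splitting on the last space is the safe way to parse a line; here’s a minimal sketch of that approach (this isn’t necessarily how choose_albums.py does it):

```python
def parse_queue(text):
    """Parse queue lines of the form '<album name> <track count>'.

    rsplit on the final space so names with spaces and digits survive.
    """
    queue = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        name, count = line.rsplit(" ", 1)
        queue[name] = int(count)
    return queue

print(parse_queue("glass #4 7\nsaint-saens violin 3 8"))
```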
To run the program, call it and pass the name of your file and the number of downloads you’ve got left:
```
$ ./choose_albums.py queue 43
glass #4, saint-saens violin 3, stravinsky, widor #5&9: 43
```
This is one (of many) ways to use up my 43 downloads. The script chooses albums randomly, so if you want to see all the possibilities, you’ll need to run it a lot. I wrote a very simple shell script to do that:
```sh
#! /bin/sh
tracks=$@
for i in `seq 1 100`; do
    ./choose_albums.py queue $tracks
done | sort | uniq
```
You can download it here: doit.sh
Depending on how large your queue is, you may need to increase the number of times it runs the script. Because it’s random and not deterministic, it can take a lot of runs to find all possible options (in fact, with 25 albums in the queue and 90 tracks available, there are more than 40,000 possible combinations, so this script is best at choosing from a small set of options, unless a random choice is what you're after). You’ll also need to change the name of your queue file if it’s not called queue.
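For a queue this small, you don’t actually need randomness at all: exhaustively enumerating every subset whose track counts sum exactly to the budget takes only 2¹¹ = 2,048 checks. Here’s a sketch of that deterministic alternative (my own illustration, not what choose_albums.py does):

```python
from itertools import combinations

def exact_selections(albums, budget):
    """Every subset of albums whose track counts sum exactly to budget."""
    names = sorted(albums)
    hits = []
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if sum(albums[n] for n in combo) == budget:
                hits.append(combo)
    return hits

# The queue file from above, as a dict of album -> track count
queue = {"clientele": 14, "rosebuds": 9, "okkervil river": 11,
         "stravinsky": 19, "saint-saens violin 3": 8, "mapmaker": 12,
         "of montreal": 5, "long blondes": 14, "glass #4": 7,
         "widor #5&9": 9, "bonnie billy": 13}

for combo in exact_selections(queue, 43):
    print(", ".join(combo))
```

With 25 albums this brute force grows to 2²⁵ subsets and a smarter dynamic-programming approach would be warranted, but for a handful of albums it finds every option in one run.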
Here’s what I did yesterday:
```
$ ./doit.sh 43 | grep blonde | grep glass
bonnie billy, glass #4, long blondes, rosebuds: 43
bonnie billy, glass #4, long blondes, widor #5&9: 43
clientele, glass #4, long blondes, saint-saens violin 3: 43
glass #4, long blondes, of montreal, rosebuds, saint-saens violin 3: 43
glass #4, long blondes, of montreal, saint-saens violin 3, widor #5&9: 43
```
The two grep commands were included because I knew I wanted to include the new Long Blondes album and Philip Glass’s Fourth Symphony in my selections. I wound up going with the second choice, adding Bonnie “Prince” Billy’s The Letting Go and Widor’s Fifth and Ninth Symphonies.
One final note: if after 1,000 tries, the script doesn’t find a set of choices that uses up all your downloads, it’ll report the last set of albums it found and the number of tracks used up. Be sure that the final number reported matches the number you passed in or you won’t be using all your downloads for the month. The script isn’t smart enough to find the “best” solution in this situation, so if this happens, you’ll need to run it a bunch of times to maximize the number you’re downloading (or better, add more items to the queue file and run it again).