
Cold November
Several years ago I showed some R code for making a heatmap of the Oakland A’s players’ ranks in various hitting and pitching statistics.
Last week I used this same style of plot to make a new weather visualization on my web site: a calendar heatmap of the difference between daily average temperature and the “climate normal” daily temperature for all dates in the last ten years. “Climate normals” are generated every ten years and are the averages for a variety of statistics for the previous 30-year period, currently 1981–2010.
A calendar heatmap looks like a normal calendar, except that each date box is colored according to the statistic of interest, in this case the difference between the actual temperature on that date and the climate normal temperature for that date. I also created a normalized version based on the standard deviations of temperature on each date.
Here’s the temperature anomaly plot showing all the temperature differences for the last ten years:
It’s a pretty incredible way to look at a lot of data at the same time, and it makes it really easy to pick out anomalous events such as the cold November and December of 2012. One thing you can see in this plot is that the more dramatic temperature differences are always in the winter; summer anomalies are generally smaller. This is because the range of likely temperatures is much larger in winter, and in order to equalize that difference, we need to normalize the anomalies by this range.
One way to do that is to divide the actual temperature difference by the standard deviation of the 30-year climate normal mean temperature. Because of the properties of the normal distribution that standard deviations describe, approximately 68% of the variation occurs between −1 and 1 standard deviation, 95% between −2 and 2, and 99.7% between −3 and 3 standard deviations. That means that deep red or blue dates in the normalized calendar plot, those outside of −3 and 3, are fairly rare occurrences.
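As a rough sketch of how that normalization works, using the pafg data frame that appears in the code below (the average_mean_temp_std column, which would hold the standard deviation of the 30-year normal mean temperature for each date, is a hypothetical name):
# Normalized anomaly: the difference from the climate normal, expressed in
# standard deviations (average_mean_temp_std is a hypothetical column holding
# the standard deviation of the 30-year normal mean temperature for each date)
pafg$normalized_anomaly <- (pafg$mean_temp - pafg$average_mean_temp) /
    pafg$average_mean_temp_std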
Here’s the normalized anomalies for the last twelve months:
The tricky part in generating either of these plots is getting the temperature data into the right format. The plots are faceted by month and year (or YYYY-MM in the twelve-month plot), so each record needs to have month and year. That part is easy. Each individual plot is a single calendar month, organized by day of the week along the x-axis and the inverse of week number along the y-axis (the first week in a month is at the top of the plot, the last at the bottom).
Here’s how to get the data formatted properly:
library(lubridate)
cal <- function(dt) {
    # Takes a date and returns a vector c(weekrow, daycol), where weekrow
    # starts at 0 for the first week of the month and daycol starts at 0
    # for Sunday
    year <- year(dt)
    month <- month(dt)
    day <- day(dt)
    wday_first <- wday(ymd(paste(year, month, 1, sep = '-'), quiet = TRUE))
    offset <- 7 + (wday_first - 2)
    weekrow <- ((day + offset) %/% 7) - 1
    daycol <- (day + offset) %% 7
    c(weekrow, daycol)
}
weekrow <- function(dt) {
    cal(dt)[1]
}
daycol <- function(dt) {
    cal(dt)[2]
}
vweekrow <- function(dts) {
    sapply(dts, weekrow)
}
vdaycol <- function(dts) {
    sapply(dts, daycol)
}
pafg$temp_anomaly <- pafg$mean_temp - pafg$average_mean_temp
pafg$month <- month(pafg$dt, label = TRUE, abbr = TRUE)
pafg$year <- year(pafg$dt)
pafg$weekrow <- factor(vweekrow(pafg$dt),
                       levels = c(5, 4, 3, 2, 1, 0),
                       labels = c('6', '5', '4', '3', '2', '1'))
pafg$daycol <- factor(vdaycol(pafg$dt),
                      labels = c('u', 'm', 't', 'w', 'r', 'f', 's'))
And the plotting code:
library(ggplot2)
library(scales)
library(grid)
svg('temp_anomaly_heatmap.svg', width = 11, height = 10)
q <- ggplot(data = subset(pafg, year > max(pafg$year) - 11),
            aes(x = daycol, y = weekrow, fill = temp_anomaly)) +
    theme_bw() +
    theme(axis.text.x = element_blank(),
          axis.text.y = element_blank(),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          axis.ticks.x = element_blank(),
          axis.ticks.y = element_blank(),
          axis.title.x = element_blank(),
          axis.title.y = element_blank(),
          legend.position = "bottom",
          legend.key.width = unit(1, "in"),
          legend.margin = unit(0, "in")) +
    geom_tile(colour = "white") +
    facet_grid(year ~ month) +
    scale_fill_gradient2(name = "Temperature anomaly (°F)",
                         low = 'blue', mid = 'lightyellow', high = 'red',
                         breaks = pretty_breaks(n = 10)) +
    ggtitle("Difference between daily mean temperature\nand 30-year average mean temperature")
print(q)
dev.off()
You can find the current versions of the temperature and normalized anomaly plots at:

Yesterday I attempted to watch the A’s Opening Day game against the Mariners in the Oakland Coliseum. Unfortunately, it turns out the entire state of Alaska is within the blackout region for all Seattle Mariners games, regardless of where the Mariners are playing.
The A’s lost the game, so maybe I’m not too disappointed I didn’t see it, but out of curiosity, let’s extend Major League Baseball’s Alaska blackout “logic” to the rest of the country and see what that looks like.
First, load the locations of all Major League stadiums into a PostGIS database. There are a variety of sources for this on the Internet, but many are missing the more recent stadiums. I downloaded one and updated the stadiums that weren’t correct using information from Wikipedia.
CREATE TABLE stadiums (
    division text, opened integer, longitude numeric,
    latitude numeric, class text, team text,
    stadium text, league text, capacity integer
);
COPY stadiums FROM 'stadium_data.csv' CSV HEADER;
Turn the latitudes and longitudes into geographic objects:
SELECT addgeometrycolumn('stadiums', 'geom_wgs84', 4326, 'POINT', 2);
CREATE SEQUENCE stadium_id_seq;
ALTER TABLE stadiums
    ADD COLUMN id integer default (nextval('stadium_id_seq'));
UPDATE stadiums
    SET geom_wgs84 = ST_SetSRID(
        ST_MakePoint(longitude, latitude), 4326
    );
Now load in a states polygon layer (which came from: http://www.arcgis.com/home/item.html?id=f7f805eb65eb4ab787a0a3e1116ca7e5):
$ shp2pgsql -s 4269 states.shp | psql -d stadiums
Calculate how far the farthest location in Alaska is from Safeco Field:
SELECT ST_MaxDistance(safeco, alaska) / 1609.344 AS miles
FROM (
    SELECT ST_Transform(geom_wgs84, 3338) AS safeco
    FROM stadiums WHERE stadium ~* 'safeco'
) AS foo, (
    SELECT ST_Transform(geom, 3338) AS alaska
    FROM states
    WHERE state_name ~* 'alaska'
) AS bar;
miles
------------------
2467.67499410842
Yep. Folks in Alaska are supposed to travel 2,468 miles (3,971,338 meters, to be exact) in order to attend a game at Safeco Field (or farther if they’re playing elsewhere because the blackout rules aren’t just for home games anymore).
Now we add a 2,468 mile buffer around each of the 30 Major League stadiums to see what the blackout rules would look like if the rule for Alaska was applied everywhere:
SELECT addgeometrycolumn(
    'stadiums', 'ak_rules_buffer', 3338, 'POLYGON', 2);
UPDATE stadiums
    SET ak_rules_buffer =
        ST_Buffer(ST_Transform(geom_wgs84, 3338), 3971338);
Here's a map showing what these buffers look like:
Or, shown another way, here’s the number of teams that would be blacked out for everyone in each state. Keep in mind that there are 30 teams in Major League Baseball, so the states at the top of the list wouldn’t be able to watch any team at all if the Alaska rules were applied evenly across the country:
Blacked out teams | States |
---|---|
30 | Alabama, Arkansas, Colorado, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Nebraska, New Mexico, North Dakota, Ohio, Oklahoma, South Dakota, Tennessee, Texas, Utah, Wisconsin, Wyoming |
29 | Idaho, Montana, Arizona |
28 | District of Columbia, West Virginia |
27 | Georgia, South Carolina |
26 | Nevada |
25 | Pennsylvania, Florida, Vermont |
24 | Rhode Island, New Hampshire, Connecticut, Delaware, New Jersey, New York, North Carolina, Virginia, Maryland, Massachusetts |
23 | Maine |
22 | Oregon, Washington, California |
1 | Alaska |
0 | Hawaii |
Based on this, Hawaii would be the place to live as they can actually watch all 30 teams! Except the blackout rules for Hawaii are even stupider than Alaska: they can’t watch any of the teams on the West Coast. Ouch!
This is stupid-crazy. It seems to me that a more appropriate rule might be buffers that represent a two-hour drive, or whatever distance is considered “reasonable” for traveling to attend a game. When I lived in Davis, California, we drove to the Bay Area several times a season to watch the A’s or Giants play, and that’s about the farthest I’d expect someone to go in order to see a three-hour baseball game.
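For example, assuming a two-hour drive covers roughly 120 miles, a sketch of that rule using the same approach as the Alaska-rules buffer above (the two_hour_buffer column name is mine):
SELECT addgeometrycolumn(
    'stadiums', 'two_hour_buffer', 3338, 'POLYGON', 2);
UPDATE stadiums
    SET two_hour_buffer =
        ST_Buffer(ST_Transform(geom_wgs84, 3338), 120 * 1609.344);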
This makes a much more reasonable map:
I’m sure there are other issues going on here, such as cable television deals and other sources of revenue that MLB is trying to protect. But for the fans, those who ultimately pay for this game, the current situation is idiotic. I don’t have access to cable television (and even if I did, I won’t buy it until it’s possible to pick and choose which stations I want to pay for), and I’m more than 2,000 miles from the nearest ballpark. My only avenue for enjoying the game is to pay MLB $130 for an MLB.TV subscription. Which I happily do. Unfortunately, this year I can’t watch my teams whenever they happen to play the Mariners.
Earlier today our monitor stopped working and left us without heat when it was −35°F outside. I drove home and swapped the broken heater with our spare, but the heat was off for several hours and the temperature in the house dropped into the 50s until I got the replacement running. While I waited for the house to warm up, I took a look at the heat loss data for the building.
To do this, I experimented with the “Python scientific computing stack”: the IPython shell (I used the notebook functionality to produce the majority of this blog post), Pandas for data wrangling, matplotlib for plotting, and NumPy in the background. Ordinarily I would have performed the entire analysis in R, but I’m much more comfortable in Python and the IPython notebook is pretty compelling. What is lacking, in my opinion, is the solid graphics provided by the ggplot2 package in R.
First, I pulled the data from the database for the period the heater was off (and probably a little extra on either side):
import psycopg2
from pandas.io import sql

con = psycopg2.connect(host = 'localhost', database = 'arduino_wx')
temps = sql.read_frame("""
    SELECT obs_dt, downstairs,
           (lead(downstairs) over (order by obs_dt) - downstairs) /
               interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
               * 3600 as downstairs_rate,
           upstairs,
           (lead(upstairs) over (order by obs_dt) - upstairs) /
               interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
               * 3600 as upstairs_rate,
           outside
    FROM arduino
    WHERE obs_dt between '2013-03-27 07:00:00' and '2013-03-27 12:00:00'
    ORDER BY obs_dt;""", con, index_col = 'obs_dt')
SQL window functions calculate the rate at which the temperature is changing from one observation to the next and convert the units to change in temperature per hour (Δ°F/hour).
Passing the index_col argument to the sql.read_frame() function is important so that the Pandas data frame doesn’t wind up with an arbitrary numerical index. When plotting, the index column is typically used for the x-axis / independent variable.
Next, calculate the difference between the indoor and outdoor temperatures, which is important in any heat loss calculations (the greater this difference, the greater the loss):
temps['downstairs_diff'] = temps['downstairs'] - temps['outside']
temps['upstairs_diff'] = temps['upstairs'] - temps['outside']
I took a quick look at the data, and it looks like the downstairs temperatures are smoother, so I subset the data to just the downstairs (and outside) temperature records.
temps_up = temps[['outside', 'downstairs', 'downstairs_diff', 'downstairs_rate']]
print(u"Minimum temperature loss (°f/hour) = {0}".format(
temps_up['downstairs_rate'].min()))
temps_up.head(10)
Minimum temperature loss (deg F/hour) = -3.7823079517
obs_dt | outside | downstairs | diff | rate |
---|---|---|---|---|
2013-03-27 07:02:32 | -33.09 | 65.60 | 98.70 | 0.897 |
2013-03-27 07:07:32 | -33.19 | 65.68 | 98.87 | 0.661 |
2013-03-27 07:12:32 | -33.26 | 65.73 | 98.99 | 0.239 |
2013-03-27 07:17:32 | -33.52 | 65.75 | 99.28 | -2.340 |
2013-03-27 07:22:32 | -33.60 | 65.56 | 99.16 | -3.782 |
2013-03-27 07:27:32 | -33.61 | 65.24 | 98.85 | -3.545 |
2013-03-27 07:32:31 | -33.54 | 64.95 | 98.49 | -2.930 |
2013-03-27 07:37:32 | -33.58 | 64.70 | 98.28 | -2.761 |
2013-03-27 07:42:32 | -33.48 | 64.47 | 97.95 | -3.603 |
2013-03-27 07:47:32 | -33.28 | 64.17 | 97.46 | -3.780 |
You can see from the first bit of data that when the heater first went off, the differential between inside and outside was almost 100 degrees, and the temperature was dropping at a rate of 3.8 degrees per hour. Starting at 65°F, we’d be below freezing in just under nine hours at this rate, but as the differential drops, the rate that the inside temperature drops will slow down. I'd guess the house would stay above freezing for more than twelve hours even with outside temperatures as cold as we had this morning.
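As a quick check on that arithmetic, here’s a back-of-the-envelope calculation (a lower bound, since the actual rate slows as the differential shrinks):
# If the steepest observed loss rate held constant, how long until freezing?
start_temp = 65.6          # °F inside when the heater quit
freezing = 32.0            # °F
loss_rate = 3.78           # °F per hour, the steepest rate in the table above
hours_to_freezing = (start_temp - freezing) / loss_rate
print("{0:.1f} hours".format(hours_to_freezing))   # 8.9 hours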
Here’s a plot of the data. The plot looks pretty reasonable with very little code:
import matplotlib.pyplot as plt

plt.figure()
temps_up.plot(subplots = True, figsize = (8.5, 11),
              title = u"Heat loss from our house at −35°F",
              style = ['bo-', 'ro-', 'ro-', 'ro-', 'go-', 'go-', 'go-'])
plt.legend()
# plt.subplots_adjust(hspace = 0.15)
plt.savefig('downstairs_loss.pdf')
plt.savefig('downstairs_loss.svg')
You’ll notice that even before I came home and replaced the heater, the temperature in the house had started to rise. This is certainly due to solar heating as it was a clear day with more than twelve hours of sunlight.
The plot shows what looks like a relationship between the rate of change inside and the temperature differential between inside and outside, so we’ll test this hypothesis using linear regression.
First, get the data where the temperature in the house was dropping.
cooling = temps_up[temps_up['downstairs_rate'] < 0]
Now run the regression between rate of change and outside temperature:
import pandas as pd
results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'outside'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.9214
Adj R-squared:     0.9192

Rmse:              0.2807

F-stat (1, 36):   421.7806, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x     0.1397     0.0068      20.54     0.0000     0.1263     0.1530
     intercept     1.3330     0.1902       7.01     0.0000     0.9603     1.7057
---------------------------------End of Summary---------------------------------
You can see there’s a very strong positive relationship between the outside temperature and the rate at which the inside temperature changes. As it warms outside, the drop in inside temperature slows.
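Plugging the fitted coefficients back in shows roughly what that relationship implies (a quick sketch using the numbers from the summary above):
# rate ≈ 0.1397 * outside + 1.333, using the coefficients from the regression above
# At -35 °F outside, the model predicts the inside temperature is dropping at about:
predicted_rate = 0.1397 * -35 + 1.333
print("{0:.2f} °F/hour".format(predicted_rate))   # -3.56, close to the observed rates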
The real relationship is more likely to be related to the differential between inside and outside. In this case, the relationship isn’t quite as strong. I suspect that the heat from the sun is confounding the analysis.
results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'downstairs_diff'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.8964
Adj R-squared:     0.8935

Rmse:              0.3222

F-stat (1, 36):   311.5470, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x    -0.1032     0.0058     -17.65     0.0000    -0.1146    -0.0917
     intercept     6.6537     0.5189      12.82     0.0000     5.6366     7.6707
---------------------------------End of Summary---------------------------------
con.close()
I’m not sure how much information I really got out of this, but I am pleasantly surprised that the house held its heat as well as it did even with the very cold temperatures. It might be interesting to intentionally turn off the heater in the middle of winter and examine these relationships over a longer period and without the influence of the sun.
And I’ve enjoyed learning a new set of tools for data analysis. Thanks to my friend Ryan for recommending them.

The 2013 Tournament of Books starts on Monday, and the brackets were announced yesterday. There’s a great tournament diagram available for download, but rather than try to figure out how to fill it in electronically so it was legible, I generated a traditional bracket-style diagram with my picks filled out (at the bottom of the post).
The diagram creation script I wrote last year doesn’t include a pre-round bracket, used this year to decide between Billy Lynn’s Long Halftime Walk (my pick), Fobbit, and Yellow Birds, so I just filled in the obvious (to me!) choice in the top spot of the first round.
The first round presents some challenges for the judges. I’ll be interested in the contest between Bring Up the Bodies and HHhH, both of which are historical fiction and were very good. Mantel’s previous book in the story of Thomas Cromwell (Wolf Hall) won the Tournament in 2009, and Bodies is as good as that book. But HHhH is a creative and entertaining mix of semi-fiction, history, and the author’s thoughts and concerns about the story / history he is writing. It’s a different style of book than what is typically in the Tournament (although more traditional than Building Stories, I suppose), and I wonder how it will compete against such an obvious favorite to win.
The Round House vs. The Fault In Our Stars and The Orphan Master’s Son vs. Where’d You Go, Bernadette are contests between excellent books that are completely different in tone and subject (Native American sexual assault and terminal cancer in adolescents / North Korean oppression and madcap antics from a stifled architectural genius). Instead of getting two pairings that are similar in the first round (like Bodies and HHhH) and then two very different books in the second, the different books will be competing against each other initially.
Although Billy Lynn has achieved some literary notoriety and is the book I’d like to see win, I think the real heavyweights in the Tournament are Gone Girl and Bring Up the Bodies. I expect to see these two go far, maybe to the final round.
Finally, will there be another book that goes deep into the Tournament that I really didn’t like (like Lightning Rods last year)? I didn’t even try to read Dear Life, couldn’t finish Ivyland, and didn’t enjoy How Should a Person Be? I’m hoping for round one elimination for these books.
Get your diagrams ready and start reading the results on Monday!
A couple days ago I got an email from a Galoot who was hoping to come north to see the aurora and wondered if March was a good time to come to Fairbanks. I know that March and September are two of my favorite months, but I wanted to check whether my perception of how sunny it is in March is because it really is sunny in March, or because March is the month when winter begins to turn to spring in Fairbanks and it just seems brighter and sunnier, with longer days and white snow on the ground.
I found three sources of data for “cloudiness.” I’ve been parsing the Fairbanks Airport daily climate summary since 2002, and it has a value in it called Average Sky Cover which ranges from 0.0 (completely clear) to 1.0 (completely cloudy). I’ll call this data “pafa.”
The second source is the Global Historical Climatology Network - Daily (GHCN-Daily) for the Fairbanks Airport station. There’s a variable in there named ACMH, which is described as Cloudiness, midnight to midnight (percentage). For the Airport station, this value appears in the database from 1965 through 1997. One reassuring thing about this parameter is that it specifically says it’s from midnight to midnight, so it would include cloudiness when it was dark outside (and the aurora would be visible if it was present). This data set is named “ghcnd.”
The final source is modelled data from the North American Regional Reanalysis. This data set includes TCDC, or total cloud cover (percentage), and is available in three-hour increments over a grid covering North America. I chose the nearest grid point to the Fairbanks Airport and retrieved the daily mean of total cloud cover for the period of the database I have downloaded (1979–2012). In the plots that follow, this is named “narr.”
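The merge below assumes each source has already been read into a data frame (pafa, ghcnd, and narr), each with a date column and a daily cloud cover value scaled from 0 to 100. Here’s a minimal sketch of that shape for the airport summary data (the file and column names are hypothetical):
library(lubridate)

# Hypothetical daily summary file: one row per day with a date and
# average sky cover on a 0.0-1.0 scale
pafa <- read.csv('pafa_daily_summary.csv', stringsAsFactors = FALSE)
pafa$date <- ymd(pafa$date)
pafa$pafa <- pafa$avg_sky_cover * 100   # rescale sky cover to a 0-100 percentage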
After reading the data and merging the three data sets together, I generate monthly means of cloud cover (scaled to percentages from 0 to 100) in each of the data sets, in R:
library(plyr)
library(lubridate)   # for month()

cloud_cover <- merge(pafa, ghcnd, by = 'date', all = TRUE)
cloud_cover <- merge(cloud_cover, narr, by = 'date', all = TRUE)
cloud_cover$month <- month(cloud_cover$date)

by_month_mean <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
    .(month),
    summarise,
    pafa = mean(pafa, na.rm = TRUE),
    ghcnd = mean(ghcnd, na.rm = TRUE),
    narr = mean(narr, na.rm = TRUE))
by_month_mean$mon <- factor(by_month_mean$month,
                            labels = c('jan', 'feb', 'mar',
                                       'apr', 'may', 'jun',
                                       'jul', 'aug', 'sep',
                                       'oct', 'nov', 'dec'))
In order to plot it, I generate text labels for the year range of each data set and melt the data so it can be faceted:
library(lubridate)
library(reshape2)

text_labels <- rbind(
    data.frame(variable = 'pafa',
               str = paste(min(year(pafa$date)), '-', max(year(pafa$date)))),
    data.frame(variable = 'ghcnd',
               str = paste(min(year(ghcnd$date)), '-', max(year(ghcnd$date)))),
    data.frame(variable = 'narr',
               str = paste(min(year(narr$date)), '-', max(year(narr$date)))))

mean_melted <- melt(by_month_mean,
                    id.vars = 'mon',
                    measure.vars = c('pafa', 'ghcnd', 'narr'))
Finally, the plotting:
library(ggplot2)

q <- ggplot(data = mean_melted, aes(x = mon, y = value))
q +
    theme_bw() +
    geom_bar(stat = 'identity', colour = "darkred", fill = "darkorange") +
    facet_wrap(~ variable, ncol = 1) +
    scale_x_discrete(name = "Month") +
    scale_y_continuous(name = "Mean cloud cover") +
    ggtitle('Cloud cover data for Fairbanks Airport Station') +
    geom_text(data = text_labels, aes(x = 'feb', y = 70, label = str), size = 4) +
    geom_text(aes(label = round(value, digits = 1)), vjust = 1.5, size = 3)
The good news for the guy coming to see the northern lights is that March is indeed the least cloudy month in Fairbanks, and all three data sources show similar patterns. The NARR data set has September and October as the cloudiest months, but anyone who has lived in Fairbanks knows that August is the rainiest (and probably cloudiest) month; PAFA and GHCND show a late-summer pattern that seems more like what I recall.
Another way to slice the data is to get the percentage of days in each month with less than 20% cloud cover, a measure of the clearest days. This is a pretty easy calculation:
by_month_less_than_20 <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
    .(month),
    summarise,
    pafa = sum(pafa < 20, na.rm = TRUE) / sum(!is.na(pafa)) * 100,
    ghcnd = sum(ghcnd < 20, na.rm = TRUE) / sum(!is.na(ghcnd)) * 100,
    narr = sum(narr < 20, na.rm = TRUE) / sum(!is.na(narr)) * 100)
And the results:
We see the same pattern as in the mean cloudiness plot. March is the month with the highest percentage of days with less than 20% cloud cover. Depending on the data set, between 17 and 24 percent of March days are quite clear. In contrast, the summer months rarely see days with no cloud cover. In June and July, the days are long and convection often builds large clouds in the late afternoon, and by August, the rain has started. Just like in the previous plot, NARR has September as the month with the fewest clear days, which doesn’t match my experience.