tue, 02-apr-2013, 19:08
A’s v Mariners

Yesterday I attempted to watch the A’s Opening Day game against the Mariners in the Oakland Coliseum. Unfortunately, it turns out the entire state of Alaska is within the blackout region for all Seattle Mariners games, regardless of where the Mariners are playing.

The A’s lost the game, so maybe I’m not too disappointed I didn’t see it, but out of curiosity, let’s extend Major League Baseball’s Alaska blackout “logic” to the rest of the country and see what that looks like.

First, load the locations of all Major League stadiums into a PostGIS database. There are a variety of sources for this on the Internet, but many are missing the more recent stadiums. I downloaded one and updated the entries that weren’t correct using information from Wikipedia.

CREATE TABLE stadiums (
   division text, opened integer, longitude numeric,
   latitude numeric, class text, team text,
   stadium text, league text, capacity integer
);
COPY stadiums FROM 'stadium_data.csv' CSV HEADER;

Turn the latitudes and longitudes into geometry objects:

SELECT addgeometrycolumn('stadiums', 'geom_wgs84', 4326, 'POINT', 2);
CREATE SEQUENCE stadium_id_seq;
ALTER TABLE stadiums
   ADD COLUMN id integer default (nextval('stadium_id_seq'));
UPDATE stadiums
   SET geom_wgs84 = ST_SetSRID(
      ST_MakePoint(longitude, latitude), 4326
   );

Now load a states polygon layer (which came from http://www.arcgis.com/home/item.html?id=f7f805eb65eb4ab787a0a3e1116ca7e5):

$ shp2pgsql -s 4269 states.shp | psql -d stadiums

Calculate how far the farthest location in Alaska is from Safeco Field:

SELECT ST_MaxDistance(safeco, alaska) / 1609.344 AS miles
FROM (
   SELECT ST_Transform(geom_wgs84, 3338) AS safeco
   FROM stadiums WHERE stadium ~* 'safeco'
) AS foo, (
   SELECT ST_Transform(geom, 3338) AS alaska
   FROM states
   WHERE state_name ~* 'alaska'
) AS bar;

miles
------------------
 2467.67499410842

Yep. Folks in Alaska are supposed to travel 2,468 miles (3,971,338 meters, to be exact) in order to attend a game at Safeco Field (or farther when the Mariners are playing somewhere else, because the blackout rules aren’t just for home games anymore).

Now we add a 2,468 mile buffer around each of the 30 Major League stadiums to see what the blackout rules would look like if the rule for Alaska was applied everywhere:

SELECT addgeometrycolumn(
   'stadiums', 'ak_rules_buffer', 3338, 'POLYGON', 2);
UPDATE stadiums
SET ak_rules_buffer =
   ST_Buffer(ST_Transform(geom_wgs84, 3338), 3971338);
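
Counting how many of these buffers completely cover a state gives the per-state numbers shown below the map. The original post doesn’t include that query, but a minimal sketch of one way to generate the counts (reusing the geom and state_name columns from the states layer) might look like this:

-- Sketch (not from the original post): for each state, count the stadium
-- buffers that cover the entire state, reprojecting the states into the
-- same Alaska Albers projection (EPSG:3338) used for the buffers.
SELECT s.state_name, count(t.id) AS blacked_out_teams
FROM states AS s
   LEFT JOIN stadiums AS t
      ON ST_Covers(t.ak_rules_buffer, ST_Transform(s.geom, 3338))
GROUP BY s.state_name
ORDER BY blacked_out_teams DESC, s.state_name;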

Here's a map showing what these buffers look like:

Or, shown another way, here’s the number of teams that would be blacked out for everyone in each state. Keep in mind that there are 30 teams in Major League Baseball, so the states at the top of the list wouldn’t be able to watch any team at all if the Alaska rules were applied evenly across the country:

Number of blacked out teams using Alaska blackout rules
Blacked out teams States
30 Alabama, Arkansas, Colorado, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Nebraska, New Mexico, North Dakota, Ohio, Oklahoma, South Dakota, Tennessee, Texas, Utah, Wisconsin, Wyoming
29 Idaho, Montana, Arizona
28 District of Columbia, West Virginia
27 Georgia, South Carolina
26 Nevada
25 Pennsylvania, Florida, Vermont
24 Rhode Island, New Hampshire, Connecticut, Delaware, New Jersey, New York, North Carolina, Virginia, Maryland, Massachusetts
23 Maine
22 Oregon, Washington, California
1 Alaska
0 Hawaii

Based on this, Hawaii would be the place to live, since they can actually watch all 30 teams! Except the blackout rules for Hawaii are even stupider than Alaska’s: Hawaiians can’t watch any of the West Coast teams. Ouch!

This is stupid-crazy. It seems to me that a more appropriate rule might be buffers that represent a two-hour drive, or whatever distance is considered “reasonable” for traveling to attend a game. When I lived in Davis, California, we drove to the Bay Area several times a season to watch the A’s or Giants play, and that’s about the farthest I’d expect someone to go in order to see a three-hour baseball game.
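
The code for that version isn’t shown here, but the same buffer approach with a smaller radius would work; here’s a sketch assuming, say, 120 miles as a stand-in for a two-hour drive (the distance is my assumption, not a number from the analysis):

SELECT addgeometrycolumn(
   'stadiums', 'two_hour_buffer', 3338, 'POLYGON', 2);
UPDATE stadiums
SET two_hour_buffer =
   -- 120 miles converted to meters; the radius is an assumption
   ST_Buffer(ST_Transform(geom_wgs84, 3338), 120 * 1609.344);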

This makes a much more reasonable map:

I’m sure there are other issues going on here, such as cable television deals and other sources of revenue that MLB is trying to protect. But for the fans, the people who ultimately pay for this game, the current situation is idiotic. I don’t have access to cable television (and even if I could get it, I won’t buy it until it’s possible to pick and choose which stations I pay for), and I’m more than 2,000 miles from the nearest ballpark. My only avenue for enjoying the game is to pay MLB $130 for an MLB.TV subscription. Which I happily do. Unfortunately, this year I can’t watch my teams whenever they happen to play the Mariners.

tags: baseball  SQL  GIS 
wed, 27-mar-2013, 18:35

Earlier today our Monitor heater stopped working and left us without heat when it was −35°F outside. I drove home and swapped the broken heater for our spare, but the heat was off for several hours and the temperature in the house had dropped into the 50s by the time I got the replacement running. While I waited for the house to warm up, I took a look at the heat loss data for the building.

To do this, I experimented with the “Python scientific computing stack”: the IPython shell (I used the notebook functionality to produce the majority of this blog post), Pandas for data wrangling, matplotlib for plotting, and NumPy in the background. Ordinarily I would have performed the entire analysis in R, but I’m much more comfortable in Python and the IPython notebook is pretty compelling. What is lacking, in my opinion, is the solid graphics provided by the ggplot2 package in R.

First, I pulled the data from the database for the period the heater was off (and probably a little extra on either side):

import psycopg2
from pandas.io import sql
con = psycopg2.connect(host = 'localhost', database = 'arduino_wx')
temps = sql.read_frame("""
    SELECT obs_dt, downstairs,
        (lead(downstairs) over (order by obs_dt) - downstairs) /
            interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
            * 3600 as downstairs_rate,
        upstairs,
        (lead(upstairs) over (order by obs_dt) - upstairs) /
            interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
            * 3600 as upstairs_rate,
        outside
    FROM arduino
    WHERE obs_dt between '2013-03-27 07:00:00' and '2013-03-27 12:00:00'
    ORDER BY obs_dt;""", con, index_col = 'obs_dt')

The SQL window functions calculate the rate at which the temperature is changing from one observation to the next, and convert the units to the change in temperature per hour (Δ°F/hour).
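
One note on the query: interval_to_seconds() isn’t a built-in PostgreSQL function, so it’s presumably a small helper defined in the database. A minimal sketch of what such a function might look like, assuming it simply converts an interval to seconds:

-- Hypothetical helper (not shown in the original post): convert an
-- interval to seconds so the temperature change can be scaled to per-hour.
CREATE OR REPLACE FUNCTION interval_to_seconds(i interval)
RETURNS double precision AS $$
   SELECT extract(epoch FROM i);
$$ LANGUAGE sql IMMUTABLE;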

Passing the index_col argument to sql.read_frame() is very important so that the Pandas data frame doesn’t end up with an arbitrary numerical index. When plotting, the index column is typically used for the x-axis / independent variable.

Next, calculate the difference between the indoor and outdoor temperatures, which is important in any heat loss calculations (the greater this difference, the greater the loss):

temps['downstairs_diff'] = temps['downstairs'] - temps['outside']
temps['upstairs_diff'] = temps['upstairs'] - temps['outside']

A quick look at the data showed that the downstairs temperatures are smoother, so I subset the data to contain only the downstairs (and outside) temperature records.

temps_up = temps[['outside', 'downstairs', 'downstairs_diff', 'downstairs_rate']]
print(u"Minimum temperature loss (°f/hour) = {0}".format(
    temps_up['downstairs_rate'].min()))
temps_up.head(10)

Minimum temperature loss (deg F/hour) = -3.7823079517
obs_dt outside downstairs diff rate
2013-03-27 07:02:32 -33.09 65.60 98.70 0.897
2013-03-27 07:07:32 -33.19 65.68 98.87 0.661
2013-03-27 07:12:32 -33.26 65.73 98.99 0.239
2013-03-27 07:17:32 -33.52 65.75 99.28 -2.340
2013-03-27 07:22:32 -33.60 65.56 99.16 -3.782
2013-03-27 07:27:32 -33.61 65.24 98.85 -3.545
2013-03-27 07:32:31 -33.54 64.95 98.49 -2.930
2013-03-27 07:37:32 -33.58 64.70 98.28 -2.761
2013-03-27 07:42:32 -33.48 64.47 97.95 -3.603
2013-03-27 07:47:32 -33.28 64.17 97.46 -3.780

You can see from the first bit of data that when the heater first went off, the differential between inside and outside was almost 100 degrees, and the temperature was dropping at a rate of 3.8 degrees per hour. Starting at 65°F, we’d be below freezing in just under nine hours at this rate ((65 − 32) / 3.8 ≈ 8.7 hours), but as the differential shrinks, the rate at which the inside temperature drops will slow down. I'd guess the house would stay above freezing for more than twelve hours even with outside temperatures as cold as we had this morning.

Here’s a plot of the data. The plot looks pretty reasonable with very little code:

import matplotlib.pyplot as plt
plt.figure()
temps_up.plot(subplots = True, figsize = (8.5, 11),
    title = u"Heat loss from our house at −35°F",
    style = ['bo-', 'ro-', 'ro-', 'ro-', 'go-', 'go-', 'go-'])
plt.legend()
# plt.subplots_adjust(hspace = 0.15)
plt.savefig('downstairs_loss.pdf')
plt.savefig('downstairs_loss.svg')

You’ll notice that even before I came home and replaced the heater, the temperature in the house had started to rise. This is certainly due to solar heating as it was a clear day with more than twelve hours of sunlight.

The plot shows what looks like a relationship between the rate of change inside and the temperature differential between inside and outside, so we’ll test this hypothesis using linear regression.

First, get the data where the temperature in the house was dropping.

cooling = temps_up[temps_up['downstairs_rate'] < 0]

Now run the regression between rate of change and outside temperature:

import pandas as pd
results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'outside'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.9214
Adj R-squared:     0.9192

Rmse:              0.2807

F-stat (1, 36):   421.7806, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x     0.1397     0.0068      20.54     0.0000     0.1263     0.1530
     intercept     1.3330     0.1902       7.01     0.0000     0.9603     1.7057
---------------------------------End of Summary---------------------------------

You can see there’s a very strong positive relationship between the outside temperature and the rate that the inside temperature changes. As it warms outside, the drop in inside temperature slows.

The real relationship is more likely to be related to the differential between inside and outside. In this case, the relationship isn’t quite as strong. I suspect that the heat from the sun is confounding the analysis.

results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'downstairs_diff'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.8964
Adj R-squared:     0.8935

Rmse:              0.3222

F-stat (1, 36):   311.5470, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x    -0.1032     0.0058     -17.65     0.0000    -0.1146    -0.0917
     intercept     6.6537     0.5189      12.82     0.0000     5.6366     7.6707
---------------------------------End of Summary---------------------------------

con.close()

I’m not sure how much information I really got out of this, but I am pleasantly surprised that the house held its heat as well as it did despite the very cold temperatures. It might be interesting to intentionally turn off the heater in the middle of winter and examine these relationships over a longer period and without the influence of the sun.

And I’ve enjoyed learning a new set of tools for data analysis. Thanks to my friend Ryan for recommending them.

tags: weather  house  Pandas  Python  IPython 
sat, 02-mar-2013, 07:51
ToB read, 2013

The 2013 Tournament of Books starts on Monday, and the brackets were announced yesterday. There’s a great tournament diagram available for download, but rather than try to figure out how to fill it in electronically so it was legible, I generated a traditional bracket-style diagram with my picks filled out (at the bottom of the post).

The diagram-creation script I wrote last year doesn’t include a pre-round bracket, used this year to decide between Billy Lynn’s Long Halftime Walk (my pick), Fobbit, and The Yellow Birds, so I just filled in the obvious (to me!) choice in the top spot of the first round.

The first round presents some challenges for the judges. I’ll be interested in the contest between Bring Up the Bodies and HHhH, both of which are historical fiction and were very good. Mantel’s previous book in the story of Thomas Cromwell (Wolf Hall) won the Tournament in 2009, and Bodies is as good as that book. But HHhH is a creative and entertaining mix of semi-fiction, history, and the author’s thoughts and concerns about the story / history he is writing. It’s a different style of book than what is typically in the Tournament (although more traditional than Building Stories, I suppose), and I wonder how it will compete against such an obvious favorite to win.

The Round House vs. The Fault in Our Stars and The Orphan Master’s Son vs. Where’d You Go, Bernadette are contests between excellent books that are completely different in tone and subject (Native American sexual assault and terminal cancer in adolescents / North Korean oppression and madcap antics from a stifled architectural genius). Instead of getting two pairings that are similar in the first round (like Bodies and HHhH) and then two very different books in the second, the different books will be competing against each other initially.

Although Billy Lynn has achieved some literary notoriety and is the book I’d like to see win, I think the real heavyweights in the Tournament are Gone Girl and Bring Up the Bodies. I expect to see these two go far, maybe to the final round.

Finally, will there be another book that goes deep into the Tournament that I really didn’t like (like Lightning Rods last year)? I didn’t even try to read Dear Life, couldn’t finish Ivyland, and didn’t enjoy How Should a Person Be? I’m hoping for round one elimination for these books.

Get your diagrams ready and start reading the results on Monday!

2013 Tournament of Books
tue, 05-feb-2013, 18:19
House from the slough

A couple of days ago I got an email from a Galoot who was hoping to come north to see the aurora and wondered if March was a good time to come to Fairbanks. March and September are two of my favorite months here, but I wanted to check whether my perception of March as sunny is because it really is sunny, or because March is the month when winter begins to turn to spring in Fairbanks and it just seems brighter and sunnier, with longer days and white snow on the ground.

I found three sources of data for “cloudiness.” I’ve been parsing the Fairbanks Airport daily climate summary since 2002, and it has a value in it called Average Sky Cover which ranges from 0.0 (completely clear) to 1.0 (completely cloudy). I’ll call this data “pafa.”

The second source is the Global Historical Climatology Network - Daily (GHCN-Daily) data for the Fairbanks Airport station. There’s a variable in there named ACMH, which is described as Cloudiness, midnight to midnight (percentage). For the Airport station, this value appears in the database from 1965 through 1997. One reassuring thing about this parameter is that it specifically says it’s from midnight to midnight, so it would include cloudiness when it was dark outside (and the aurora would be visible if it was present). This data set is named “ghcnd.”

The final source is modelled data from the North American Regional Reanalysis. This data set includes TCDC, or total cloud cover (percentage), and is available in three-hour increments over a grid covering North America. I chose the nearest grid point to the Fairbanks Airport and retrieved the daily mean of total cloud cover for the period of the database I have downloaded (1979—2012). In the plots that follow, this is named “narr.”

After reading and merging the three data sets, I generate monthly means of cloud cover (scaled to percentages from 0 to 100) for each source, in R:

library(plyr)
library(lubridate)  # month() here (and year() later) come from lubridate
cloud_cover <- merge(pafa, ghcnd, by = 'date', all = TRUE)
cloud_cover <- merge(cloud_cover, narr, by = 'date', all = TRUE)
cloud_cover$month <- month(cloud_cover$date)

by_month_mean <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
   .(month),
   summarise,
   pafa = mean(pafa, na.rm = TRUE),
   ghcnd = mean(ghcnd, na.rm = TRUE),
   narr = mean(narr, na.rm = TRUE))
by_month_mean$mon <- factor(by_month_mean$month,
                            labels = c('jan', 'feb', 'mar',
                                       'apr', 'may', 'jun',
                                       'jul', 'aug', 'sep',
                                       'oct', 'nov', 'dec'))

In order to plot it, I generate text labels for the year range of each data set and melt the data so it can be faceted:

library(lubridate)
library(reshape2)
text_labels <- rbind(
    data.frame(variable = 'pafa',
        str = paste(min(year(pafa$date)), '-', max(year(pafa$date)))),
    data.frame(variable = 'ghcnd',
        str = paste(min(year(ghcnd$date)), '-', max(year(ghcnd$date)))),
    data.frame(variable = 'narr',
        str = paste(min(year(narr$date)), '-', max(year(narr$date)))))

mean_melted <- melt(by_month_mean,
                    id.vars = 'mon',
                    measure.vars = c('pafa', 'ghcnd', 'narr'))

Finally, the plotting:

library(ggplot2)
q <- ggplot(data = mean_melted, aes(x = mon, y = value))
q +
    theme_bw() +
    geom_bar(stat = 'identity', colour = "darkred", fill = "darkorange") +
    facet_wrap(~ variable, ncol = 1) +
    scale_x_discrete(name = "Month") +
    scale_y_continuous(name = "Mean cloud cover") +
    ggtitle('Cloud cover data for Fairbanks Airport Station') +
    geom_text(data = text_labels, aes(x = 'feb', y = 70, label = str), size = 4) +
    geom_text(aes(label = round(value, digits = 1)), vjust = 1.5, size = 3)

The good news for the guy coming to see the northern lights is that March is indeed the least cloudy month in Fairbanks, and all three data sources show similar patterns. The NARR dataset does have September and October as the cloudiest months, though, and anyone who has lived in Fairbanks knows that August is the rainiest (and probably cloudiest) month; the late summer pattern in PAFA and GHCND seems more like what I recall.

Another way to slice the data is to calculate the percentage of days in each month with less than 20% cloud cover, a measure of the clearest days. This is a pretty easy calculation:

by_month_less_than_20 <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
    .(month),
    summarise,
    pafa = sum(pafa < 20, na.rm = TRUE) / sum(!is.na(pafa)) * 100,
    ghcnd = sum(ghcnd < 20, na.rm = TRUE) / sum(!is.na(ghcnd)) * 100,
    narr = sum(narr < 20, na.rm = TRUE) / sum(!is.na(narr)) * 100);

And the results:

We see the same pattern as in the mean cloudiness plot. March is the month with the greatest number of days with less than 20% cloud cover. Depending on the data set, between 17 and 24 percent of March days are quite clear. In contrast, the summer months rarely see days with no cloud cover. In June and July, the days are long and convection often builds large clouds in the late afternoon, and by August, the rain has started. Just like in the previous plot, NARR has September as the month with the fewest clear days, which doesn’t match my experience.

tags: R  weather  Fairbanks  cloud cover 
tue, 15-jan-2013, 08:48

Over the past couple of days in Fairbanks, there has been a strong flow of warm, moist air from the Pacific, which culminated in a record (for January 14th) 0.22 inches of precipitation, most of which fell as rain. Nasty. Similar events happened in 2011 and in November 2010, which everyone will remember for the inch or more of ice that glazed the roads for the rest of that winter.

The question people always ask after a series of events like this is whether this is a new weather pattern (let’s hope not!) and whether it may be the result of global climate change (which I probably can’t answer).

To look at this, I examined the historical record for Fairbanks, searching for dates that met the following criteria:

  • At least six inches of snow on the ground
  • During the winter months (October through February)
  • Daily high temperature above freezing
  • Precipitation falling as rain

The last criterion isn’t part of the historical record, but we can estimate the amount of rain by comparing the amount of snow (measured each day on a snow board that is cleared after measurement) with the amount of liquid precipitation gathered in a tube and melted, if necessary. In my experience, the ratio of snow to liquid precipitation is almost always greater than 10 to 1 (meaning that 10 inches of snow melts down to less than an inch of liquid), so I’m looking for dates where the liquid precipitation amount is greater than one-tenth of the snowfall for that date. I’m also estimating the amount of rain by subtracting one-tenth of the snowfall from the precipitation total.

Here’s the query:

SELECT dte, tmin_f, tmax_f, prcp_in, snow_in, rain_in,
       row_number() OVER (ORDER BY rain_in desc) AS rank
FROM (
    SELECT to_char(dte, 'YYYY-MM') AS dte, round(avg(tmin_f), 1) AS tmin_f,
           round(avg(tmax_f), 1) AS tmax_f, sum(prcp_in) AS prcp_in,
           sum(snow_in) AS snow_in, sum(rain_in) AS rain_in
    FROM (
        SELECT dte, tmin_f, tmax_f, prcp_in, snow_in, snwd_in,
               round(prcp_in - (snow_in / 10.0), 2) AS rain_in
        FROM get_ghcnd('Fairbanks Intl Ap')
        WHERE extract(month from dte) IN (10, 11, 12, 1, 2)
            AND snwd_in > 6
            AND tmax_f > 32
            AND prcp_in * 10 > snow_in
        ORDER BY dte
    ) AS foo
    GROUP BY to_char(dte, 'YYYY-MM')
) AS bar
ORDER BY dte;

And the results, ordered by the year and month of the event. None of the winter rain events stretched across a month boundary, so it was convenient to aggregate them this way (although 1937 is problematic as I mention below).

Winter rains, Fairbanks Airport station
Date Min Temp (°F) Max Temp (°F) Precip (in) Snow (in) “Rain” (in) Rank
1920-02 27.7 38.4 0.26 0.9 0.17 11
1931-01 12.0 33.1 0.13 0.0 0.13 12
1932-02 7.0 33.1 0.77 7.1 0.06 16
1933-11 25.0 41.0 0.11 0.0 0.11 14
1935-11 30.4 37.2 1.51 3.2 1.19 2
1936-11 30.0 37.0 0.44 0.0 0.44 5
1937-01 24.3 36.2 2.83 16.1 1.22 1
1941-02 28.0 42.1 0.02 0.0 0.02 23
1941-11 -2.9 33.1 0.20 0.9 0.11 15
1943-02 30.5 41.0 0.12 0.0 0.12 13
1944-02 21.5 36.5 0.65 2.9 0.36 7
1948-01 7.0 33.1 0.01 0.0 0.01 26
1957-01 30.9 35.1 0.03 0.0 0.03 22
1961-01 17.1 33.1 0.04 0.0 0.04 20
1963-01 22.5 35.1 0.56 0.7 0.49 4
1967-12 20.0 33.1 0.43 0.5 0.38 6
1970-02 10.9 43.0 0.05 0.0 0.05 17
1970-10 28.0 44.1 0.04 0.0 0.04 19
1970-12 5.0 36.0 0.43 2.4 0.19 9
1986-02 10.9 37.9 0.03 0.0 0.03 21
1989-02 24.1 37.0 0.40 3.8 0.02 24
2003-02 27.0 35.0 0.29 0.0 0.29 8
2006-02 17.1 42.1 0.06 0.1 0.05 18
2010-11 26.1 34.3 0.95 0.1 0.94 3
2011-12 26.1 46.9 0.03 0.2 0.01 25
2013-01 24.0 37.0 0.22 0.4 0.18 10

The 2010 event had the third highest rainfall in the historical record; yesterday’s rain was the tenth highest. The January 1937 event is actually two events, one on the 10th and 11th and one on the 20th and 21st. If we split them up into two events, the 2010 rainfall amount is the second largest, and the two January 1937 rainfalls come in third and tied for fifth, with November 1935 holding the record.

Grouping the events into decades, we get the following:

Winter rains by decade
Decade Rain events
1920s 1
1930s 6
1940s 5
1950s 1
1960s 3
1970s 3
1980s 2
1990s 0
2000s 2
2010s 3
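
These decade counts aren’t produced by the query shown earlier; one way to get them would be to roll up its monthly results. Here’s a sketch, assuming the monthly output has been saved to a (hypothetical) winter_rains table whose dte column holds the 'YYYY-MM' text from that query:

-- Sketch: count winter rain events per decade from the saved monthly
-- results; "winter_rains" is a hypothetical table name.
SELECT substring(dte from 1 for 3) || '0s' AS decade,
       count(*) AS rain_events
FROM winter_rains
GROUP BY decade
ORDER BY decade;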

Here’s a visualization of the same data:

I don’t think there’s evidence that what we’ve seen in the last few years is exceptional in the historical record, but it does seem like the frequency of winter rainfall comes in cycles, with a peak in the 1930s and 40s and something of a decline in the 80s and 90s. That we’ve already had three events in this decade, in just over two years, seems like a bad sign to me. I wonder if there are larger-scale climatological phenomena that could help explain the pattern shown here.

tags: weather  rain  winter  SQL 
