sat, 27-jul-2013, 08:03
Gold Discovery Run, 2013

This spring I ran the Beat Beethoven 5K and had such a good time that I decided to give running another try. I’d tried adding running to my usual exercise routines in the past, but knee problems always sidelined me after a couple months. It’s been three months of slow increases in mileage using a marathon training plan by Hal Higdon, and so far so good.

My goal for this year, beyond staying healthy, is to participate in the 51st running of the Equinox Marathon here in Fairbanks.

One of the challenges for a beginning runner is learning how to pace yourself during a race and how to know what your body can handle. Since Beat Beethoven I’ve run in the Lulu’s 10K, the Midnight Sun Run (another 10K), and last weekend I ran the 16.5-mile Gold Discovery Run from Cleary Summit down to Silver Gulch Brewery. I completed the race in two hours and twenty-nine minutes, at a pace of 9:02 minutes per mile. Based on this performance, I should be able to estimate my finish time and pace for Equinox by comparing the times of runners who participated in both the 2012 Gold Discovery and the 2012 Equinox.

The first challenge is extracting the data from the PDF files SportAlaska publishes after the race. I found that opening the PDF result files, selecting all the text on each page, and pasting it into a text file is the best way to preserve the formatting of each line. Then I process it through a Python function that extracts the bits I want:

import re
def parse_sportalaska(line):
    """ lines appear to contain:
        place, bib, name, town (sometimes missing), state (sometimes missing),
        birth_year, age_class, class_place, finish_time, off_win, pace,
        points (often missing) """
    fields = line.split()
    place = int(fields.pop(0))
    bib = int(fields.pop(0))
    name = fields.pop(0)
    # Names span a variable number of fields; keep appending tokens until
    # we hit one made entirely of capital letters (plus periods or
    # hyphens), which ends the name in these results.
    while True:
        n = fields.pop(0)
        name = '{} {}'.format(name, n)
        if re.search('^[A-Z.-]+$', n):
            break
    # Everything between the name and the four-digit birth year is the
    # (possibly missing) town and state.
    pre_birth_year = []
    pre_birth_year.append(fields.pop(0))
    while True:
        try:
            f = fields.pop(0)
        except IndexError:
            print("Warning: couldn't parse: '{0}'".format(line.strip()))
            break
        else:
            if re.search('^[0-9]{4}$', f):
                birth_year = int(f)
                break
            else:
                pre_birth_year.append(f)
    if re.search('^[A-Z]{2}$', pre_birth_year[-1]):
        state = pre_birth_year[-1]
        town = ' '.join(pre_birth_year[:-1])
    else:
        state = None
        town = None
    try:
        (age_class, class_place, finish_time, off_win, pace) = fields[:5]
        class_place = int(class_place[1:-1])
        finish_minutes = time_to_min(finish_time)
        fpace = strpace_to_fpace(pace)
    except (ValueError, IndexError):
        print("Warning: couldn't parse: '{0}', skipping".format(
              line.strip()))
        return None
    else:
        return (place, bib, name, town, state, birth_year, age_class,
                class_place, finish_time, finish_minutes, off_win,
                pace, fpace)

The function uses a couple of helper functions that convert pace and time strings into floating-point minutes, which are easier to analyze.

def strpace_to_fpace(p):
    """ Converts a MM:SS" pace to a float (minutes) """
    (mm, ss) = p.split(':')
    (mm, ss) = [int(x) for x in (mm, ss)]
    fpace = mm + (float(ss) / 60.0)

    return fpace

def time_to_min(t):
    """ Converts an HH:MM:SS time to a float (minutes) """
    (hh, mm, ss) = t.split(':')
    (hh, mm) = [int(x) for x in (hh, mm)]
    ss = float(ss)
    minutes = (hh * 60) + mm + (ss / 60.0)

    return minutes

Once I process the Gold Discovery and Equinox result files through this routine, I dump the results into a properly formatted comma-delimited file, read the data into R, and combine the two race results by matching on runner name. Note that these results only include the men competing in the race.
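
Here’s a minimal sketch of the dump-to-CSV step (the input filename and the csv-writing details are an assumption; parse_sportalaska() is the function above, and the header matches the tuple it returns):

import csv

# Write one CSV per race; lines that can't be parsed come back as None
# and are skipped.  The input filename is an assumption.
header = ('place', 'bib', 'name', 'town', 'state', 'birth_year',
          'age_class', 'class_place', 'finish_time', 'finish_minutes',
          'off_win', 'pace', 'fpace')
with open('gd_2012_men.txt') as infile, \
     open('gd_2012_men.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(header)
    for line in infile:
        row = parse_sportalaska(line)
        if row is not None:
            writer.writerow(row)

With both races written out to CSV, the R side looks like this: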

gd <- read.csv('gd_2012_men.csv', header=TRUE)
gd <- gd[,c('name', 'birth_year', 'finish_minutes', 'fpace')]
eq <- read.csv('eq_2012_men.csv', header=TRUE)
eq <- eq[,c('name', 'birth_year', 'finish_minutes', 'fpace')]
combined <- merge(gd, eq, by='name')
names(combined) <- c('name', 'birth_year', 'gd_finish', 'gd_pace',
                     'year', 'eq_finish', 'eq_pace')

When I look at a plot of the data I can see four outliers: two where the runners ran Equinox much faster than their Gold Discovery pace would predict, and two where the opposite was the case. The two races are two months apart, so I think it’s reasonable to exclude these four rows from the data, since all manner of things could happen to a runner in two months of hard training (or on race day!).

attach(combined)
combined <- combined[!((gd_pace > 10 & gd_pace < 11 & eq_pace > 15)
                       | (gd_pace > 15)),]

Let’s test the hypothesis that we can predict Equinox pace from Gold Discovery pace:

model <- lm(eq_pace ~ gd_pace, data=combined)
summary(model)

Call:
lm(formula = eq_pace ~ gd_pace, data = combined)

Residuals:
     Min       1Q   Median       3Q      Max
-1.47121 -0.36833 -0.04207  0.51361  1.42971

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.77392    0.52233   1.482    0.145
gd_pace      1.08880    0.05433  20.042   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.6503 on 48 degrees of freedom
Multiple R-squared:  0.8933,    Adjusted R-squared:  0.891
F-statistic: 401.7 on 1 and 48 DF,  p-value: < 2.2e-16

Indeed, we can explain about 89% of the variation in Equinox Marathon pace using Gold Discovery pace, and both the model and the model coefficient are significant.

Here’s what the results look like:

The red line shows the relationship where Gold Discovery pace is identical to Equinox pace for each runner. Because the actual data (and the predicted results based on the regression model) lie above this line, all the runners were slower in the longer (and harder) Equinox Marathon.

As for me, my 9:02 Gold Discovery pace should translate into an Equinox pace around 10:30. Here are the 2012 runners who were born within ten years of me, and who finished within ten minutes of my 2013 Gold Discovery time:

2012 Race Results
Runner          Birth year  Gold Discovery Time  Equinox Time  Equinox Pace
Dan Bross       1964        2:24                 4:20          9:55
Chris Hartman   1969        2:25                 4:45          10:53
Mike Hayes      1972        2:27                 4:58          11:22
Ben Roth        1968        2:28                 4:47          10:57
Jim Brader      1965        2:31                 4:09          9:30
Erik Anderson   1971        2:32                 5:03          11:34
John Scherzer   1972        2:33                 4:49          11:01
Trent Hubbard   1972        2:33                 4:48          11:00
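
As a quick check on that estimate, here’s the fitted line from the model summary applied directly to my 9:02 Gold Discovery pace:

# Applying the regression coefficients from the summary above to a
# 9:02 (9.03 minute/mile) Gold Discovery pace.
gd_pace = 9 + 2 / 60.0
eq_pace = 0.77392 + 1.08880 * gd_pace
minutes = int(eq_pace)
seconds = int(round((eq_pace - minutes) * 60))
# Prints roughly 10:37, in the same neighborhood as the estimate above.
print('{0}:{1:02d} per mile'.format(minutes, seconds))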

Based on this, and the regression results, I expect to finish the Equinox Marathon in just under five hours if my training over the next two months goes well.

thu, 02-may-2013, 07:52
Still snowing

In a post last week I examined how often Fairbanks gets more than two inches of snow in late spring. We only got 1.1 inches on April 24th, so that event didn’t qualify, but another snowstorm hit Fairbanks this week. Enough that I skied to work a couple days ago (April 30th) and could have skied this morning too.

Another, probably more relevant, statistic is the storm total: the amount of snow that fell over several days rather than within a single, somewhat arbitrary 24-hour period (midnight to midnight for the Fairbanks Airport station, 6 AM to 6 AM for my COOP station). With SQL window functions we can examine snowfall totals over a moving window, in this case five days, and see what the largest late-season totals were in the historical record.
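
Here’s a rough sketch of the same five-day moving-window idea using pandas instead of SQL (the snowfall.csv filename and its dte and snow_in columns are assumptions, and it presumes one row per calendar day):

import pandas as pd

# Five-day moving-window snowfall totals; assumes snowfall.csv has one
# row per calendar day with columns dte and snow_in (inches).
snow = pd.read_csv('snowfall.csv', parse_dates=['dte'], index_col='dte')
# rolling(5).sum() looks backward, so shift it forward to get the total
# for the five days starting on each date.
five_day = snow['snow_in'].rolling(window=5, min_periods=1).sum()
snow['five_day'] = five_day.shift(-4)
# Late spring only: April 22nd or later.
late = snow[(snow.index.month == 5) |
            ((snow.index.month == 4) & (snow.index.day >= 22))]
print(late.sort_values('five_day', ascending=False).head(15))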

Here’s a list of the late spring (after April 21st) snowfall totals for Fairbanks where the five-day total was at least two and a half inches:

Late spring snow storm totals
Storm start Five day snowfall (inches)
1916-05-03 3.6
1918-04-26 5.1
1918-05-15 2.5
1923-05-03 3.0
1937-04-24 3.6
1941-04-22 8.1
1948-04-26 4.0
1952-05-05 3.0
1964-05-13 4.7
1982-04-30 3.1
1992-05-12 12.3
2001-05-04 6.4
2002-04-25 6.7
2008-04-30 4.3
2013-04-29 3.6

Anyone who was here in 1992 remembers that “summer,” with more than a foot of snow in mid May, and two feet of snow in a pair of storms starting on September 11th, 1992. I don’t expect that all the late spring cold weather and snow we’re experiencing this year will necessarily translate into a short summer like 1992, but we should keep the possibility in mind.

tags: Fairbanks  snow  weather 
wed, 01-may-2013, 05:41
normalized temperature anomaly heatmap

I’m writing this blog post on May 1st, looking outside as the snow continues to fall. We’ve gotten three inches in the last day and a half, and I even skied to work yesterday. It’s still winter here in Fairbanks.

The image shows the normalized temperature anomaly calendar heatmap for April. The bluer the squares are, the colder that day was compared with the 30-year climate normal daily temperature for Fairbanks. There were several days where the temperature was more than three standard deviations colder than the mean anomaly (zero), something that happens very infrequently.
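
For reference, here’s one way to think about what goes into each square; a sketch with made-up names, where tavg is the observed daily mean temperature and normal is the 30-year climate normal for that calendar day:

def normalized_anomaly(tavg, normal):
    """ Normalized temperature anomaly: observed daily mean temperature
        minus the 30-year climate normal for that calendar day, scaled
        by the standard deviation of those differences.  tavg and normal
        are pandas Series (or numpy arrays) indexed by date. """
    anomaly = tavg - normal
    return anomaly / anomaly.std()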

Here are the top ten coldest average April temperatures for the Fairbanks Airport Station.

Coldest average April temperatures, Fairbanks Airport Station
Rank  Year  Average temp (°F)
1     1924  14.8
2     1911  17.4
3     2013  18.2
4     1927  19.5
5     1985  20.7
6     1972  20.8
7     1955  21.6
8     1910  22.9
9     1948  23.2
10    2002  23.2

The averages come from the Global Historical Climatology Network-Daily (GHCN-Daily) data set, with some fairly dubious additions to extend the Fairbanks record back before the 1956 start of the current station. Here’s the query to get the historical data:

SELECT rank() OVER (ORDER BY tavg) AS rank,
       year, round(c_to_f(tavg), 1) AS tavg
FROM (
    SELECT year, avg(tavg) AS tavg
    FROM (
        SELECT extract(year from dte) AS year,
               dte, (tmin + tmax) / 2.0 AS tavg
        FROM (
            SELECT dte,
                   sum(CASE WHEN variable = 'TMIN'
                            THEN raw_value * 0.1
                            ELSE 0 END) AS tmin,
                   sum(CASE WHEN variable = 'TMAX'
                            THEN raw_value * 0.1
                            ELSE 0 END) AS tmax
            FROM ghcnd_obs
            WHERE variable IN ('TMIN', 'TMAX')
                  AND station_id = 'USW00026411'
                  AND extract(month from dte) = 4 GROUP BY dte
        ) AS foo
    ) AS bar GROUP BY year
) AS foobie
ORDER BY rank;

And here’s how I calculated the average temperature for this April: pafg is a text file containing the data from each day’s National Weather Service Daily Climate Summary, with the average daily temperature in column 9.

$ tail -n 30 pafg | \
  awk 'BEGIN {sum = 0; n = 0}; {n = n + 1; sum += $9} END { print sum / n; }'
18.1667
tags: SQL  temperature  weather 
tue, 23-apr-2013, 07:01

This morning’s weather forecast includes this section:

.WEDNESDAY...CLOUDY. A CHANCE OF SNOW IN THE MORNING...THEN SNOW LIKELY IN THE AFTERNOON. SNOW ACCUMULATION OF 1 TO 2 INCHES. HIGHS AROUND 40. WEST WINDS INCREASING TO 15 TO 20 MPH.

.WEDNESDAY NIGHT...CLOUDY. SNOW LIKELY IN THE EVENING...THEN A CHANCE OF SNOW AFTER MIDNIGHT. LOWS IN THE 20S. WEST WINDS TO 20 MPH DIMINISHING.

Here’s a look at how often Fairbanks gets two or more inches of snow later than April 23rd:

Late spring snowfall amounts, Fairbanks Airport
Date        Snow (in)
1915-04-27  2.0
1916-05-03  2.0
1918-04-26  4.1
1918-05-15  2.0
1923-05-03  3.0
1931-05-06  2.0
1948-04-26  4.0
1952-05-05  2.8
1962-05-07  2.0
1964-05-13  4.5
1968-05-11  2.7
1982-04-30  2.8
1992-05-12  9.4
2001-05-04  3.2
2001-05-05  2.9
2002-04-25  2.0
2002-04-26  4.4
2008-04-30  3.4

It’s not all that frequent, with only 18 occurrences in the last 98 years, and two pairs of those 18 coming on consecutive days. The pattern is also curious, with several in the early 1900s, then one or two in each decade until the 2000s, when there were several events.

In any case, I’m not looking forward to it. We’ve still got a lot of hardpack on the road from the 5+ inches we got a couple weeks ago and I’ve just started riding my bicycle to work every day. If we do get 2 inches of snow, that’ll slow breakup even more, and mess up the shoulders of the road for a few days.

tags: snow  weather 
fri, 12-apr-2013, 19:00

Next month I’ll be attending a game at Wrigley Field, and my brother and I had some discussion about the best strategy for meeting up somewhere in Chicago after the game. Knowing how long a game is likely to last, and how many people are likely to be crowding the train platforms afterward, is useful information that can be inferred from the game data that http://www.retrosheet.org/ collects and distributes.

It’s also a good excuse to fiddle around with the IPython Notebook, pandas and the rest of the Python scientific computing stack.

I downloaded the game log data from http://www.retrosheet.org/gamelogs/index.html using this bash one-liner:

for i in `seq 1871 2012`; \
do  wget http://www.retrosheet.org/gamelogs/gl${i}.zip ; \
    unzip gl${i}.zip; \
    rm gl${i}.zip; \
done

Game Length

Game length in minutes is in column 19. Let’s read it and analyze it with Pandas.

import datetime

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rc

def fix_df(input_df, cols):
    """ Pulls out the columns from cols, labels them and returns
        a new DataFrame """
    df = pd.DataFrame(index=range(len(input_df)))
    for k, v in cols.items():
        df[v] = input_df[k]

    return df

cols = {0:'dt_str', 2:'day_of_week', 3:'visiting_team',
        6:'home_team', 9:'visitor_score', 10:'home_score',
       11:'game_outs', 12:'day_night', 13:'complete',
       16:'park_id', 17:'attendance', 18:'game_time_min'}
raw_gamelogs = []
for i in range(1871, 2013):
    fn = "GL{0}.TXT".format(i)
    raw_gamelogs.append(pd.read_csv(fn, header=None, index_col=None))
raw_gamelog = pd.concat(raw_gamelogs, ignore_index=True)
gamelog = fix_df(raw_gamelog, cols)
gamelog['dte'] = gamelog.apply(
    lambda x: datetime.datetime.strptime(str(x['dt_str']), '%Y%m%d').date(),
    axis=1)
gamelog.ix[0:5, 1:] # .head() but without the dt_str col
  dow vis hme vis hme outs day park att time dte
0 Thu CL1 FW1 0 2 54 D FOR01 200 120 1871-05-04
1 Fri BS1 WS3 20 18 54 D WAS01 5000 145 1871-05-05
2 Sat CL1 RC1 12 4 54 D RCK01 1000 140 1871-05-06
3 Mon CL1 CH1 12 14 54 D CHI01 5000 150 1871-05-08
4 Tue BS1 TRO 9 5 54 D TRO01 3250 145 1871-05-09
5 Thu CH1 CL1 18 10 48 D CLE01 2500 120 1871-05-11

(Note that I’ve abbreviated the column names so they fit)

The data looks reasonable, although I don’t know quite what to make of the team names from back in 1871. Now I’ll take a look at the game time data for the whole data set:

times = gamelog['game_time_min']
times.describe()
count    162259.000000
mean        153.886145
std          74.850459
min          21.000000
25%         131.000000
50%         152.000000
75%         173.000000
max       16000.000000

The statistics look reasonable, except that it’s unlikely that there was a completed game in 21 minutes, or that a game took 11 days (16,000 minutes), so we have some outliers in the data. Let’s see if something else in the data might help us filter out these records.

First the games longer than 24 hours:

print(gamelog[gamelog.game_time_min > 60 * 24][['visiting_team',
                    'home_team', 'game_outs', 'game_time_min']])
print("Removing all NaN game_outs: {0}".format(
        len(gamelog[np.isnan(gamelog.game_outs)])))
print("Max date removed: {0}".format(
        max(gamelog[np.isnan(gamelog.game_outs)].dte)))
      visiting_team home_team  game_outs  game_time_min
26664           PHA       WS1        NaN          12963
26679           PHA       WS1        NaN           4137
26685           WS1       PHA        NaN          16000
26707           NYA       PHA        NaN          15115
26716           CLE       CHA        NaN           3478
26717           DET       SLA        NaN           1800
26787           PHA       NYA        NaN           3000
26801           PHA       NYA        NaN           6000
26880           CHA       WS1        NaN           2528
26914           SLA       WS1        NaN           2245
26921           SLA       WS1        NaN           1845
26929           CLE       WS1        NaN           3215
Removing all NaN game_outs: 37890
Max date removed: 1915-10-03

There’s no value for game_outs, so there isn’t data for how long the game actually was. We remove 37,890 records by eliminating this data, but these are all games from prior to the 1916 season, so it seems reasonable:

gamelog = gamelog[pd.notnull(gamelog.game_outs)]
gamelog.game_time_min.describe()
count    156639.000000
mean        154.808975
std          32.534916
min          21.000000
25%         133.000000
50%         154.000000
75%         174.000000
max        1150.000000

What about the really short games?

gamelog[gamelog.game_time_min < 60][[
        'dte', 'game_time_min', 'game_outs']].tail()
               dte  game_time_min  game_outs
79976   1948-07-02             24         54
80138   1948-07-22             59         30
80982   1949-05-29             21         42
113455  1971-07-30             48         27
123502  1976-09-10             57         30

Many of these aren’t nine-inning games because they have fewer than 51 outs (3 outs × 8 innings for the home team plus 3 × 9 for the visitor in a home-team win = 24 + 27 = 51). At the moment, I’m interested in looking at how long a game is likely to be, rather than the pattern of game length over time, so we can leave these records in the data.

Now we filter the data to just the games played since 2000.

twenty_first = gamelog[gamelog['dte'] > datetime.date(1999, 12, 31)]
times_21 = twenty_first['game_time_min']
times_21.describe()
count    31580.000000
mean       175.084421
std         26.621124
min         79.000000
25%        157.000000
50%        172.000000
75%        189.000000
max        413.000000

The average game length between 2000 and 2012 is 175 minutes, or just under three hours. And three quarters of all the games played are under three hours and ten minutes.

Here’s the code to look at the distribution of these times. The top plot shows a histogram (the count of the games in each bin) and a density estimation of the same data.

The second plot is a cumulative density plot, which makes it easier to see what percentage of games are played within a certain time.

from scipy import stats
# Calculate a kernel density function
density = stats.kde.gaussian_kde(times_21)
x = range(90, 300)
rc('grid', linestyle='-', alpha=0.25)
fig, axes = plt.subplots(ncols=1, nrows=2, figsize=(8, 6))

# First plot (histogram / density)
ax = axes[0]
ax2 = ax.twinx()
h = ax.hist(times_21, bins=50, facecolor="lightgrey", histtype="stepfilled")
d = ax2.plot(x, density(x))
ax2.set_ylabel("Density")
ax.set_ylabel("Count")
plt.title("Distribution of MLB game times, 2000-2012")
ax.grid(True)
ax.set_xlim(90, 300)
ax.set_xticks(range(90, 301, 15))

# Second plot (cumulative histogram)
ax1 = axes[1]
n, bins, patches = ax1.hist(times_21, bins=50, normed=True, cumulative=True,
        facecolor="lightgrey", histtype="stepfilled")
y = n / float(n[-1]) # Convert counts to percentage of total
y = np.concatenate(([0], y))
bins = bins - ((bins[1] - bins[0]) / 2.0) # Center curve on bars
ax1.plot(bins, y, color="blue")
ax1.set_ylabel("Cumulative percentage")
ax1.set_xlabel("Game time (minutes)")
ax1.grid(True)
ax1.set_xlim(90, 300)
ax1.set_xticks(range(90, 301, 15))
ax1.set_yticks(np.array(range(0, 101, 10)) / 100.0)
plt.savefig('game_time.png', dpi=87.5)
plt.savefig('game_time.svg')
plt.savefig('game_time.pdf')
//media.swingleydev.com/img/blog/2013/04/game_time.png

The two plots show the same thing the descriptive statistics did: roughly 70% of games finish in under three hours, and it’s not uncommon for a game to last three hours and fifteen minutes. Games longer than that are pretty uncommon.
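
Those percentages can also be pulled directly from the data rather than read off the plot:

# Fraction of 2000-2012 games that finished within 3:00 and within 3:15.
print((times_21 <= 180).mean())
print((times_21 <= 195).mean())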

Change in game times in the last 100 years

Now let’s look at how game times have changed over the years. First we eliminate all the games that didn’t finish with 51 or 54 outs, to standardize the “game” we’re evaluating.

nine_innings = gamelog[gamelog.game_outs.isin([51, 54])]
nine_innings['year'] = nine_innings.apply(lambda x: x['dte'].year, axis=1)
nine_groupby_year = nine_innings.groupby('year')
mean_time_by_year = nine_groupby_year['game_time_min'].aggregate(np.mean)
fig = plt.figure()
p = mean_time_by_year.plot(figsize=(8, 4))
p.set_xlim(1915, 2015)
p.set_xlabel('Year')
p.set_ylabel('Game Time (minutes)')
p.set_title('Average MLB nine-inning game time since 1915')
p.set_xticks(range(1915, 2016, 10))
plt.savefig('game_time_by_year.png', dpi=87.5)
plt.savefig('game_time_by_year.svg')
plt.savefig('game_time_by_year.pdf')
//media.swingleydev.com/img/blog/2013/04/game_time_by_year.png

That shows a pretty clear upward trend interrupted by a slight decline between 1960 and 1975 (when offense was down across baseball). Since the mid-90s, game times have hovered around 2:50, so maybe MLB’s efforts at increasing the pace of the game have at least stopped what had been an almost continual rise in game times.
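
That eyeballed 2:50 figure can be checked against the yearly means directly:

# Mean nine-inning game time (in minutes) over the seasons since 1995.
print(mean_time_by_year[mean_time_by_year.index >= 1995].mean())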

Attendance

I’ll be seeing a game at Wrigley Field, so let’s examine attendance at Wrigley.

Attendance is field 17, “Park ID” is field 16, but we can probably use Home team (field 6) = “CHN” for seasons after 1914 when it opened.

twenty_first[twenty_first['home_team'] == 'CHN']['attendance'].describe()
count     1053.000000
mean     37326.561254
std       5544.121773
min          0.000000
25%      36797.000000
50%      38938.000000
75%      40163.000000
max      55000.000000

We see the minimum is zero, which may indicate bad or missing data. Let’s look at all the games at Wrigley with fewer than 10,000 fans:

twenty_first[(twenty_first.home_team == 'CHN') &
             (twenty_first.attendance < 10000)][[
            'dte', 'visiting_team', 'home_team', 'visitor_score',
            'home_score', 'game_outs', 'day_night', 'attendance',
            'game_time_min']]
  dte vis hme vis hme outs day att game_time
172776 2000-06-01 ATL CHN 3 5 51 D 5267 160
173897 2000-08-25 LAN CHN 5 3 54 D 0 189
174645 2001-04-18 PHI CHN 3 4 51 D 0 159
176290 2001-08-20 MIL CHN 4 7 51 D 0 180
177218 2002-04-28 LAN CHN 5 4 54 D 0 196
177521 2002-05-21 PIT CHN 12 1 54 N 0 158
178910 2002-09-02 MIL CHN 4 2 54 D 0 193
181695 2003-09-27 PIT CHN 2 4 51 D 0 169
183806 2004-09-10 FLO CHN 7 0 54 D 0 146
184265 2005-04-13 SDN CHN 8 3 54 D 0 148
188186 2006-08-03 ARI CHN 10 2 54 D 0 197

Looks like the zeros are just missing data because these games have relevant data for the other fields and it’s impossible to believe that not a single person was in the stands. We’ll get rid of them for the attendance analysis.

Now we’ll filter the games so we’re only looking at day games played at Wrigley where the attendance value is above zero, and group the data by day of the week.

groupby_dow = twenty_first[(twenty_first['home_team'] == 'CHN') &
           (twenty_first['day_night'] == 'D') &
           (twenty_first['attendance'] > 0)].groupby('day_of_week')
groupby_dow['attendance'].aggregate(np.mean).order()
day_of_week
Tue            34492.428571
Wed            35312.265957
Thu            35684.737864
Mon            37938.757576
Fri            38066.060976
Sun            38583.833333
Sat            39737.428571
Name: attendance

And plot it:

filtered = twenty_first[(twenty_first['home_team'] == 'CHN') &
                           (twenty_first['day_night'] == 'D') &
                           (twenty_first['attendance'] > 0)]
dows = {'Sun':0, 'Mon':1, 'Tue':2, 'Wed':3, 'Thu':4, 'Fri':5, 'Sat':6}
filtered['dow'] = filtered.apply(lambda x: dows[x['day_of_week']], axis=1)
filtered['attendance'] = filtered['attendance'] / 1000.0
fig = plt.figure()
d = filtered.boxplot(column='attendance', by='dow', figsize=(8, 4))
plt.suptitle('')
d.set_xlabel('Day of the week')
d.set_ylabel('Attendance (thousands)')
d.set_title('Wrigley Field attendance by day of the week, 2000-2012')
labels = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
l = d.set_xticklabels(labels)
d.set_ylim(15, 45)
plt.savefig('attendance.png', dpi=87.5)
plt.savefig('attendance.svg') # lines don't show up?
plt.savefig('attendance.pdf')
//media.swingleydev.com/img/blog/2013/04/attendance.png

There’s a pretty clear pattern here, with increasing attendance from Tuesday through Monday, and larger variances in attendance during the week.
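
The day-to-day spread behind the boxplot can be checked with the same grouping:

# Standard deviation of attendance by day of the week (same filtered games).
print(groupby_dow['attendance'].aggregate(np.std))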

It’s a bit surprising that Monday games are as popular as Saturday games, especially since we’re only looking at day games. On any given Monday when the Cubs are at home, there’s a good chance that there will be forty thousand people not showing up to work!

tags: baseball  python 
