Following up on yesterday’s post about minimum temperatures, I was thinking that a cumulative measure of cold temperatures would probably be a better measure of how cold a winter is. We all remember the extremely cold days each winter when the propane gels or the car won’t start, but it’s the long periods of deep cold that really take their toll on buildings, equipment, and people in the Interior.
One way of measuring this is to take every day in a winter year whose average temperature is below freezing and sum the number of degrees below freezing across all of those days. For example, a day averaging 50°F isn’t below freezing, so it contributes nothing. A day averaging −40° contributes 72 freezing degrees (Fahrenheit). Do this for each day in a winter year and add up all the values.
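Here’s a quick per-day illustration of that calculation (a sketch of my own, mirroring the mutate() step in the code below; TAVG values in the data are in °C, so the 9/5 factor converts them to Fahrenheit freezing degrees):

tavg_c <- c(10, 0, -40)               # 50°F, 32°F, and -40°F as daily averages, in Celsius
ifelse(tavg_c < 0, -tavg_c * 9/5, 0)  # 0, 0, 72 freezing degrees (Fahrenheit)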
Here’s the code to make the plot below (see my previous post for how we got fai_pivot).
fai_winter_year_freezing_degree_days <-
    fai_pivot %>%
    mutate(winter_year=year(dte - days(92)),
           fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
    filter(winter_year < 2014) %>%
    group_by(station_name, winter_year) %>%
    select(station_name, winter_year, fdd) %>%
    summarize(fdd=sum(fdd, na.rm=TRUE), n=n()) %>%
    filter(n>350) %>%
    select(station_name, winter_year, fdd) %>%
    spread(station_name, fdd)

fdd_gathered <-
    fai_winter_year_freezing_degree_days %>%
    gather(station_name, fdd, -winter_year) %>%
    arrange(winter_year)
q <-
    fdd_gathered %>%
    ggplot(aes(x=winter_year, y=fdd, colour=station_name)) +
        geom_point(size=1.5, position=position_jitter(w=0.5, h=0.0)) +
        geom_smooth(data=subset(fdd_gathered, winter_year < 1975),
                    method="lm", se=FALSE) +
        geom_smooth(data=subset(fdd_gathered, winter_year >= 1975),
                    method="lm", se=FALSE) +
        scale_x_continuous(name="Winter Year",
                           breaks=pretty_breaks(n=20)) +
        scale_y_continuous(name="Freezing degree days (degrees F)",
                           breaks=pretty_breaks(n=10)) +
        scale_color_manual(name="Station",
                           labels=c("College Observatory",
                                    "Fairbanks Airport",
                                    "University Exp. Station"),
                           values=c("darkorange", "blue", "darkcyan")) +
        theme_bw() +
        theme(legend.position = c(0.875, 0.120)) +
        theme(axis.text.x = element_text(angle=45, hjust=1))
rescale <- 0.65
svg('freezing_degree_days.svg', height=10*rescale, width=16*rescale)
print(q)
dev.off()
And the plot.
You’ll notice I’ve split the trend lines at 1975. When I ran the regressions over the entire period, none of them were statistically significant, but looking at the plot, it appears that something happened around 1975 where the cumulative freezing degree days suddenly drop. Since then, they’ve been increasing at a faster, and statistically significant, rate.
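Here’s roughly how that check can be run (my own sketch, not code from the post), fitting a linear model to the post-1975 data for one station; the station name string is the one used in queries elsewhere on this page:

airport_post_1975 <-
    fdd_gathered %>%
    filter(winter_year >= 1975, station_name == "FAIRBANKS INTL AP")

# The p-value on the winter_year coefficient is the significance referred to above
summary(lm(fdd ~ winter_year, data=airport_post_1975))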
This is odd, and it makes me wonder if I’ve made a mistake in the calculations, because it says that, at least since 1975, winters have been getting colder as measured by the total number of degrees below freezing each winter. My previous post (and studies of climate in general) show that the climate is warming, not cooling.
One possible bias with cumulative calculations like this is that missing data becomes more important, but I looked at the same relationships using only years with at least 364 days of valid data (one or two missing days at most), and the same pattern exists.
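For reference, that stricter check looks something like this (my approximation of what’s described above, not the post’s original code; here n counts days with a valid average temperature, which is my reading of “valid data”):

fdd_strict <-
    fai_pivot %>%
    mutate(winter_year=year(dte - days(92)),
           fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
    filter(winter_year < 2014) %>%
    group_by(station_name, winter_year) %>%
    summarize(fdd=sum(fdd, na.rm=TRUE), n=sum(!is.na(TAVG))) %>%
    filter(n >= 364)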
Curious. When combined, this analysis and yesterday's suggest that winters in Fairbanks are getting colder overall, but that the minimum temperature in any year is likely to be warmer than in the past.
The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.
I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.
The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.
There are three main tables: ghcnd_stations, which contains data about each station; ghcnd_variables, which contains information about the variables in the data; and ghcnd_obs, which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:
station_id | dte | variable | raw_value | qual_flag |
---|---|---|---|---|
USW00026411 | 2014-12-25 | TMIN | -205 | |
USW00026411 | 2014-12-25 | TMAX | -77 | |
USW00026411 | 2014-12-25 | PRCP | 15 | |
USW00026411 | 2014-12-25 | SNOW | 20 | |
USW00026411 | 2014-12-25 | SNWD | 230 | |
There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need to use the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are stored in degrees Celsius × 10, so we need to divide the raw value by ten to get actual temperatures; the TMIN value of −205 above is really −20.5°C. Finally, to get the data into a form with one row per date and columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).
Here’s how we get all the data using R.
Load the libraries we will need:
library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)
Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using the PostgreSQL-specific ~ operator (regular expression matching) to do the filtering. In other databases we’d probably use %in% and include the station names as a list (an example of that alternative follows the code below).
noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")
# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")
# Filter stations to just the long term Fairbanks stations:
fai_stations <-
tbl(noaa_db, "ghcnd_stations") %>%
filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")
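For a database without regular expression support, roughly the same filter could be written with %in% (a sketch; the first station name appears verbatim in a query later on this page, while the other two strings are illustrative guesses at the full GHCND names):

fai_stations_alt <-
    tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name %in% c("FAIRBANKS INTL AP",
                               "UNIVERSITY EXP STN",
                               "COLLEGE OBSY"))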
Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.
# Get the raw data
fai_raw <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value*raw_multiplier) %>%
    filter(qual_flag=='') %>%
    select(station_name, dte, variable, value) %>%
    collect()

# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")
In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variables table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the columns we’re keeping to just the station name, date, variable name, and the actual value.
We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t run until it’s actually needed, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then run collect() to finalize the steps.
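You can see this laziness for yourself by building a pipeline without collect() (a quick illustration of my own, not from the original post): nothing runs against the database until you print or collect the result.

lazy_tbl <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    filter(qual_flag == '')

lazy_tbl              # printing fetches only a handful of rows as a preview
# explain(lazy_tbl)   # shows the SQL dplyr generated for this pipeline
collect(lazy_tbl)     # runs the full query and pulls all the rows into R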
At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.
We’ll use the tidyr spread() function to make the value that appears in the variable column (TMIN, TMAX, etc.) appear as columns in the output, and put the data in the value column into the cells in each column and row. We’re also calculating an average daily temperature from the minimum and maximum temperatures and selecting just the columns we want.
# pivot, calculate average temp, include useful vars
fai_pivot <-
    fai_raw %>%
    spread(variable, value) %>%
    transform(TAVG=(TMIN+TMAX)/2.0) %>%
    select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
           WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.
Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the calendar year. In our case, we’re subtracting 92 days from the date and taking the year of the result. This makes the winter year start in April instead of January and means that the 2014/2015 winter has a winter year of 2014.
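A quick check of the winter year trick (the dates here are my own, chosen for illustration):

year(ymd("2014-11-20") - days(92))   # 2014: early winter stays in 2014
year(ymd("2015-01-15") - days(92))   # 2014: mid-winter counts with the 2014/2015 winter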
# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <-
    fai_pivot %>%
    mutate(winter_year=year(dte - days(92))) %>%
    filter(winter_year < 2014) %>%
    group_by(station_name, winter_year) %>%
    select(station_name, winter_year, TMIN) %>%
    summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
    filter(n>350) %>%
    select(station_name, winter_year, tmin) %>%
    spread(station_name, tmin)
In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing each group by the minimum temperature after converting to Fahrenheit, along with the number of days with data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).
Here’s the last 20 years and how we get a nice table of them.
last_twenty <-
    fai_winter_year_minimum %>%
    filter(winter_year > 1993)
# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Winter Year | College Obsy | Fairbanks Airport | University Exp Stn |
---|---|---|---|
1994 | -43.96 | -47.92 | -47.92 |
1995 | -45.04 | -45.04 | -47.92 |
1996 | -50.98 | -50.98 | -54.04 |
1997 | -43.96 | -47.92 | -47.92 |
1998 | -52.06 | -54.94 | -54.04 |
1999 | -50.08 | -52.96 | -50.98 |
2000 | -27.94 | -36.04 | -27.04 |
2001 | -40.00 | -43.06 | -36.04 |
2002 | -34.96 | -38.92 | -34.06 |
2003 | -45.94 | -45.94 | NA |
2004 | NA | -47.02 | -49.00 |
2005 | -47.92 | -50.98 | -49.00 |
2006 | NA | -43.96 | -41.98 |
2007 | -38.92 | -47.92 | -45.94 |
2008 | -47.02 | -47.02 | -49.00 |
2009 | -32.98 | -41.08 | -41.08 |
2010 | -36.94 | -43.96 | -38.02 |
2011 | -47.92 | -50.98 | -52.06 |
2012 | -43.96 | -47.92 | -45.04 |
2013 | -36.94 | -40.90 | NA |
To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.
Here’s the plotting code, including the commands to re-normalize.
q <-
    fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year) %>%
    arrange(winter_year) %>%
    ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
        geom_point(size=1.5, position=position_jitter(w=0.5, h=0.0)) +
        geom_smooth(method="lm", se=FALSE) +
        scale_x_continuous(name="Winter Year",
                           breaks=pretty_breaks(n=20)) +
        scale_y_continuous(name="Minimum temperature (degrees F)",
                           breaks=pretty_breaks(n=10)) +
        scale_color_manual(name="Station",
                           labels=c("College Observatory",
                                    "Fairbanks Airport",
                                    "University Exp. Station"),
                           values=c("darkorange", "blue", "darkcyan")) +
        theme_bw() +
        theme(legend.position = c(0.875, 0.120)) +
        theme(axis.text.x = element_text(angle=45, hjust=1))
The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees each year.
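For reference, here’s roughly how one of those regressions can be reproduced (my own sketch, not code from the post); the slope on winter_year is the warming rate quoted above (about 0.062 °F per year for the airport), and summary() also reports the R² value:

tmin_gathered <-
    fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year)

airport <- subset(tmin_gathered, station_name == "FAIRBANKS INTL AP")
summary(lm(tmin ~ winter_year, data=airport))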
I spent some time this weekend playing with a couple of interesting new R packages that should help with some of the difficulty of manipulating data with the base packages. Getting data into a format appropriate for plotting or running statistical models often seems to take more time than anything else, and the process can be very frustrating because of seemingly nonsensical error messages from R.
R guru Hadley Wickham has written some of the best packages for data manipulation (reshape2, plyr) and plotting (ggplot2). He’s got a new pair of packages (tidyr and dplyr) and a theory of getting data into the proper format (http://vita.had.co.nz/papers/tidy-data.pdf) that look very promising.
A couple of other tools I wanted to look at are magrittr, which provides a new way of piping data from one operation to the next (an echo of the Unix philosophy of many small tools that each do one thing well, strung together to do neat things), and ggvis, an interactive graphics package that should be really good for data investigation.
For this investigation, I’m looking at the names of our dogs, past and present, and how popular they are as people names in the United States. This data comes from the Social Security Administration and is available in the R package babynames.
install.packages("babynames")
library(babynames)
head(babynames)
Source: local data frame [6 x 5]
year sex name n prop
1 1880 F Mary 7065 0.07238359
2 1880 F Anna 2604 0.02667896
3 1880 F Emma 2003 0.02052149
4 1880 F Elizabeth 1939 0.01986579
5 1880 F Minnie 1746 0.01788843
6 1880 F Margaret 1578 0.01616720
The data has the number of registrations (n) and proportion of total registrations (prop) for each year from 1880 through 2013 for all name and sex combinations.
What I want to see is how popular our dog names are as people names. All of our dogs have been adopted; some we named ourselves (Nika, Piper, Kiva), and the rest came with names we didn’t change (Deuce, Buddy, Koidern, Lennier, Martin, Monte and the second Piper). Dog mushers often choose a theme for a litter of puppies, which accounts for some of the unusual names (Deuce came from a litter of classic cars, Lennier came from a litter of Babylon 5 character names, Koidern from a litter of Yukon River tributary names).
So what I want to do is subset the babynames database to just the names of our dogs, combine male and female names together, and plot the popularity of these names over time.
Here’s how that’s done using the magrittr pipe operator (%>%):
library(dplyr)
library(magrittr)
dog_names <- babynames %>%
    filter(name %in% c("Nika", "Piper", "Buddy", "Koidern", "Deuce",
                       "Kiva", "Lennier", "Martin", "Monte")) %>%
    group_by(year, name) %>%
    summarise(prop=sum(prop)) %>%
    transform(name=factor(name)) %>%
    ungroup() %>%
    arrange(name, year)
We are assigning the result of all the pipes to dog_names. In order, we take the babynames data set and filter it by our dogs’ names. Then we group by year and name, and summarize the proportion values by summing them. At this point we have data for just our dogs’ names, with the proportions of male and female baby names combined. Next, we convert the name variable to a factor, remove the grouping, and sort by name and year.
Here’s what it looks like now:
> head(dog_names)
year name prop
1 1893 Buddy 4.130866e-05
2 1894 Buddy 5.604573e-05
3 1896 Buddy 8.522045e-05
4 1898 Buddy 3.784782e-05
5 1899 Buddy 4.340353e-05
6 1900 Buddy 6.167015e-05
Now we’ve got tidy, filtered, and sorted data, so let’s plot it. I’ve been using ggplot2 for many years, and I think it’s the best way to produce publication-quality figures. But usually you want to do some investigation of the data before that, and doing this in ggplot2 involves many cycles of editing code, plotting, and viewing in order to see what you’ve got and how you want the final version to look.
ggvis is a new package that displays data interactively in a web browser. It also supports the pipe operator, so you can pipe the data directly into the plotting routine. It’s somewhat similar to ggplot2, but has some new conventions that are required in order to handle interactivity. Here’s a plot of my dog names data. The first part is the same as before, but I’m piping the result directly into ggvis.
library(ggvis)
babynames %>%
    filter(name %in% c("Nika", "Piper", "Buddy", "Koidern", "Deuce",
                       "Kiva", "Lennier", "Martin", "Monte")) %>%
    group_by(year, name) %>%
    summarise(prop=sum(prop)) %>%
    transform(name=factor(name)) %>%
    ungroup() %>%
    arrange(name, year) %>%
    ggvis(~year, ~prop, stroke=~name, fill=~name) %>%
    # layer_lines(strokeWidth:=2) %>%
    layer_points(size:=15) %>%
    add_axis("x", title="Year", format="####") %>%
    add_axis("y", title="Proportion of total names", title_offset=50) %>%
    add_legend(c("stroke", "fill"), title="Name")
Popularity of our dog names, ggvis version
Typically, I prefer to include lines and points in a timeseries plot like this, but I couldn't get ggvis to color the lines and the points without some very strange fill artifacts.
Here’s what I’d consider to be a high quality version of this, generated with ggplot:
library(ggplot2)
library(scales)
q <- ggplot(data=dog_names, aes(x=year, y=prop, colour=name)) +
    geom_point(size=1.75) +
    geom_line() +
    theme_bw() +
    scale_colour_brewer(palette="Set1") +
    scale_x_continuous(name="Year", breaks=pretty_breaks(n=10)) +
    scale_y_continuous(name="Proportion of total names", breaks=pretty_breaks(n=10))
rescale <- 0.50
svg("dog_names_ggplot2.svg", height=9*rescale, width=16*rescale)
print(q)
dev.off()
Popularity of our dog names, ggplot2 version
I think the two plots are pretty similar, and I’m impressed with how good the ggvis plot looks and how similar the language is to ggplot2. And I really like the pipe operator compared with a long list of individual statements or the way you add things together with ggplot2.
Both plots suffer from having too many groups (seven), which makes it difficult to interpret the colors on the plot. Choosing a good palette is key, and is one of those parts of figure production that can really take a long time. I don’t think my choices in the ggplot2 version are optimal, but I got tired of looking. The other problem is the collection of dog names with very low proportions among human babies. Because they’re all overlapping near the axis, that data is obscured. Both problems could be solved by stacking two plots on top of each other, one with the more popular names (Martin, Piper, Buddy and Monte) and one with the less popular ones (Deuce, Kiva, Nika), using different scales for the proportion axis.
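Here’s a rough sketch of that stacked idea (my own, assuming the dog_names data frame from above), using facets with free y scales rather than two separate plots:

popular <- c("Martin", "Piper", "Buddy", "Monte")
dog_names %>%
    mutate(group=ifelse(name %in% popular, "More popular", "Less popular")) %>%
    ggplot(aes(x=year, y=prop, colour=name)) +
        geom_line() +
        facet_wrap(~ group, ncol=1, scales="free_y") +
        theme_bw()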
What does the plot show? Among our dogs’ names, Martin was the most popular, but its popularity has been declining since the 60s, while Piper has been increasing since 2000. Both Monte and Buddy were popular in the past, but have declined to low levels recently.
For reference, here are the number of babies in 2013 that were given names matching those of our dogs:
babynames %>%
    filter(name %in% c("Nika", "Piper", "Buddy", "Koidern", "Deuce",
                       "Kiva", "Lennier", "Martin", "Monte")
           & year==2013) %>%
    group_by(year, name) %>%
    summarise(n=sum(n)) %>%
    transform(name=factor(name)) %>%
    ungroup() %>%
    arrange(desc(n))
year name n
1 2013 Piper 3166
2 2013 Martin 1330
3 2013 Monte 81
4 2013 Nika 67
5 2013 Buddy 21
6 2013 Kiva 18
7 2013 Deuce 5
Following up on my previous post, I tried the regression approach for predicting future snow depth from current values. As you recall, I produced a plot that showed how much snow we’ve had on the ground on each date at the Fairbanks Airport between 1917 and 2013. These boxplots gave us an idea of what a normal snow depth looks like on each date, but couldn’t really tell us much about what we might expect for snow depth for the rest of the winter.
Regression
I ran a linear regression analysis looking at how snow depth on November 8th relates to snow depth on November 27th and December 25th of the same year. Here’s the SQL:
SELECT * FROM (
    SELECT extract(year from dte) AS year,
           max(CASE WHEN to_char(dte, 'mm-dd') = '11-08'
               THEN round(snwd_mm/25.4, 1)
               ELSE NULL END) AS nov_8,
           max(CASE WHEN to_char(dte, 'mm-dd') = '11-27'
               THEN round(snwd_mm/25.4, 1)
               ELSE NULL END) AS nov_27,
           max(CASE WHEN to_char(dte, 'mm-dd') = '12-25'
               THEN round(snwd_mm/25.4, 1)
               ELSE NULL END) AS dec_25
    FROM ghcnd_pivot
    WHERE station_name = 'FAIRBANKS INTL AP'
        AND snwd_mm IS NOT NULL
    GROUP BY extract(year from dte)
    ORDER BY year
) AS sub
WHERE nov_8 IS NOT NULL
    AND nov_27 IS NOT NULL
    AND dec_25 IS NOT NULL;
I’m grouping on year, then grabbing the snow depth for the three dates of interest. I would have liked to include dates in January and February in order to see how the relationship weakens as the winter progresses, but that’s a lot more complicated because then we are comparing the dates from one year to the next and the grouping I used in the query above wouldn’t work.
One note on this analysis: linear regression has a bunch of assumptions that need to be met before considering the analysis to be valid. One of these assumptions is that observations are independent from one another, which is problematic in this case because snow depth is a cumulative statistic; the depth tomorrow is necessarily related to the depth of the snow today (snow depth tomorrow = snow depth today + snowfall). Whether it’s necessarily related to the depth of the snow a month from now is less certain, and I’m making the possibly dubious assumption that autocorrelation disappears when the time interval between observations is longer than a few weeks.
Results
Here are the results comparing the snow depth on November 8th to November 27th:
> reg <- lm(data=results, nov_27 ~ nov_8)
> summary(reg)
Call:
lm(formula = nov_27 ~ nov_8, data = results)
Residuals:
Min 1Q Median 3Q Max
-8.7132 -3.0490 -0.6063 1.7258 23.8403
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.1635 0.9707 3.259 0.0016 **
nov_8 1.1107 0.1420 7.820 1.15e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.775 on 87 degrees of freedom
Multiple R-squared: 0.4128, Adjusted R-squared: 0.406
F-statistic: 61.16 on 1 and 87 DF, p-value: 1.146e-11
And between November 8th and December 25th:
> reg <- lm(data=results, dec_25 ~ nov_8)
> summary(reg)
Call:
lm(formula = dec_25 ~ nov_8, data = results)
Residuals:
Min 1Q Median 3Q Max
-10.209 -3.195 -1.195 2.781 10.791
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.2227 0.8723 7.133 2.75e-10 ***
nov_8 0.9965 0.1276 7.807 1.22e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.292 on 87 degrees of freedom
Multiple R-squared: 0.412, Adjusted R-squared: 0.4052
F-statistic: 60.95 on 1 and 87 DF, p-value: 1.219e-11
Both regressions are very similar. The coefficients and the overall model are both very significant, and the R² value indicates that in each case, the snow depth on November 8th explains about 40% of the variation in the snow depth on the later date. The amount of variation explained hardly changes at all, despite almost a month difference between the two analyses.
Here's a plot of the relationship between today’s date and Christmas (PDF version)
The blue line is the linear regression model.
Conclusions
For 2014, we’ve got 2 inches of snow on the ground on November 8th. The models predict we’ll have 5.4 inches on November 27th and 8 inches on December 25th. That isn’t great, but keep in mind that even though the relationship is quite strong, it explains less than half of the variation in the data, which means that it’s quite possible we will have a lot more, or less. Looking back at the plot, you can see that for all the years where we had two inches of snow on November 8th, we had between five and fifteen inches of snow in that same year on December 25th. I’m certainly hoping we’re closer to fifteen.
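For reference, here’s how those numbers can be reproduced with predict() (reg_nov27 and reg_dec25 are hypothetical names; the post assigns both regressions to reg in turn):

reg_nov27 <- lm(data=results, nov_27 ~ nov_8)
reg_dec25 <- lm(data=results, dec_25 ~ nov_8)
predict(reg_nov27, data.frame(nov_8=2))   # 3.16 + 1.11 * 2, about 5.4 inches
predict(reg_dec25, data.frame(nov_8=2))   # 6.22 + 1.00 * 2, about 8.2 inches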
Winter started off very early this year with the first snow falling on October 4th and 5th, setting a two inch base several weeks earlier than normal. Since then, we’ve had only two days with more than a trace of snow.
This seems to be a common pattern in Fairbanks. After the first snowfall and the establishment of a thin snowpack on the ground, we all get excited for winter and expect the early snow to continue to build, filling the holes in the trails and starting the skiing, mushing and winter fat biking season. Then, nothing.
Analysis
I decided to take a quick look at the pattern of snow depth at the Fairbanks Airport station to see how uncommon it is to only have two inches of snow this late in the winter (November 8th at this writing). The plot below shows all of the snow depth data between November and April for the airport station, displayed as box and whisker plots.
Here’s the SQL:
SELECT extract(year from dte) AS year,
extract(month from dte) AS month,
to_char(dte, 'MM-DD') AS mmdd,
round(snwd_mm/25.4, 1) AS inches
FROM ghcnd_pivot
WHERE station_name = 'FAIRBANKS INTL AP'
AND snwd_mm IS NOT NULL
AND (extract(month from dte) BETWEEN 11 AND 12
OR extract(month from dte) BETWEEN 1 AND 4);
If you’re interested in the code that produces the plot, it’s at the bottom of the post. If the plot doesn’t show up in your browser or you want a copy for printing, here’s a link to the PDF version.
Box and whisker plots
For those unfamiliar with these, they’re a good way to evaluate the range of data grouped by some categorical variable (date, in our case) along with details about the expected values and possible extremes. The upper and lower limits of each box show the range where the middle 50% of the data fall (the 25th to 75th percentiles), meaning that half of all observed values are within this box for each date. For example, on today’s date, November 8th, half of all snow depth values for the period in question fell between four and eight inches. Our current snow depth of two inches falls below this range, so we can say that having only two inches of snow on the ground happens less than 25% of the time.
The horizontal line near the middle of the box is the median of all observations for that date. Median is shown instead of average / mean because extreme values can skew the mean, so a median will often be more representative of the most likely value. For today’s date, the median snow depth is five inches. That’s what we’d expect to see on the ground now.
The vertical lines extending above and below the boxes (the whiskers) reach to the most extreme observations that fall within 1.5 times the height of the box. These lines represent values outside the most likely range, but not so far out as to be considered unusual. If you scan across the November to December portion of the plot, you can see that the lower whisker touches zero for most of the period, but starting on December 26th, it rises above zero and doesn’t return until the spring. That means that there have been years where there was no snow on the ground on Christmas. Ouch.
The dots beyond the whiskers are outliers; observations so far from what is normal that they’re exceptional and not likely to be repeated. On this plot, most of these outliers are probably from one or two exceptional years where we got a ton of snow. Some of those outliers are pretty incredible; consider having two and a half feet of snow on the ground at the end of April, for example.
Conclusion
The conclusion I’d draw from comparing our current snow depth of two inches against the boxplots is that it is somewhat unusual to have this little snow on the ground, but that it’s not exceptional. It wouldn’t be unusual to have no snow on the ground.
Looking forward, we would normally expect to have a foot of snow on the ground by mid-December, and I’m certainly hoping that happens this year. But despite the probabilities shown on this plot, it can’t tell us how likely that is given that there are only two inches on the ground now. One downside to boxplots in an analysis like this is that the independent variable (date) is categorical, and the plot doesn’t have anything to say about how the values on one day relate to the values on the next day or any date in the future. One might expect, for example, that a low snow depth on November 8th means it’s more likely we’ll also have a low snow depth on December 25th, but this data can’t offer evidence on that question. It only shows us what each day’s pattern of snow depth is expected to be on its own.
A Bayesian analysis (“given a snow depth of two inches on November 8th, what is the likelihood of a normal snow depth on December 25th?”) might be a good approach for further investigation. So might a more traditional regression analysis examining the relationship between the snow depth on one date and the snow depth on another.
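A rough empirical version of that conditional question might look like this (a sketch, assuming a hypothetical data frame by_year with one row per year and columns nov_8 and dec_25 holding the snow depth in inches on those two dates):

by_year_low <- subset(by_year, nov_8 <= 2)
# share of low-snow Novembers that still reached a "normal" (median or better) Christmas depth
mean(by_year_low$dec_25 >= median(by_year$dec_25))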
Appendix: R Code
library(RPostgreSQL)
library(ggplot2)
library(scales)
library(gtable)
library(grid)   # for unit(), grid.newpage() and grid.draw() used below
# Build plot "table"
make_gt <- function(nd, jf, ma) {
    gt1 <- ggplot_gtable(ggplot_build(nd))
    gt2 <- ggplot_gtable(ggplot_build(jf))
    gt3 <- ggplot_gtable(ggplot_build(ma))
    max_width <- unit.pmax(gt1$widths[2:3], gt2$widths[2:3], gt3$widths[2:3])
    gt1$widths[2:3] <- max_width
    gt2$widths[2:3] <- max_width
    gt3$widths[2:3] <- max_width
    gt <- gtable(widths = unit(c(11), "in"), heights = unit(c(3, 3, 3), "in"))
    gt <- gtable_add_grob(gt, gt1, 1, 1)
    gt <- gtable_add_grob(gt, gt2, 2, 1)
    gt <- gtable_add_grob(gt, gt3, 3, 1)
    gt
}
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname="DBNAME")
results <- dbGetQuery(con,
"SELECT extract(year from dte) AS year,
extract(month from dte) AS month,
to_char(dte, 'MM-DD') AS mmdd,
round(snwd_mm/25.4, 1) AS inches
FROM ghcnd_pivot
WHERE station_name = 'FAIRBANKS INTL AP'
AND snwd_mm IS NOT NULL
AND (extract(month from dte) BETWEEN 11 AND 12
OR extract(month from dte) BETWEEN 1 AND 4);")
results$mmdd <- as.factor(results$mmdd)
# NOV DEC
nd <- ggplot(data=subset(results, month == 11 | month == 12), aes(x=mmdd, y=inches)) +
    geom_boxplot() +
    theme_bw() +
    theme(axis.title.x = element_blank()) +
    theme(plot.margin = unit(c(1, 1, 0, 0.5), 'lines')) +
    # scale_x_discrete(name="Date (mm-dd)") +
    scale_y_continuous(name="Snow depth (inches)", breaks=pretty_breaks(n=10)) +
    theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
    ggtitle('Snow depth by date, Fairbanks Airport, 1917-2013')

# JAN FEB
jf <- ggplot(data=subset(results, month == 1 | month == 2), aes(x=mmdd, y=inches)) +
    geom_boxplot() +
    theme_bw() +
    theme(axis.title.x = element_blank()) +
    theme(plot.margin = unit(c(0, 1, 0, 0.5), 'lines')) +
    # scale_x_discrete(name="Date (mm-dd)") +
    scale_y_continuous(name="Snow depth (inches)", breaks=pretty_breaks(n=10)) +
    theme(axis.text.x = element_text(angle = 45, hjust = 1)) # +
    # ggtitle('Snowdepth by date, Fairbanks Airport, 1917-2013')

# MAR APR
ma <- ggplot(data=subset(results, month == 3 | month == 4), aes(x=mmdd, y=inches)) +
    geom_boxplot() +
    theme_bw() +
    theme(plot.margin = unit(c(0, 1, 1, 0.5), 'lines')) +
    scale_x_discrete(name="Date (mm-dd)") +
    scale_y_continuous(name="Snow depth (inches)", breaks=pretty_breaks(n=10)) +
    theme(axis.text.x = element_text(angle = 45, hjust = 1)) # +
    # ggtitle('Snowdepth by date, Fairbanks Airport, 1917-2013')
gt <- make_gt(nd, jf, ma)
svg("snowdepth_boxplots.svg", width=11, height=9)
grid.newpage()
grid.draw(gt)
dev.off()