sat, 24-jan-2015, 12:41

The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.

I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.

The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.

There are three main tables: ghcnd_stations, containing data about each station; ghcnd_variables, containing information about the variables in the data; and ghcnd_obs, which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:

ghcnd_obs

station_id   dte         variable  raw_value  qual_flag
USW00026411  2014-12-25  TMIN      -205
USW00026411  2014-12-25  TMAX      -77
USW00026411  2014-12-25  PRCP      15
USW00026411  2014-12-25  SNOW      20
USW00026411  2014-12-25  SNWD      230

There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need to use the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are stored in tenths of a degree Celsius, so we need to divide the raw value by 10 to get actual temperatures. Finally, to get one row per date, with columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).
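For example, the raw TMIN value of -205 shown above works out like this (a quick sketch of the conversion; the 0.1 multiplier is what the variables table presumably stores for temperature variables):

# TMIN is stored as tenths of a degree Celsius
raw_tmin <- -205
tmin_c <- raw_tmin * 0.1       # -20.5 degrees Celsius
tmin_f <- tmin_c * 9/5 + 32    # -4.9 degrees Fahrenheit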

Here’s how we get all the data using R.

Load the libraries we will need:

library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)

Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using a PostgreSQL-specific operator, ~, to do the filtering. In other databases we’d probably use %in% and include the station names as a list.

noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")

# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

# Filter stations to just the long term Fairbanks stations:
fai_stations <-
   tbl(noaa_db, "ghcnd_stations") %>%
   filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")
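
For reference, the %in% version might look something like the following sketch. The full station names here are assumptions; check the station_name values in ghcnd_stations for the exact strings.

# Alternative filter without the PostgreSQL ~ operator; the station
# names below are guesses and should be checked against ghcnd_stations
fai_stations <-
   tbl(noaa_db, "ghcnd_stations") %>%
   filter(station_name %in% c("FAIRBANKS INTL AP",
                              "UNIVERSITY EXP STN",
                              "COLLEGE OBSY"))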

Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.

# Get the raw data
fai_raw <-
   ghcnd_obs %>%
   inner_join(fai_stations, by="station_id") %>%
   inner_join(ghcnd_vars, by="variable") %>%
   mutate(value=raw_value*raw_multiplier) %>%
   filter(qual_flag=='') %>%
   select(station_name, dte, variable, value) %>%
   collect()

# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")

In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variables table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the columns we’re keeping to just the station name, date, variable name, and the actual value.

We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t executed until the results are actually needed, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then collect() to finalize the steps.
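If you want to see what dplyr will send to the database before committing to collect(), something like this sketch (using dplyr’s explain()) prints the generated SQL and the database’s query plan without pulling any rows back:

# Show the SQL and PostgreSQL query plan for part of the pipeline
ghcnd_obs %>%
   inner_join(fai_stations, by="station_id") %>%
   filter(qual_flag=='') %>%
   explain()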

At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.

We’ll use the tidyr spread() function to make the value that appears in the variable column (TMIN, TMAX, etc.) appear as columns in the output, and put the data in the value column into the cells in each column and row. We’re also calculating an average daily temperature from the minimum and maximum temperatures and selecting just the columns we want.

# pivot, calculate average temp, include useful vars
fai_pivot <-
   fai_raw %>%
   spread(variable, value) %>%
   transform(TAVG=(TMIN+TMAX)/2.0) %>%
   select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
         WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)

Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.
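If spread() is unfamiliar, here’s a toy illustration with made-up values (not real observations):

# Two rows in "long" form...
long <- data.frame(dte=as.Date("2014-12-25"),
                   variable=c("TMIN", "TMAX"),
                   value=c(-20.5, -7.7))
# ...become a single row with one column per variable
spread(long, variable, value)
#          dte TMAX  TMIN
# 1 2014-12-25 -7.7 -20.5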

Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the actual year. In our case, we’re subtracting 92 days from the date and getting the year. This makes the winter year start in April instead of January and means that the 2014/2015 winter has a winter year of 2014.
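A quick sanity check of the winter year calculation, using the lubridate functions we loaded earlier:

# Dates on either side of New Year's both land in winter year 2014
year(as.Date("2014-12-25") - days(92))   # 2014
year(as.Date("2015-01-24") - days(92))   # 2014
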

# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92))) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      select(station_name, winter_year, TMIN) %>%
      summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
      filter(n>350) %>%
      select(station_name, winter_year, tmin) %>%
      spread(station_name, tmin)

In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing by minimum temperature after converting to Fahrenheit and the number of days with valid data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).

Here’s how we pull out the last 20 years and write them to a nice table.

last_twenty <-
   fai_winter_year_minimum %>%
      filter(winter_year > 1993)

# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures (degrees F), last 20 years

===========  ============  =================  ==================
Winter Year  College Obsy  Fairbanks Airport  University Exp Stn
===========  ============  =================  ==================
1994         -43.96        -47.92             -47.92
1995         -45.04        -45.04             -47.92
1996         -50.98        -50.98             -54.04
1997         -43.96        -47.92             -47.92
1998         -52.06        -54.94             -54.04
1999         -50.08        -52.96             -50.98
2000         -27.94        -36.04             -27.04
2001         -40.00        -43.06             -36.04
2002         -34.96        -38.92             -34.06
2003         -45.94        -45.94             NA
2004         NA            -47.02             -49.00
2005         -47.92        -50.98             -49.00
2006         NA            -43.96             -41.98
2007         -38.92        -47.92             -45.94
2008         -47.02        -47.02             -49.00
2009         -32.98        -41.08             -41.08
2010         -36.94        -43.96             -38.02
2011         -47.92        -50.98             -52.06
2012         -43.96        -47.92             -45.04
2013         -36.94        -40.90             NA
===========  ============  =================  ==================

To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.

Here’s the plotting code, including the commands to re-normalize.

q <-
   fai_winter_year_minimum %>%
      gather(station_name, tmin, -winter_year) %>%
      arrange(winter_year) %>%
      ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year",
                               breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)",
                               breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                              labels=c("College Observatory",
                                       "Fairbanks Airport",
                                       "University Exp. Station"),
                              values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            theme(legend.position = c(0.875, 0.120)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))
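
The plot is stored in q, so it still needs to be printed or saved; something like the following works, with whatever file name you prefer:

print(q)
ggsave("winter_minimum_temps.png", q, width=10, height=7.5)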

The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees Fahrenheit each year.
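If you want to check those regression numbers yourself, a minimal sketch for a single station might look like this (the station name string is an assumption; use whatever names actually appear in your data):

# Re-normalize and fit a linear model for the airport station
airport <-
   fai_winter_year_minimum %>%
      gather(station_name, tmin, -winter_year) %>%
      filter(station_name == "FAIRBANKS INTL AP")
summary(lm(tmin ~ winter_year, data=airport))
# The winter_year coefficient is the trend in degrees F per year, and
# R-squared is the fraction of the variation the model explains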

tags: R  weather  dplyr  climate  tidyr 