tue, 21-apr-2015, 17:33

Abstract

The following is a document-style version of a presentation I gave at work a couple of weeks ago. It's a little less useful for a general audience because you don't have access to the same database I have, but I figured it might be useful for someone who is looking at using dplyr or at manipulating the GHCND data from NCDC.

Introduction

Today we’re going to briefly take a look at the GHCND climate database and a couple new R packages (dplyr and tidyr) that make data import and manipulation a lot easier than using the standard library.

For further reading, consult the vignettes for dplyr and tidyr, and download the cheat sheet.

GHCND database

The GHCND database contains daily observation data from locations around the world. The README linked above describes the data set and the way the data is formatted. I have written scripts that process the station data and the yearly download files and insert it into a PostgreSQL database (noaa).

The script for inserting a yearly file (downloaded from http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/) is here: ghcn-daily-by_year_process.py

“Tidy” data

Without going into too much detail on the subject (read Hadley Wickham’s paper for more information), the basic idea is that it is much easier to analyze data when it is in a particular, “tidy”, form. A tidy dataset has a single table for each type of real-world object or type of data, and each table has one column per variable measured and one row per observation.

For example, here’s a tidy table representing daily weather observations with station × date as rows and the various variables as columns.

Station Date tmin_c tmax_c prcp snow ...
PAFA 2014-01-01 12 24 0.00 0.0 ...
PAFA 2014-01-02 8 11 0.02 0.2 ...
... ... ... ... ... ... ...

Getting raw data into this format is what we’ll look at today.

R libraries & data import

First, let’s load the libraries we’ll need:

library(dplyr)      # data import
library(tidyr)      # column / row manipulation
library(knitr)      # tabular export
library(ggplot2)    # plotting
library(scales)     # “pretty” scaling
library(lubridate)  # date / time manipulations

dplyr and tidyr are the data import and manipulation libraries we will use, knitr is used to produce tabular data in report-quality forms, ggplot2 and scales are plotting libraries, and lubridate is a library that makes date and time manipulation easier.

Also note the warnings about several R functions being “masked” when we imported dplyr. This just means we'll be getting the dplyr versions instead of the ones we might be used to. In cases where we need both, you can prefix the function with its package name: stats::filter would use the normal filter function instead of the one from dplyr.
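
For example, both versions can be called explicitly (a small illustration, not from the original presentation, using built-in datasets):

# The :: prefix removes any ambiguity about which filter() you get
stats::filter(presidents, rep(1/3, 3))   # moving-average filter from the stats package
dplyr::filter(mtcars, cyl == 6)          # row filtering from dplyr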

Next, connect to the database and create links to two of the three tables we will need:

noaa_db <- src_postgres(host="mason",
                        dbname="noaa")
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

The first statement connects us to the database and the next two create table links to the observation table and the variables table.

Here’s what those two tables look like:

glimpse(ghcnd_obs)
## Observations: 29404870
## Variables:
## $ station_id  (chr) "USW00027502", "USW00027502", "USW00027502", "USW0...
## $ dte         (date) 2011-05-01, 2011-05-01, 2011-05-01, 2011-05-01, 2...
## $ variable    (chr) "AWND", "FMTM", "PRCP", "SNOW", "SNWD", "TMAX", "T...
## $ raw_value   (dbl) 32, 631, 0, 0, 229, -100, -156, 90, 90, 54, 67, 1,...
## $ meas_flag   (chr) "", "", "T", "T", "", "", "", "", "", "", "", "", ...
## $ qual_flag   (chr) "", "", "", "", "", "", "", "", "", "", "", "", ""...
## $ source_flag (chr) "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", ...
## $ time_of_obs (int) NA, NA, 0, NA, NA, 0, 0, NA, NA, NA, NA, NA, NA, N...
glimpse(ghcnd_vars)
## Observations: 82
## Variables:
## $ variable       (chr) "AWND", "EVAP", "MDEV", "MDPR", "MNPN", "MXPN",...
## $ description    (chr) "Average daily wind speed (tenths of meters per...
## $ raw_multiplier (dbl) 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0....

Each row in the observation table contains the station_id, date, a variable code, the raw value for that variable, and a series of flags indicating data quality, source, and special measurements such as the “trace” value used for precipitation below the minimum measurable amount.

Each row in the variables table contains a variable code, description and the multiplier used to convert the raw value from the observation table into an actual value.

This is an example of completely “normalized” data. It’s stored this way because not all weather stations record all possible variables; rather than having a single row for each station × date with a whole bunch of empty columns for the variables that weren’t measured, each row contains a single station × date × variable observation.

We are also missing information about the stations, so let’s load that data:

fai_stations <-
    tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name %in% c("FAIRBANKS INTL AP",
                               "UNIVERSITY EXP STN",
                               "COLLEGE OBSY"))
glimpse(fai_stations)
## Observations: 3
## Variables:
## $ station_id   (chr) "USC00502107", "USW00026411", "USC00509641"
## $ station_name (chr) "COLLEGE OBSY", "FAIRBANKS INTL AP", "UNIVERSITY ...
## $ latitude     (dbl) 64.86030, 64.80389, 64.85690
## $ longitude    (dbl) -147.8484, -147.8761, -147.8610
## $ elevation    (dbl) 181.9656, 131.6736, 144.7800
## $ coverage     (dbl) 0.96, 1.00, 0.98
## $ start_date   (date) 1948-05-16, 1904-09-04, 1904-09-01
## $ end_date     (date) 2015-04-03, 2015-04-02, 2015-03-13
## $ variables    (chr) "TMIN TOBS WT11 SNWD SNOW WT04 WT14 TMAX WT05 DAP...
## $ the_geom     (chr) "0101000020E6100000A5BDC117267B62C0EC2FBB270F3750...

The first part is the same as before, loading the ghcnd_stations table, but we are filtering that data down to just the Fairbanks area stations with long term records. To do this, we use the pipe operator %>% which takes the data from the left side and passes it to the function on the right side, the filter function in this case.

filter requires one or more conditional statements with variable names on the left side and the condition on the right. Multiple conditions can be separated by commas if you want all the conditions to be required (AND) or separated by a logic operator (& for AND, | for OR). For example: filter(latitude > 70, longitude < -140).

When used on database tables, filter can also use conditionals that are built into the database; these are passed directly as part of a WHERE clause. In our code above, we’re using the %in% operator to select the stations from a list.
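
If you’re curious about the SQL dplyr generates (a quick aside, assuming the fai_stations object defined above), explain() prints the query, and you can see the %in% condition show up in the WHERE clause:

# Print the SQL (and the database's query plan) for a lazy tbl
explain(fai_stations)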

Now we have the station_ids we need to get just the data we want from the observation table and combine it with the other tables.

Combining data

Here’s how we do it:

fai_raw <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value*raw_multiplier) %>%
    filter(qual_flag=='') %>%
    select(station_name, dte, variable, value) %>%
    collect()
glimpse(fai_raw)

In order, here’s what we’re doing:

  • Assign the result to fai_raw
  • Join the observation table with the filtered station data, using station_id as the variable to combine against. Because this is an “inner” join, we only get results where station_id matches in both the observation and the filtered station data. At this point we only have observation data from our long-term Fairbanks stations.
  • Join the variable table with the Fairbanks area observation data, using variable to link the tables.
  • Add a new variable called value which is calculated by multiplying raw_value (coming from the observation table) by raw_multiplier (coming from the variable table).
  • Remove rows where the quality flag is not an empty string (a non-empty flag indicates a problem with the observation).
  • Select only the station name, date, variable and actual value columns from the data. Before we did this, each row would contain every column from all three tables, and most of that information is not necessary.
  • Finally, we “collect” the results. dplyr doesn’t actually perform the full SQL until it absolutely has to. Instead it’s retrieving a small subset so that we can test our operations quickly. When we are happy with the results, we use collect() to grab the full data.
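
As an aside, you can take advantage of this lazy behavior to preview results cheaply. A sketch, using the table objects defined above: piping through head() before collect() runs the query with a LIMIT on the server and returns just a few rows.

# Preview a handful of joined rows without pulling the full result
ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    head(10)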

De-normalize it

The data is still in a format that makes it difficult to analyze, with each row in the result containing a single station × date × variable observation. A tidy version of this data requires that each variable be a column in the table and that each row represent a single date at a single station.

To “pivot” the data, we use the spread function, and we'll also calculate a new variable and reduce the number of columns in the result.

fai_pivot <-
    fai_raw %>%
    spread(variable, value) %>%
    mutate(TAVG=(TMIN+TMAX)/2.0) %>%
    select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
           WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
head(fai_pivot)
## Source: local data frame [6 x 18]
##
##   station_name        dte  TAVG TMIN TMAX TOBS PRCP SNOW SNWD WSF1 WDF1
## 1 COLLEGE OBSY 1948-05-16 11.70  5.6 17.8 16.1   NA   NA   NA   NA   NA
## 2 COLLEGE OBSY 1948-05-17 15.55 12.2 18.9 17.8   NA   NA   NA   NA   NA
## 3 COLLEGE OBSY 1948-05-18 14.40  9.4 19.4 16.1   NA   NA   NA   NA   NA
## 4 COLLEGE OBSY 1948-05-19 14.15  9.4 18.9 12.2   NA   NA   NA   NA   NA
## 5 COLLEGE OBSY 1948-05-20 10.25  6.1 14.4 14.4   NA   NA   NA   NA   NA
## 6 COLLEGE OBSY 1948-05-21  9.75  1.7 17.8 17.8   NA   NA   NA   NA   NA
## Variables not shown: WSF2 (dbl), WDF2 (dbl), WSF5 (dbl), WDF5 (dbl), WSFG
##   (dbl), WDFG (dbl), TSUN (dbl)

spread takes two parameters, the variable we want to spread across the columns, and the variable we want to use as the data value for each row × column intersection.
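
Here’s a tiny, self-contained illustration of spread() using made-up data (not from the GHCND database):

long <- data.frame(station=c("A", "A", "B", "B"),
                   variable=c("TMIN", "TMAX", "TMIN", "TMAX"),
                   value=c(-10, 5, -12, 3))
spread(long, variable, value)
## a data frame with columns station, TMAX and TMIN, one row per station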

Examples

Now that we've got the data in a format we can work with, let's look at a few examples.

Find the coldest temperatures by winter year

First, let’s find the coldest winter temperatures from each station, by winter year. “Winter year” is just a way of grouping each winter into a single value. Instead of the 2014–2015 winter, it’s the 2014 winter year. We get this by subtracting 92 days (roughly the number of days in January, February, and March) from the date, then pulling off the year.
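
As a quick sanity check of that calculation (a small example, not from the original presentation), here it is for two dates on either side of the boundary:

year(as.Date("2015-03-15") - days(92))   # 2014: March 2015 is part of the 2014 winter year
year(as.Date("2015-04-15") - days(92))   # 2015: mid-April starts the 2015 winter year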

Here’s the code.

fai_winter_year_minimum <-
    fai_pivot %>%
        mutate(winter_year=year(dte - days(92))) %>%
        filter(winter_year < 2014) %>%
        group_by(station_name, winter_year) %>%
        select(station_name, winter_year, TMIN) %>%
        summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
        filter(n>350) %>%
        select(station_name, winter_year, tmin) %>%
        spread(station_name, tmin)

last_twenty <-
    fai_winter_year_minimum %>%
        filter(winter_year > 1993)

last_twenty
## Source: local data frame [20 x 4]
##
##    winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN
## 1         1994       -43.96            -47.92             -47.92
## 2         1995       -45.04            -45.04             -47.92
## 3         1996       -50.98            -50.98             -54.04
## 4         1997       -43.96            -47.92             -47.92
## 5         1998       -52.06            -54.94             -54.04
## 6         1999       -50.08            -52.96             -50.98
## 7         2000       -27.94            -36.04             -27.04
## 8         2001       -40.00            -43.06             -36.04
## 9         2002       -34.96            -38.92             -34.06
## 10        2003       -45.94            -45.94                 NA
## 11        2004           NA            -47.02             -49.00
## 12        2005       -47.92            -50.98             -49.00
## 13        2006           NA            -43.96             -41.98
## 14        2007       -38.92            -47.92             -45.94
## 15        2008       -47.02            -47.02             -49.00
## 16        2009       -32.98            -41.08             -41.08
## 17        2010       -36.94            -43.96             -38.02
## 18        2011       -47.92            -50.98             -52.06
## 19        2012       -43.96            -47.92             -45.04
## 20        2013       -36.94            -40.90                 NA

See if you can follow the code above. The pipe operator makes it easy to see each operation performed along the way.

There are a couple new functions here, group_by and summarize. group_by indicates at what level we want to group the data, and summarize uses those groupings to perform summary calculations using aggregate functions. We group by station and winter year, then we use the minimum and n functions to get the minimum temperature and number of days in each year where temperature data was available. You can see we are using n to remove winter years where more than two weeks of data are missing.
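
If these functions are new to you, here’s a minimal, self-contained example using the built-in mtcars data rather than the weather data:

# One summary row per group: the minimum mpg and the number of cars
# for each cylinder count
mtcars %>%
    group_by(cyl) %>%
    summarize(min_mpg=min(mpg), n=n())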

Also notice that we’re using spread again in order to make a single column for each station containing the minimum temperature data.

Here’s how we can write out the table data as a reStructuredText document, which can be converted into many document formats (PDF, ODF, HTML, etc.):

sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures by winter year, station
winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN
1994 -43.96 -47.92 -47.92
1995 -45.04 -45.04 -47.92
1996 -50.98 -50.98 -54.04
1997 -43.96 -47.92 -47.92
1998 -52.06 -54.94 -54.04
1999 -50.08 -52.96 -50.98
2000 -27.94 -36.04 -27.04
2001 -40.00 -43.06 -36.04
2002 -34.96 -38.92 -34.06
2003 -45.94 -45.94 NA
2004 NA -47.02 -49.00
2005 -47.92 -50.98 -49.00
2006 NA -43.96 -41.98
2007 -38.92 -47.92 -45.94
2008 -47.02 -47.02 -49.00
2009 -32.98 -41.08 -41.08
2010 -36.94 -43.96 -38.02
2011 -47.92 -50.98 -52.06
2012 -43.96 -47.92 -45.04
2013 -36.94 -40.90 NA

Plotting

Finally, let’s plot the minimum temperatures for all three stations.

q <-
    fai_winter_year_minimum %>%
        gather(station_name, tmin, -winter_year) %>%
        arrange(winter_year) %>%
        ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)", breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                               labels=c("College Observatory",
                                        "Fairbanks Airport",
                                        "University Exp. Station"),
                               values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            # theme(legend.position = c(0.150, 0.850)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

print(q)
//media.swingleydev.com/img/blog/2015/04/min_temp_winter_year_fai_stations.svg

To plot the data, we need the data in a slightly different format with each row containing winter year, station name and the minimum temperature. We’re plotting minimum temperature against winter year, coloring the points and trendlines using the station name. That means all three of those variables need to be on the same row.

To do that we use gather. The first parameter is the name of the variable the column names will be moved into (the station names, which are currently columns, will become values in a column named station_name). The second is the name of the column that stores the observations (tmin), and the parameters after that are the list of columns to gather together. In our case, rather than specifying the names of the columns, we're specifying the inverse: all the columns except winter_year.
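
Before looking at the real data, here’s a small illustration of gather() using a couple of values copied from the table above (a sketch with just two stations and two years):

wide <- data.frame(winter_year=c(2012, 2013),
                   `COLLEGE OBSY`=c(-43.96, -36.94),
                   `FAIRBANKS INTL AP`=c(-47.92, -40.90),
                   check.names=FALSE)
gather(wide, station_name, tmin, -winter_year)
## one row per winter_year x station, with station_name and tmin columns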

The result of the gather looks like this:

fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year)
## Source: local data frame [321 x 3]
##
##    winter_year station_name tmin
## 1         1905 COLLEGE OBSY   NA
## 2         1907 COLLEGE OBSY   NA
## 3         1908 COLLEGE OBSY   NA
## 4         1909 COLLEGE OBSY   NA
## 5         1910 COLLEGE OBSY   NA
## 6         1911 COLLEGE OBSY   NA
## 7         1912 COLLEGE OBSY   NA
## 8         1913 COLLEGE OBSY   NA
## 9         1915 COLLEGE OBSY   NA
## 10        1916 COLLEGE OBSY   NA
## ..         ...          ...  ...

ggplot2

The plot is produced using ggplot2. A full introduction would be a seminar by itself, but the basics of our plot can be summarized as follows.

ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +

aes defines variables and grouping.

geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
geom_smooth(method="lm", se=FALSE) +

geom_point draws points, geom_smooth draws fitted lines.

scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
scale_y_continuous(name="Minimum temperature (degrees F)",
                    breaks=pretty_breaks(n=10)) +
scale_color_manual(name="Station",
                    labels=c("College Observatory", "Fairbanks Airport",
                            "University Exp. Station"),
                    values=c("darkorange", "blue", "darkcyan")) +

Scale functions define how the data is scaled into the plot and control labelling.

theme_bw() +
theme(axis.text.x = element_text(angle=45, hjust=1))

Theme functions control the style.

Linear regression, winter year and minimum temperature

Finally let’s look at the significance of those regression lines:

summary(lm(data=fai_winter_year_minimum, `COLLEGE OBSY` ~ winter_year))
##
## Call:
## lm(formula = `COLLEGE OBSY` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -19.0748  -5.8204   0.1907   3.8042  17.1599
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -275.01062  105.20884  -2.614   0.0114 *
## winter_year    0.11635    0.05311   2.191   0.0325 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.599 on 58 degrees of freedom
##   (47 observations deleted due to missingness)
## Multiple R-squared:  0.07643,    Adjusted R-squared:  0.06051
## F-statistic:   4.8 on 1 and 58 DF,  p-value: 0.03249
summary(lm(data=fai_winter_year_minimum, `FAIRBANKS INTL AP` ~ winter_year))
##
## Call:
## lm(formula = `FAIRBANKS INTL AP` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.529  -4.605  -1.025   4.007  19.764
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -171.19553   43.55177  -3.931 0.000153 ***
## winter_year    0.06250    0.02221   2.813 0.005861 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.037 on 104 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.07073,    Adjusted R-squared:  0.06179
## F-statistic: 7.916 on 1 and 104 DF,  p-value: 0.005861
summary(lm(data=fai_winter_year_minimum, `UNIVERSITY EXP STN` ~ winter_year))
##
## Call:
## lm(formula = `UNIVERSITY EXP STN` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.579  -5.818  -1.283   6.029  19.977
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -158.41837   51.03809  -3.104  0.00248 **
## winter_year    0.05638    0.02605   2.164  0.03283 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 8.119 on 100 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.04474,    Adjusted R-squared:  0.03519
## F-statistic: 4.684 on 1 and 100 DF,  p-value: 0.03283

Essentially, all the models show a significant increase in minimum temperature over time, but none of them explain very much of the variation in minimum temperature.

RMarkdown

This presentation was produced with the RMarkdown package, which allows you to mix text and R code; the document is then run through R to produce output in Word, PDF, HTML, and presentation formats.

sun, 12-apr-2015, 16:38

Introduction

Last week I gave a presentation at work about the National Climate Data Center’s GHCND climate database and methods to import and manipulate the data using the dplyr and tidyr R packages (a report-style version of it is here). Along the way, I used this function to calculate the average daily temperature from the minimum and maximum daily temperatures:

mutate(TAVG=(TMIN+TMAX)/2.0)

One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum / maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.

In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day. I’ll use five years of data collected at our house using our Arduino-based weather station.

Methods

Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.

Libraries

Load the libraries we need:

library(dplyr)
library(lubridate)
library(ggplot2)
library(scales)
library(readr)

Retrieve the data

Connect to the database and retrieve the data. We’re using build_sql because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and dplyr::tbl can’t currently read from a view:

dw1454 <- src_postgres(dbname="goldstream_creek_wx",
                       user="readonly")

raw_data <- tbl(dw1454, build_sql("SELECT * FROM arduino_west"))

The raw data contains the timestamp for each five minute observation, and the temperature, in degrees Fahrenheit for that observation. The following series of functions aggregates the data to daily data and calculates the average daily temperature using the two methods.

daily_average <-
    raw_data %>%
    filter(obs_dt>'2009-12-31 23:59:59') %>%
    mutate(date=date(obs_dt)) %>%
    select(date, wtemp) %>%
    group_by(date) %>%
    summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0,
              h_avg=mean(wtemp), n=n()) %>%
    filter(n==24*60/5) %>%   # 5 minute obs
    collect()

All these steps are joined together using the “pipe” or “then” operator %>% as follows:

  • daily_average <-: assign the result of all the operations to daily_average.
  • raw_data %>%: start with the data from our database query (all the temperature observations).
  • filter(obs_dt>'2009-12-31 23:59:59') %>%: use data from 2010 and after.
  • mutate(date=date(obs_dt)) %>%: calculate the date from the timestamp.
  • select(date, wtemp) %>%: reduce the columns to our newly calculated date variable and the temperatures.
  • group_by(date) %>%: group the data by date.
  • summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0, h_avg=mean(wtemp), n=n()) %>%: summarize the data grouped by date, calculating one daily average from the average of the minimum and maximum temperature, another from the mean of all the temperatures, and the number of observations on each date.
  • filter(n==24*60/5) %>%: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.
  • collect(): This function retrieves the data from the database. Without collect(), the query is run on the database server, producing a subset of the full results. This allows us to tweak the query until it’s exactly what we want without having to wait to retrieve everything at each iteration.

Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.
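
The 288 comes straight from the observation interval:

24 * 60 / 5   # 288 five-minute observations in a complete day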

Save the data so we don’t have to do these calculations again:

write_csv(daily_average, "daily_average.csv")
save(daily_average, file="daily_average.rdata", compress=TRUE)

Calculate anomalies

How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of minimum and maximum. When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.

anomaly <-
    daily_average %>%
    mutate(month=month(date),
           anomaly=mm_avg-h_avg) %>%
    ungroup() %>%
    arrange(date)

We also populate a column with the month of each date so we can look at the seasonality of the anomalies.

Results

This is what the results look like:

summary(anomaly$anomaly)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## -6.8600 -1.5110 -0.1711 -0.1341  1.0740  9.3570

The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.

Plots

Let’s take a look at some plots of the anomalies.

Raw anomaly data

The first plot shows the raw anomaly data, with positive anomalies (where the min/max calculated average is higher than the mean daily average) colored red and negative anomalies in blue.

# All anomalies
q <- ggplot(data=anomaly,
            aes(x=date, ymin=0, ymax=anomaly, colour=anomaly<0)) +
    geom_linerange(alpha=0.5) +
    theme_bw() +
    scale_colour_manual(values=c("red", "blue"), guide=FALSE) +
    scale_x_date(name="") +
    scale_y_continuous(name="Difference between min/max and hourly aggregation")

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg

I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.

Monthly boxplots

To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we calculated when calculating the anomalies.

mean_anomaly <- mean(anomaly$anomaly)

# seasonal pattern of anomaly
q <- ggplot(data=anomaly,
            aes(x=as.factor(month), y=anomaly)) +
    geom_hline(data=NULL, aes(yintercept=mean_anomaly), colour="darkorange") +
    geom_boxplot() +
    scale_x_discrete(name="",
                     labels=c("Jan", "Feb", "Mar", "Apr",
                              "May", "Jun", "Jul", "Aug",
                              "Sep", "Oct", "Nov", "Dec")) +
    scale_y_continuous(name="Difference between min/max and hourly aggregation") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg

There does seem to be a slight seasonal pattern to the anomalies, with spring and summer daily averages underestimated by the min/max calculation (the actual daily average temperature is warmer than what was calculated using minimum and maximum temperatures) and slightly overestimated in fall and late winter. The boxes in a boxplot show the range where half the observations fall, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots shows the overall average anomaly, which is close to zero.

Cumulative frequency distribution

Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.

# distribution of anomalies
q <- ggplot(data=anomaly,
            aes(x=abs(anomaly))) +
    stat_ecdf() +
    scale_x_discrete(name="Absolute value of anomaly (+/- degrees F)",
                     breaks=0:11,
                     labels=0:11,
                     expand=c(0, 0)) +
    scale_y_continuous(name="Cumulative frequency",
                       labels=percent,
                       breaks=pretty_breaks(n=10),
                       limits=c(0,1)) +
    annotate("rect", xmin=-1, xmax=1, ymin=0, ymax=0.4, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=2, ymin=0, ymax=0.67, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=3, ymin=0, ymax=0.85, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=4, ymin=0, ymax=0.94, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=5, ymin=0, ymax=0.975, alpha=0.1, fill="darkcyan") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg

The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.
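
For what it's worth, one way to calculate those cutoffs directly instead of reading them off the plot (a sketch, assuming the anomaly data frame from above) is with the empirical cumulative distribution function:

cdf <- ecdf(abs(anomaly$anomaly))
cdf(1:5)   # proportion of days within 1, 2, 3, 4 and 5 degrees of the true daily mean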

Conclusion

We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five minute temperature observations with those calculated using the average of the minimum and maximum daily temperature, which is the method the National Weather Service uses for its daily data. The results show that the difference between these methods averages to zero, which means that on an annual (or longer) basis, there doesn't appear to be any bias in the method.

Two thirds of all daily average temperatures are within two degrees of the actual daily average, and with a few exceptions, the error is always below five degrees.

There is some evidence that there’s a seasonal pattern to the error, however, with April and May daily averages particularly low. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.

tags: R  temperature  dplyr  climate  GHCND 
sun, 22-feb-2015, 11:33

Last night we got a quarter of an inch of rain at our house, making roads “impassable” according to the Fairbanks Police Department, and turning the dog yard, deck, and driveway into an icy mess. There are videos floating around Facebook showing Fairbanks residents playing hockey in the street in front of their houses, and a reported seven vehicles off the road on Ballaine Hill.

Here’s a video of a group of Goldstream Valley musicians ice skating on Goldstream Road: http://youtu.be/_afC7UF0NXk

Let’s check out the weather database and take a look at how often Fairbanks experiences this type of event, and when they usually happen. I’m going to skip the parts of the code showing how we get pivoted daily data from the database, but they’re in this post.

Starting with the pivoted data, we want to look for dates from November through March with more than a tenth of an inch of precipitation, snowfall less than two tenths of an inch, and a daily high temperature above 20°F. Then we group by winter year and month, and aggregate the rain events into a single event. These occurrences are rare enough that this aggregation shouldn’t combine events from different parts of the month.

Here’s the R code:

winter_rain <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92)),
               wdoy=yday(dte + days(61)),
               month=month(dte),
               SNOW=ifelse(is.na(SNOW), 0, SNOW),
               TMAX=TMAX*9/5+32,
               TAVG=TAVG*9/5+32,
               TMIN=TMIN*9/5+32,
               PRCP=PRCP/25.4,
               SNOW=SNOW/25.4) %>%
      filter(station_name == 'FAIRBANKS INTL AP',
               winter_year < 2014,
               month %in% c(11, 12, 1, 2, 3),
               TMAX > 20,
               PRCP > 0.1,
               SNOW < 0.2) %>%
      group_by(winter_year, month) %>%
      summarize(date=min(dte), tmax=mean(TMAX),
                prcp=sum(PRCP), days=n()) %>%
      ungroup() %>%
      mutate(month=month(date)) %>%
      select(date, month, tmax, prcp, days) %>%
      arrange(date)

And the results:

List of winter rain events, Fairbanks Airport
Date Month Max temp (°F) Rain (inches) Days
1921-03-07 3 44.06 0.338 1
1923-02-06 2 33.98 0.252 1
1926-01-12 1 35.96 0.142 1
1928-03-02 3 39.02 0.110 1
1931-01-19 1 33.08 0.130 1
1933-11-03 11 41.00 0.110 1
1935-11-02 11 38.30 0.752 3
1936-11-24 11 37.04 0.441 1
1937-01-10 1 32.96 1.362 3
1948-11-10 11 48.02 0.181 1
1963-01-19 1 35.06 0.441 1
1965-03-29 3 35.96 0.118 1
1979-11-11 11 35.96 0.201 1
2003-02-08 2 34.97 0.291 2
2003-11-02 11 34.97 0.268 2
2010-11-22 11 34.34 0.949 3

This year’s event doesn’t compare to 2010, when almost an inch of rain fell over the course of three days in November, but it does look like it comes at an unusual part of the year.

Here’s the counts and frequency of winter rainfall events by month:

by_month <-
   winter_rain %>%
      group_by(month) %>%
      summarize(n=n()) %>%
      mutate(freq=n/sum(n)*100)
Winter rain events by month
Month n Freq
1 4 25.00
2 2 12.50
3 3 18.75
11 7 43.75

There haven’t been any rain events in December, which is a little surprising, but next to that, February rains are the least common.

I looked at this two years ago (Winter freezing rain) using slightly different criteria. At the bottom of that post I looked at the frequency of rain events over time and concluded that they seem to come in cycles, but that the three events in this decade was a bad sign. Now we can add another rain event to the total for the 2010s.

tags: R  weather  winter  rain  dplyr  climate 
sun, 25-jan-2015, 08:26

Following up on yesterday’s post about minimum temperatures, I was thinking that a cumulative measure of cold temperatures would probably be a better measure of how cold a winter is. We all remember the extremely cold days each winter when the propane gels or the car won’t start, but it’s the long periods of deep cold that really take their toll on buildings, equipment, and people in the Interior.

One way of measuring this is to find all the days in a winter year when the average temperature is below freezing and, for each of those days, sum the number of degrees below freezing. For example, if the average temperature is 50°F, that’s not below freezing so it doesn’t count. If it’s −40°, that’s 72 freezing degrees (Fahrenheit). Do this for each day in a year and add up all the values.
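
Here’s that −40° example worked out in R (a one-line check that mirrors the ifelse() used in the code below):

tavg_c <- -40                              # -40 degrees C is also -40 degrees F
ifelse(tavg_c < 0, -1 * tavg_c * 9/5, 0)   # 72 freezing degrees Fahrenheit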

Here’s the code to make the plot below (see my previous post for how we got fai_pivot).

fai_winter_year_freezing_degree_days <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92)),
               fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      select(station_name, winter_year, fdd) %>%
      summarize(fdd=sum(fdd, na.rm=TRUE), n=n()) %>%
      filter(n>350) %>%
      select(station_name, winter_year, fdd) %>%
      spread(station_name, fdd)

fdd_gathered <-
   fai_winter_year_freezing_degree_days %>%
      gather(station_name, fdd, -winter_year) %>%
      arrange(winter_year)
q <-
   fdd_gathered %>%
      ggplot(aes(x=winter_year, y=fdd, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(data=subset(fdd_gathered, winter_year<1975),
                        method="lm", se=FALSE) +
            geom_smooth(data=subset(fdd_gathered, winter_year>=1975),
                        method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year",
                               breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Freezing degree days (degrees F)",
                               breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                              labels=c("College Observatory",
                                       "Fairbanks Airport",
                                       "University Exp. Station"),
                              values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            theme(legend.position = c(0.875, 0.120)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

rescale <- 0.65
svg('freezing_degree_days.svg', height=10*rescale, width=16*rescale)
print(q)
dev.off()

And the plot.

//media.swingleydev.com/img/blog/2015/01/freezing_degree_days.svg

Cumulative freezing degree days by winter year

You’ll notice I’ve split the trend lines at 1975. When I ran the regressions for the entire period, none of them were statistically significant, but looking at the plot, it seems like something happens in 1975 where the cumulative freezing degree days suddenly drop. Since then, they've been increasing at a faster, and statistically significant rate.

This is odd, and it makes me wonder if I've made a mistake in the calculations, because it says that, at least since 1975, the winters are getting colder as measured by the total number of degrees below freezing each winter. My previous post (and studies of climate in general) show that the climate is warming, not cooling.

One bias that's possible with cumulative calculations like this is that missing data becomes more important, but I looked at the same relationships when I only include years with at least 364 days of valid data (only one or two missing days) and the same pattern exists.

Curious. When combined, this analysis and yesterday's suggest that winters in Fairbanks are getting colder overall, but that the minimum temperature in any year is likely to be warmer than in the past.

tags: R  weather  dplyr  climate  tidyr 
sat, 24-jan-2015, 12:41

The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.

I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.

The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.

There are three main tables: ghcnd_stations, containing data about each station; ghcnd_variables, containing information about the variables in the data; and ghcnd_obs, which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:

ghcnd_obs
station_id dte variable raw_value qual_flag
USW00026411 2014-12-25 TMIN -205  
USW00026411 2014-12-25 TMAX -77  
USW00026411 2014-12-25 PRCP 15  
USW00026411 2014-12-25 SNOW 20  
USW00026411 2014-12-25 SNWD 230  

There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need to use the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are in degrees Celsius × 10, so we need to divide the raw value by ten to get actual temperatures. Finally, to get the data so that we have one row per date, with columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).
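
For example, using the TMIN row from the table above and the 0.1 multiplier (dividing by ten), the conversion is just this (a quick check, not part of the original post):

raw_value <- -205            # TMIN for USW00026411 on 2014-12-25, in tenths of a degree C
raw_multiplier <- 0.1
raw_value * raw_multiplier   # -20.5 degrees Celsius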

Here’s how we get all the data using R.

Load the libraries we will need:

library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)

Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using a PostgreSQL specific operator ~ to do the filtering. In other databases we’d probably use %in% and include the station names as a list.

noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")

# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

# Filter stations to just the long term Fairbanks stations:
fai_stations <-
   tbl(noaa_db, "ghcnd_stations") %>%
   filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")

Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.

# Get the raw data
fai_raw <-
   ghcnd_obs %>%
   inner_join(fai_stations, by="station_id") %>%
   inner_join(ghcnd_vars, by="variable") %>%
   mutate(value=raw_value*raw_multiplier) %>%
   filter(qual_flag=='') %>%
   select(station_name, dte, variable, value) %>%
   collect()

# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")

In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variable table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the number of columns we’re keeping to just the station name, date, variable name, and the actual value.

We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t performed until it is actually necessary, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then collect() to finalize the steps.

At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.

We’ll use the tidyr spread() function to make the values that appear in the variable column (TMIN, TMAX, etc.) into columns in the output, putting the data from the value column into the cells at each row × column intersection. We’re also calculating an average daily temperature from the minimum and maximum temperatures and selecting just the columns we want.

# pivot, calculate average temp, include useful vars
fai_pivot <-
   fai_raw %>%
   spread(variable, value) %>%
   transform(TAVG=(TMIN+TMAX)/2.0) %>%
   select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
         WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)

Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.

Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the actual year. In our case, we’re subtracting 92 days from the date and getting the year. This makes the winter year start in April instead of January and means that the 2014/2015 winter has a winter year of 2014.

# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92))) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      select(station_name, winter_year, TMIN) %>%
      summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
      filter(n>350) %>%
      select(station_name, winter_year, tmin) %>%
      spread(station_name, tmin)

In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing by minimum temperature after converting to Fahrenheit and counting the number of days with valid data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).

Here’s the last 20 years and how we get a nice table of them.

last_twenty <-
   fai_winter_year_minimum %>%
      filter(winter_year > 1993)

# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures, last 20 years
Winter Year College Obsy Fairbanks Airport University Exp Stn
1994 -43.96 -47.92 -47.92
1995 -45.04 -45.04 -47.92
1996 -50.98 -50.98 -54.04
1997 -43.96 -47.92 -47.92
1998 -52.06 -54.94 -54.04
1999 -50.08 -52.96 -50.98
2000 -27.94 -36.04 -27.04
2001 -40.00 -43.06 -36.04
2002 -34.96 -38.92 -34.06
2003 -45.94 -45.94 NA
2004 NA -47.02 -49.00
2005 -47.92 -50.98 -49.00
2006 NA -43.96 -41.98
2007 -38.92 -47.92 -45.94
2008 -47.02 -47.02 -49.00
2009 -32.98 -41.08 -41.08
2010 -36.94 -43.96 -38.02
2011 -47.92 -50.98 -52.06
2012 -43.96 -47.92 -45.04
2013 -36.94 -40.90 NA

To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.

Here’s the plotting code, including the commands to re-normalize.

q <-
   fai_winter_year_minimum %>%
      gather(station_name, tmin, -winter_year) %>%
      arrange(winter_year) %>%
      ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year",
                               breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)",
                               breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                              labels=c("College Observatory",
                                       "Fairbanks Airport",
                                       "University Exp. Station"),
                              values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            theme(legend.position = c(0.875, 0.120)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees each year.

tags: R  weather  dplyr  climate  tidyr 
