sun, 12-apr-2015, 16:38

Introduction

Last week I gave a presentation at work about the National Climatic Data Center’s GHCND climate database and methods to import and manipulate the data using the dplyr and tidyr R packages (a report-style version of it is here). Along the way, I used this mutate() call to calculate the average daily temperature from the minimum and maximum daily temperatures:

mutate(TAVG=(TMIN+TMAX)/2.0)

One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum / maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.
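
To see why the two methods can differ, here’s a quick toy example (hypothetical numbers, not data from our station): a day that spends most of its time near its minimum temperature.

# 24 hypothetical readings: 20 hours near the minimum, then a brief
# warm spike. The min/max average overstates the true mean.
temps <- c(rep(10, 20), seq(10, 30, length.out=4))
(min(temps) + max(temps)) / 2   # min/max average: 20
mean(temps)                     # true mean: about 11.7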

In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day. I’ll use five years of data collected at our house using our Arduino-based weather station.

Methods

Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.

Libraries

Load the libraries we need:

library(dplyr)      # data manipulation
library(lubridate)  # date and time handling
library(ggplot2)    # plotting
library(scales)     # axis labels and breaks
library(readr)      # writing results to CSV

Retrieve the data

Connect to the database and retrieve the data. We’re using build_sql because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and dplyr::tbl can’t currently read from a view:

# Connect to the PostgreSQL database with a read-only account
dw1454 <- src_postgres(dbname="goldstream_creek_wx",
                       user="readonly")

# The data is in a view, so we pass raw SQL instead of a table name
raw_data <- tbl(dw1454, build_sql("SELECT * FROM arduino_west"))

The raw data contains the timestamp for each five minute observation and the temperature, in degrees Fahrenheit, for that observation. The following series of functions aggregates the data to daily values and calculates the average daily temperature using the two methods.

daily_average <-
    raw_data %>%
    filter(obs_dt>'2009-12-31 23:59:59') %>%
    mutate(date=date(obs_dt)) %>%
    select(date, wtemp) %>%
    group_by(date) %>%
    summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0,
              h_avg=mean(wtemp), n=n()) %>%
    filter(n==24*60/5) %>%   # 5 minute obs
    collect()

All these steps are joined together using the “pipe” or “then” operator %>% as follows:

  • daily_average <-: assign the result of all the operations to daily_average.
  • raw_data %>%: start with the data from our database query (all the temperature observations).
  • filter(obs_dt>'2009-12-31 23:59:59') %>%: use data from 2010 and after.
  • mutate(date=date(obs_dt)) %>%: calculate the date from the timestamp.
  • select(date, wtemp) %>%: reduce the columns to our newly calculated date variable and the temperatures.
  • group_by(date) %>%: group the data by date.
  • summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0, h_avg=mean(wtemp), n=n()) %>%: summarize the data grouped by date, calculating one daily average from the average of the minimum and maximum temperature (mm_avg), another from the mean of all the temperatures (h_avg), and the number of observations on each date (n).
  • filter(n==24*60/5) %>%: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.
  • collect(): retrieve the data from the database into R. Without collect(), dplyr runs the query lazily on the database server and only fetches a preview of the results. This allows us to tweak the query until it’s exactly what we want without having to wait to retrieve everything at each iteration (see the sketch after this list).
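
For example, here’s a sketch of inspecting the query before collecting (assuming a dplyr version where explain() works on SQL sources; not part of the original analysis):

# Build the lazy query without collect(); explain() prints the SQL
# dplyr generates, along with the database's query plan.
lazy_daily <- raw_data %>%
    filter(obs_dt>'2009-12-31 23:59:59') %>%
    mutate(date=date(obs_dt)) %>%
    group_by(date) %>%
    summarize(h_avg=mean(wtemp))
explain(lazy_daily)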

Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.

Save the data so we don’t have to do these calculations again:

write_csv(daily_average, "daily_average.csv")
save(daily_average, file="daily_average.rdata", compress=TRUE)

Calculate anomalies

How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of minimum and maximum. When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.

anomaly <-
    daily_average %>%
    mutate(month=month(date),
           anomaly=mm_avg-h_avg) %>%
    ungroup() %>%
    arrange(date)

We also populate a column with the month of each date so we can look at the seasonality of the anomalies.

Results

This is what the results look like:

summary(anomaly$anomaly)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## -6.8600 -1.5110 -0.1711 -0.1341  1.0740  9.3570

The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.
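
One way to check this more formally would be a one-sample t-test of whether the mean anomaly differs from zero; a sketch, not folded into the analysis above:

# Test whether the mean anomaly is distinguishable from zero; a large
# p-value would be consistent with the min/max method being unbiased
# overall.
t.test(anomaly$anomaly, mu=0)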

Plots

Let’s take a look at some plots of the anomalies.

Raw anomaly data

The first plot shows the raw anomaly data, with positive anomalies (where the min/max calculated average is higher than the mean daily average) colored red and negative anomalies colored blue.

# All anomalies
q <- ggplot(data=anomaly,
            aes(x=date, ymin=0, ymax=anomaly, colour=anomaly<0)) +
    geom_linerange(alpha=0.5) +
    theme_bw() +
    scale_colour_manual(values=c("red", "blue"), guide=FALSE) +
    scale_x_date(name="") +
    scale_y_continuous(name="Difference between min/max and hourly aggregation")

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg

I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.

Monthly boxplots

To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we added when calculating the anomalies.

mean_anomaly <- mean(anomaly$anomaly)

# seasonal pattern of anomaly
q <- ggplot(data=anomaly,
            aes(x=as.factor(month), y=anomaly)) +
    geom_hline(data=NULL, aes(yintercept=mean_anomaly), colour="darkorange") +
    geom_boxplot() +
    scale_x_discrete(name="",
                     labels=c("Jan", "Feb", "Mar", "Apr",
                              "May", "Jun", "Jul", "Aug",
                              "Sep", "Oct", "Nov", "Dec")) +
    scale_y_continuous(name="Difference between min/max and hourly aggregation") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg

There does seem to be a slight seasonal pattern to the anomalies: spring and summer daily averages are underestimated by the min/max calculation (the actual daily average temperature is warmer than the min/max estimate), and fall and late winter averages are slightly overestimated. The boxes in a boxplot show the range containing half the observations, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots shows the overall average anomaly, which is close to zero.
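
One way to test whether those monthly differences are more than noise would be a one-way analysis of variance of anomaly by month; again a sketch, not something I've run as part of this analysis:

# One-way ANOVA of anomaly by month; a small p-value would suggest
# the monthly differences are unlikely to be chance alone.
summary(aov(anomaly ~ as.factor(month), data=anomaly))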

Cumulative frequency distribution

Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.

# distribution of anomalies
q <- ggplot(data=anomaly,
            aes(x=abs(anomaly))) +
    stat_ecdf() +
    scale_x_discrete(name="Absolute value of anomaly (+/- degrees F)",
                     breaks=0:11,
                     labels=0:11,
                     expand=c(0, 0)) +
    scale_y_continuous(name="Cumulative frequency",
                       labels=percent,
                       breaks=pretty_breaks(n=10),
                       limits=c(0,1)) +
    annotate("rect", xmin=-1, xmax=1, ymin=0, ymax=0.4, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=2, ymin=0, ymax=0.67, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=3, ymin=0, ymax=0.85, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=4, ymin=0, ymax=0.94, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=5, ymin=0, ymax=0.975, alpha=0.1, fill="darkcyan") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg

The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.
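
In fact, base R's ecdf() can compute these percentages directly; a minimal sketch:

# ecdf() returns the empirical cumulative distribution function as an
# R function; evaluating it at 1 through 5 degrees gives the
# proportion of days within that many degrees of the actual average.
cdf <- ecdf(abs(anomaly$anomaly))
round(cdf(1:5), 3)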

Conclusion

We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five minute temperature observations with those calculated from the average of the daily minimum and maximum temperature, which is the method the National Weather Service uses for its daily data. The results show that the difference between these methods averages to zero, which means that on an annual (or longer) basis, there doesn't appear to be any bias in the method.

Two thirds of all daily average temperatures are within two degrees of the actual daily average, and with few exceptions, the error is below five degrees.

There is some evidence of a seasonal pattern to the error, however, with the min/max method particularly underestimating April and May daily averages. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.

tags: R  temperature  dplyr  climate  GHCND 