metachronistic http://swingleydev.com/blog/ Latest metachronistic posts en-us Sun, 03 May 2015 09:22:32 -0800 Runners hit by a batted ball http://swingleydev.com/blog/p/1986/ <div class="document"> <p>Yesterday I saw something I’ve never seen in a baseball game before: a runner getting hit by a batted ball, which according to Rule 7.08(f) means the runner is out and the ball is dead. It turns out that this isn’t as unusual an event as I’d thought (see below), but what was unusual is that this ended the game between the Angels and Giants. Even stranger, this is also how the game between the Diamondbacks and Dodgers ended.</p> <p>Let’s use <a class="reference external" href="http://www.retrosheet.org/">Retrosheet</a> data to see how often this happens. Retrosheet data is organized into game data, roster data and event data. Event files contain a record of every event in a game and include the code <tt class="docutils literal">BR</tt> for when a runner is hit by a batted ball. Here’s a SQL query to find all the matching events, who got hit and whether it was the last out in the game.</p> <div class="highlight"><pre><span class="k">SELECT</span> <span class="n">sub</span><span class="p">.</span><span class="n">game_id</span><span class="p">,</span> <span class="n">teams</span><span class="p">,</span> <span class="nb">date</span><span class="p">,</span> <span class="n">inn_ct</span><span class="p">,</span> <span class="n">outs_ct</span><span class="p">,</span> <span class="n">bat_team</span><span class="p">,</span> <span class="n">event_tx</span><span class="p">,</span> <span class="n">first_name</span> <span class="o">||</span> <span class="s1">&#39; &#39;</span> <span class="o">||</span> <span class="n">last_name</span> <span class="k">AS</span> <span class="n">runner</span><span class="p">,</span> <span class="k">CASE</span> <span class="k">WHEN</span> <span class="n">event_id</span> <span class="o">=</span> <span class="n">max_event_id</span> <span
class="k">THEN</span> <span class="s1">&#39;last out&#39;</span> <span class="k">ELSE</span> <span class="s1">&#39;&#39;</span> <span class="k">END</span> <span class="k">AS</span> <span class="n">last_out</span> <span class="k">FROM</span> <span class="p">(</span> <span class="k">SELECT</span> <span class="k">year</span><span class="p">,</span> <span class="n">game_id</span><span class="p">,</span> <span class="n">away_team_id</span> <span class="o">||</span> <span class="s1">&#39; @ &#39;</span> <span class="o">||</span> <span class="n">home_team_id</span> <span class="k">AS</span> <span class="n">teams</span><span class="p">,</span> <span class="nb">date</span><span class="p">,</span> <span class="n">inn_ct</span><span class="p">,</span> <span class="k">CASE</span> <span class="k">WHEN</span> <span class="n">bat_home_id</span> <span class="o">=</span> <span class="mi">1</span> <span class="k">THEN</span> <span class="n">home_team_id</span> <span class="k">ELSE</span> <span class="n">away_team_id</span> <span class="k">END</span> <span class="k">AS</span> <span class="n">bat_team</span><span class="p">,</span> <span class="n">outs_ct</span><span class="p">,</span> <span class="n">event_tx</span><span class="p">,</span> <span class="k">CASE</span> <span class="n">regexp_replace</span><span class="p">(</span><span class="n">event_tx</span><span class="p">,</span> <span class="s1">&#39;.*([1-3])X[1-3H].*&#39;</span><span class="p">,</span> <span class="n">E</span><span class="s1">&#39;\\1&#39;</span><span class="p">)</span> <span class="k">WHEN</span> <span class="s1">&#39;1&#39;</span> <span class="k">THEN</span> <span class="n">base1_run_id</span> <span class="k">WHEN</span> <span class="s1">&#39;2&#39;</span> <span class="k">THEN</span> <span class="n">base2_run_id</span> <span class="k">WHEN</span> <span class="s1">&#39;3&#39;</span> <span class="k">THEN</span> <span class="n">base3_run_id</span> <span class="k">END</span> <span class="k">AS</span> <span 
class="n">runner_id</span><span class="p">,</span> <span class="n">event_id</span> <span class="k">FROM</span> <span class="n">events</span> <span class="k">WHERE</span> <span class="n">event_tx</span> <span class="o">~</span> <span class="s1">&#39;BR&#39;</span> <span class="p">)</span> <span class="k">AS</span> <span class="n">sub</span> <span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">rosters</span> <span class="k">ON</span> <span class="n">sub</span><span class="p">.</span><span class="k">year</span><span class="o">=</span><span class="n">rosters</span><span class="p">.</span><span class="k">year</span> <span class="k">AND</span> <span class="n">runner_id</span><span class="o">=</span><span class="n">player_id</span> <span class="k">AND</span> <span class="n">rosters</span><span class="p">.</span><span class="n">team_id</span> <span class="o">=</span> <span class="n">bat_team</span> <span class="k">INNER</span> <span class="k">JOIN</span> <span class="p">(</span> <span class="k">SELECT</span> <span class="n">game_id</span><span class="p">,</span> <span class="k">max</span><span class="p">(</span><span class="n">event_id</span><span class="p">)</span> <span class="k">AS</span> <span class="n">max_event_id</span> <span class="k">FROM</span> <span class="n">events</span> <span class="k">GROUP</span> <span class="k">BY</span> <span class="n">game_id</span> <span class="p">)</span> <span class="k">AS</span> <span class="n">max_events</span> <span class="k">ON</span> <span class="n">sub</span><span class="p">.</span><span class="n">game_id</span> <span class="o">=</span> <span class="n">max_events</span><span class="p">.</span><span class="n">game_id</span> <span class="k">ORDER</span> <span class="k">BY</span> <span class="nb">date</span><span class="p">;</span> </pre></div> <p>Here's what the query does. 
The first sub-query <tt class="docutils literal">sub</tt> finds all the events with the <tt class="docutils literal">BR</tt> code, determines which team was batting and finds the id for the player who was running. This is joined with the roster table so we can assign a name to the runner. Finally, it’s joined with a subquery, <tt class="docutils literal">max_events</tt>, which finds the last event in each game. Once we’ve got all that, the <tt class="docutils literal">SELECT</tt> statement at the very top retrieves the columns of interest, and records whether the event was the last out of the game.</p> <p>Retrosheet has event data going back to 1922, but the event files don’t include every game played in a season until the mid-50s. Starting in 1955, a runner being hit by a batted ball has ended a game twelve times, most recently in 2010. On average, runners get hit (and are called out) about fourteen times a season.</p> <p>Here are the twelve games since 1955 that ended with a runner being hit by a batted ball.
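</p> <p>As an aside, the <tt class="docutils literal">regexp_replace</tt> trick in the query can be mimicked outside the database. Here is a small Python sketch (a hypothetical helper, not part of the original analysis) that pulls the out runner’s starting base from a Retrosheet event string. Note that <tt class="docutils literal">re.search</tt> finds the first such marker, while the greedy SQL pattern captures the last one; for plays with a single runner put out they agree.</p>

```python
import re

# Hypothetical helper mirroring the SQL pattern '.*([1-3])X[1-3H].*':
# in Retrosheet event text, "1X2" means the runner from first was put
# out advancing to second, "3XH" a runner from third out at home, etc.
def runner_out_base(event_tx):
    match = re.search(r'([1-3])X[1-3H]', event_tx)
    return match.group(1) if match else None

print(runner_out_base('S4/BR.1X2(4)#'))  # prints 1 (runner from first)
print(runner_out_base('S/BR.3XH(5)'))    # prints 3 (runner from third)
```

<p>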
Until yesterday, when it happened twice in one day:</p> <table border="1" class="tdlalign docutils"> <caption>Runners hit by a batted ball to end a game, since 1955</caption> <colgroup> <col width="17%" /> <col width="16%" /> <col width="14%" /> <col width="26%" /> <col width="26%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">Date</th> <th class="head">Teams</th> <th class="head">Batting</th> <th class="head">Event</th> <th class="head">Runner</th> </tr> </thead> <tbody valign="top"> <tr><td>1956-09-30</td> <td>NY1 &#64; PHI</td> <td>PHI</td> <td>S4/BR.1X2(4)#</td> <td>Granny Hamner</td> </tr> <tr><td>1961-09-16</td> <td>PHI &#64; CIN</td> <td>PHI</td> <td>S/BR/G4.1X2(4)</td> <td>Clarence Coleman</td> </tr> <tr><td>1971-08-07</td> <td>SDN &#64; HOU</td> <td>SDN</td> <td>S/BR.1X2(3)</td> <td>Ed Spiezio</td> </tr> <tr><td>1979-04-07</td> <td>CAL &#64; SEA</td> <td>SEA</td> <td>S/BR.1X2(4)</td> <td>Larry Milbourne</td> </tr> <tr><td>1979-08-15</td> <td>TOR &#64; OAK</td> <td>TOR</td> <td>S/BR.3-3;2X3(4)#</td> <td>Alfredo Griffin</td> </tr> <tr><td>1980-09-22</td> <td>CLE &#64; NYA</td> <td>CLE</td> <td>S/BR.3XH(5)</td> <td>Toby Harrah</td> </tr> <tr><td>1984-04-06</td> <td>MIL &#64; SEA</td> <td>MIL</td> <td>S/BR.1X2(4)</td> <td>Robin Yount</td> </tr> <tr><td>1987-06-25</td> <td>ATL &#64; LAN</td> <td>ATL</td> <td>S/L3/BR.1X2(3)</td> <td>Glenn Hubbard</td> </tr> <tr><td>1994-06-13</td> <td>HOU &#64; SFN</td> <td>HOU</td> <td>S/BR.1X2(4)</td> <td>James Mouton</td> </tr> <tr><td>2001-08-04</td> <td>NYN &#64; ARI</td> <td>ARI</td> <td>S/BR.2X3(6)</td> <td>David Dellucci</td> </tr> <tr><td>2003-04-09</td> <td>KCA &#64; DET</td> <td>DET</td> <td>S/BR.1X2(4)</td> <td>Bobby Higginson</td> </tr> <tr><td>2010-06-27</td> <td>PIT &#64; OAK</td> <td>PIT</td> <td>S/BR/G.1X2(3)</td> <td>Pedro Alvarez</td> </tr> </tbody> </table> <p>And all runners hit last season:</p> <table border="1" class="tdlalign docutils"> <caption>Runners hit by a batted ball in the 2014 MLB 
season</caption> <colgroup> <col width="16%" /> <col width="15%" /> <col width="14%" /> <col width="31%" /> <col width="24%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">Date</th> <th class="head">Teams</th> <th class="head">Batting</th> <th class="head">Event</th> <th class="head">Runner</th> </tr> </thead> <tbody valign="top"> <tr><td>2014-05-07</td> <td>SEA &#64; OAK</td> <td>OAK</td> <td>S/BR/G.1X2(3)</td> <td>Derek Norris</td> </tr> <tr><td>2014-05-11</td> <td>MIN &#64; DET</td> <td>DET</td> <td>S/BR/G.3-3;2X3(6);1-2</td> <td>Austin Jackson</td> </tr> <tr><td>2014-05-23</td> <td>CLE &#64; BAL</td> <td>BAL</td> <td>S/BR/G.2X3(6);1-2</td> <td>Chris Davis</td> </tr> <tr><td>2014-05-27</td> <td>NYA &#64; SLN</td> <td>SLN</td> <td>S/BR/G.1X2(3)</td> <td>Matt Holliday</td> </tr> <tr><td>2014-06-14</td> <td>CHN &#64; PHI</td> <td>CHN</td> <td>S/BR/G.1X2(4)</td> <td>Justin Ruggiano</td> </tr> <tr><td>2014-07-13</td> <td>OAK &#64; SEA</td> <td>SEA</td> <td>S/BR/G.1X2(4)</td> <td>Kyle Seager</td> </tr> <tr><td>2014-07-18</td> <td>PHI &#64; ATL</td> <td>PHI</td> <td>S/BR/G.1X2(4)</td> <td>Grady Sizemore</td> </tr> <tr><td>2014-07-25</td> <td>BAL &#64; SEA</td> <td>SEA</td> <td>S/BR/G.1X2(4)</td> <td>Brad Miller</td> </tr> <tr><td>2014-08-05</td> <td>NYN &#64; WAS</td> <td>WAS</td> <td>S/BR/G.2X3(6);3-3</td> <td>Asdrubal Cabrera</td> </tr> <tr><td>2014-09-04</td> <td>SLN &#64; MIL</td> <td>SLN</td> <td>S/BR/G.2X3(6);1-2</td> <td>Matt Carpenter</td> </tr> <tr><td>2014-09-09</td> <td>SDN &#64; LAN</td> <td>LAN</td> <td>S/BR/G.2X3(6)</td> <td>Matt Kemp</td> </tr> <tr><td>2014-09-18</td> <td>BOS &#64; PIT</td> <td>BOS</td> <td>S/BR/G.3XH(5);1-2;B-1</td> <td>Jemile Weeks</td> </tr> </tbody> </table> </div> Sun, 03 May 2015 09:22:32 -0800 http://swingleydev.com/blog/p/1986/ baseball Giants SQL baserunning Retrosheet Baltimore riots and baseball http://swingleydev.com/blog/p/1985/ <div class="document"> <div class="figure align-right"> <img alt="Memorial Stadium, 
1971" src="http://media.swingleydev.com/img/blog/2015/04/memorial_stadium.jpg" style="width: 213px; height: 240px;" /> <p class="caption">Memorial Stadium, 1971 <br /> photo by <a class="reference external" href="https://www.flickr.com/photos/completecardsets/">Tom Vivian</a></p> </div> <p>Yesterday, the Baltimore Orioles and Chicago White Sox played a game at Camden Yards in downtown Baltimore. The game was “closed to fans” due to the riots that broke out in the city after the funeral for a man who died in police custody. It’s the first time a Major League Baseball game has been played without <em>any</em> fans in the stands, but unfortunately it’s not the first time there have been riots in Baltimore.</p> <p>After Martin Luther King, Jr. was murdered in April 1968, Baltimore rioted for six days; local police and more than eleven thousand National Guard, Army, and Marine troops were brought in to restore order. According to <a class="reference external" href="http://en.wikipedia.org/wiki/1968_Baltimore_riots">Wikipedia</a>, six people died, more than 700 were injured, 1,000 businesses were damaged, and close to six thousand people were arrested.</p> <p>At that time, the Orioles played in Memorial Stadium, about 4 miles north-northwest of where they play now. I don’t know much about that area of Baltimore, but I was curious to know whether the Orioles played any baseball games during those riots.</p> <p><a class="reference external" href="http://www.retrosheet.org/">Retrosheet</a> has one game, on April 10, 1968, with a reported attendance of 22,050. The Orioles defeated the Oakland Athletics by a score of 3–1. Thomas Phoebus got the win over future Hall of Famer Catfish Hunter.
Other popular players in the game included Reggie Jackson, Sal Bando, Rick Monday, and Bert Campaneris for the A’s and Brooks Robinson, Frank Robinson, Davey Johnson, and Boog Powell for the Orioles.</p> <p>The box score and play-by-play can be viewed <a class="reference external" href="http://www.retrosheet.org/boxesetc/1968/B04100BAL1968.htm">here</a>.</p> </div> Thu, 30 Apr 2015 18:14:44 -0800 http://swingleydev.com/blog/p/1985/ baseball Baltimore riots Retrosheet Orioles Athletics Average daily temperature calculation, CRN data http://swingleydev.com/blog/p/1984/ <div class="document"> <div class="section" id="introduction"> <h1>Introduction</h1> <p>One of the best sources of weather data in the United States comes from the National Weather Service’s Cooperative Observer Network (COOP), which is available from <a class="reference external" href="http://www.ncdc.noaa.gov/data-access/land-based-station-data/land-based-datasets/cooperative-observer-network-coop">NCDC</a>. It’s daily data, collected by volunteers at more than 10,000 locations. We participate in this program at our house (station id DW1454 / GHCND:USC00503368), collecting daily minimum and maximum temperature, liquid precipitation, snowfall and snow depth. We also collect river heights for Goldstream Creek as part of the Alaska Pacific River Forecast Center (station GSCA2). Traditionally, daily temperature measurements were collected using a minimum/maximum thermometer, which meant that the only way to calculate average daily temperature was by averaging the minimum and maximum temperature.
Even though COOP observers typically have an electronic instrument that could calculate average daily temperature from continuous observations, the daily minimum and maximum data is still what is reported.</p> <p>In an <a class="reference external" href="http://swingleydev.com/blog/p/1982/">earlier post</a> we looked at methods used to calculate average daily temperature, and whether there are any biases in the way the National Weather Service calculates this using the average of the minimum and maximum daily temperature. We looked at five years of data collected at my house every five minutes, comparing the average of these temperatures against the average of the daily minimum and maximum. Here, we will be repeating this analysis using data from the <a class="reference external" href="http://www.ncdc.noaa.gov/crn/">Climate Reference Network</a> stations in the United States.</p> <p>The US Climate Reference Network is a collection of 132 weather stations that are properly sited and maintained, and that include multiple redundant measures of temperature and precipitation. Data is available from <a class="reference external" href="http://www1.ncdc.noaa.gov/pub/data/uscrn/products/">http://www1.ncdc.noaa.gov/pub/data/uscrn/products/</a> and includes monthly, daily, and hourly statistics, and sub-hourly (5-minute) observations.
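</p> <p>Before digging into the data, the difference between the two estimators is easy to demonstrate. The sketch below uses a synthetic, asymmetric diurnal cycle (made-up numbers, not CRN data) to show how the average of the daily minimum and maximum can differ from the mean of all 288 five-minute readings:</p>

```python
import math

# Synthetic day of 288 five-minute "temperatures" (illustrative only):
# a sine for the basic diurnal cycle plus a second harmonic to make the
# curve asymmetric, which is what separates the two estimators.
readings = [10 + 8 * math.sin(2 * math.pi * i / 288)
            + 3 * math.cos(4 * math.pi * i / 288) for i in range(288)]

t_mean = sum(readings) / len(readings)                # true daily average
t_minmax_avg = (min(readings) + max(readings)) / 2.0  # min/max estimate
anomaly = t_minmax_avg - t_mean

print(round(t_mean, 2), round(t_minmax_avg, 2), round(anomaly, 2))
```

<p>For this particular curve the min/max method is biased low by more than two degrees; real diurnal cycles are far less extreme, but the same mechanism produces the anomalies analyzed here.</p> <p>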
We’ll be focusing on the sub-hourly data, since it closely matches the data collected at my weather station.</p> <p>A similar analysis using daily and hourly CRN data appears <a class="reference external" href="http://wattsupwiththat.com/2012/08/30/errors-in-estimating-temperatures-using-the-average-of-tmax-and-tmin-analysis-of-the-uscrn-temperature-stations/">here</a>.</p> </div> <div class="section" id="getting-the-raw-data"> <h1>Getting the raw data</h1> <p>I downloaded all the data using the following Unix commands:</p> <div class="highlight"><pre><span class="nv">$ </span>wget http://www1.ncdc.noaa.gov/pub/data/uscrn/products/stations.tsv <span class="nv">$ </span>wget -np -m http://www1.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01/ <span class="nv">$ </span>find www1.ncdc.noaa.gov/ -type f -name <span class="s1">&#39;CRN*.txt&#39;</span> -exec gzip <span class="o">{}</span> <span class="se">\;</span> </pre></div> <p>The code to insert all of this data into a database can be found <a class="reference external" href="http://media.swingleydev.com/img/blog/2015/04/read_subhourly.r">here</a>. 
Once inserted, I have a table named <tt class="docutils literal">crn_stations</tt> that has the station data, and one named <tt class="docutils literal">crn_subhourly</tt> with the five minute observation data.</p> </div> <div class="section" id="methods"> <h1>Methods</h1> <p>Once again, we’ll use R to read the data, process it, and produce plots.</p> <div class="section" id="libraries"> <h2>Libraries</h2> <p>Load the libraries we need:</p> <div class="highlight"><pre>library<span class="p">(</span>dplyr<span class="p">)</span> library<span class="p">(</span>lubridate<span class="p">)</span> library<span class="p">(</span>ggplot2<span class="p">)</span> library<span class="p">(</span>scales<span class="p">)</span> library<span class="p">(</span>grid<span class="p">)</span> </pre></div> <p>Connect to the database and load the data tables.</p> <div class="highlight"><pre>noaa_db <span class="o">&lt;-</span> src_postgres<span class="p">(</span>dbname<span class="o">=</span><span class="s">&quot;noaa&quot;</span><span class="p">,</span> host<span class="o">=</span><span class="s">&quot;mason&quot;</span><span class="p">)</span> crn_stations <span class="o">&lt;-</span> tbl<span class="p">(</span>noaa_db<span class="p">,</span> <span class="s">&quot;crn_stations&quot;</span><span class="p">)</span> <span class="o">%&gt;%</span> collect<span class="p">()</span> crn_subhourly <span class="o">&lt;-</span> tbl<span class="p">(</span>noaa_db<span class="p">,</span> <span class="s">&quot;crn_subhourly&quot;</span><span class="p">)</span> </pre></div> <p>Remove observations without temperature data, group by station and date, calculate average daily temperature using the two methods, remove any daily data without a full set of data, and collect the results into an R data frame. 
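</p> <p>The same aggregation can be sketched in plain Python (with made-up sample records, not the real tables), which may make the steps easier to follow for readers who don’t use dplyr:</p>

```python
from collections import defaultdict

# Made-up sample rows (station id, date, 5-minute temperature); the real
# data comes from the crn_subhourly table.
rows = [('WBAN01', '2014-06-01', 15 + 0.01 * i) for i in range(288)]
rows += [('WBAN01', '2014-06-02', 14.0)] * 100   # an incomplete day

# Group readings by station and date.
groups = defaultdict(list)
for wbanno, date, temp in rows:
    groups[(wbanno, date)].append(temp)

# Keep only complete days (288 five-minute readings) and compute both
# daily averages plus the anomaly between them.
crn_daily = []
for (wbanno, date), temps in sorted(groups.items()):
    if len(temps) == 288:
        t_mean = sum(temps) / 288.0
        t_minmax_avg = (min(temps) + max(temps)) / 2.0
        crn_daily.append((wbanno, date, t_mean, t_minmax_avg,
                          t_minmax_avg - t_mean))

print(crn_daily)   # only the complete day survives the filter
```

<p>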
This looks very similar to the code used to analyze the data from my weather station.</p> <div class="highlight"><pre>crn_daily <span class="o">&lt;-</span> crn_subhourly <span class="o">%&gt;%</span> filter<span class="p">(</span><span class="o">!</span>is.na<span class="p">(</span>air_temperature<span class="p">))</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>date<span class="o">=</span>date<span class="p">(</span>timestamp<span class="p">))</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>wbanno<span class="p">,</span> date<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>t_mean<span class="o">=</span>mean<span class="p">(</span>air_temperature<span class="p">),</span> t_minmax_avg<span class="o">=</span><span class="p">(</span>min<span class="p">(</span>air_temperature<span class="p">)</span><span class="o">+</span> max<span class="p">(</span>air_temperature<span class="p">))</span><span class="o">/</span><span class="m">2.0</span><span class="p">,</span> n<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>n<span class="o">==</span><span class="m">24</span><span class="o">*</span><span class="m">12</span><span class="p">)</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>anomaly<span class="o">=</span>t_minmax_avg<span class="o">-</span>t_mean<span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>wbanno<span class="p">,</span> date<span class="p">,</span> t_mean<span class="p">,</span> t_minmax_avg<span class="p">,</span> anomaly<span class="p">)</span> <span class="o">%&gt;%</span> collect<span class="p">()</span> </pre></div> <p>The two types of daily average temperatures are calculated in this step:</p> <div class="highlight"><pre>summarize<span class="p">(</span>t_mean<span class="o">=</span>mean<span class="p">(</span>air_temperature<span class="p">),</span> 
t_minmax_avg<span class="o">=</span><span class="p">(</span>min<span class="p">(</span>air_temperature<span class="p">)</span><span class="o">+</span> max<span class="p">(</span>air_temperature<span class="p">))</span><span class="o">/</span><span class="m">2.0</span><span class="p">)</span> </pre></div> <p>Where <tt class="docutils literal">t_mean</tt> is the value calculated from all 288 five minute observations, and <tt class="docutils literal">t_minmax_avg</tt> is the value from the daily minimum and maximum.</p> <p>Now we join the observation data with the station data. This attaches station information such as the name and latitude of the station to each record.</p> <div class="highlight"><pre>crn_daily_stations <span class="o">&lt;-</span> crn_daily <span class="o">%&gt;%</span> inner_join<span class="p">(</span>crn_stations<span class="p">,</span> by<span class="o">=</span><span class="s">&quot;wbanno&quot;</span><span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>wbanno<span class="p">,</span> date<span class="p">,</span> state<span class="p">,</span> location<span class="p">,</span> latitude<span class="p">,</span> longitude<span class="p">,</span> t_mean<span class="p">,</span> t_minmax_avg<span class="p">,</span> anomaly<span class="p">)</span> </pre></div> <p>Finally, save the data so we don’t have to do these steps again.</p> <div class="highlight"><pre>save<span class="p">(</span>crn_daily_stations<span class="p">,</span> file<span class="o">=</span><span class="s">&quot;crn_daily_averages.rdata&quot;</span><span class="p">)</span> </pre></div> </div> </div> <div class="section" id="results"> <h1>Results</h1> <p>Here are the overall results of the analysis.</p> <div class="highlight"><pre>summary<span class="p">(</span>crn_daily_stations<span class="o">$</span>anomaly<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Min. 1st Qu. Median Mean 3rd Qu. Max. 
## -11.9000 -0.1028 0.4441 0.4641 1.0190 10.7900 </pre> <p>The median anomaly across all stations and all dates is 0.44 degrees Celsius (0.79 degrees Fahrenheit). That’s a pretty significant error. Half the data is between −0.1 and 1.0°C (−0.18 and +1.8°F) and the full range is −11.9 to +10.8°C (−21.4 to +19.4°F).</p> </div> <div class="section" id="plots"> <h1>Plots</h1> <p>Let’s look at some plots.</p> <div class="section" id="raw-data-by-latitude"> <h2>Raw data by latitude</h2> <p>To start, we’ll look at all the anomalies by station latitude. The plot only shows one percent of the actual anomalies because plotting 512,460 points would take a long time and the general pattern is clear from the reduced data set.</p> <div class="highlight"><pre>set.seed<span class="p">(</span><span class="m">43</span><span class="p">)</span> p <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>crn_daily_stations <span class="o">%&gt;%</span> sample_frac<span class="p">(</span><span class="m">0.01</span><span class="p">),</span> aes<span class="p">(</span>x<span class="o">=</span>latitude<span class="p">,</span> y<span class="o">=</span>anomaly<span class="p">))</span> <span class="o">+</span> geom_point<span class="p">(</span>position<span class="o">=</span><span class="s">&quot;jitter&quot;</span><span class="p">,</span> alpha<span class="o">=</span><span class="s">&quot;0.2&quot;</span><span class="p">)</span> <span class="o">+</span> geom_smooth<span class="p">(</span>method<span class="o">=</span><span class="s">&quot;lm&quot;</span><span class="p">,</span> se<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> scale_x_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Station latitude&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span
class="o">=</span><span class="m">10</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Temperature anomaly (degrees C)&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">10</span><span class="p">))</span> print<span class="p">(</span>p<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/crn_minmax_anomaly_scatterplot.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/crn_minmax_anomaly_scatterplot.svg" /> </div> <p>The clouds of points show the differences between the min/max daily average and the actual daily average temperature, where numbers above zero represent cases where the min/max calculation overestimates daily average temperature. The blue line is the fit of a linear model relating latitude with temperature anomaly. We can see that the fitted anomaly is positive across all latitudes, averaging around half a degree at lower latitudes and dropping somewhat as we proceed northward. You also get a sense from the actual data of how variable the anomaly is, and at what latitudes most of the stations are found.</p> <p>Here are the regression results:</p> <div class="highlight"><pre>summary<span class="p">(</span>lm<span class="p">(</span>anomaly <span class="o">~</span> latitude<span class="p">,</span> data<span class="o">=</span>crn_daily_stations<span class="p">))</span> </pre></div> <pre class="literal-block"> ## ## Call: ## lm(formula = anomaly ~ latitude, data = crn_daily_stations) ## ## Residuals: ## Min 1Q Median 3Q Max ## -12.3738 -0.5625 -0.0199 0.5499 10.3485 ## ## Coefficients: ## Estimate Std. Error t value Pr(&gt;|t|) ## (Intercept) 0.7403021 0.0070381 105.19 &lt;2e-16 *** ## latitude -0.0071276 0.0001783 -39.98 &lt;2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1 ## ## Residual standard error: 0.9632 on 512458 degrees of freedom ## Multiple R-squared: 0.00311, Adjusted R-squared: 0.003108 ## F-statistic: 1599 on 1 and 512458 DF, p-value: &lt; 2.2e-16 </pre> <p>The overall model and coefficients are highly significant, and show a slight decrease in the positive anomaly as we move farther north. Perhaps this is part of the reason why the analysis of my station (at a latitude of 64.89) showed an average anomaly close to zero (−0.07°C / −0.13°F).</p> </div> <div class="section" id="anomalies-by-month-and-latitude"> <h2>Anomalies by month and latitude</h2> <p>One of the results of our earlier analysis was a seasonal pattern in the anomalies at our station. Since we also know there is a latitudinal pattern in the data, let’s combine the two, plotting anomaly by month, and faceting by latitude.</p> <p>Station latitudes are binned into groups for plotting, and the plots themselves show the range that covers half of all anomalies for that latitude category × month.
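</p> <p>The latitude binning amounts to a simple piecewise function. A plain-Python equivalent of the R <tt class="docutils literal">ifelse()</tt> chain used below (illustrative, not the code actually run) looks like this:</p>

```python
# Assign a station latitude to one of the five bands used for faceting:
# everything south of 30 and north of 60 gets its own open-ended band,
# and the rest fall into 10-degree bins.
def lat_bin(latitude):
    if latitude < 30:
        return '<30'
    if latitude > 60:
        return '>60'
    lower = int(latitude // 10) * 10
    return '{}-{}'.format(lower, lower + 10)

print([lat_bin(lat) for lat in (25.1, 34.9, 47.0, 58.3, 64.89)])
# prints ['<30', '30-40', '40-50', '50-60', '>60']
```

<p>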
Including the full range of anomalies in each group tends to obscure the overall pattern, and the plot of the raw data didn’t show an obvious skew to the rarer anomalies.</p> <p>Here’s how we set up the data frames for the plot.</p> <div class="highlight"><pre>crn_daily_by_month <span class="o">&lt;-</span> crn_daily_stations <span class="o">%&gt;%</span> mutate<span class="p">(</span>month<span class="o">=</span>month<span class="p">(</span>date<span class="p">),</span> lat_bin<span class="o">=</span>factor<span class="p">(</span>ifelse<span class="p">(</span>latitude<span class="o">&lt;</span><span class="m">30</span><span class="p">,</span> <span class="s">&#39;&lt;30&#39;</span><span class="p">,</span> ifelse<span class="p">(</span>latitude<span class="o">&gt;</span><span class="m">60</span><span class="p">,</span> <span class="s">&#39;&gt;60&#39;</span><span class="p">,</span> paste<span class="p">(</span>floor<span class="p">(</span>latitude<span class="o">/</span><span class="m">10</span><span class="p">)</span><span class="o">*</span><span class="m">10</span><span class="p">,</span> <span class="p">(</span>floor<span class="p">(</span>latitude<span class="o">/</span><span class="m">10</span><span class="p">)</span><span class="m">+1</span><span class="p">)</span><span class="o">*</span><span class="m">10</span><span class="p">,</span> sep<span class="o">=</span><span class="s">&#39;-&#39;</span><span class="p">))),</span> levels<span class="o">=</span>c<span class="p">(</span><span class="s">&#39;&lt;30&#39;</span><span class="p">,</span> <span class="s">&#39;30-40&#39;</span><span class="p">,</span> <span class="s">&#39;40-50&#39;</span><span class="p">,</span> <span class="s">&#39;50-60&#39;</span><span class="p">,</span> <span class="s">&#39;&gt;60&#39;</span><span class="p">)))</span> summary_stats <span class="o">&lt;-</span> <span class="kr">function</span><span class="p">(</span>l<span class="p">)</span> <span class="p">{</span> s <span 
class="o">&lt;-</span> summary<span class="p">(</span>l<span class="p">)</span> data.frame<span class="p">(</span>min<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;Min.&#39;</span><span class="p">],</span> first<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;1st Qu.&#39;</span><span class="p">],</span> median<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;Median&#39;</span><span class="p">],</span> mean<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;Mean&#39;</span><span class="p">],</span> third<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;3rd Qu.&#39;</span><span class="p">],</span> max<span class="o">=</span>s<span class="p">[</span><span class="s">&#39;Max.&#39;</span><span class="p">])</span> <span class="p">}</span> crn_by_month_lat_bin <span class="o">&lt;-</span> crn_daily_by_month <span class="o">%&gt;%</span> group_by<span class="p">(</span>month<span class="p">,</span> lat_bin<span class="p">)</span> <span class="o">%&gt;%</span> do<span class="p">(</span>summary_stats<span class="p">(</span>.<span class="o">$</span>anomaly<span class="p">))</span> <span class="o">%&gt;%</span> ungroup<span class="p">()</span> station_years <span class="o">&lt;-</span> crn_daily_by_month <span class="o">%&gt;%</span> mutate<span class="p">(</span>year<span class="o">=</span>year<span class="p">(</span>date<span class="p">))</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>wbanno<span class="p">,</span> lat_bin<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">()</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>lat_bin<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>station_years<span class="o">=</span>n<span class="p">())</span> </pre></div> <p>And the plot itself. 
At the end, we’re using a function called <tt class="docutils literal">facet_adjust</tt>, which adds x-axis tick labels to the facet on the right that wouldn't ordinarily have them. The code comes from this <a class="reference external" href="http://stackoverflow.com/questions/13297155/add-floating-axis-labels-in-facet-wrap-plot/13316126">stack overflow post</a>.</p> <div class="highlight"><pre>p <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>crn_by_month_lat_bin<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>month<span class="p">,</span> ymin<span class="o">=</span>first<span class="p">,</span> ymax<span class="o">=</span>third<span class="p">,</span> y<span class="o">=</span>mean<span class="p">))</span> <span class="o">+</span> geom_hline<span class="p">(</span>yintercept<span class="o">=</span><span class="m">0</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.2</span><span class="p">)</span> <span class="o">+</span> geom_hline<span class="p">(</span>data<span class="o">=</span>crn_by_month_lat_bin <span class="o">%&gt;%</span> group_by<span class="p">(</span>lat_bin<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>mean<span class="o">=</span>mean<span class="p">(</span>mean<span class="p">)),</span> aes<span class="p">(</span>yintercept<span class="o">=</span>mean<span class="p">),</span> colour<span class="o">=</span><span class="s">&quot;darkorange&quot;</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.5</span><span class="p">)</span> <span class="o">+</span> geom_pointrange<span class="p">()</span> <span class="o">+</span> facet_wrap<span class="p">(</span><span class="o">~</span> lat_bin<span class="p">,</span> ncol<span class="o">=</span><span class="m">3</span><span class="p">)</span> <span class="o">+</span> geom_text<span class="p">(</span>data<span class="o">=</span>station_years<span 
class="p">,</span> size<span class="o">=</span><span class="m">4</span><span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span><span class="m">2.25</span><span class="p">,</span> y<span class="o">=</span><span class="m">-0.5</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0</span><span class="p">,</span> label<span class="o">=</span>paste<span class="p">(</span><span class="s">&#39;n =&#39;</span><span class="p">,</span> station_years<span class="p">)))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Range including 50% of temperature anomalies&quot;</span><span class="p">)</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>breaks<span class="o">=</span><span class="m">1</span><span class="o">:</span><span class="m">12</span><span class="p">,</span> labels<span class="o">=</span>c<span class="p">(</span><span class="s">&#39;Jan&#39;</span><span class="p">,</span> <span class="s">&#39;Feb&#39;</span><span class="p">,</span> <span class="s">&#39;Mar&#39;</span><span class="p">,</span> <span class="s">&#39;Apr&#39;</span><span class="p">,</span> <span class="s">&#39;May&#39;</span><span class="p">,</span> <span class="s">&#39;Jun&#39;</span><span class="p">,</span> <span class="s">&#39;Jul&#39;</span><span class="p">,</span> <span class="s">&#39;Aug&#39;</span><span class="p">,</span> <span class="s">&#39;Sep&#39;</span><span class="p">,</span> <span class="s">&#39;Oct&#39;</span><span class="p">,</span> <span class="s">&#39;Nov&#39;</span><span class="p">,</span> <span class="s">&#39;Dec&#39;</span><span class="p">))</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> theme<span class="p">(</span>axis.text.x<span class="o">=</span>element_text<span class="p">(</span>angle<span class="o">=</span><span 
class="m">45</span><span class="p">,</span> hjust<span class="o">=</span><span class="m">1</span><span class="p">,</span> vjust<span class="o">=</span><span class="m">1.25</span><span class="p">),</span> axis.title.x<span class="o">=</span>element_blank<span class="p">())</span> facet_adjust<span class="p">(</span>p<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/crn_minmax_anomalies_by_month_lat.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/crn_minmax_anomalies_by_month_lat.svg" /> </div> <p>Each plot shows the range of anomalies from the first to the third quartile (50% of the observed anomalies) by month, with the dot near the middle of the line at the mean anomaly. The orange horizontal line shows the overall mean anomaly for that latitude category, and the count at the bottom of the plot indicates the number of “station years” for that latitude category.</p> <p>It’s clear that there are seasonal patterns in the differences between the mean daily temperature and the min/max estimate. But each plot looks so different from the next that it’s not clear if the patterns we are seeing in each latitude category are real or artificial. It is also problematic that three of our latitude categories have very little data compared with the other two. It may be worth performing this analysis in a few years when the lower and higher latitude stations have a bit more data.</p> </div> </div> <div class="section" id="conclusion"> <h1>Conclusion</h1> <p>This analysis shows that there is a clear bias in using the average of minimum and maximum daily temperature to estimate average daily temperature. 
Across all of the CRN stations, the min/max estimator overestimates daily average temperature by almost half a degree Celsius (0.8°F).</p> <p>We also found that this error is larger at lower latitudes, and that there are seasonal patterns to the anomalies, although the seasonal patterns don’t seem to have clear transitions moving from lower to higher latitudes.</p> <p>The current length of the CRN record is quite short, especially for the sub-hourly data used here, so the patterns may not be representative of the true situation.</p> </div> </div> Sat, 25 Apr 2015 10:21:41 -0800 http://swingleydev.com/blog/p/1984/ climate temperature CRN COOP weather R ggplot dplyr, tidyr and the GHCND climate database http://swingleydev.com/blog/p/1983/ <div class="document"> <div class="section" id="abstract"> <h1>Abstract</h1> <p>The following is a document-style version of a presentation I gave at work a couple of weeks ago. It's a little less useful for a general audience because you don't have access to the same database I have, but I figured it might be useful for someone who is looking at using <tt class="docutils literal">dplyr</tt> or manipulating the GHCND data from NCDC.</p> </div> <div class="section" id="introduction"> <h1>Introduction</h1> <p>Today we’re going to briefly take a look at the GHCND climate database and a couple of new R packages (<tt class="docutils literal">dplyr</tt> and <tt class="docutils literal">tidyr</tt>) that make data import and manipulation a lot easier than using the standard library.</p> <p>For further reading, consult the vignettes for <tt class="docutils literal">dplyr</tt> and <tt class="docutils literal">tidyr</tt>, and download the cheat sheet:</p> <ul class="simple"> <li><a class="reference external" href="http://cran.r-project.org/web/packages/dplyr/vignettes/introduction.html">http://cran.r-project.org/web/packages/dplyr/vignettes/introduction.html</a></li> <li><a class="reference external"
href="http://cran.r-project.org/web/packages/dplyr/vignettes/databases.html">http://cran.r-project.org/web/packages/dplyr/vignettes/databases.html</a></li> <li><a class="reference external" href="http://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html">http://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html</a></li> <li><a class="reference external" href="http://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf">http://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf</a></li> </ul> </div> <div class="section" id="ghcnd-database"> <h1>GHCND database</h1> <p>The <a class="reference external" href="http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt">GHCND database</a> contains daily observation data from locations around the world. The README linked above describes the data set and the way the data is formatted. I have written scripts that process the station data and the yearly download files and insert them into a PostgreSQL database (<tt class="docutils literal">noaa</tt>).</p> <p>The script for inserting a yearly file (downloaded from <a class="reference external" href="http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/">http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/</a>) is here: <a class="reference external" href="http://media.swingleydev.com/img/blog/2015/04/ghcn-daily-by_year_process.py">ghcn-daily-by_year_process.py</a></p> </div> <div class="section" id="tidy-data"> <h1>“Tidy” data</h1> <p>Without going into too much detail on the subject (read <a class="reference external" href="http://vita.had.co.nz/papers/tidy-data.pdf">Hadley Wickham’s paper</a> for more information), the basic idea is that it is much easier to analyze data when it is in a particular, “tidy”, form.
A tidy dataset has a single table for each type of real-world object or type of data, and each table has one column per variable measured and one row per observation.</p> <p>For example, here’s a tidy table representing daily weather observations with station × date as rows and the various variables as columns.</p> <table border="1" class="docutils"> <colgroup> <col width="16%" /> <col width="20%" /> <col width="16%" /> <col width="16%" /> <col width="11%" /> <col width="11%" /> <col width="10%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">Station</th> <th class="head">Date</th> <th class="head">tmin_c</th> <th class="head">tmax_c</th> <th class="head">prcp</th> <th class="head">snow</th> <th class="head">...</th> </tr> </thead> <tbody valign="top"> <tr><td>PAFA</td> <td>2014-01-01</td> <td>12</td> <td>24</td> <td>0.00</td> <td>0.0</td> <td>...</td> </tr> <tr><td>PAFA</td> <td>2014-01-02</td> <td>8</td> <td>11</td> <td>0.02</td> <td>0.2</td> <td>...</td> </tr> <tr><td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> <p>Getting raw data into this format is what we’ll look at today.</p> </div> <div class="section" id="r-libraries-data-import"> <h1>R libraries &amp; data import</h1> <p>First, let’s load the libraries we’ll need:</p> <div class="highlight"><pre>library<span class="p">(</span>dplyr<span class="p">)</span> <span class="c1"># data import</span> library<span class="p">(</span>tidyr<span class="p">)</span> <span class="c1"># column / row manipulation</span> library<span class="p">(</span>knitr<span class="p">)</span> <span class="c1"># tabular export</span> library<span class="p">(</span>ggplot2<span class="p">)</span> <span class="c1"># plotting</span> library<span class="p">(</span>scales<span class="p">)</span> <span class="c1"># “pretty” scaling</span> library<span class="p">(</span>lubridate<span class="p">)</span> <span class="c1"># date / time manipulations</span>
</pre></div> <p><tt class="docutils literal">dplyr</tt> and <tt class="docutils literal">tidyr</tt> are the data import and manipulation libraries we will use, <tt class="docutils literal">knitr</tt> is used to produce tabular data in report-quality forms, <tt class="docutils literal">ggplot2</tt> and <tt class="docutils literal">scales</tt> are plotting libraries, and <tt class="docutils literal">lubridate</tt> is a library that makes date and time manipulation easier.</p> <p>Also note the warnings about how several R functions have been “masked” when we imported <tt class="docutils literal">dplyr</tt>. This just means we'll be getting the <tt class="docutils literal">dplyr</tt> versions instead of those we might be used to. In cases where we need both, you can prefix the function with its package: <tt class="docutils literal"><span class="pre">base::filter</span></tt> would use the normal <tt class="docutils literal">filter</tt> function instead of the one from <tt class="docutils literal">dplyr</tt>.</p> <p>Next, connect to the database and create links to the first two of the three tables we will need:</p> <div class="highlight"><pre>noaa_db <span class="o">&lt;-</span> src_postgres<span class="p">(</span>host<span class="o">=</span><span class="s">&quot;mason&quot;</span><span class="p">,</span> dbname<span class="o">=</span><span class="s">&quot;noaa&quot;</span><span class="p">)</span> ghcnd_obs <span class="o">&lt;-</span> tbl<span class="p">(</span>noaa_db<span class="p">,</span> <span class="s">&quot;ghcnd_obs&quot;</span><span class="p">)</span> ghcnd_vars <span class="o">&lt;-</span> tbl<span class="p">(</span>noaa_db<span class="p">,</span> <span class="s">&quot;ghcnd_variables&quot;</span><span class="p">)</span> </pre></div> <p>The first statement connects us to the database and the next two create table links to the observation table and the variables table.</p> <p>Here’s what those two tables look like:</p> <div class="highlight"><pre>glimpse<span class="p">(</span>ghcnd_obs<span
class="p">)</span> </pre></div> <pre class="literal-block"> ## Observations: 29404870 ## Variables: ## $ station_id (chr) &quot;USW00027502&quot;, &quot;USW00027502&quot;, &quot;USW00027502&quot;, &quot;USW0... ## $ dte (date) 2011-05-01, 2011-05-01, 2011-05-01, 2011-05-01, 2... ## $ variable (chr) &quot;AWND&quot;, &quot;FMTM&quot;, &quot;PRCP&quot;, &quot;SNOW&quot;, &quot;SNWD&quot;, &quot;TMAX&quot;, &quot;T... ## $ raw_value (dbl) 32, 631, 0, 0, 229, -100, -156, 90, 90, 54, 67, 1,... ## $ meas_flag (chr) &quot;&quot;, &quot;&quot;, &quot;T&quot;, &quot;T&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, ... ## $ qual_flag (chr) &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;, &quot;&quot;... ## $ source_flag (chr) &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, &quot;X&quot;, ... ## $ time_of_obs (int) NA, NA, 0, NA, NA, 0, 0, NA, NA, NA, NA, NA, NA, N... </pre> <div class="highlight"><pre>glimpse<span class="p">(</span>ghcnd_vars<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Observations: 82 ## Variables: ## $ variable (chr) &quot;AWND&quot;, &quot;EVAP&quot;, &quot;MDEV&quot;, &quot;MDPR&quot;, &quot;MNPN&quot;, &quot;MXPN&quot;,... ## $ description (chr) &quot;Average daily wind speed (tenths of meters per... ## $ raw_multiplier (dbl) 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.... 
</pre> <p>Each row in the observation table contains the <tt class="docutils literal">station_id</tt>, date, a variable code, the raw value for that variable, and a series of flags indicating data quality, source, and special measurements such as the “trace” value used for precipitation under the minimum measurable value.</p> <p>Each row in the variables table contains a variable code, description and the multiplier used to convert the raw value from the observation table into an actual value.</p> <p>This is an example of completely “normalized” data, and it’s stored this way because not all weather stations record all possible variables. Rather than having a single row for each station × date with a whole bunch of empty columns for those variables not measured, each row contains a single station × date × variable observation.</p> <p>We are also missing information about the stations, so let’s load that data:</p> <div class="highlight"><pre>fai_stations <span class="o">&lt;-</span> tbl<span class="p">(</span>noaa_db<span class="p">,</span> <span class="s">&quot;ghcnd_stations&quot;</span><span class="p">)</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>station_name <span class="o">%in%</span> c<span class="p">(</span><span class="s">&quot;FAIRBANKS INTL AP&quot;</span><span class="p">,</span> <span class="s">&quot;UNIVERSITY EXP STN&quot;</span><span class="p">,</span> <span class="s">&quot;COLLEGE OBSY&quot;</span><span class="p">))</span> glimpse<span class="p">(</span>fai_stations<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Observations: 3 ## Variables: ## $ station_id (chr) &quot;USC00502107&quot;, &quot;USW00026411&quot;, &quot;USC00509641&quot; ## $ station_name (chr) &quot;COLLEGE OBSY&quot;, &quot;FAIRBANKS INTL AP&quot;, &quot;UNIVERSITY ...
## $ latitude (dbl) 64.86030, 64.80389, 64.85690 ## $ longitude (dbl) -147.8484, -147.8761, -147.8610 ## $ elevation (dbl) 181.9656, 131.6736, 144.7800 ## $ coverage (dbl) 0.96, 1.00, 0.98 ## $ start_date (date) 1948-05-16, 1904-09-04, 1904-09-01 ## $ end_date (date) 2015-04-03, 2015-04-02, 2015-03-13 ## $ variables (chr) &quot;TMIN TOBS WT11 SNWD SNOW WT04 WT14 TMAX WT05 DAP... ## $ the_geom (chr) &quot;0101000020E6100000A5BDC117267B62C0EC2FBB270F3750... </pre> <p>The first part is the same as before, loading the <tt class="docutils literal">ghcnd_stations</tt> table, but we are filtering that data down to just the Fairbanks area stations with long term records. To do this, we use the pipe operator <tt class="docutils literal">%&gt;%</tt> which takes the data from the left side and passes it to the function on the right side, the <tt class="docutils literal">filter</tt> function in this case.</p> <p><tt class="docutils literal">filter</tt> requires one or more conditional statements with variable names on the left side and the condition on the right. Multiple conditions can be separated by commas if you want all the conditions to be required (AND) or separated by a logic operator (<tt class="docutils literal">&amp;</tt> for AND, <tt class="docutils literal">|</tt> for OR). For example: <tt class="docutils literal">filter(latitude &gt; 70, longitude &lt; <span class="pre">-140)</span></tt>.</p> <p>When used on database tables, <tt class="docutils literal">filter</tt> can also use conditionals that are built into the database which are passed directly as part of a <tt class="docutils literal">WHERE</tt> clause. 
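</p> <p>Here's a minimal, self-contained sketch of those two forms of <tt class="docutils literal">filter</tt>, run against a small made-up data frame instead of the database (the station IDs and coordinates below are invented for illustration):</p> <div class="highlight"><pre>library(dplyr)

# A toy station table with invented values
stations &lt;- data.frame(station_id=c(&quot;A&quot;, &quot;B&quot;, &quot;C&quot;),
                       latitude=c(71.3, 64.8, 61.2),
                       longitude=c(-156.8, -147.9, -149.9))

# Conditions separated by commas are combined with AND
filter(stations, latitude &gt; 70, longitude &lt; -140)

# The same conditions with an explicit logical operator
filter(stations, latitude &gt; 70 &amp; longitude &lt; -140)
</pre></div> <p>Both calls return only station &quot;A&quot;. Run against a database table instead of a local data frame, the same conditions would be translated into the <tt class="docutils literal">WHERE</tt> clause of the generated SQL.</p> <p>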
In our code above, we’re using the <tt class="docutils literal">%in%</tt> operator here to select the stations from a list.</p> <p>Now we have the <tt class="docutils literal">station_id</tt>s we need to get just the data we want from the observation table and combine it with the other tables.</p> </div> <div class="section" id="combining-data"> <h1>Combining data</h1> <p>Here’s how we do it:</p> <div class="highlight"><pre>fai_raw <span class="o">&lt;-</span> ghcnd_obs <span class="o">%&gt;%</span> inner_join<span class="p">(</span>fai_stations<span class="p">,</span> by<span class="o">=</span><span class="s">&quot;station_id&quot;</span><span class="p">)</span> <span class="o">%&gt;%</span> inner_join<span class="p">(</span>ghcnd_vars<span class="p">,</span> by<span class="o">=</span><span class="s">&quot;variable&quot;</span><span class="p">)</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>value<span class="o">=</span>raw_value<span class="o">*</span>raw_multiplier<span class="p">)</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>qual_flag<span class="o">==</span><span class="s">&#39;&#39;</span><span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>station_name<span class="p">,</span> dte<span class="p">,</span> variable<span class="p">,</span> value<span class="p">)</span> <span class="o">%&gt;%</span> collect<span class="p">()</span> glimpse<span class="p">(</span>fai_raw<span class="p">)</span> </pre></div> <p>In order, here’s what we’re doing:</p> <ul class="simple"> <li>Assign the result to <tt class="docutils literal">fai_raw</tt></li> <li>Join the observation table with the filtered station data, using <tt class="docutils literal">station_id</tt> as the variable to combine against. Because this is an “inner” join, we only get results where <tt class="docutils literal">station_id</tt> matches in both the observation and the filtered station data. 
At this point we only have observation data from our long-term Fairbanks stations.</li> <li>Join the variable table with the Fairbanks area observation data, using <tt class="docutils literal">variable</tt> to link the tables.</li> <li>Add a new variable called <tt class="docutils literal">value</tt> which is calculated by multiplying <tt class="docutils literal">raw_value</tt> (coming from the observation table) by <tt class="docutils literal">raw_multiplier</tt> (coming from the variable table).</li> <li>Remove rows where the quality flag is not an empty space.</li> <li>Select only the station name, date, variable and actual value columns from the data. Before we did this, each row would contain every column from all three tables, and most of that information is not necessary.</li> <li>Finally, we “collect” the results. <tt class="docutils literal">dplyr</tt> doesn’t actually perform the full SQL until it absolutely has to. Instead it’s retrieving a small subset so that we can test our operations quickly. When we are happy with the results, we use <tt class="docutils literal">collect()</tt> to grab the full data.</li> </ul> </div> <div class="section" id="de-normalize-it"> <h1>De-normalize it</h1> <p>The data is still in a format that makes it difficult to analyze, with each row in the result containing a single station × date × variable observation. 
A tidy version of this data requires each variable be a column in the table, each row being a single date at each station.</p> <p>To “pivot” the data, we use the <tt class="docutils literal">spread</tt> function, and we'll also calculate a new variable and reduce the number of columns in the result.</p> <div class="highlight"><pre>fai_pivot <span class="o">&lt;-</span> fai_raw <span class="o">%&gt;%</span> spread<span class="p">(</span>variable<span class="p">,</span> value<span class="p">)</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>TAVG<span class="o">=</span><span class="p">(</span>TMIN<span class="o">+</span>TMAX<span class="p">)</span><span class="o">/</span><span class="m">2.0</span><span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>station_name<span class="p">,</span> dte<span class="p">,</span> TAVG<span class="p">,</span> TMIN<span class="p">,</span> TMAX<span class="p">,</span> TOBS<span class="p">,</span> PRCP<span class="p">,</span> SNOW<span class="p">,</span> SNWD<span class="p">,</span> WSF1<span class="p">,</span> WDF1<span class="p">,</span> WSF2<span class="p">,</span> WDF2<span class="p">,</span> WSF5<span class="p">,</span> WDF5<span class="p">,</span> WSFG<span class="p">,</span> WDFG<span class="p">,</span> TSUN<span class="p">)</span> head<span class="p">(</span>fai_pivot<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Source: local data frame [6 x 18] ## ## station_name dte TAVG TMIN TMAX TOBS PRCP SNOW SNWD WSF1 WDF1 ## 1 COLLEGE OBSY 1948-05-16 11.70 5.6 17.8 16.1 NA NA NA NA NA ## 2 COLLEGE OBSY 1948-05-17 15.55 12.2 18.9 17.8 NA NA NA NA NA ## 3 COLLEGE OBSY 1948-05-18 14.40 9.4 19.4 16.1 NA NA NA NA NA ## 4 COLLEGE OBSY 1948-05-19 14.15 9.4 18.9 12.2 NA NA NA NA NA ## 5 COLLEGE OBSY 1948-05-20 10.25 6.1 14.4 14.4 NA NA NA NA NA ## 6 COLLEGE OBSY 1948-05-21 9.75 1.7 17.8 17.8 NA NA NA NA NA ## Variables not shown: WSF2 (dbl), WDF2 (dbl), WSF5 (dbl), WDF5 (dbl), 
WSFG ## (dbl), WDFG (dbl), TSUN (dbl) </pre> <p><tt class="docutils literal">spread</tt> takes two parameters: the variable we want to spread across the columns, and the variable we want to use as the data value for each row × column intersection.</p> </div> <div class="section" id="examples"> <h1>Examples</h1> <p>Now that we've got the data in a format we can work with, let's look at a few examples.</p> <div class="section" id="find-the-coldest-temperatures-by-winter-year"> <h2>Find the coldest temperatures by winter year</h2> <p>First, let’s find the coldest winter temperatures from each station, by winter year. “Winter year” is just a way of grouping winters into a single value. Instead of the 2014–2015 winter, it’s the 2014 winter year. We get this by subtracting 92 days (roughly the number of days in January, February and March) from the date, then pulling off the year.</p> <p>Here’s the code.</p> <div class="highlight"><pre>fai_winter_year_minimum <span class="o">&lt;-</span> fai_pivot <span class="o">%&gt;%</span> mutate<span class="p">(</span>winter_year<span class="o">=</span>year<span class="p">(</span>dte <span class="o">-</span> days<span class="p">(</span><span class="m">92</span><span class="p">)))</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>winter_year <span class="o">&lt;</span> <span class="m">2014</span><span class="p">)</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>station_name<span class="p">,</span> winter_year<span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>station_name<span class="p">,</span> winter_year<span class="p">,</span> TMIN<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>tmin<span class="o">=</span>min<span class="p">(</span>TMIN<span class="o">*</span><span class="m">9</span><span class="o">/</span><span class="m">5+32</span><span class="p">,</span> na.rm<span class="o">=</span><span class="kc">TRUE</span><span class="p">),</span>
n<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>n<span class="o">&gt;</span><span class="m">350</span><span class="p">)</span> <span class="o">%&gt;%</span> select<span class="p">(</span>station_name<span class="p">,</span> winter_year<span class="p">,</span> tmin<span class="p">)</span> <span class="o">%&gt;%</span> spread<span class="p">(</span>station_name<span class="p">,</span> tmin<span class="p">)</span> last_twenty <span class="o">&lt;-</span> fai_winter_year_minimum <span class="o">%&gt;%</span> filter<span class="p">(</span>winter_year <span class="o">&gt;</span> <span class="m">1993</span><span class="p">)</span> last_twenty </pre></div> <pre class="literal-block"> ## Source: local data frame [20 x 4] ## ## winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN ## 1 1994 -43.96 -47.92 -47.92 ## 2 1995 -45.04 -45.04 -47.92 ## 3 1996 -50.98 -50.98 -54.04 ## 4 1997 -43.96 -47.92 -47.92 ## 5 1998 -52.06 -54.94 -54.04 ## 6 1999 -50.08 -52.96 -50.98 ## 7 2000 -27.94 -36.04 -27.04 ## 8 2001 -40.00 -43.06 -36.04 ## 9 2002 -34.96 -38.92 -34.06 ## 10 2003 -45.94 -45.94 NA ## 11 2004 NA -47.02 -49.00 ## 12 2005 -47.92 -50.98 -49.00 ## 13 2006 NA -43.96 -41.98 ## 14 2007 -38.92 -47.92 -45.94 ## 15 2008 -47.02 -47.02 -49.00 ## 16 2009 -32.98 -41.08 -41.08 ## 17 2010 -36.94 -43.96 -38.02 ## 18 2011 -47.92 -50.98 -52.06 ## 19 2012 -43.96 -47.92 -45.04 ## 20 2013 -36.94 -40.90 NA </pre> <p>See if you can follow the code above. The pipe operator makes it easy to see each operation performed along the way.</p> <p>There are a couple of new functions here, <tt class="docutils literal">group_by</tt> and <tt class="docutils literal">summarize</tt>. <tt class="docutils literal">group_by</tt> indicates at what level we want to group the data, and <tt class="docutils literal">summarize</tt> uses those groupings to perform summary calculations using aggregate functions.
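</p> <p>As a minimal illustration of the grouping pattern, here's a sketch using a small invented data frame rather than our station data:</p> <div class="highlight"><pre>library(dplyr)

# Invented observations: two stations over two winters
obs &lt;- data.frame(station=c(&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;),
                  year=c(2001, 2001, 2002, 2001, 2001),
                  tmin=c(-40, -35, -42, -30, -28))

obs %&gt;%
    group_by(station, year) %&gt;%
    summarize(coldest=min(tmin), n=n())
</pre></div> <p>Each station × year group collapses to a single row containing its minimum temperature and its observation count, which is the same shape as the real query above.</p> <p>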
We group by station and winter year, then we use the <tt class="docutils literal">min</tt> and <tt class="docutils literal">n</tt> functions to get the minimum temperature and the number of days in each year where temperature data was available. You can see we are using <tt class="docutils literal">n</tt> to remove winter years where more than two weeks of data are missing.</p> <p>Also notice that we’re using <tt class="docutils literal">spread</tt> again in order to make a single column for each station containing the minimum temperature data.</p> <p>Here’s how we can write out the table data as a reStructuredText document, which can be converted into many document formats (PDF, ODF, HTML, etc.):</p> <div class="highlight"><pre>sink<span class="p">(</span><span class="s">&quot;last_twenty.rst&quot;</span><span class="p">)</span> print<span class="p">(</span>kable<span class="p">(</span>last_twenty<span class="p">,</span> format<span class="o">=</span><span class="s">&quot;rst&quot;</span><span class="p">))</span> sink<span class="p">()</span> </pre></div> <table border="1" class="docutils"> <caption>Minimum temperatures by winter year, station</caption> <colgroup> <col width="19%" /> <col width="21%" /> <col width="29%" /> <col width="31%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">winter_year</th> <th class="head">COLLEGE OBSY</th> <th class="head">FAIRBANKS INTL AP</th> <th class="head">UNIVERSITY EXP STN</th> </tr> </thead> <tbody valign="top"> <tr><td>1994</td> <td>-43.96</td> <td>-47.92</td> <td>-47.92</td> </tr> <tr><td>1995</td> <td>-45.04</td> <td>-45.04</td> <td>-47.92</td> </tr> <tr><td>1996</td> <td>-50.98</td> <td>-50.98</td> <td>-54.04</td> </tr> <tr><td>1997</td> <td>-43.96</td> <td>-47.92</td> <td>-47.92</td> </tr> <tr><td>1998</td> <td>-52.06</td> <td>-54.94</td> <td>-54.04</td> </tr> <tr><td>1999</td> <td>-50.08</td> <td>-52.96</td> <td>-50.98</td> </tr> <tr><td>2000</td> <td>-27.94</td> <td>-36.04</td> <td>-27.04</td> </tr> <tr><td>2001</td> <td>-40.00</td> <td>-43.06</td> <td>-36.04</td> </tr>
<tr><td>2002</td> <td>-34.96</td> <td>-38.92</td> <td>-34.06</td> </tr> <tr><td>2003</td> <td>-45.94</td> <td>-45.94</td> <td>NA</td> </tr> <tr><td>2004</td> <td>NA</td> <td>-47.02</td> <td>-49.00</td> </tr> <tr><td>2005</td> <td>-47.92</td> <td>-50.98</td> <td>-49.00</td> </tr> <tr><td>2006</td> <td>NA</td> <td>-43.96</td> <td>-41.98</td> </tr> <tr><td>2007</td> <td>-38.92</td> <td>-47.92</td> <td>-45.94</td> </tr> <tr><td>2008</td> <td>-47.02</td> <td>-47.02</td> <td>-49.00</td> </tr> <tr><td>2009</td> <td>-32.98</td> <td>-41.08</td> <td>-41.08</td> </tr> <tr><td>2010</td> <td>-36.94</td> <td>-43.96</td> <td>-38.02</td> </tr> <tr><td>2011</td> <td>-47.92</td> <td>-50.98</td> <td>-52.06</td> </tr> <tr><td>2012</td> <td>-43.96</td> <td>-47.92</td> <td>-45.04</td> </tr> <tr><td>2013</td> <td>-36.94</td> <td>-40.90</td> <td>NA</td> </tr> </tbody> </table> </div> <div class="section" id="plotting"> <h2>Plotting</h2> <p>Finally, let’s plot the minimum temperatures for all three stations.</p> <div class="highlight"><pre>q <span class="o">&lt;-</span> fai_winter_year_minimum <span class="o">%&gt;%</span> gather<span class="p">(</span>station_name<span class="p">,</span> tmin<span class="p">,</span> <span class="o">-</span>winter_year<span class="p">)</span> <span class="o">%&gt;%</span> arrange<span class="p">(</span>winter_year<span class="p">)</span> <span class="o">%&gt;%</span> ggplot<span class="p">(</span>aes<span class="p">(</span>x<span class="o">=</span>winter_year<span class="p">,</span> y<span class="o">=</span>tmin<span class="p">,</span> colour<span class="o">=</span>station_name<span class="p">))</span> <span class="o">+</span> geom_point<span class="p">(</span>size<span class="o">=</span><span class="m">1.5</span><span class="p">,</span> position<span class="o">=</span>position_jitter<span class="p">(</span>w<span class="o">=</span><span class="m">0.5</span><span class="p">,</span>h<span class="o">=</span><span class="m">0.0</span><span class="p">))</span> 
<span class="o">+</span> geom_smooth<span class="p">(</span>method<span class="o">=</span><span class="s">&quot;lm&quot;</span><span class="p">,</span> se<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> scale_x_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Winter Year&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">20</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Minimum temperature (degrees F)&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">10</span><span class="p">))</span> <span class="o">+</span> scale_color_manual<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Station&quot;</span><span class="p">,</span> labels<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;College Observatory&quot;</span><span class="p">,</span> <span class="s">&quot;Fairbanks Airport&quot;</span><span class="p">,</span> <span class="s">&quot;University Exp. 
Station&quot;</span><span class="p">),</span> values<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;darkorange&quot;</span><span class="p">,</span> <span class="s">&quot;blue&quot;</span><span class="p">,</span> <span class="s">&quot;darkcyan&quot;</span><span class="p">))</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> <span class="c1"># theme(legend.position = c(0.150, 0.850)) +</span> theme<span class="p">(</span>axis.text.x <span class="o">=</span> element_text<span class="p">(</span>angle<span class="o">=</span><span class="m">45</span><span class="p">,</span> hjust<span class="o">=</span><span class="m">1</span><span class="p">))</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/min_temp_winter_year_fai_stations.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/min_temp_winter_year_fai_stations.svg" /> </div> <p>To plot the data, we need it in a slightly different format with each row containing winter year, station name and the minimum temperature. We’re plotting minimum temperature against winter year, coloring the points and trendlines using the station name. That means all three of those variables need to be on the same row.</p> <p>To do that we use <tt class="docutils literal">gather</tt>. The first parameter is the name of the variable the columns will be moved into (the station names, which are currently columns, will become values in a column named <tt class="docutils literal">station_name</tt>). The second is the name of the column that stores the observations (<tt class="docutils literal">tmin</tt>), and the parameters after that are the list of columns to gather together.
In our case, rather than specifying the names of the columns, we're specifying the inverse: all the columns <em>except</em> <tt class="docutils literal">winter_year</tt>.</p> <p>The result of the <tt class="docutils literal">gather</tt> looks like this:</p> <div class="highlight"><pre>fai_winter_year_minimum <span class="o">%&gt;%</span> gather<span class="p">(</span>station_name<span class="p">,</span> tmin<span class="p">,</span> <span class="o">-</span>winter_year<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Source: local data frame [321 x 3] ## ## winter_year station_name tmin ## 1 1905 COLLEGE OBSY NA ## 2 1907 COLLEGE OBSY NA ## 3 1908 COLLEGE OBSY NA ## 4 1909 COLLEGE OBSY NA ## 5 1910 COLLEGE OBSY NA ## 6 1911 COLLEGE OBSY NA ## 7 1912 COLLEGE OBSY NA ## 8 1913 COLLEGE OBSY NA ## 9 1915 COLLEGE OBSY NA ## 10 1916 COLLEGE OBSY NA ## .. ... ... ... </pre> <div class="section" id="ggplot2"> <h3>ggplot2</h3> <p>The plot is produced using <tt class="docutils literal">ggplot2</tt>. 
A full introduction would be a seminar by itself, but the basics of our plot can be summarized as follows.</p> <div class="highlight"><pre>ggplot<span class="p">(</span>aes<span class="p">(</span>x<span class="o">=</span>winter_year<span class="p">,</span> y<span class="o">=</span>tmin<span class="p">,</span> colour<span class="o">=</span>station_name<span class="p">))</span> <span class="o">+</span> </pre></div> <p><tt class="docutils literal">aes</tt> defines variables and grouping.</p> <div class="highlight"><pre>geom_point<span class="p">(</span>size<span class="o">=</span><span class="m">1.5</span><span class="p">,</span> position<span class="o">=</span>position_jitter<span class="p">(</span>w<span class="o">=</span><span class="m">0.5</span><span class="p">,</span>h<span class="o">=</span><span class="m">0.0</span><span class="p">))</span> <span class="o">+</span> geom_smooth<span class="p">(</span>method<span class="o">=</span><span class="s">&quot;lm&quot;</span><span class="p">,</span> se<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> </pre></div> <p><tt class="docutils literal">geom_point</tt> draws points, <tt class="docutils literal">geom_smooth</tt> draws fitted lines.</p> <div class="highlight"><pre>scale_x_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Winter Year&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">20</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Minimum temperature (degrees F)&quot;</span><span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">10</span><span class="p">))</span> <span class="o">+</span> scale_color_manual<span class="p">(</span>name<span 
class="o">=</span><span class="s">&quot;Station&quot;</span><span class="p">,</span> labels<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;College Observatory&quot;</span><span class="p">,</span> <span class="s">&quot;Fairbanks Airport&quot;</span><span class="p">,</span> <span class="s">&quot;University Exp. Station&quot;</span><span class="p">),</span> values<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;darkorange&quot;</span><span class="p">,</span> <span class="s">&quot;blue&quot;</span><span class="p">,</span> <span class="s">&quot;darkcyan&quot;</span><span class="p">))</span> <span class="o">+</span> </pre></div> <p>Scale functions define how the data is scaled into a plot and controls labelling.</p> <div class="highlight"><pre>theme_bw<span class="p">()</span> <span class="o">+</span> theme<span class="p">(</span>axis.text.x <span class="o">=</span> element_text<span class="p">(</span>angle<span class="o">=</span><span class="m">45</span><span class="p">,</span> hjust<span class="o">=</span><span class="m">1</span><span class="p">))</span> </pre></div> <p>Theme functions controls the style.</p> <p>For more information:</p> <ul class="simple"> <li><a class="reference external" href="http://docs.ggplot2.org/current/">Documentation</a></li> <li><a class="reference external" href="http://www.rstudio.com/wp-content/uploads/2015/04/ggplot2-cheatsheet.pdf">Cheat sheet</a></li> <li><a class="reference external" href="http://www.cookbook-r.com/">Cookbook</a></li> </ul> </div> </div> <div class="section" id="linear-regression-winter-year-and-minimum-temperature"> <h2>Linear regression, winter year and minimum temperature</h2> <p>Finally let’s look at the significance of those regression lines:</p> <div class="highlight"><pre>summary<span class="p">(</span>lm<span class="p">(</span>data<span class="o">=</span>fai_winter_year_minimum<span class="p">,</span> <span class="sb">`COLLEGE OBSY`</span> <span class="o">~</span> 
winter_year<span class="p">))</span> </pre></div> <pre class="literal-block"> ## ## Call: ## lm(formula = `COLLEGE OBSY` ~ winter_year, data = fai_winter_year_minimum) ## ## Residuals: ## Min 1Q Median 3Q Max ## -19.0748 -5.8204 0.1907 3.8042 17.1599 ## ## Coefficients: ## Estimate Std. Error t value Pr(&gt;|t|) ## (Intercept) -275.01062 105.20884 -2.614 0.0114 * ## winter_year 0.11635 0.05311 2.191 0.0325 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 7.599 on 58 degrees of freedom ## (47 observations deleted due to missingness) ## Multiple R-squared: 0.07643, Adjusted R-squared: 0.06051 ## F-statistic: 4.8 on 1 and 58 DF, p-value: 0.03249 </pre> <div class="highlight"><pre>summary<span class="p">(</span>lm<span class="p">(</span>data<span class="o">=</span>fai_winter_year_minimum<span class="p">,</span> <span class="sb">`FAIRBANKS INTL AP`</span> <span class="o">~</span> winter_year<span class="p">))</span> </pre></div> <pre class="literal-block"> ## ## Call: ## lm(formula = `FAIRBANKS INTL AP` ~ winter_year, data = fai_winter_year_minimum) ## ## Residuals: ## Min 1Q Median 3Q Max ## -15.529 -4.605 -1.025 4.007 19.764 ## ## Coefficients: ## Estimate Std. Error t value Pr(&gt;|t|) ## (Intercept) -171.19553 43.55177 -3.931 0.000153 *** ## winter_year 0.06250 0.02221 2.813 0.005861 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 7.037 on 104 degrees of freedom ## (1 observation deleted due to missingness) ## Multiple R-squared: 0.07073, Adjusted R-squared: 0.06179 ## F-statistic: 7.916 on 1 and 104 DF, p-value: 0.005861 </pre> <div class="highlight"><pre>summary<span class="p">(</span>lm<span class="p">(</span>data<span class="o">=</span>fai_winter_year_minimum<span class="p">,</span> <span class="sb">`UNIVERSITY EXP STN`</span> <span class="o">~</span> winter_year<span class="p">))</span> </pre></div> <pre class="literal-block"> ## ## Call: ## lm(formula = `UNIVERSITY EXP STN` ~ winter_year, data = fai_winter_year_minimum) ## ## Residuals: ## Min 1Q Median 3Q Max ## -15.579 -5.818 -1.283 6.029 19.977 ## ## Coefficients: ## Estimate Std. Error t value Pr(&gt;|t|) ## (Intercept) -158.41837 51.03809 -3.104 0.00248 ** ## winter_year 0.05638 0.02605 2.164 0.03283 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 8.119 on 100 degrees of freedom ## (5 observations deleted due to missingness) ## Multiple R-squared: 0.04474, Adjusted R-squared: 0.03519 ## F-statistic: 4.684 on 1 and 100 DF, p-value: 0.03283 </pre> <p>Essentially, all the models show a significant increase in minimum temperature over time, but none of them explain very much of the variation in minimum temperature.</p> </div> </div> <div class="section" id="rmarkdown"> <h1>RMarkdown</h1> <p>This presentation was produced with the <a class="reference external" href="http://rmarkdown.rstudio.com/">RMarkdown</a> package. 
It allows you to mix text and R code, which is then run through R to produce documents in Word, PDF, HTML, and presentation formats.</p> <ul class="simple"> <li><a class="reference external" href="http://www.rstudio.com/wp-content/uploads/2015/02/rmarkdown-cheatsheet.pdf">Cheat sheet</a></li> <li><a class="reference external" href="http://www.rstudio.com/wp-content/uploads/2015/02/rmarkdown-cheatsheet.pdf">Reference guide</a></li> </ul> </div> </div> Tue, 21 Apr 2015 17:33:24 -0800 http://swingleydev.com/blog/p/1983/ R dplyr tidyr ggplot2 climate GHCND winter temperature Calculating average daily temperature http://swingleydev.com/blog/p/1982/ <div class="document"> <div class="section" id="introduction"> <h1>Introduction</h1> <p>Last week I gave a presentation at work about the National Climatic Data Center’s <a class="reference external" href="ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt">GHCND</a> climate database and methods to import and manipulate the data using the <tt class="docutils literal">dplyr</tt> and <tt class="docutils literal">tidyr</tt> R packages (a report-style version of it is <a class="reference external" href="http://swingleydev.com/blog/p/1983/">here</a>). Along the way, I used this function to calculate the average daily temperature from the minimum and maximum daily temperatures:</p> <div class="highlight"><pre>mutate<span class="p">(</span>TAVG<span class="o">=</span><span class="p">(</span>TMIN<span class="o">+</span>TMAX<span class="p">)</span><span class="o">/</span><span class="m">2.0</span><span class="p">)</span> </pre></div> <p>One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. 
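</p> <p>The two averages really can differ. A minimal sketch with made-up hourly readings for a single day (the values and variable names here are hypothetical, not station data):</p>

```r
# Hypothetical hourly temperatures (degrees F) for one day with a
# sharp overnight low and a broad warm afternoon.
hourly <- c(-10, -12, -14, -15, -15, -14, -12, -8, -2, 4, 8, 11,
            13, 14, 14, 13, 10, 6, 2, -2, -5, -7, -8, -9)

mm_avg <- (min(hourly) + max(hourly)) / 2.0  # -0.5
true_avg <- mean(hourly)                     # about -1.58
mm_avg - true_avg                            # about 1.08
```

<p>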
The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum / maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.</p> <p>In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day. I’ll use five years of data collected at our house using our Arduino-based weather station.</p> </div> <div class="section" id="methods"> <h1>Methods</h1> <p>Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.</p> <div class="section" id="libraries"> <h2>Libraries</h2> <p>Load the libraries we need:</p> <div class="highlight"><pre>library<span class="p">(</span>dplyr<span class="p">)</span> library<span class="p">(</span>lubridate<span class="p">)</span> library<span class="p">(</span>ggplot2<span class="p">)</span> library<span class="p">(</span>scales<span class="p">)</span> library<span class="p">(</span>readr<span class="p">)</span> </pre></div> </div> <div class="section" id="retrieve-the-data"> <h2>Retrieve the data</h2> <p>Connect to the database and retrieve the data. 
We’re using <tt class="docutils literal">build_sql</tt> because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and <tt class="docutils literal"><span class="pre">dplyr::tbl</span></tt> can’t currently read from a view:</p> <div class="highlight"><pre>dw1454 <span class="o">&lt;-</span> src_postgres<span class="p">(</span>dbname<span class="o">=</span><span class="s">&quot;goldstream_creek_wx&quot;</span><span class="p">,</span> user<span class="o">=</span><span class="s">&quot;readonly&quot;</span><span class="p">)</span> raw_data <span class="o">&lt;-</span> tbl<span class="p">(</span>dw1454<span class="p">,</span> build_sql<span class="p">(</span><span class="s">&quot;SELECT * FROM arduino_west&quot;</span><span class="p">))</span> </pre></div> <p>The raw data contains the timestamp for each five minute observation, and the temperature, in degrees Fahrenheit for that observation. The following series of functions aggregates the data to daily data and calculates the average daily temperature using the two methods.</p> <div class="highlight"><pre>daily_average <span class="o">&lt;-</span> raw_data <span class="o">%&gt;%</span> filter<span class="p">(</span>obs_dt<span class="o">&gt;</span><span class="s">&#39;2009-12-31 23:59:59&#39;</span><span class="p">)</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>date<span class="o">=</span>date<span class="p">(</span>obs_dt<span class="p">))</span> <span class="o">%&gt;%</span> select<span class="p">(</span>date<span class="p">,</span> wtemp<span class="p">)</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>date<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>mm_avg<span class="o">=</span><span class="p">(</span>min<span class="p">(</span>wtemp<span class="p">)</span><span class="o">+</span>max<span class="p">(</span>wtemp<span class="p">))</span><span class="o">/</span><span class="m">2.0</span><span 
class="p">,</span> h_avg<span class="o">=</span>mean<span class="p">(</span>wtemp<span class="p">),</span> n<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>n<span class="o">==</span><span class="m">24</span><span class="o">*</span><span class="m">60</span><span class="o">/</span><span class="m">5</span><span class="p">)</span> <span class="o">%&gt;%</span> <span class="c1"># 5 minute obs</span> collect<span class="p">()</span> </pre></div> <p>All these steps are joined together using the “pipe” or “then” operator <tt class="docutils literal">%&gt;%</tt> as follows:</p> <ul class="simple"> <li><tt class="docutils literal">daily_average &lt;-</tt>: assign the result of all the operations to <tt class="docutils literal">daily_average</tt>.</li> <li><tt class="docutils literal">raw_data %&gt;%</tt>: start with the data from our database query (all the temperature observations).</li> <li><tt class="docutils literal"><span class="pre">filter(obs_dt&gt;'2009-12-31</span> 23:59:59') %&gt;%</tt>: use data from 2010 and after.</li> <li><tt class="docutils literal">mutate(date=date(obs_dt)) %&gt;%</tt>: calculate the date from the timestamp.</li> <li><tt class="docutils literal">select(date, wtemp) %&gt;%</tt>: reduce the columns to our newly calculated <tt class="docutils literal">date</tt> variable and the temperatures.</li> <li><tt class="docutils literal">group_by(date) %&gt;%</tt>: group the data by date.</li> <li><tt class="docutils literal"><span class="pre">summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0)</span> %&gt;%</tt>: summarize the data grouped by date, calculating the daily average from the average of the minimum and maximum temperature.</li> <li><tt class="docutils literal">summarize(h_avg=mean(wtemp), <span class="pre">n=n())</span> %&gt;%</tt>: calculate another daily average from the mean of the temperatures. 
Also calculate the number of observations on each date.</li> <li><tt class="docutils literal"><span class="pre">filter(n==24*60/5)</span> %&gt;%</tt>: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.</li> <li><tt class="docutils literal">collect()</tt>: This function retrieves the data from the database. Without <tt class="docutils literal">collect()</tt>, the query is run on the database server, producing a subset of the full results. This allows us to tweak the query until it’s exactly what we want without having to wait to retrieve everything at each iteration.</li> </ul> <p>Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.</p> <p>Save the data so we don’t have to do these calculations again:</p> <div class="highlight"><pre>write_csv<span class="p">(</span>daily_average<span class="p">,</span> <span class="s">&quot;daily_average.csv&quot;</span><span class="p">)</span> save<span class="p">(</span>daily_average<span class="p">,</span> file<span class="o">=</span><span class="s">&quot;daily_average.rdata&quot;</span><span class="p">,</span> compress<span class="o">=</span><span class="kc">TRUE</span><span class="p">)</span> </pre></div> </div> <div class="section" id="calculate-anomalies"> <h2>Calculate anomalies</h2> <p>How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of minimum and maximum. 
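</p> <p>As a quick numeric illustration of the sign convention (made-up five-minute temperatures, not our station data): a brief warm spike raises the maximum, and with it the min/max average, much more than it raises the mean, so the anomaly comes out positive.</p>

```r
# Twenty readings near 10 degrees F plus one brief spike to 30.
wtemp <- c(rep(10, 20), 30)

mm_avg <- (min(wtemp) + max(wtemp)) / 2.0  # 20
h_avg <- mean(wtemp)                       # about 10.95
mm_avg - h_avg                             # positive anomaly, about 9.05
```

<p>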
When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.</p> <div class="highlight"><pre>anomaly <span class="o">&lt;-</span> daily_average <span class="o">%&gt;%</span> mutate<span class="p">(</span>month<span class="o">=</span>month<span class="p">(</span>date<span class="p">),</span> anomaly<span class="o">=</span>mm_avg<span class="o">-</span>h_avg<span class="p">)</span> <span class="o">%&gt;%</span> ungroup<span class="p">()</span> <span class="o">%&gt;%</span> arrange<span class="p">(</span>date<span class="p">)</span> </pre></div> <p>We also populate a column with the month of each date so we can look at the seasonality of the anomalies.</p> </div> </div> <div class="section" id="results"> <h1>Results</h1> <p>This is what the results look like:</p> <div class="highlight"><pre>summary<span class="p">(</span>anomaly<span class="o">$</span>anomaly<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -6.8600 -1.5110 -0.1711 -0.1341 1.0740 9.3570 </pre> <p>The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. 
Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.</p> </div> <div class="section" id="plots"> <h1>Plots</h1> <p>Let’s take a look at some plots of the anomalies.</p> <div class="section" id="raw-anomaly-data"> <h2>Raw anomaly data</h2> <p>The first plot shows the raw anomaly data, with positive anomalies (the min/max calculated average is higher than the mean daily average) colored red and negative anomalies in blue.</p> <div class="highlight"><pre><span class="c1"># All anomalies</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>date<span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span>anomaly<span class="p">,</span> colour<span class="o">=</span>anomaly<span class="o">&lt;</span><span class="m">0</span><span class="p">))</span> <span class="o">+</span> geom_linerange<span class="p">(</span>alpha<span class="o">=</span><span class="m">0.5</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> scale_colour_manual<span class="p">(</span>values<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;red&quot;</span><span class="p">,</span> <span class="s">&quot;blue&quot;</span><span class="p">),</span> guide<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> scale_x_date<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;&quot;</span><span class="p">)</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Difference between min/max and hourly aggregation&quot;</span><span class="p">)</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img 
alt="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg" /> </div> <p>I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.</p> </div> <div class="section" id="monthly-boxplots"> <h2>Monthly boxplots</h2> <p>To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we calculated when calculating the anomalies.</p> <div class="highlight"><pre>mean_anomaly <span class="o">&lt;-</span> mean<span class="p">(</span>anomaly<span class="o">$</span>anomaly<span class="p">)</span> <span class="c1"># seasonal pattern of anomaly</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>as.factor<span class="p">(</span>month<span class="p">),</span> y<span class="o">=</span>anomaly<span class="p">))</span> <span class="o">+</span> geom_hline<span class="p">(</span>data<span class="o">=</span><span class="kc">NULL</span><span class="p">,</span> aes<span class="p">(</span>yintercept<span class="o">=</span>mean_anomaly<span class="p">),</span> colour<span class="o">=</span><span class="s">&quot;darkorange&quot;</span><span class="p">)</span> <span class="o">+</span> geom_boxplot<span class="p">()</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;&quot;</span><span class="p">,</span> labels<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;Jan&quot;</span><span class="p">,</span> <span class="s">&quot;Feb&quot;</span><span class="p">,</span> <span class="s">&quot;Mar&quot;</span><span class="p">,</span> 
<span class="s">&quot;Apr&quot;</span><span class="p">,</span> <span class="s">&quot;May&quot;</span><span class="p">,</span> <span class="s">&quot;Jun&quot;</span><span class="p">,</span> <span class="s">&quot;Jul&quot;</span><span class="p">,</span> <span class="s">&quot;Aug&quot;</span><span class="p">,</span> <span class="s">&quot;Sep&quot;</span><span class="p">,</span> <span class="s">&quot;Oct&quot;</span><span class="p">,</span> <span class="s">&quot;Nov&quot;</span><span class="p">,</span> <span class="s">&quot;Dec&quot;</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Difference between min/max and hourly aggregation&quot;</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg" /> </div> <p>There does seem to be a slight seasonal pattern to the anomalies, with spring and summer daily average underestimated when using the min/max calculation (the actual daily average temperature is warmer than was calculated using minimum and maximum temperatures) and slightly overestimated in fall and late winter. The boxes in a boxplot show the range where half the observations fall, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots show the overall average anomaly, close to zero.</p> </div> <div class="section" id="cumulative-frequency-distribution"> <h2>Cumulative frequency distribution</h2> <p>Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. 
These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.</p> <div class="highlight"><pre><span class="c1"># distribution of anomalies</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>abs<span class="p">(</span>anomaly<span class="p">)))</span> <span class="o">+</span> stat_ecdf<span class="p">()</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Absolute value of anomaly (+/- degrees F)&quot;</span><span class="p">,</span> breaks<span class="o">=</span><span class="m">0</span><span class="o">:</span><span class="m">11</span><span class="p">,</span> labels<span class="o">=</span><span class="m">0</span><span class="o">:</span><span class="m">11</span><span class="p">,</span> expand<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Cumulative frequency&quot;</span><span class="p">,</span> labels<span class="o">=</span>percent<span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">10</span><span class="p">),</span> limits<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="m">1</span><span class="p">))</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">1</span><span class="p">,</span> 
ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.4</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">2</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.67</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">3</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.85</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">4</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.94</span><span class="p">,</span> alpha<span 
class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">5</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.975</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg" /> </div> <p>The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. 
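</p> <p>For reference, these cumulative fractions can also be computed directly rather than read off the plot: <tt class="docutils literal">ecdf</tt> in base R returns the empirical cumulative distribution function, which can be evaluated at each cutoff. A sketch with made-up anomaly values (the real analysis would evaluate it on <tt class="docutils literal">abs(anomaly$anomaly)</tt>):</p>

```r
# Made-up anomalies for illustration only; the analysis above would
# use abs(anomaly$anomaly) from the daily data instead.
anom <- c(0.2, -0.5, 1.3, -2.1, 0.8, 3.4, -0.1, 1.9, -4.2, 0.6)

cdf <- ecdf(abs(anom))  # stats::ecdf returns a step function
cdf(1:5)                # fraction within 1..5 degrees: 0.5 0.7 0.8 0.9 1.0
```

<p>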
There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.</p> </div> </div> <div class="section" id="conclusion"> <h1>Conclusion</h1> <p>We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five-minute temperature observations with those calculated from the average of the minimum and maximum daily temperatures, which is the method the National Weather Service uses for its daily data. The results show that the difference between these methods averages to zero, which means that on an annual (or greater) basis, there doesn't appear to be any bias in the method.</p> <p>Two-thirds of all daily average temperatures are within two degrees of the actual daily average, and with a few exceptions, the error is always below five degrees.</p> <p>There is some evidence that there’s a seasonal pattern to the error, however, with April and May daily averages particularly underestimated. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.</p> </div> </div> Sun, 12 Apr 2015 16:38:35 -0800 http://swingleydev.com/blog/p/1982/ R temperature dplyr climate GHCND