metachronistic http://swingleydev.com/blog/ Latest metachronistic posts en-us Sun, 12 Apr 2015 16:38:35 -0800 Calculating average daily temperature http://swingleydev.com/blog/p/1982/ <div class="document"> <div class="section" id="introduction"> <h1>Introduction</h1> <p>Last week I gave a presentation at work about the National Climatic Data Center’s <a class="reference external" href="ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt">GHCND</a> climate database and methods to import and manipulate the data using the <tt class="docutils literal">dplyr</tt> and <tt class="docutils literal">tidyr</tt> R packages. Along the way, I used this function to calculate the average daily temperature from the minimum and maximum daily temperatures:</p> <div class="highlight"><pre>mutate<span class="p">(</span>TAVG<span class="o">=</span><span class="p">(</span>TMIN<span class="o">+</span>TMAX<span class="p">)</span><span class="o">/</span><span class="m">2.0</span><span class="p">)</span> </pre></div> <p>One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum/maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.</p> <p>In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day.
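</p>
<p>Before digging into the real data, a toy example in base R (the hourly values below are invented, not from any station) shows how the two methods can disagree on a single day:</p>

```r
# Invented hourly temperatures (degrees F) for one day: cold most of
# the day, with a brief afternoon warm spike.
temps <- c(rep(10, 20), 30, 40, 30, 20)

true_mean <- mean(temps)                      # mean of all observations
minmax_avg <- (min(temps) + max(temps)) / 2   # min/max average

true_mean    # 13.33 (approximately)
minmax_avg   # 25
```

<p>A short spike moves the maximum much more than it moves the mean, so on a day like this the min/max method overstates the daily average.</p>
<p>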
I’ll use five years of data collected at our house using our Arduino-based weather station.</p> </div> <div class="section" id="methods"> <h1>Methods</h1> <p>Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.</p> <div class="section" id="libraries"> <h2>Libraries</h2> <p>Load the libraries we need:</p> <div class="highlight"><pre>library<span class="p">(</span>dplyr<span class="p">)</span> library<span class="p">(</span>lubridate<span class="p">)</span> library<span class="p">(</span>ggplot2<span class="p">)</span> library<span class="p">(</span>scales<span class="p">)</span> library<span class="p">(</span>readr<span class="p">)</span> </pre></div> </div> <div class="section" id="retrieve-the-data"> <h2>Retrieve the data</h2> <p>Connect to the database and retrieve the data. 
We’re using <tt class="docutils literal">build_sql</tt> because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and <tt class="docutils literal"><span class="pre">dplyr::tbl</span></tt> can’t currently read from a view:</p> <div class="highlight"><pre>dw1454 <span class="o">&lt;-</span> src_postgres<span class="p">(</span>dbname<span class="o">=</span><span class="s">&quot;goldstream_creek_wx&quot;</span><span class="p">,</span> user<span class="o">=</span><span class="s">&quot;readonly&quot;</span><span class="p">)</span> raw_data <span class="o">&lt;-</span> tbl<span class="p">(</span>dw1454<span class="p">,</span> build_sql<span class="p">(</span><span class="s">&quot;SELECT * FROM arduino_west&quot;</span><span class="p">))</span> </pre></div> <p>The raw data contains the timestamp for each five minute observation, and the temperature, in degrees Fahrenheit for that observation. The following series of functions aggregates the data to daily data and calculates the average daily temperature using the two methods.</p> <div class="highlight"><pre>daily_average <span class="o">&lt;-</span> raw_data <span class="o">%&gt;%</span> filter<span class="p">(</span>obs_dt<span class="o">&gt;</span><span class="s">&#39;2009-12-31 23:59:59&#39;</span><span class="p">)</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>date<span class="o">=</span>date<span class="p">(</span>obs_dt<span class="p">))</span> <span class="o">%&gt;%</span> select<span class="p">(</span>date<span class="p">,</span> wtemp<span class="p">)</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>date<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>mm_avg<span class="o">=</span><span class="p">(</span>min<span class="p">(</span>wtemp<span class="p">)</span><span class="o">+</span>max<span class="p">(</span>wtemp<span class="p">))</span><span class="o">/</span><span class="m">2.0</span><span 
class="p">,</span> h_avg<span class="o">=</span>mean<span class="p">(</span>wtemp<span class="p">),</span> n<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>n<span class="o">==</span><span class="m">24</span><span class="o">*</span><span class="m">60</span><span class="o">/</span><span class="m">5</span><span class="p">)</span> <span class="o">%&gt;%</span> <span class="c1"># 5 minute obs</span> collect<span class="p">()</span> </pre></div> <p>All these steps are joined together using the “pipe” or “then” operator <tt class="docutils literal">%&gt;%</tt> as follows:</p> <ul class="simple"> <li><tt class="docutils literal">daily_average &lt;-</tt>: assign the result of all the operations to <tt class="docutils literal">daily_average</tt>.</li> <li><tt class="docutils literal">raw_data %&gt;%</tt>: start with the data from our database query (all the temperature observations).</li> <li><tt class="docutils literal"><span class="pre">filter(obs_dt&gt;'2009-12-31</span> 23:59:59') %&gt;%</tt>: use data from 2010 and after.</li> <li><tt class="docutils literal">mutate(date=date(obs_dt)) %&gt;%</tt>: calculate the date from the timestamp.</li> <li><tt class="docutils literal">select(date, wtemp) %&gt;%</tt>: reduce the columns to our newly calculated <tt class="docutils literal">date</tt> variable and the temperatures.</li> <li><tt class="docutils literal">group_by(date) %&gt;%</tt>: group the data by date.</li> <li><tt class="docutils literal"><span class="pre">summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0)</span> %&gt;%</tt>: summarize the data grouped by date, calculating the daily average as the mean of the minimum and maximum temperature.</li> <li><tt class="docutils literal">summarize(h_avg=mean(wtemp), <span class="pre">n=n())</span> %&gt;%</tt>: calculate another daily average from the mean of the temperatures.
Also calculate the number of observations on each date.</li> <li><tt class="docutils literal"><span class="pre">filter(n==24*60/5)</span> %&gt;%</tt>: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.</li> <li><tt class="docutils literal">collect()</tt>: This function retrieves the data from the database. Without <tt class="docutils literal">collect()</tt>, the query runs lazily on the database server and only a small preview of the results is retrieved. This allows us to tweak the query until it’s exactly what we want without having to wait for the full result set at each iteration.</li> </ul> <p>Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.</p> <p>Save the data so we don’t have to do these calculations again:</p> <div class="highlight"><pre>write_csv<span class="p">(</span>daily_average<span class="p">,</span> <span class="s">&quot;daily_average.csv&quot;</span><span class="p">)</span> save<span class="p">(</span>daily_average<span class="p">,</span> file<span class="o">=</span><span class="s">&quot;daily_average.rdata&quot;</span><span class="p">,</span> compress<span class="o">=</span><span class="kc">TRUE</span><span class="p">)</span> </pre></div> </div> <div class="section" id="calculate-anomalies"> <h2>Calculate anomalies</h2> <p>How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of the minimum and maximum.
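</p>
<p>As a minimal base-R sketch (with made-up numbers rather than the station data), the anomaly and month columns are simple to construct:</p>

```r
# Made-up daily averages from the two methods.
daily <- data.frame(
  date   = as.Date(c("2014-04-10", "2014-07-01", "2014-11-15")),
  mm_avg = c(35.2, 61.0, 10.4),   # (min + max) / 2
  h_avg  = c(36.0, 62.1, 10.1)    # mean of all five-minute observations
)

# Anomaly: min/max average minus the true mean.
daily$anomaly <- daily$mm_avg - daily$h_avg
daily$month   <- as.integer(format(daily$date, "%m"))

round(daily$anomaly, 1)   # -0.8 -1.1  0.3
```

<p>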
When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.</p> <div class="highlight"><pre>anomaly <span class="o">&lt;-</span> daily_average <span class="o">%&gt;%</span> mutate<span class="p">(</span>month<span class="o">=</span>month<span class="p">(</span>date<span class="p">),</span> anomaly<span class="o">=</span>mm_avg<span class="o">-</span>h_avg<span class="p">)</span> <span class="o">%&gt;%</span> ungroup<span class="p">()</span> <span class="o">%&gt;%</span> arrange<span class="p">(</span>date<span class="p">)</span> </pre></div> <p>We also populate a column with the month of each date so we can look at the seasonality of the anomalies.</p> </div> </div> <div class="section" id="results"> <h1>Results</h1> <p>This is what the results look like:</p> <div class="highlight"><pre>summary<span class="p">(</span>anomaly<span class="o">$</span>anomaly<span class="p">)</span> </pre></div> <pre class="literal-block"> ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -6.8600 -1.5110 -0.1711 -0.1341 1.0740 9.3570 </pre> <p>The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. 
Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.</p> </div> <div class="section" id="plots"> <h1>Plots</h1> <p>Let’s take a look at some plots of the anomalies.</p> <div class="section" id="raw-anomaly-data"> <h2>Raw anomaly data</h2> <p>The first plot shows the raw anomaly data, with positive anomalies (the min/max calculated average is higher than the mean daily average) colored red and negative anomalies in blue.</p> <div class="highlight"><pre><span class="c1"># All anomalies</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>date<span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span>anomaly<span class="p">,</span> colour<span class="o">=</span>anomaly<span class="o">&lt;</span><span class="m">0</span><span class="p">))</span> <span class="o">+</span> geom_linerange<span class="p">(</span>alpha<span class="o">=</span><span class="m">0.5</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> scale_colour_manual<span class="p">(</span>values<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;red&quot;</span><span class="p">,</span> <span class="s">&quot;blue&quot;</span><span class="p">),</span> guide<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> scale_x_date<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;&quot;</span><span class="p">)</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Difference between min/max and hourly aggregation&quot;</span><span class="p">)</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img
alt="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg" /> </div> <p>I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.</p> </div> <div class="section" id="monthly-boxplots"> <h2>Monthly boxplots</h2> <p>To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we calculated when calculating the anomalies.</p> <div class="highlight"><pre>mean_anomaly <span class="o">&lt;-</span> mean<span class="p">(</span>anomaly<span class="o">$</span>anomaly<span class="p">)</span> <span class="c1"># seasonal pattern of anomaly</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>as.factor<span class="p">(</span>month<span class="p">),</span> y<span class="o">=</span>anomaly<span class="p">))</span> <span class="o">+</span> geom_hline<span class="p">(</span>data<span class="o">=</span><span class="kc">NULL</span><span class="p">,</span> aes<span class="p">(</span>yintercept<span class="o">=</span>mean_anomaly<span class="p">),</span> colour<span class="o">=</span><span class="s">&quot;darkorange&quot;</span><span class="p">)</span> <span class="o">+</span> geom_boxplot<span class="p">()</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;&quot;</span><span class="p">,</span> labels<span class="o">=</span>c<span class="p">(</span><span class="s">&quot;Jan&quot;</span><span class="p">,</span> <span class="s">&quot;Feb&quot;</span><span class="p">,</span> <span class="s">&quot;Mar&quot;</span><span class="p">,</span> 
<span class="s">&quot;Apr&quot;</span><span class="p">,</span> <span class="s">&quot;May&quot;</span><span class="p">,</span> <span class="s">&quot;Jun&quot;</span><span class="p">,</span> <span class="s">&quot;Jul&quot;</span><span class="p">,</span> <span class="s">&quot;Aug&quot;</span><span class="p">,</span> <span class="s">&quot;Sep&quot;</span><span class="p">,</span> <span class="s">&quot;Oct&quot;</span><span class="p">,</span> <span class="s">&quot;Nov&quot;</span><span class="p">,</span> <span class="s">&quot;Dec&quot;</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Difference between min/max and hourly aggregation&quot;</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg" /> </div> <p>There does seem to be a slight seasonal pattern to the anomalies, with spring and summer daily average underestimated when using the min/max calculation (the actual daily average temperature is warmer than was calculated using minimum and maximum temperatures) and slightly overestimated in fall and late winter. The boxes in a boxplot show the range where half the observations fall, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots show the overall average anomaly, close to zero.</p> </div> <div class="section" id="cumulative-frequency-distribution"> <h2>Cumulative frequency distribution</h2> <p>Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. 
These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.</p> <div class="highlight"><pre><span class="c1"># distribution of anomalies</span> q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>anomaly<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>abs<span class="p">(</span>anomaly<span class="p">)))</span> <span class="o">+</span> stat_ecdf<span class="p">()</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Absolute value of anomaly (+/- degrees F)&quot;</span><span class="p">,</span> breaks<span class="o">=</span><span class="m">0</span><span class="o">:</span><span class="m">11</span><span class="p">,</span> labels<span class="o">=</span><span class="m">0</span><span class="o">:</span><span class="m">11</span><span class="p">,</span> expand<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">))</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Cumulative frequency&quot;</span><span class="p">,</span> labels<span class="o">=</span>percent<span class="p">,</span> breaks<span class="o">=</span>pretty_breaks<span class="p">(</span>n<span class="o">=</span><span class="m">10</span><span class="p">),</span> limits<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="m">1</span><span class="p">))</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">1</span><span class="p">,</span> 
ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.4</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">2</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.67</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">3</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.85</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">4</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.94</span><span class="p">,</span> alpha<span 
class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> annotate<span class="p">(</span><span class="s">&quot;rect&quot;</span><span class="p">,</span> xmin<span class="o">=</span><span class="m">-1</span><span class="p">,</span> xmax<span class="o">=</span><span class="m">5</span><span class="p">,</span> ymin<span class="o">=</span><span class="m">0</span><span class="p">,</span> ymax<span class="o">=</span><span class="m">0.975</span><span class="p">,</span> alpha<span class="o">=</span><span class="m">0.1</span><span class="p">,</span> fill<span class="o">=</span><span class="s">&quot;darkcyan&quot;</span><span class="p">)</span> <span class="o">+</span> theme_bw<span class="p">()</span> print<span class="p">(</span>q<span class="p">)</span> </pre></div> <div class="figure"> <img alt="http://media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg" /> </div> <p>The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. 
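</p>
<p>One way to avoid reading these values off the plot by hand is base R’s <tt class="docutils literal">ecdf()</tt>, which computes the same coverage figures directly. A sketch with fabricated anomalies (drawn from a normal distribution, not the real anomaly table):</p>

```r
set.seed(42)
# Fabricated anomalies, roughly resembling the distribution in the post.
anom <- rnorm(2000, mean = -0.13, sd = 1.9)

# Empirical CDF of the absolute anomaly: cum(k) is the fraction of
# days where the min/max average is within k degrees of the true mean.
cum <- ecdf(abs(anom))
round(cum(1:5), 2)   # shares within 1, 2, ..., 5 degrees
```

<p>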
There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.</p> </div> </div> <div class="section" id="conclusion"> <h1>Conclusion</h1> <p>We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five minute temperature observations with those calculated using the average of the minimum and maximum daily temperature, which is the method the National Weather Service uses for its daily data. The results show that the difference between these methods averages to zero, which means that on an annual (or greater) basis, there doesn't appear to be any bias in the method.</p> <p>Two thirds of all daily average temperatures are within two degrees of the actual daily average, and with a few exceptions, the error is always below five degrees.</p> <p>There is some evidence that there’s a seasonal pattern to the error, however, with April and May daily averages particularly low. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.</p> </div> </div> Sun, 12 Apr 2015 16:38:35 -0800 http://swingleydev.com/blog/p/1982/ R dplyr GHCND climate temperature Winter rain http://swingleydev.com/blog/p/1981/ <div class="document"> <p>Last night we got a quarter of an inch of rain at our house, making roads “impassable” according to the Fairbanks Police Department, and turning the dog yard, deck, and driveway into an icy mess.
There are videos floating around Facebook showing Fairbanks residents playing hockey in the street in front of their houses, and a reported seven vehicles off the road on Ballaine Hill.</p> <p>Here’s a video of a group of Goldstream Valley musicians ice skating on Goldstream Road: <a class="reference external" href="http://youtu.be/_afC7UF0NXk">http://youtu.be/_afC7UF0NXk</a></p> <p>Let’s check out the weather database and take a look at how often Fairbanks experiences this type of event, and when these events usually happen. I’m going to skip the parts of the code showing how we get pivoted daily data from the database, but they’re in <a class="reference external" href="http://swingleydev.com/blog/p/1976/">this post</a>.</p> <p>Starting with pivoted data, we want to look for dates from November through March with more than a tenth of an inch of precipitation, snowfall less than two tenths of an inch, and a daily high temperature above 20°F. Then we group by the winter year and month, and aggregate the rain events into a single event.
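</p>
<p>One detail worth calling out: the <tt class="docutils literal">winter_year</tt> calculation shifts each date back 92 days before taking the year, which assigns November and December to the same winter as the following January through March. A quick base-R illustration (dates chosen arbitrarily):</p>

```r
# Four dates from the same winter (Nov 2013 through Mar 2014).
dates <- as.Date(c("2013-11-15", "2013-12-31", "2014-01-10", "2014-03-20"))

# Subtracting 92 days pushes Jan-Mar dates into the previous calendar
# year, so every date in one winter shares a single "winter year".
winter_year <- as.integer(format(dates - 92, "%Y"))
winter_year   # 2013 2013 2013 2013
```

<p>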
These occurrences are rare enough that this aggregation shouldn’t combine events from different parts of the month.</p> <p>Here’s the R code:</p> <div class="highlight"><pre>winter_rain <span class="o">&lt;-</span> fai_pivot <span class="o">%&gt;%</span> mutate<span class="p">(</span>winter_year<span class="o">=</span>year<span class="p">(</span>dte <span class="o">-</span> days<span class="p">(</span><span class="m">92</span><span class="p">)),</span> wdoy<span class="o">=</span>yday<span class="p">(</span>dte <span class="o">+</span> days<span class="p">(</span><span class="m">61</span><span class="p">)),</span> month<span class="o">=</span>month<span class="p">(</span>dte<span class="p">),</span> SNOW<span class="o">=</span>ifelse<span class="p">(</span>is.na<span class="p">(</span>SNOW<span class="p">),</span> <span class="m">0</span><span class="p">,</span> SNOW<span class="p">),</span> TMAX<span class="o">=</span>TMAX<span class="o">*</span><span class="m">9</span><span class="o">/</span><span class="m">5+32</span><span class="p">,</span> TAVG<span class="o">=</span>TAVG<span class="o">*</span><span class="m">9</span><span class="o">/</span><span class="m">5+32</span><span class="p">,</span> TMIN<span class="o">=</span>TMIN<span class="o">*</span><span class="m">9</span><span class="o">/</span><span class="m">5+32</span><span class="p">,</span> PRCP<span class="o">=</span>PRCP<span class="o">/</span><span class="m">25.4</span><span class="p">,</span> SNOW<span class="o">=</span>SNOW<span class="o">/</span><span class="m">25.4</span><span class="p">)</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>station_name <span class="o">==</span> <span class="s">&#39;FAIRBANKS INTL AP&#39;</span><span class="p">,</span> winter_year <span class="o">&lt;</span> <span class="m">2014</span><span class="p">,</span> month <span class="o">%in%</span> c<span class="p">(</span><span class="m">11</span><span class="p">,</span> <span class="m">12</span><span
class="p">,</span> <span class="m">1</span><span class="p">,</span> <span class="m">2</span><span class="p">,</span> <span class="m">3</span><span class="p">),</span> TMAX <span class="o">&gt;</span> <span class="m">20</span><span class="p">,</span> PRCP <span class="o">&gt;</span> <span class="m">0.1</span><span class="p">,</span> SNOW <span class="o">&lt;</span> <span class="m">0.2</span><span class="p">)</span> <span class="o">%&gt;%</span> group_by<span class="p">(</span>winter_year<span class="p">,</span> month<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>date<span class="o">=</span>min<span class="p">(</span>dte<span class="p">),</span> tmax<span class="o">=</span>mean<span class="p">(</span>TMAX<span class="p">),</span> prcp<span class="o">=</span>sum<span class="p">(</span>PRCP<span class="p">),</span> days<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> ungroup<span class="p">()</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>month<span class="o">=</span>month<span class="p">(</span>date<span class="p">))</span> <span class="o">%&gt;%</span> select<span class="p">(</span>date<span class="p">,</span> month<span class="p">,</span> tmax<span class="p">,</span> prcp<span class="p">,</span> days<span class="p">)</span> <span class="o">%&gt;%</span> arrange<span class="p">(</span>date<span class="p">)</span> </pre></div> <p>And the results:</p> <table border="1" class="tosf docutils"> <caption>List of winter rain events, Fairbanks Airport</caption> <colgroup> <col width="22%" /> <col width="11%" /> <col width="29%" /> <col width="29%" /> <col width="9%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">Date</th> <th class="head">Month</th> <th class="head">Max temp (°F)</th> <th class="head">Rain (inches)</th> <th class="head">Days</th> </tr> </thead> <tbody valign="top"> <tr><td>1921-03-07</td> <td>3</td> <td>44.06</td> <td>0.338</td> <td>1</td> </tr> 
<tr><td>1923-02-06</td> <td>2</td> <td>33.98</td> <td>0.252</td> <td>1</td> </tr> <tr><td>1926-01-12</td> <td>1</td> <td>35.96</td> <td>0.142</td> <td>1</td> </tr> <tr><td>1928-03-02</td> <td>3</td> <td>39.02</td> <td>0.110</td> <td>1</td> </tr> <tr><td>1931-01-19</td> <td>1</td> <td>33.08</td> <td>0.130</td> <td>1</td> </tr> <tr><td>1933-11-03</td> <td>11</td> <td>41.00</td> <td>0.110</td> <td>1</td> </tr> <tr><td>1935-11-02</td> <td>11</td> <td>38.30</td> <td>0.752</td> <td>3</td> </tr> <tr><td>1936-11-24</td> <td>11</td> <td>37.04</td> <td>0.441</td> <td>1</td> </tr> <tr><td>1937-01-10</td> <td>1</td> <td>32.96</td> <td>1.362</td> <td>3</td> </tr> <tr><td>1948-11-10</td> <td>11</td> <td>48.02</td> <td>0.181</td> <td>1</td> </tr> <tr><td>1963-01-19</td> <td>1</td> <td>35.06</td> <td>0.441</td> <td>1</td> </tr> <tr><td>1965-03-29</td> <td>3</td> <td>35.96</td> <td>0.118</td> <td>1</td> </tr> <tr><td>1979-11-11</td> <td>11</td> <td>35.96</td> <td>0.201</td> <td>1</td> </tr> <tr><td>2003-02-08</td> <td>2</td> <td>34.97</td> <td>0.291</td> <td>2</td> </tr> <tr><td>2003-11-02</td> <td>11</td> <td>34.97</td> <td>0.268</td> <td>2</td> </tr> <tr><td>2010-11-22</td> <td>11</td> <td>34.34</td> <td>0.949</td> <td>3</td> </tr> </tbody> </table> <p>This year’s event doesn’t compare to 2010 when almost an inch of rain fell over the course of three days in November, but it does look like it comes at an unusual part of the year.</p> <p>Here are the counts and frequencies of winter rainfall events by month:</p> <div class="highlight"><pre>by_month <span class="o">&lt;-</span> winter_rain <span class="o">%&gt;%</span> group_by<span class="p">(</span>month<span class="p">)</span> <span class="o">%&gt;%</span> summarize<span class="p">(</span>n<span class="o">=</span>n<span class="p">())</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>freq<span class="o">=</span>n<span class="o">/</span>sum<span class="p">(</span>n<span class="p">)</span><span class="o">*</span><span
class="m">100</span><span class="p">)</span> </pre></div> <table border="1" class="tosf docutils"> <caption>Winter rain events by month</caption> <colgroup> <col width="38%" /> <col width="23%" /> <col width="38%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">Month</th> <th class="head">n</th> <th class="head">Freq</th> </tr> </thead> <tbody valign="top"> <tr><td>1</td> <td>4</td> <td>25.00</td> </tr> <tr><td>2</td> <td>2</td> <td>12.50</td> </tr> <tr><td>3</td> <td>3</td> <td>18.75</td> </tr> <tr><td>11</td> <td>7</td> <td>43.75</td> </tr> </tbody> </table> <p>There haven’t been any rain events in December, which is a little surprising; among the months that do see rain, February events are the least common.</p> <p>I looked at this two years ago (<a class="reference external" href="http://swingleydev.com/blog/p/1956/">Winter freezing rain</a>) using slightly different criteria. At the bottom of that post I looked at the frequency of rain events over time and concluded that they seem to come in cycles, but that the three events in this decade were a bad sign. Now we can add another rain event to the total for the 2010s.</p> </div> Sun, 22 Feb 2015 11:33:20 -0900 http://swingleydev.com/blog/p/1981/ rain R weather winter dplyr climate Checking your ISP http://swingleydev.com/blog/p/1980/ <div class="document"> <p><strong>Abstract</strong> (tl;dr)</p> <p>We’re getting some bad home Internet service from Alaska Communications, and it’s getting worse. There are clear patterns indicating lower quality service in the evening, and very poor download rates over the past couple of days. Scroll down to check out the plots.</p> <p><strong>Introduction</strong></p> <p>Over the past year we’ve started having trouble watching streaming video over our Internet connection. We’re paying around $100/month for phone, long distance and a 4 Mbps DSL Internet connection, which is a lot of money if we’re not getting a quality product.
The connection was pretty great when we first signed up (and frankly, it’s better than what a lot of people in Fairbanks have), but over time, the quality has degraded and despite having a technician out to take a look, it hasn’t gotten better.</p> <p><strong>Methods</strong></p> <p>In September last year I started monitoring our bandwidth, once every two hours, using the Python <tt class="docutils literal"><span class="pre">speedtest-cli</span></tt> tool, which uses <a class="reference external" href="http://www.speedtest.net/">speedtest.net</a> to get the data.</p> <p>To use it, install the package:</p> <div class="highlight"><pre><span class="nv">$ </span>pip install speedtest-cli </pre></div> <p>Then set up a <tt class="docutils literal">cron</tt> job on your server to run this once every two hours. I have it running on the Raspberry Pi that collects our weather data. I use this script, which appends data to a file each time it is run. You’ll want to change the server to whichever is closest and most reliable at your location.</p> <div class="highlight"><pre><span class="c">#! /bin/bash</span> <span class="nv">results_dir</span><span class="o">=</span><span class="s2">&quot;/path/to/results&quot;</span> date &gt;&gt; <span class="k">${</span><span class="nv">results_dir</span><span class="k">}</span>/speedtest_results speedtest --server 3191 --simple &gt;&gt; <span class="k">${</span><span class="nv">results_dir</span><span class="k">}</span>/speedtest_results </pre></div> <p>The raw output file just keeps growing, and looks like this:</p> <pre class="literal-block"> Mon Sep 1 09:20:08 AKDT 2014 Ping: 199.155 ms Download: 2.51 Mbits/s Upload: 0.60 Mbits/s Mon Sep 1 10:26:01 AKDT 2014 Ping: 158.118 ms Download: 3.73 Mbits/s Upload: 0.60 Mbits/s ... 
</pre> <p>This isn’t a very good format for analysis, so I wrote a <a class="reference external" href="http://media.swingleydev.com/img/blog/2015/02/parse_speedtest_results.py">Python script</a> to process the data into a tidy data set with one row per observation, and columns for ping time, download and upload rates as numbers.</p> <p>From here, we can look at the data in R. First, let’s see how our rates change during the day. One thing we’ve noticed is that our Internet will be fine until around seven or eight in the evening, at which point we can no longer stream video successfully. Hulu is particularly bad at handling a lower quality connection.</p> <p>Code to get the data and add some columns to group the data appropriately for plotting:</p> <div class="highlight"><pre><span class="c1">#! /usr/bin/env Rscript</span> <span class="c1"># Prep:</span> <span class="c1"># parse_speedtest_results.py speedtest_results speedtest_results.csv</span> library<span class="p">(</span>lubridate<span class="p">)</span> library<span class="p">(</span>ggplot2<span class="p">)</span> library<span class="p">(</span>dplyr<span class="p">)</span> speed <span class="o">&lt;-</span> read.csv<span class="p">(</span><span class="s">&#39;speedtest_results.csv&#39;</span><span class="p">,</span> header<span class="o">=</span><span class="kc">TRUE</span><span class="p">)</span> <span class="o">%&gt;%</span> tbl_df<span class="p">()</span> <span class="o">%&gt;%</span> mutate<span class="p">(</span>date<span class="o">=</span>ymd_hms<span class="p">(</span>as.character<span class="p">(</span>date<span class="p">)),</span> yyyymm<span class="o">=</span>paste<span class="p">(</span>year<span class="p">(</span>date<span class="p">),</span> sprintf<span class="p">(</span><span class="s">&quot;%02d&quot;</span><span class="p">,</span> month<span class="p">(</span>date<span class="p">)),</span> sep<span class="o">=</span><span class="s">&#39;-&#39;</span><span class="p">),</span> month<span 
class="o">=</span>month<span class="p">(</span>date<span class="p">),</span> hour<span class="o">=</span>hour<span class="p">(</span>date<span class="p">))</span> </pre></div> <p>Plot it:</p> <div class="highlight"><pre>q <span class="o">&lt;-</span> ggplot<span class="p">(</span>data<span class="o">=</span>speed<span class="p">,</span> aes<span class="p">(</span>x<span class="o">=</span>factor<span class="p">(</span>hour<span class="p">),</span> y<span class="o">=</span>download<span class="p">))</span> <span class="o">+</span> geom_boxplot<span class="p">()</span> <span class="o">+</span> scale_x_discrete<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Hour of the day&quot;</span><span class="p">)</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>name<span class="o">=</span><span class="s">&quot;Download speed (Mbps)&quot;</span><span class="p">)</span> <span class="o">+</span> ggtitle<span class="p">(</span>paste<span class="p">(</span><span class="s">&quot;Speedtest results (&quot;</span><span class="p">,</span> min<span class="p">(</span>floor_date<span class="p">(</span>speed<span class="o">$</span>date<span class="p">,</span> <span class="s">&quot;day&quot;</span><span class="p">)),</span> <span class="s">&quot; - &quot;</span> <span class="p">,</span> max<span class="p">(</span>floor_date<span class="p">(</span>speed<span class="o">$</span>date<span class="p">,</span> <span class="s">&quot;day&quot;</span><span class="p">)),</span> <span class="s">&quot;)&quot;</span><span class="p">,</span> sep<span class="o">=</span><span class="s">&quot;&quot;</span><span class="p">))</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> facet_wrap<span class="p">(</span><span class="o">~</span> yyyymm<span class="p">)</span> </pre></div> <p><strong>Results and Discussion</strong></p> <p>Here’s the result:</p> <div class="figure"> <a class="reference external image-reference" 
href="http://media.swingleydev.com/img/blog/2015/02/speedtest_by_hour_facet.pdf"><img alt="http://media.swingleydev.com/img/blog/2015/02/speedtest_by_hour_facet.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/02/speedtest_by_hour_facet.svg" /></a> <p class="caption">Download bandwidth boxplots by hour</p> </div> <p>Box and whisker plots (boxplots) show how data is distributed. The box represents the range where half the data lies (from the 25th to the 75th percentile) and the line through the box represents the median value. The vertical lines extending above and below the box (the whiskers) show where most of the rest of the observations are, and the dots are extreme values. The figure above has a single boxplot for each two-hour period, and the plots are split into month-long periods so we can see if there are any trends over time.</p> <p>There are some clear patterns across all months: our bandwidth is pretty close to what we’re paying for during most of the day. The boxes are all up near 4 Mbps and they’re skinny, indicating that most of the observations are close to 4 Mbps. Starting in the early evening, the boxes start getting larger, demonstrating that we’re not always getting our expected rate. The boxes are very large between eight and ten at night, which means we’re as likely to get 2 Mbps as the 4 we pay for.</p> <p>Patterns over time are also showing up. Starting in January, there’s another drop in our bandwidth around noon, and by February it’s rare that we’re getting the full speed of our connection at <em>any</em> time of day.</p> <p>One note: it is possible that some of the decline in our bandwidth during the evening is because the download script is competing with the other things we are doing on the Internet when we are home from work. 
This doesn’t explain the drop around noon, however, and when I look at the actual Internet usage diagrams collected from our router using SNMP / MRTG, it doesn’t appear that we are really using enough bandwidth to explain the dramatic and consistent drops seen in the plot above.</p> <p>February is starting to look different from the other months, so I took a closer look at the data for that month. I’m filtering the data to just February, and based on a look at the initial version of this plot, I added trend lines for the period before and after noon on the 12th of February.</p> <div class="highlight"><pre>library<span class="p">(</span>dplyr<span class="p">)</span> library<span class="p">(</span>lubridate<span class="p">)</span> library<span class="p">(</span>ggplot2<span class="p">)</span> library<span class="p">(</span>scales<span class="p">)</span> speeds <span class="o">&lt;-</span> tbl_df<span class="p">(</span>read.csv<span class="p">(</span><span class="s">&#39;speedtest_results.csv&#39;</span><span class="p">,</span> header<span class="o">=</span><span class="kc">TRUE</span><span class="p">))</span> speed_plot <span class="o">&lt;-</span> speeds <span class="o">%&gt;%</span> mutate<span class="p">(</span>date<span class="o">=</span>ymd_hms<span class="p">(</span>date<span class="p">),</span> grp<span class="o">=</span>ifelse<span class="p">(</span>date<span class="o">&lt;</span><span class="s">&#39;2015-02-12 12:00:00&#39;</span><span class="p">,</span> <span class="s">&#39;before&#39;</span><span class="p">,</span> <span class="s">&#39;after&#39;</span><span class="p">))</span> <span class="o">%&gt;%</span> filter<span class="p">(</span>date <span class="o">&gt;</span> <span class="s">&#39;2015-01-31 23:59:59&#39;</span><span class="p">)</span> <span class="o">%&gt;%</span> ggplot<span class="p">(</span>aes<span class="p">(</span>x<span class="o">=</span>date<span class="p">,</span> y<span class="o">=</span>download<span class="p">))</span> <span 
class="o">+</span> geom_point<span class="p">()</span> <span class="o">+</span> theme_bw<span class="p">()</span> <span class="o">+</span> geom_smooth<span class="p">(</span>aes<span class="p">(</span>group<span class="o">=</span>grp<span class="p">),</span> method<span class="o">=</span><span class="s">&quot;lm&quot;</span><span class="p">,</span> se<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span> scale_y_continuous<span class="p">(</span>limits<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="m">4.5</span><span class="p">),</span> breaks<span class="o">=</span>c<span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="m">1</span><span class="p">,</span><span class="m">2</span><span class="p">,</span><span class="m">3</span><span class="p">,</span><span class="m">4</span><span class="p">),</span> name<span class="o">=</span><span class="s">&quot;Download speed (Mbps)&quot;</span><span class="p">)</span> <span class="o">+</span> theme<span class="p">(</span>axis.title.x<span class="o">=</span>element_blank<span class="p">())</span> </pre></div> <p>The result:</p> <div class="figure"> <a class="reference external image-reference" href="http://media.swingleydev.com/img/blog/2015/02/bad_feb_2015_internet.pdf"><img alt="http://media.swingleydev.com/img/blog/2015/02/bad_feb_2015_internet.svg" class="img-responsive" src="http://media.swingleydev.com/img/blog/2015/02/bad_feb_2015_internet.svg" /></a> <p class="caption">February download speeds</p> </div> <p>Ouch. Throughout the month our bandwidth has been going down, but you can also see that after noon on the 12th, we’re no longer getting 4 Mbps no matter what time of day it is. 
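<p>The two trend lines in the figure are just per-group linear fits (that’s what <tt class="docutils literal">geom_smooth(method=&quot;lm&quot;)</tt> does with the <tt class="docutils literal">grp</tt> aesthetic). The same split-and-fit idea can be sketched in plain Python; the speeds and slopes below are made up for illustration, not taken from the real speedtest data:</p>

```python
# Sketch: fit separate linear trends before and after a breakpoint,
# mirroring the grp=ifelse(...) split in the R code above.
# All numbers here are made up for illustration.

def linear_trend(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Fake download speeds (day of February, Mbps): a slow decline before
# the 12th and a lower, faster-declining series after it.
before = [(d, 4.2 - 0.02 * d) for d in range(1, 12)]
after = [(d, 2.4 - 0.05 * d) for d in range(12, 28)]

for label, grp in (("before", before), ("after", after)):
    a, b = linear_trend([d for d, _ in grp], [s for _, s in grp])
    print("%s: slope %+.3f Mbps/day, intercept %.2f Mbps" % (label, b, a))
```

<p>Because the fake points lie exactly on a line, the fitted slopes come back as the -0.02 and -0.05 used to generate them; real observations scatter around the line, which is why the statistical significance of the fit matters.</p>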
The trend line probably isn’t statistically significant for this period, but it’s clear that our Internet service, for which we pay a lot of money, is getting worse and worse, now averaging less than 2 Mbps.</p> <p><strong>Conclusion</strong></p> <p>I think there’s enough evidence here that we aren’t getting what we are paying for from our ISP. Time to contact Alaska Communications and get them to either reduce our rates based on the poor quality of service they are providing, or upgrade their equipment to handle the traffic on our line. I suspect they oversold the connection and the equipment can’t handle all the users trying to get their full bandwidth at the same time.</p> </div> Sun, 15 Feb 2015 09:20:35 -0900 http://swingleydev.com/blog/p/1980/ Alaska Communications R Python bandwidth Cold snap, record lengths http://swingleydev.com/blog/p/1979/ <div class="document"> <p>Whenever we’re in the middle of a cold snap, as we are right now, I’m tempted to see how the current snap compares to those in the past. The one we’re in right now isn’t all that bad: sixteen days in a row where the minimum temperature is colder than −20°F. In some years, such a threshold wouldn’t even qualify as the definition of a “cold snap,” but right now, it feels like one.</p> <p>Getting the length of consecutive runs of days from a database isn’t simple. What we’ll do is get a list of all the days where the minimum daily temperature was <em>warmer</em> than −20°F. Then go through each record and count the number of days between the current row and the next one. 
Most of these will be one, but when the number of days is greater than one, that means there’s one or more observations in between the “warm” days where the minimum temperature was <em>colder</em> than −20°F (or there was missing data).</p> <p>For example, given this set of dates and temperatures from earlier this year:</p> <table border="1" class="docutils"> <colgroup> <col width="60%" /> <col width="40%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">date</th> <th class="head">tmin_f</th> </tr> </thead> <tbody valign="top"> <tr><td>2015‑01‑02</td> <td>−15</td> </tr> <tr><td>2015‑01‑03</td> <td>−20</td> </tr> <tr><td>2015‑01‑04</td> <td>−26</td> </tr> <tr><td>2015‑01‑05</td> <td>−30</td> </tr> <tr><td>2015‑01‑06</td> <td>−30</td> </tr> <tr><td>2015‑01‑07</td> <td>−26</td> </tr> <tr><td>2015‑01‑08</td> <td>−17</td> </tr> </tbody> </table> <p>Once we select for rows where the temperature is above −20°F we get this:</p> <table border="1" class="docutils"> <colgroup> <col width="60%" /> <col width="40%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">date</th> <th class="head">tmin_f</th> </tr> </thead> <tbody valign="top"> <tr><td>2015‑01‑02</td> <td>−15</td> </tr> <tr><td>2015‑01‑08</td> <td>−17</td> </tr> </tbody> </table> <p>Now we can grab the start and end of the period (January 2nd + one day and January 8th - one day) and get the length of the cold snap. You can see why missing data would be a problem, since it would create a gap that isn’t necessarily due to cold temperatures.</p> <p>I couldn't figure out how to get the time periods and check them for validity all in one step, so I wrote a simple function that counts the days with valid data between two dates, then used this function in the real query. 
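<p>As a cross-check before committing to SQL, the whole procedure can be prototyped in Python. This is only a sketch on the made-up temperatures from the example tables above; the real query runs against the <tt class="docutils literal">ghcnd_pivot</tt> table:</p>

```python
from datetime import date, timedelta

# The made-up (date, tmin_f) records from the example tables above.
records = {
    date(2015, 1, 2): -15, date(2015, 1, 3): -20, date(2015, 1, 4): -26,
    date(2015, 1, 5): -30, date(2015, 1, 6): -30, date(2015, 1, 7): -26,
    date(2015, 1, 8): -17,
}

# Keep only the "warm" days, where tmin is warmer than -20 F.
warm = sorted(d for d, t in records.items() if t > -20)

# Gaps between consecutive warm days: a gap of n days brackets a cold
# snap of n - 1 days (the lead(dte) OVER (ORDER BY dte) step in SQL).
snaps = []
for cur, nxt in zip(warm, warm[1:]):
    gap = (nxt - cur).days
    if gap > 1:
        start, end = cur + timedelta(days=1), nxt - timedelta(days=1)
        # Like valid_n(): keep the snap only if every day in it has data,
        # so gaps caused by missing observations don't count.
        if all(start + timedelta(days=i) in records for i in range(gap - 1)):
            snaps.append((start, end, gap - 1))

for start, end, days in snaps:
    print(start, end, days)  # 2015-01-03 2015-01-07 5
```

<p>On the example data this finds the single five-day snap from January 3rd through the 7th, matching the walk-through above.</p>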
Only periods with non-null data on each day during the cold snap were included.</p> <div class="highlight"><pre><span class="k">CREATE</span> <span class="k">FUNCTION</span> <span class="n">valid_n</span><span class="p">(</span><span class="nb">date</span><span class="p">,</span> <span class="nb">date</span><span class="p">)</span> <span class="k">RETURNS</span> <span class="nb">bigint</span> <span class="k">AS</span> <span class="s1">&#39;SELECT count(*)</span> <span class="s1"> FROM ghcnd_pivot</span> <span class="s1"> WHERE station_name = &#39;&#39;FAIRBANKS INTL AP&#39;&#39;</span> <span class="s1"> AND dte BETWEEN $1 AND $2</span> <span class="s1"> AND tmin_c IS NOT NULL&#39;</span> <span class="k">LANGUAGE</span> <span class="k">SQL</span> <span class="k">RETURNS</span> <span class="k">NULL</span> <span class="k">ON</span> <span class="k">NULL</span> <span class="k">INPUT</span><span class="p">;</span> </pre></div> <p>Here we go:</p> <div class="highlight"><pre><span class="k">SELECT</span> <span class="n">rank</span><span class="p">()</span> <span class="n">OVER</span> <span class="p">(</span><span class="k">ORDER</span> <span class="k">BY</span> <span class="n">days</span> <span class="k">DESC</span><span class="p">)</span> <span class="k">AS</span> <span class="n">rank</span><span class="p">,</span> <span class="k">start</span><span class="p">,</span> <span class="ss">&quot;end&quot;</span><span class="p">,</span> <span class="n">days</span> <span class="k">FROM</span> <span class="p">(</span> <span class="k">SELECT</span> <span class="k">start</span> <span class="o">+</span> <span class="nb">interval</span> <span class="s1">&#39;1 day&#39;</span> <span class="k">AS</span> <span class="k">start</span><span class="p">,</span> <span class="ss">&quot;end&quot;</span> <span class="o">-</span> <span class="nb">interval</span> <span class="s1">&#39;1 day&#39;</span> <span class="k">AS</span> <span class="k">end</span><span class="p">,</span> <span 
class="n">interv</span> <span class="o">-</span> <span class="mi">1</span> <span class="k">AS</span> <span class="n">days</span><span class="p">,</span> <span class="n">valid_n</span><span class="p">(</span><span class="nb">date</span><span class="p">(</span><span class="k">start</span> <span class="o">+</span> <span class="nb">interval</span> <span class="s1">&#39;1 day&#39;</span><span class="p">),</span> <span class="nb">date</span><span class="p">(</span><span class="ss">&quot;end&quot;</span> <span class="o">-</span> <span class="nb">interval</span> <span class="s1">&#39;1 day&#39;</span><span class="p">))</span> <span class="k">as</span> <span class="n">valid_n</span> <span class="k">FROM</span> <span class="p">(</span> <span class="k">SELECT</span> <span class="n">dte</span> <span class="k">AS</span> <span class="k">start</span><span class="p">,</span> <span class="n">lead</span><span class="p">(</span><span class="n">dte</span><span class="p">)</span> <span class="n">OVER</span> <span class="p">(</span><span class="k">ORDER</span> <span class="k">BY</span> <span class="n">dte</span><span class="p">)</span> <span class="k">AS</span> <span class="k">end</span><span class="p">,</span> <span class="n">lead</span><span class="p">(</span><span class="n">dte</span><span class="p">)</span> <span class="n">OVER</span> <span class="p">(</span><span class="k">ORDER</span> <span class="k">BY</span> <span class="n">dte</span><span class="p">)</span> <span class="o">-</span> <span class="n">dte</span> <span class="k">AS</span> <span class="n">interv</span> <span class="k">FROM</span> <span class="p">(</span> <span class="k">SELECT</span> <span class="n">dte</span> <span class="k">FROM</span> <span class="n">ghcnd_pivot</span> <span class="k">WHERE</span> <span class="n">station_name</span> <span class="o">=</span> <span class="s1">&#39;FAIRBANKS INTL AP&#39;</span> <span class="k">AND</span> <span class="n">tmin_c</span> <span class="o">&gt;</span> <span 
class="n">f_to_c</span><span class="p">(</span><span class="o">-</span><span class="mi">20</span><span class="p">)</span> <span class="p">)</span> <span class="k">AS</span> <span class="n">foo</span> <span class="p">)</span> <span class="k">AS</span> <span class="n">bar</span> <span class="k">WHERE</span> <span class="n">interv</span> <span class="o">&gt;=</span> <span class="mi">17</span> <span class="p">)</span> <span class="k">AS</span> <span class="n">f</span> <span class="k">WHERE</span> <span class="n">days</span> <span class="o">=</span> <span class="n">valid_n</span> <span class="k">ORDER</span> <span class="k">BY</span> <span class="n">days</span> <span class="k">DESC</span><span class="p">;</span> </pre></div> <p>And the top 10:</p> <table border="1" class="tosf docutils"> <caption>Top ten longest cold snaps (−20°F or colder minimum temp)</caption> <colgroup> <col width="15%" /> <col width="30%" /> <col width="30%" /> <col width="25%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">rank</th> <th class="head">start</th> <th class="head">end</th> <th class="head">days</th> </tr> </thead> <tbody valign="top"> <tr><td>1</td> <td>1917‑11‑26</td> <td>1918‑01‑01</td> <td>37</td> </tr> <tr><td>2</td> <td>1909‑01‑13</td> <td>1909‑02‑12</td> <td><span class="lg">31</span></td> </tr> <tr><td>3</td> <td>1948‑11‑17</td> <td>1948‑12‑13</td> <td>27</td> </tr> <tr><td>4</td> <td>1925‑01‑16</td> <td>1925‑02‑10</td> <td>26</td> </tr> <tr><td>4</td> <td>1947‑01‑12</td> <td>1947‑02‑06</td> <td><span class="lg">26</span></td> </tr> <tr><td>4</td> <td>1943‑01‑02</td> <td>1943‑01‑27</td> <td>26</td> </tr> <tr><td>4</td> <td>1968‑12‑26</td> <td>1969‑01‑20</td> <td>26</td> </tr> <tr><td>4</td> <td>1979‑02‑01</td> <td>1979‑02‑26</td> <td><span class="lg">26</span></td> </tr> <tr><td>9</td> <td>1980‑12‑06</td> <td>1980‑12‑30</td> <td>25</td> </tr> <tr><td>9</td> <td>1930‑01‑28</td> <td>1930‑02‑21</td> <td>25</td> </tr> </tbody> </table> <p>There have been seven cold 
snaps that lasted 16 days (including the one we’re currently in), tied for 45th place.</p> <p>Keep in mind that defining days where the daily minimum is −20°F or colder is a pretty generous definition of a cold snap. If we require the minimum temperatures be below −40° the lengths are considerably shorter:</p> <table border="1" class="tosf docutils"> <caption>Top ten longest cold snaps (−40° or colder minimum temp)</caption> <colgroup> <col width="15%" /> <col width="30%" /> <col width="30%" /> <col width="25%" /> </colgroup> <thead valign="bottom"> <tr><th class="head">rank</th> <th class="head">start</th> <th class="head">end</th> <th class="head">days</th> </tr> </thead> <tbody valign="top"> <tr><td>1</td> <td>1964‑12‑25</td> <td>1965‑01‑11</td> <td>18</td> </tr> <tr><td>2</td> <td>1973‑01‑12</td> <td>1973‑01‑26</td> <td>15</td> </tr> <tr><td>2</td> <td>1961‑12‑16</td> <td>1961‑12‑30</td> <td>15</td> </tr> <tr><td>2</td> <td>2008‑12‑28</td> <td>2009‑01‑11</td> <td>15</td> </tr> <tr><td>5</td> <td>1950‑02‑04</td> <td>1950‑02‑17</td> <td>14</td> </tr> <tr><td>5</td> <td>1989‑01‑18</td> <td>1989‑01‑31</td> <td>14</td> </tr> <tr><td>5</td> <td>1979‑02‑03</td> <td>1979‑02‑16</td> <td><span class="lg">14</span></td> </tr> <tr><td>5</td> <td>1947‑01‑23</td> <td>1947‑02‑05</td> <td><span class="lg">14</span></td> </tr> <tr><td>9</td> <td>1909‑01‑14</td> <td>1909‑01‑25</td> <td><span class="lg">12</span></td> </tr> <tr><td>9</td> <td>1942‑12‑15</td> <td>1942‑12‑26</td> <td>12</td> </tr> <tr><td>9</td> <td>1932‑02‑18</td> <td>1932‑02‑29</td> <td>12</td> </tr> <tr><td>9</td> <td>1935‑12‑02</td> <td>1935‑12‑13</td> <td>12</td> </tr> <tr><td>9</td> <td>1951‑01‑14</td> <td>1951‑01‑25</td> <td>12</td> </tr> </tbody> </table> <p>I think it’s also interesting that only three (marked with a grey background) of the top ten cold snaps defined at −20°F appear in those that have a −40° threshold.</p> </div> Sun, 08 Feb 2015 14:13:47 -0900 http://swingleydev.com/blog/p/1979/ 
temperature weather climate sql cold snap 2015 Tournament of Books http://swingleydev.com/blog/p/1978/ <div class="document"> <div class="figure align-right"> <img alt="http://www.themorningnews.org/tob/images/logo.gif" src="http://www.themorningnews.org/tob/images/logo.gif" style="width: 115px; height: 115px;" /> </div> <p>I’ve been a bit behind on mentioning the 2015 <a class="reference external" href="http://www.themorningnews.org/article/announcing-the-morning-news-2015-tournament-of-books">Tournament of Books</a>. The contestants were announced last month. As usual, here’s the list with a three star rating system for those I've read: ☆&nbsp;-&nbsp;not worthy, ☆☆&nbsp;-&nbsp;good, ★★★&nbsp;-&nbsp;great.</p> <ul class="simple"> <li><a class="reference external" href="http://www.powells.com/biblio/9780345805522">Silence Once Begun</a> by Jesse Ball ☆☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780062280008">A Brave Man Seven Storeys Tall</a> by Will Chancellor ☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781476746586">All the Light We Cannot See</a> by Anthony Doerr ★★★</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781609452339">Those Who Leave and Those Who Stay</a> by Elena Ferrante</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780802122513">An Untamed State</a> by Roxane Gay ★★★</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781612193762">Wittgenstein Jr</a> by Lars Iyer</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781594486005">A Brief History of Seven Killings</a> by Marlon James</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781594204999">Redeployment</a> by Phil Klay</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780385353304">Station Eleven</a> by Emily St. 
John Mandel ☆☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781400065677">The Bone Clocks</a> by David Mitchell ★★★</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781594205712">Everything I Never Told You</a> by Celeste Ng ☆☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780345806871">Dept. of Speculation</a> by Jenny Offill ★★★</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780544142930">Adam</a> by Ariel Schrag</li> <li><a class="reference external" href="http://www.powells.com/biblio/9781594633119">The Paying Guests</a> by Sarah Waters ☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780374104092">Annihilation</a> by Jeff VanderMeer ☆☆</li> <li><a class="reference external" href="http://www.powells.com/biblio/9780307907769">All the Birds, Singing</a> by Evie Wyld ★★★</li> </ul> <p>Thus far, my early favorite is, of course, <em>The Bone Clocks</em> by David Mitchell. It's a fantastic book, similar in design to <em>Cloud Atlas</em>, but better. Both <em>All the Light We Cannot See</em> and <em>Dept. of Speculation</em> are distant runners-up. <em>All the Light</em> is a great story, told in very short and easy-to-digest chapters, and <em>Speculation</em> is a funny, heartrending, strange, and ultimately redemptive story of marriage.</p> </div> Sun, 01 Feb 2015 08:48:39 -0900 http://swingleydev.com/blog/p/1978/ Tournament of Books books tob