Visualizing Global Warming through Shiny
Contributed by Steven Ginsberg. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp taking place between April 11th and July 1st, 2016. This post is based on his second class project, R Shiny, due in the 2nd week of the program.
My Shiny application (R Code) looks at historical temperatures in an effort to visualize global warming. NOAA provides worldwide average monthly temperature recordings going all the way back to 1743. The full data set includes 465,000 records from 7,300 weather stations worldwide. Filtering for the United States only reduced the data set to 181,000 records and 1,900 stations. In addition, I decided to start the presentation at 1840, just before the number of stations began to increase.
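For reference, this initial filtering amounts to a couple of lines of dplyr. The column names below (`country`, `year`) are placeholders of my own, not the NOAA file's actual field names.

```r
library(dplyr)

# Keep only US stations and start the presentation at 1840.
# `temps`, `country`, and `year` are assumed names for the loaded data.
us_temps <- temps %>%
  filter(country == "US", year >= 1840)
```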
Additional data is available, such as minimum and maximum temperatures as well as precipitation. While these would be useful for a variability study, I decided to put that off for another time.
Cleaning up the Data
The data was relatively clean right off the website. The first issue I encountered was that missing data points were filled in with -9999, making averages and minimums useless, so these were replaced with NA's. The temperatures were stored as integers representing degrees Celsius × 100. So, divide by 100 to recover the two decimal places, convert to Fahrenheit, and they're good to go.
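A minimal sketch of that cleanup, assuming the raw values sit in a column I'm calling `temp` in a data frame `temps` (the names are mine, not the NOAA file's):

```r
library(dplyr)

temps <- temps %>%
  mutate(
    temp   = ifelse(temp == -9999, NA, temp),  # -9999 is the missing-value sentinel
    temp_c = temp / 100,                       # values are stored as degrees Celsius * 100
    temp_f = temp_c * 9 / 5 + 32               # convert to Fahrenheit
  )
```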
NA's accounted for 83,000 data points, or 0.68% of the total. I tried replacing the NA's using k-nearest neighbors imputation in R, but the data set was too large for the available memory. I did not feel mean or random imputation was appropriate, so I left the NA's in place and ignored them in all average calculations.
A potentially bigger problem than the NA's was some bad data: a small number of data points had temperatures over 200 degrees, which throws off averages, maximums, and the entire scale of the chart. These were also replaced with NA's, and then I was ready to build the application.
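Continuing the sketch above, the implausible readings get the same treatment, and the averages simply skip the NA's:

```r
temps <- temps %>%
  mutate(temp_f = ifelse(temp_f > 200, NA, temp_f))  # anything over 200 degrees is bad data

# NA's are ignored in every average calculation
mean(temps$temp_f, na.rm = TRUE)
```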
The Data Application
The application is simple, with two views of the data: a Chart and a Map. I used the shinydashboard package to lay out the components and googleVis for the visualizations.
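Roughly, the layout looks like the sketch below. The widget ids, labels, and tab names are illustrative, not the application's actual code.

```r
library(shiny)
library(shinydashboard)

ui <- dashboardPage(
  dashboardHeader(title = "US Temperatures"),
  dashboardSidebar(
    sliderInput("year", "Year", min = 1840, max = 2016, value = 1840, sep = ""),
    sliderInput("month", "Month", min = 1, max = 12, value = c(1, 12)),
    sidebarMenu(
      menuItem("Chart", tabName = "chart"),
      menuItem("Map", tabName = "map")
    )
  ),
  dashboardBody(
    tabItems(
      tabItem("chart", htmlOutput("chart")),  # googleVis plots render as HTML
      tabItem("map", htmlOutput("map"))
    )
  )
)
```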
The Chart View
The chart displays average temperature over time. The “Year” slider changes the start year, while the end year is fixed at 2016. The “Month” slider lets the user change the period averaged: a single month, all 12 months, or a seasonal range if desired.
For each data set selected, the red line shows the linear trend. While useful, it is important to note that it is not a scientifically or mathematically rigorous proof for or against global warming, for a few reasons:
- Taking an average of monthly averages is mathematically inaccurate, and weighs months evenly regardless of the number of days in a month;
- NA’s are simply ignored, and there was no analysis of whether there is any bias or relationship among these missing data points;
- In addition to the above, there was no analysis of outliers, location bias, etc.
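With those caveats in mind, the chart itself boils down to something like the sketch below (inside the server function): filter by the sliders, average by year while skipping NA's, fit a straight line with lm(), and hand both series to googleVis. The column names and input ids are assumptions carried over from the earlier sketches.

```r
library(dplyr)
library(googleVis)

output$chart <- renderGvis({
  yearly <- temps %>%
    filter(year >= input$year,
           month >= input$month[1], month <= input$month[2]) %>%
    group_by(year) %>%
    summarise(avg_f = mean(temp_f, na.rm = TRUE)) %>%
    ungroup()

  fit <- lm(avg_f ~ year, data = yearly)          # simple linear trend
  yearly$trend <- predict(fit, newdata = yearly)

  gvisLineChart(yearly, xvar = "year", yvar = c("avg_f", "trend"),
                options = list(series = "[{color:'blue'}, {color:'red'}]"))
})
```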
The Map View of Data
The map view shows temperatures by location. The “Year” slider selects the year to display (a single year this time, not a range), and the “Month” slider selects the months to include in the average. In all cases, temperatures range from blue (cold) to orange (hot), and green dots mark NA’s.
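The map view can be sketched the same way: gvisGeoChart accepts point locations as a "lat:long" string, and the blue-to-orange scale comes from the colorAxis option. As before, the column names are assumed, not taken from the actual application.

```r
library(dplyr)
library(googleVis)

output$map <- renderGvis({
  stations <- temps %>%
    filter(year == input$year,                       # a single year, not a range
           month >= input$month[1], month <= input$month[2]) %>%
    group_by(station, latitude, longitude) %>%
    summarise(avg_f = mean(temp_f, na.rm = TRUE)) %>%
    ungroup() %>%
    mutate(latlong = paste(latitude, longitude, sep = ":"))

  gvisGeoChart(stations, locationvar = "latlong", colorvar = "avg_f",
               options = list(region = "US", displayMode = "markers",
                              colorAxis = "{colors:['blue', 'orange']}"))
})
```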
Conclusion
Playing with the chart for a while reveals some interesting patterns. Generally, we see an upward trend in average temperatures, though over the past 20 years this seems to have leveled off. In addition, during this period it appears January through March is getting colder, while the rest of the year is level or warmer. Again, take all of this with a grain of salt: this is an example of what can be done from a visual perspective rather than a statistically rigorous study.