Detecting Anomalous Events in the Washington DC Bike Sharing Dataset
Contributed by Shin Chin. He is one of the remote students in the NYC Data Science Academy 12-week full-time Data Science Online Bootcamp program taking place from January 11th to April 1st, 2016. This post is based on his data visualization project.
We want to detect anomalous events in the Washington DC Bike Sharing dataset.
Data Set Information:
Bike sharing systems are a new generation of traditional bike rentals in which the whole process, from membership to rental and return, has become automatic. Through these systems, a user can easily rent a bike at one location and return it at another. There are currently over 500 bike-sharing programs around the world, comprising more than 500 thousand bicycles. Today, there is great interest in these systems due to their important role in traffic, environmental and health issues.
Apart from interesting real-world applications of bike sharing systems, the characteristics of the data generated by these systems make them attractive for research. As opposed to other transport services such as bus or subway, the duration of travel and the departure and arrival positions are explicitly recorded in these systems. This feature turns a bike sharing system into a virtual sensor network that can be used to sense mobility in the city. Hence, it is expected that most important events in the city could be detected by monitoring these data.
Attribute Information:
Both hour.csv and day.csv have the following fields, except hr, which is not available in day.csv:
- instant: record index
- dteday : date
- season : season (1:spring, 2:summer, 3:fall, 4:winter)
- yr : year (0: 2011, 1:2012)
- mnth : month ( 1 to 12)
- hr : hour (0 to 23)
- holiday : whether the day is a holiday or not (extracted from [Web Link])
- weekday : day of the week
- workingday : 1 if the day is neither a weekend nor a holiday, otherwise 0.
- weathersit :
- 1: Clear, Few clouds, Partly cloudy
- 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
- 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
- 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
- temp : Normalized temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-8, t_max=+39 (only in hourly scale)
- atemp: Normalized feeling temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-16, t_max=+50 (only in hourly scale)
- hum: Normalized humidity. The values are divided by 100 (max)
- windspeed: Normalized wind speed. The values are divided by 67 (max)
- casual: count of casual users
- registered: count of registered users
- cnt: count of total rental bikes including both casual and registered
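Since temp, atemp, hum and windspeed are all stored normalized, it can be handy to convert them back to physical units. A minimal sketch inverting the formulas above (the constants come from the attribute description):

```r
# Invert (t - t_min)/(t_max - t_min) to recover raw units.
denormalize <- function(x, x_min, x_max) x * (x_max - x_min) + x_min

denormalize(0.5, -8, 39)   # normalized temp 0.5 -> 15.5 degrees Celsius
0.8 * 100                  # hum was divided by 100 -> 80 percent humidity
0.3 * 67                   # windspeed was divided by 67 -> 20.1 raw units
```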
First we load the required libraries and read in the "day" data from the bike sharing dataset.
rm(list=ls())
library('ggplot2')
## Warning: package 'ggplot2' was built under R version 3.2.4
library('dplyr')
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
day <- read.csv("~/NYCDataScience_course/Final_Project/Bike-Sharing-Dataset/day.csv")
#View(day)
Next we set season, yr, mnth, holiday, weekday, workingday and weathersit as factors.
day$season = as.factor(day$season)
day$yr = as.factor(day$yr)
day$mnth = as.factor(day$mnth)
day$holiday = as.factor(day$holiday)
day$weekday = as.factor(day$weekday)
day$workingday = as.factor(day$workingday)
day$weathersit = as.factor(day$weathersit)
We now do some visualizations by plotting various variables against each other, with points color-coded according to the magnitude of cnt.
scale_color_gradient(low = "#132B43", high = "#56B1F7", space = "Lab", na.value = "grey50", guide = "colourbar")
## <ScaleContinuous>
## Range:
## Limits: 0 -- 1
d1 <- qplot(atemp, hum, data=day, color=cnt)
d1 + scale_colour_gradient(limits=c(22, 8714))
d2 <- qplot(temp, hum, data=day, color=cnt)
d2 + scale_colour_gradient(limits=c(22, 8714))
d3 <- qplot(windspeed, temp, data=day, color=cnt)
d3 + scale_colour_gradient(limits=c(22, 8714))
d3 <- qplot(windspeed, hum, data=day, color=cnt)
d3 + scale_colour_gradient(limits=c(22, 8714))
As you can see from the plots, cnt increases with increasing temp as well as with decreasing windspeed. Humidity may or may not have some effect on cnt.
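These visual impressions can be checked numerically with simple correlations; a minimal sketch, run on the day data frame loaded above:

```r
# Correlation of each continuous predictor with cnt; the signs should match
# the plots: positive for temp/atemp, negative for windspeed.
round(cor(day[, c("temp", "atemp", "hum", "windspeed")], day$cnt), 3)
```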
Next we plot histograms of cnt, faceted by various factor variables.
qplot(cnt, data=day, binwidth=100) + facet_wrap(~weathersit)
by_weather <-group_by(day, weathersit)
summarize(by_weather, total=sum(cnt))
## Source: local data frame [3 x 2]
##
## weathersit total
## (fctr) (int)
## 1 1 2257952
## 2 2 996858
## 3 3 37869
qplot(cnt, data=day, binwidth=100) + facet_wrap(~holiday)
by_holiday <-group_by(day, holiday)
summarize(by_holiday, total=sum(cnt))
## Source: local data frame [2 x 2]
##
## holiday total
## (fctr) (int)
## 1 0 3214244
## 2 1 78435
qplot(cnt, data=day, binwidth=100) + facet_wrap(~workingday)
by_workingday <-group_by(day, workingday)
summarize(by_workingday, total=sum(cnt))
## Source: local data frame [2 x 2]
##
## workingday total
## (fctr) (int)
## 1 0 1000269
## 2 1 2292410
qplot(cnt, data=day, binwidth=100) + facet_wrap(~season)
by_season <-group_by(day, season)
summarize(by_season, total=sum(cnt))
## Source: local data frame [4 x 2]
##
## season total
## (fctr) (int)
## 1 1 471348
## 2 2 918589
## 3 3 1061129
## 4 4 841613
qplot(cnt, data=day, binwidth=100) + facet_wrap(~weekday)
by_weekday <-group_by(day, weekday)
summarize(by_weekday, total=sum(cnt))
## Source: local data frame [7 x 2]
##
## weekday total
## (fctr) (int)
## 1 0 444027
## 2 1 455503
## 3 2 469109
## 4 3 473048
## 5 4 485395
## 6 5 487790
## 7 6 477807
cnt is greatest for weathersit = 1, then 2, with low values for 3.
cnt is much larger when holiday = 0 than when holiday = 1.
cnt is much larger for workingday = 1 than for 0.
cnt is largest for season = 3, followed by 2, 4 and 1.
cnt is roughly similar across all values of weekday.
We now perform multiple linear regression on the "day" data.
model.saturated = lm(cnt ~ . -instant -dteday -casual -registered, data = day)
summary(model.saturated)
##
## Call:
## lm(formula = cnt ~ . - instant - dteday - casual - registered,
## data = day)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3944.7 -348.2 63.8 457.4 2912.7
##
## Coefficients: (1 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1485.84 239.75 6.198 9.77e-10 ***
## season2 884.71 179.49 4.929 1.03e-06 ***
## season3 832.70 213.13 3.907 0.000102 ***
## season4 1575.35 181.00 8.704 < 2e-16 ***
## yr1 2019.74 58.22 34.691 < 2e-16 ***
## mnth2 131.03 143.78 0.911 0.362443
## mnth3 542.83 165.43 3.281 0.001085 **
## mnth4 451.17 247.57 1.822 0.068820 .
## mnth5 735.51 267.63 2.748 0.006145 **
## mnth6 515.40 282.41 1.825 0.068423 .
## mnth7 30.80 313.82 0.098 0.921854
## mnth8 444.95 303.17 1.468 0.142639
## mnth9 1004.17 265.12 3.788 0.000165 ***
## mnth10 519.67 241.55 2.151 0.031787 *
## mnth11 -116.69 230.78 -0.506 0.613257
## mnth12 -89.59 182.21 -0.492 0.623098
## holiday1 -589.70 180.36 -3.270 0.001130 **
## weekday1 212.05 109.49 1.937 0.053187 .
## weekday2 309.53 107.13 2.889 0.003982 **
## weekday3 381.36 107.48 3.548 0.000414 ***
## weekday4 386.34 107.53 3.593 0.000350 ***
## weekday5 436.98 107.44 4.067 5.30e-05 ***
## weekday6 440.46 106.56 4.133 4.01e-05 ***
## workingday1 NA NA NA NA
## weathersit2 -462.54 77.09 -6.000 3.16e-09 ***
## weathersit3 -1965.09 197.05 -9.972 < 2e-16 ***
## temp 2855.01 1398.16 2.042 0.041526 *
## atemp 1786.16 1462.12 1.222 0.222261
## hum -1535.47 292.45 -5.250 2.01e-07 ***
## windspeed -2823.30 414.55 -6.810 2.09e-11 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 769.2 on 702 degrees of freedom
## Multiple R-squared: 0.8484, Adjusted R-squared: 0.8423
## F-statistic: 140.3 on 28 and 702 DF, p-value: < 2.2e-16
plot(model.saturated) #Assessing the assumptions of the model.
library(car) #Companion to applied regression.
influencePlot(model.saturated)
## StudRes Hat CookD
## 595 2.112057 0.64959535 0.5326909
## 669 -5.336784 0.04056466 0.1998979
alias(model.saturated)
## Model :
## cnt ~ (instant + dteday + season + yr + mnth + holiday + weekday +
## workingday + weathersit + temp + atemp + hum + windspeed +
## casual + registered) - instant - dteday - casual - registered
##
## Complete :
## (Intercept) season2 season3 season4 yr1 mnth2 mnth3 mnth4
## workingday1 0 0 0 0 0 0 0 0
## mnth5 mnth6 mnth7 mnth8 mnth9 mnth10 mnth11 mnth12 holiday1
## workingday1 0 0 0 0 0 0 0 0 -1
## weekday1 weekday2 weekday3 weekday4 weekday5 weekday6
## workingday1 1 1 1 1 1 0
## weathersit2 weathersit3 temp atemp hum windspeed
## workingday1 0 0 0 0 0 0
model.saturated seems like a good candidate for linear regression from the above plots.
As you can see from the alias output, workingday1 is an exact linear combination of the holiday and weekday dummies (workingday1 = weekday1 + weekday2 + weekday3 + weekday4 + weekday5 - holiday1), i.e., it is perfectly collinear with them.
Hence we remove the workingday variable in the next model, model2.
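This dependence follows directly from the variable definitions: workingday is 1 exactly when the day is neither a weekend nor a holiday. A minimal sketch verifying it on the day data frame loaded above:

```r
# workingday is fully determined by weekday and holiday, so the model matrix
# cannot estimate a separate coefficient for it.
reconstructed <- as.integer(!(day$weekday %in% c(0, 6)) & day$holiday == 0)
all(reconstructed == as.integer(as.character(day$workingday)))  # expected: TRUE
```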
#vif(model.saturated) #Assessing the variance inflation factors for the variables
#in our model.
model2 = lm(cnt ~ . -instant -dteday -workingday -atemp -casual -registered, data = day)
summary(model2) #Adjusted R^2 is essentially unchanged; the model is still significant.
##
## Call:
## lm(formula = cnt ~ . - instant - dteday - workingday - atemp -
## casual - registered, data = day)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3960.9 -350.9 74.1 456.0 2919.9
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1543.552 235.129 6.565 1.01e-10 ***
## season2 889.302 179.516 4.954 9.12e-07 ***
## season3 832.236 213.204 3.903 0.000104 ***
## season4 1578.947 181.040 8.722 < 2e-16 ***
## yr1 2018.063 58.225 34.660 < 2e-16 ***
## mnth2 136.855 143.747 0.952 0.341396
## mnth3 545.132 165.480 3.294 0.001036 **
## mnth4 456.494 247.615 1.844 0.065667 .
## mnth5 723.520 267.541 2.704 0.007010 **
## mnth6 490.552 281.776 1.741 0.082133 .
## mnth7 8.404 313.395 0.027 0.978613
## mnth8 404.912 301.494 1.343 0.179700
## mnth9 983.948 264.698 3.717 0.000217 ***
## mnth10 520.937 241.636 2.156 0.031432 *
## mnth11 -111.362 230.816 -0.482 0.629621
## mnth12 -84.389 182.229 -0.463 0.643439
## holiday1 -603.605 180.066 -3.352 0.000845 ***
## weekday1 214.877 109.508 1.962 0.050133 .
## weekday2 309.132 107.171 2.884 0.004041 **
## weekday3 377.407 107.467 3.512 0.000473 ***
## weekday4 385.206 107.562 3.581 0.000366 ***
## weekday5 428.604 107.258 3.996 7.12e-05 ***
## weekday6 438.699 106.590 4.116 4.32e-05 ***
## weathersit2 -465.202 77.083 -6.035 2.57e-09 ***
## weathersit3 -1981.357 196.670 -10.075 < 2e-16 ***
## temp 4487.305 411.838 10.896 < 2e-16 ***
## hum -1518.178 292.208 -5.196 2.68e-07 ***
## windspeed -2925.438 406.175 -7.202 1.53e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 769.5 on 703 degrees of freedom
## Multiple R-squared: 0.848, Adjusted R-squared: 0.8422
## F-statistic: 145.3 on 27 and 703 DF, p-value: < 2.2e-16
plot(model2) #No overt additional violations.
influencePlot(model2) #No overt additional violations.
## StudRes Hat CookD
## 69 -1.210739 0.16828646 0.1028882
## 668 -4.756498 0.07702696 0.2557740
## 669 -5.356592 0.04026858 0.2033892
vif(model2) #VIFs all decrease.
## GVIF Df GVIF^(1/(2*Df))
## season 169.713093 3 2.352985
## yr 1.046249 1 1.022863
## mnth 391.715048 11 1.311784
## holiday 1.116829 1 1.056801
## weekday 1.153293 6 1.011956
## weathersit 1.886133 2 1.171907
## temp 7.006223 1 2.646927
## hum 2.135348 1 1.461283
## windspeed 1.221500 1 1.105215
anova(model2, model.saturated)
## Analysis of Variance Table
##
## Model 1: cnt ~ (instant + dteday + season + yr + mnth + holiday + weekday +
## workingday + weathersit + temp + atemp + hum + windspeed +
## casual + registered) - instant - dteday - workingday - atemp -
## casual - registered
## Model 2: cnt ~ (instant + dteday + season + yr + mnth + holiday + weekday +
## workingday + weathersit + temp + atemp + hum + windspeed +
## casual + registered) - instant - dteday - casual - registered
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 703 416284778
## 2 702 415401688 1 883090 1.4924 0.2223
model2 seems like a good candidate for linear regression from the above plots.
The adjusted GVIF values for all variables are low, indicating that collinearity is not a problem.
From the analysis of variance, the F-test p-value is 0.2223, so model2 and model.saturated are not significantly different; dropping workingday and atemp loses nothing.
We now fit a reduced model where we also remove the hum variable.
model.reduced = lm(cnt ~ . -instant -dteday -workingday -atemp -hum -casual -registered, data = day)
anova(model.reduced, model.saturated)
## Analysis of Variance Table
##
## Model 1: cnt ~ (instant + dteday + season + yr + mnth + holiday + weekday +
## workingday + weathersit + temp + atemp + hum + windspeed +
## casual + registered) - instant - dteday - workingday - atemp -
## hum - casual - registered
## Model 2: cnt ~ (instant + dteday + season + yr + mnth + holiday + weekday +
## workingday + weathersit + temp + atemp + hum + windspeed +
## casual + registered) - instant - dteday - casual - registered
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 704 432269190
## 2 702 415401688 2 16867502 14.252 8.563e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(model.reduced)
##
## Call:
## lm(formula = cnt ~ . - instant - dteday - workingday - atemp -
## hum - casual - registered, data = day)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4189.5 -372.9 67.2 481.8 2893.1
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 715.03 175.95 4.064 5.37e-05 ***
## season2 848.08 182.62 4.644 4.08e-06 ***
## season3 841.83 217.10 3.878 0.000115 ***
## season4 1585.86 184.35 8.603 < 2e-16 ***
## yr1 2065.21 58.57 35.263 < 2e-16 ***
## mnth2 180.92 146.12 1.238 0.216071
## mnth3 638.19 167.52 3.810 0.000151 ***
## mnth4 606.99 250.41 2.424 0.015603 *
## mnth5 820.13 271.78 3.018 0.002639 **
## mnth6 749.76 282.40 2.655 0.008110 **
## mnth7 242.98 315.80 0.769 0.441899
## mnth8 575.45 305.18 1.886 0.059762 .
## mnth9 1033.99 269.36 3.839 0.000135 ***
## mnth10 541.10 246.02 2.199 0.028175 *
## mnth11 -109.31 235.04 -0.465 0.642021
## mnth12 -119.21 185.44 -0.643 0.520540
## holiday1 -585.60 183.33 -3.194 0.001464 **
## weekday1 222.82 111.50 1.998 0.046061 *
## weekday2 328.50 109.07 3.012 0.002688 **
## weekday3 394.44 109.38 3.606 0.000333 ***
## weekday4 440.09 109.00 4.038 5.99e-05 ***
## weekday5 476.69 108.81 4.381 1.36e-05 ***
## weekday6 467.50 108.39 4.313 1.84e-05 ***
## weathersit2 -702.78 63.19 -11.122 < 2e-16 ***
## weathersit3 -2422.60 180.63 -13.412 < 2e-16 ***
## temp 3923.88 404.57 9.699 < 2e-16 ***
## windspeed -2328.09 396.69 -5.869 6.76e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 783.6 on 704 degrees of freedom
## Multiple R-squared: 0.8422, Adjusted R-squared: 0.8364
## F-statistic: 144.5 on 26 and 704 DF, p-value: < 2.2e-16
plot(model.reduced)
influencePlot(model.reduced)
## StudRes Hat CookD
## 668 -4.825354 0.07620740 0.2625982
## 669 -5.562958 0.03699797 0.2055197
## 725 -1.938558 0.09986872 0.1240252
vif(model.reduced)
## GVIF Df GVIF^(1/(2*Df))
## season 169.095113 3 2.351555
## yr 1.020838 1 1.010365
## mnth 354.080512 11 1.305775
## holiday 1.116415 1 1.056606
## weekday 1.135306 6 1.010631
## weathersit 1.115412 2 1.027682
## temp 6.520446 1 2.553516
## windspeed 1.123627 1 1.060013
The analysis of variance against model.saturated (p ≈ 8.6e-07) shows that model.reduced fits significantly worse, so dropping hum discards useful information.
We now compute AIC and BIC to see which of these models is most appropriate.
AIC(model.saturated, #Model with all variables.
model2, #Model without workingday and atemp.
model.reduced) #Model additionally without hum.
## df AIC
## model.saturated 30 11820.49
## model2 29 11820.04
## model.reduced 28 11845.58
BIC(model.saturated,
model2,
model.reduced) #Both the minimum AIC and the minimum BIC belong to model2.
## df BIC
## model.saturated 30 11958.32
## model2 29 11953.28
## model.reduced 28 11974.23
The AIC and BIC values are lowest for model2, hence we will go with model2.
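For reference, AIC() for an lm is 2·df − 2·logLik, where df counts the estimated coefficients plus one for the residual variance, and BIC() swaps the 2·df penalty for log(n)·df. A minimal sketch reproducing both by hand (shown with the built-in cars data; the same check works for model2):

```r
# Reproduce AIC/BIC from the log-likelihood of a fitted lm.
m  <- lm(dist ~ speed, data = cars)
ll <- logLik(m)            # log-likelihood, carries a "df" attribute
k  <- attr(ll, "df")       # coefficients + 1 for sigma^2
n  <- nobs(m)
all.equal(2 * k - 2 * as.numeric(ll), AIC(m))        # TRUE
all.equal(log(n) * k - 2 * as.numeric(ll), BIC(m))   # TRUE
```

BIC's log(n) penalty grows with sample size, which is why it punishes the extra parameters of model.saturated more heavily than AIC does.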
We now do a prediction using model2.
#choose model2
p2.conf = predict(model2, se.fit = TRUE, interval = "confidence")
p2.pred = predict(model2, se.fit = TRUE, interval = "prediction")
## Warning in predict.lm(model2, se.fit = TRUE, interval = "prediction"): predictions on current data refer to _future_ responses
We now read in the hour data and perform a multiple linear regression on it.
hour <- read.csv("~/NYCDataScience_course/Final_Project/Bike-Sharing-Dataset/hour.csv")
#View(hour)
hour$season = as.factor(hour$season)
hour$yr = as.factor(hour$yr)
hour$mnth = as.factor(hour$mnth)
hour$hr = as.factor(hour$hr)
hour$holiday = as.factor(hour$holiday)
hour$weekday = as.factor(hour$weekday)
hour$workingday = as.factor(hour$workingday)
hour$weathersit = as.factor(hour$weathersit)
model.hour = lm(cnt ~ . -instant -dteday -workingday -atemp -casual -registered, data = hour)
summary(model.hour)
##
## Call:
## lm(formula = cnt ~ . - instant - dteday - workingday - atemp -
## casual - registered, data = hour)
##
## Residuals:
## Min 1Q Median 3Q Max
## -392.88 -60.82 -7.68 51.17 440.65
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -79.627 6.567 -12.126 < 2e-16 ***
## season2 38.495 4.858 7.925 2.43e-15 ***
## season3 31.999 5.752 5.563 2.69e-08 ***
## season4 68.257 4.884 13.976 < 2e-16 ***
## yr1 85.337 1.564 54.577 < 2e-16 ***
## mnth2 3.873 3.921 0.988 0.323281
## mnth3 14.469 4.409 3.282 0.001034 **
## mnth4 6.626 6.551 1.012 0.311788
## mnth5 19.834 7.007 2.830 0.004654 **
## mnth6 4.502 7.196 0.626 0.531540
## mnth7 -14.838 8.077 -1.837 0.066213 .
## mnth8 5.054 7.853 0.644 0.519865
## mnth9 30.810 6.996 4.404 1.07e-05 ***
## mnth10 15.854 6.487 2.444 0.014533 *
## mnth11 -9.485 6.241 -1.520 0.128551
## mnth12 -5.912 4.956 -1.193 0.232936
## hr1 -17.393 5.348 -3.253 0.001146 **
## hr2 -26.467 5.366 -4.932 8.21e-07 ***
## hr3 -37.167 5.405 -6.876 6.36e-12 ***
## hr4 -40.286 5.411 -7.446 1.01e-13 ***
## hr5 -23.561 5.375 -4.383 1.18e-05 ***
## hr6 35.327 5.361 6.589 4.55e-11 ***
## hr7 170.448 5.351 31.855 < 2e-16 ***
## hr8 310.981 5.344 58.191 < 2e-16 ***
## hr9 163.267 5.350 30.518 < 2e-16 ***
## hr10 108.445 5.372 20.187 < 2e-16 ***
## hr11 133.841 5.412 24.730 < 2e-16 ***
## hr12 173.168 5.458 31.725 < 2e-16 ***
## hr13 168.120 5.496 30.588 < 2e-16 ***
## hr14 152.266 5.527 27.548 < 2e-16 ***
## hr15 161.644 5.538 29.188 < 2e-16 ***
## hr16 223.672 5.526 40.475 < 2e-16 ***
## hr17 377.355 5.494 68.687 < 2e-16 ***
## hr18 345.532 5.458 63.310 < 2e-16 ***
## hr19 236.985 5.407 43.832 < 2e-16 ***
## hr20 157.466 5.377 29.285 < 2e-16 ***
## hr21 108.045 5.355 20.176 < 2e-16 ***
## hr22 71.023 5.345 13.287 < 2e-16 ***
## hr23 32.169 5.341 6.023 1.74e-09 ***
## holiday1 -27.189 4.878 -5.574 2.52e-08 ***
## weekday1 9.456 2.974 3.180 0.001478 **
## weekday2 10.812 2.905 3.722 0.000198 ***
## weekday3 13.320 2.900 4.592 4.42e-06 ***
## weekday4 13.048 2.902 4.496 6.98e-06 ***
## weekday5 16.852 2.890 5.831 5.60e-09 ***
## weekday6 15.934 2.879 5.535 3.16e-08 ***
## weathersit2 -10.690 1.920 -5.568 2.61e-08 ***
## weathersit3 -65.927 3.233 -20.394 < 2e-16 ***
## weathersit4 -64.069 58.920 -1.087 0.276874
## temp 233.297 9.403 24.810 < 2e-16 ***
## hum -81.333 5.545 -14.667 < 2e-16 ***
## windspeed -36.309 6.845 -5.304 1.14e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.8 on 17327 degrees of freedom
## Multiple R-squared: 0.686, Adjusted R-squared: 0.6851
## F-statistic: 742.4 on 51 and 17327 DF, p-value: < 2.2e-16
plot(model.hour) #Assessing the assumptions of the model.
influencePlot(model.hour)
## StudRes Hat CookD
## 586 -0.253638 0.334718960 0.02494951
## 9124 1.531435 0.334602696 0.15059276
## 14774 4.337326 0.002697825 0.03126733
vif(model.hour) #Assessing the variance inflation factors.
## GVIF Df GVIF^(1/(2*Df))
## season 163.930448 3 2.339429
## yr 1.025279 1 1.012561
## mnth 312.113041 11 1.298308
## hr 1.747652 23 1.012210
## holiday 1.115178 1 1.056020
## weekday 1.123621 6 1.009760
## weathersit 1.389828 3 1.056396
## temp 5.499232 1 2.345044
## hum 1.919913 1 1.385609
## windspeed 1.176308 1 1.084577
#Do not use multiple linear regression for "hour" data
As you can see from the above plots (and the R-squared of only 0.686), multiple linear regression is not a good candidate for the hour dataset.
Hence we will try Generalized Boosted Regression Models (gbm) on the hour dataset.
library(gbm)
## Warning: package 'gbm' was built under R version 3.2.3
## Loading required package: survival
## Loading required package: lattice
## Loading required package: splines
## Loading required package: parallel
## Loaded gbm 2.1.1
set.seed(0) #Seed before sampling so the train/test split is reproducible.
train = sample(1:nrow(hour), 7*nrow(hour)/10)
#Fitting 10,000 trees with a depth of 4.
set.seed(0)
boost.hour = gbm(cnt ~ . -instant -dteday -workingday -atemp -casual -registered, data = hour[train, ],
distribution = "gaussian",
n.trees = 10000,
interaction.depth = 4)
#Inspecting the relative influence.
par(mfrow = c(1, 1))
summary(boost.hour)
## var rel.inf
## hr hr 61.9885912
## temp temp 11.4149177
## yr yr 9.5000267
## weekday weekday 8.5856425
## season season 3.5194042
## weathersit weathersit 1.9422920
## hum hum 1.4143500
## mnth mnth 1.1357884
## holiday holiday 0.3748570
## windspeed windspeed 0.1241303
#Let's make a prediction on the test set. With boosting, the number of trees is
#a tuning parameter; having too many can cause overfitting. In general, we should
#use cross validation to select the number of trees. Instead, we will compute the
#test error as a function of the number of trees and make a plot for illustrative
#purposes.
n.trees = seq(from = 100, to = 10000, by = 100)
predmat = predict(boost.hour, newdata = hour[-train, ], n.trees = n.trees)
#Produces 100 different predictions for each of the 5,214 observations in our
#test set.
dim(predmat)
## [1] 5214 100
#Calculating the boosted errors.
par(mfrow = c(1, 1))
berr = with(hour[-train, ], apply((predmat - cnt)^2, 2, mean))
plot(n.trees, berr, pch = 16,
ylab = "Mean Squared Error",
xlab = "# Trees",
main = "Boosting Test Error")
berr[which(berr==min(berr))]
## 10000
## 3567.622
Above we find the number of trees that gives the lowest test mean squared error.
We will now increase the shrinkage parameter (the learning rate, which controls how much of each tree's correction is carried over into the ensemble) in order to obtain a lower mean squared error.
#Increasing the shrinkage parameter; a higher proportion of each tree's
#correction is carried over.
set.seed(0)
boost.hour2 = gbm(cnt ~ . -instant -dteday -workingday -atemp -casual -registered, data = hour[train, ],
distribution = "gaussian",
n.trees = 10000,
interaction.depth = 4,
shrinkage = 0.1)
predmat2 = predict(boost.hour2, newdata = hour[-train, ], n.trees = n.trees)
berr2 = with(hour[-train, ], apply((predmat2 - cnt)^2, 2, mean))
plot(n.trees, berr2, pch = 16,
ylab = "Mean Squared Error",
xlab = "# Trees",
main = "Boosting Test Error")
berr2[which(berr2==min(berr2))]
## 1300
## 1782.969
We again find the number of trees that gives the lowest mean squared error.
We then use that value to make a prediction on the full hour dataset.
nt2 = as.numeric(labels(berr2[which(berr2==min(berr2))]))
predgbm = predict(boost.hour2, newdata=hour, n.trees=nt2)
We now compute the residuals, which are the differences between the predicted counts and the actual counts.
We calculate the mean of the residuals as well as the standard deviation.
Z-scores are then calculated.
We compute the mean Z-score over all the hours in a given day.
We choose the 15 highest average Z-scores and use them to pick the dates of anomalous events.
res = predgbm - hour$cnt
mres = mean(res)
sdres = sd(res)
n = length(res)
z = abs((res-mres)/(sdres))
hour$z = z
#View(hour)
by_dteday <-group_by(hour, dteday)
s = summarize(by_dteday, avgz=mean(z))
sort(s$avgz, decreasing=TRUE)[1:100]
## [1] 2.7112381 2.2397319 1.9624200 1.8211334 1.7304263 1.6513585 1.6275855
## [8] 1.5973282 1.5938915 1.5439886 1.5294643 1.5179849 1.4857667 1.4665028
## [15] 1.4621685 1.4621259 1.4235389 1.4077666 1.4067821 1.4015984 1.3930406
## [22] 1.3411182 1.3401727 1.3340643 1.3156862 1.3036272 1.2918769 1.2299950
## [29] 1.2292920 1.2125380 1.2090872 1.2021152 1.1918650 1.1855863 1.1745246
## [36] 1.1742758 1.1674217 1.1653828 1.1635320 1.1620328 1.1601677 1.1573951
## [43] 1.1307613 1.1268530 1.1154102 1.0996260 1.0969936 1.0950148 1.0905806
## [50] 1.0904985 1.0864845 1.0853500 1.0756282 1.0736106 1.0638337 1.0604942
## [57] 1.0493767 1.0341047 1.0336842 1.0304345 1.0279820 1.0196014 1.0192730
## [64] 1.0177070 1.0171453 1.0129120 1.0086488 1.0080085 1.0057368 1.0052259
## [71] 1.0051839 1.0025880 0.9992134 0.9988848 0.9962330 0.9881547 0.9875984
## [78] 0.9874134 0.9855418 0.9850833 0.9831720 0.9805352 0.9780101 0.9751244
## [85] 0.9747139 0.9714812 0.9710251 0.9692384 0.9690795 0.9690553 0.9689953
## [92] 0.9675672 0.9654109 0.9600272 0.9511359 0.9504113 0.9417479 0.9363075
## [99] 0.9351280 0.9334449
o = order(s$avgz, decreasing=TRUE)[1:15]
s$avgz[o[1:15]]
## [1] 2.711238 2.239732 1.962420 1.821133 1.730426 1.651358 1.627585
## [8] 1.597328 1.593891 1.543989 1.529464 1.517985 1.485767 1.466503
## [15] 1.462169
day$dteday[o[1:15]]
## [1] 2012-10-30 2012-11-23 2012-11-21 2011-08-28 2012-10-02 2012-07-04
## [7] 2012-12-31 2012-03-17 2012-11-24 2012-05-14 2012-10-19 2011-10-19
## [13] 2011-11-25 2012-06-01 2011-08-23
## 731 Levels: 2011-01-01 2011-01-02 2011-01-03 2011-01-04 ... 2012-12-31
Events detected:
2012-10-30: Sandy
2012-11-23: Day after Thanksgiving
2012-11-21: Day before Thanksgiving
2011-08-28: Hurricane Irene
2012-07-04: Independence Day/Washington DC Fireworks
2012-12-31: New Year's Eve
2012-11-24: Saturday after Thanksgiving
2012-10-19: Storm
2011-11-25: Day after Thanksgiving
2012-06-01: Tornado
2011-08-23: Virginia Earthquake
We now use the multiple linear regression results from the day dataset to detect anomalous events.
We compute the residuals, which are the differences between the predicted counts and the actual counts.
We calculate the mean of the residuals as well as the standard deviation.
Z-scores are then calculated. We choose the 15 highest Z-scores and use them to pick the dates of anomalous events.
res.lm = p2.conf$fit[,1] - day$cnt
mres.lm = mean(res.lm)
sdres.lm = sd(res.lm)
n.lm = length(res.lm)
z.lm = abs((res.lm-mres.lm)/(sdres.lm))
day$zlm = z.lm
#View(day)
sort(z.lm, decreasing=TRUE)[1:15]
## 669 668 442 692 500 694 266 554
## 5.245143 4.586557 3.866616 3.808029 3.751417 3.606062 3.578995 3.498611
## 695 185 546 239 204 555 448
## 3.386125 3.256032 3.134793 3.047427 2.920467 2.890135 2.776642
o.lm = order(z.lm, decreasing=TRUE)[1:100]
z.lm[o.lm[1:15]]
## 669 668 442 692 500 694 266 554
## 5.245143 4.586557 3.866616 3.808029 3.751417 3.606062 3.578995 3.498611
## 695 185 546 239 204 555 448
## 3.386125 3.256032 3.134793 3.047427 2.920467 2.890135 2.776642
day$dteday[o.lm[1:15]]
## [1] 2012-10-30 2012-10-29 2012-03-17 2012-11-22 2012-05-14 2012-11-24
## [7] 2011-09-23 2012-07-07 2012-11-25 2011-07-04 2012-06-29 2011-08-27
## [13] 2011-07-23 2012-07-08 2012-03-23
## 731 Levels: 2011-01-01 2011-01-02 2011-01-03 2011-01-04 ... 2012-12-31
Events detected:
2012-10-30: Sandy
2012-10-29: Sandy
2012-11-22: Thanksgiving
2012-11-24: Saturday after Thanksgiving
2012-11-25: Sunday after Thanksgiving
2011-07-04: Independence Day/Washington DC fireworks
2011-08-27: Hurricane Irene
2011-07-23: Heatwave
2012-03-23: National Cherry Blossom Festival