A Data Study on Uber Fares
See the code! | Try the app! (We suggest you use Safari.)
Introduction
Data shows that driving for Uber in New York City is a difficult proposition: drivers must navigate the crowded streets of Manhattan, compete with yellow cabs for fares, and sometimes drive for miles just to get gas. The margins of driving an Uber are razor thin, and drivers can't afford to spend their precious minutes looking for a new fare.
While this is certainly an issue for its drivers, the problem of finding new fares quickly is also a dilemma for Uber itself. Uber needs as many drivers as it can get in New York; otherwise, its response time to new fares will not be sufficient to compete with yellow cabs and other apps. Drivers won't want to drive for Uber if they can't make money, so Uber needs them picking up new fares as quickly as they drop passengers off. Uber also wants its drivers to head to areas that will be popular in advance, so that customers don't have to wait too long for their rides.
We attempted to solve these problems with this app. While our program won’t solve every problem with driving an Uber in New York, we believe that drivers will get a great deal of benefit out of using it. We encourage you to try out the app, and we hope that you enjoy it!
Data
We used two different sources for our data on taxi rides. The Uber dataset, roughly 100 MB and 3 million rows long, was taken from FiveThirtyEight's GitHub account. The data had been released to FiveThirtyEight as the result of a Freedom of Information Act request, and thus only contained the longitude, latitude, and time of each ride. The yellow cab data was downloaded from the city of New York's website, and contained many more variables than just latitude/longitude. We automated this process with the following script:
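A minimal sketch of such a download script is below. The monthly-CSV URL pattern is an assumption (the city has reorganized these downloads over the years), and the month range reflects the April–September 2014 window covered by the FiveThirtyEight Uber data:

```python
import requests

# Assumed URL pattern for the TLC's monthly trip files -- the real
# download links may differ.
BASE_URL = "https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_{year}-{month:02d}.csv"

def download_month(year, month):
    """Stream one month of yellow cab trip records to a local CSV."""
    url = BASE_URL.format(year=year, month=month)
    out_path = "yellow_tripdata_{}-{:02d}.csv".format(year, month)
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
            f.write(chunk)
    return out_path

# Pull the months covered by the Uber dataset (April through September 2014).
for month in range(4, 10):
    download_month(2014, month)
```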
From the original format, we grouped the data by neighborhood and split it up by time of day using ten-minute intervals. We also created our variable of interest: the sum of Uber and taxi rides.
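A minimal pandas sketch of that grouping step (the file and column names are illustrative, and the point-in-polygon neighborhood lookup is assumed to have happened already):

```python
import pandas as pd

# Assumes rides.csv has columns pickup_datetime, neighborhood, and source
# ("uber" or "taxi"); the neighborhood assignment itself is not shown here.
rides = pd.read_csv("rides.csv", parse_dates=["pickup_datetime"])

# Bucket pickups into ten-minute intervals.
rides["interval"] = rides["pickup_datetime"].dt.floor("10min")

# Count pickups per neighborhood per interval, then total Uber + taxi rides.
counts = (rides.groupby(["neighborhood", "interval", "source"])
               .size()
               .unstack("source", fill_value=0))
counts["total_rides"] = counts.get("uber", 0) + counts.get("taxi", 0)
```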
Supplemental Data
We knew we needed to include weather data in our analysis, as anyone who's tried to hail a cab in a rainstorm can attest. We downloaded data from Weather Underground, which contained daily weather statistics recorded in Central Park going back several years. It'd be better if we had weather broken out by hour or minute, but this measure should give a rough indication of conditions on each day.
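A sketch of how the daily weather could be joined onto the grouped ride counts. The file and column names here are illustrative, not Weather Underground's actual headers:

```python
import pandas as pd

# ride_counts.csv is the grouped table from the previous step.
counts = pd.read_csv("ride_counts.csv", parse_dates=["interval"])
weather = pd.read_csv("weather.csv", parse_dates=["date"])

# Each ten-minute interval inherits that calendar day's weather record.
counts["date"] = counts["interval"].dt.normalize()
counts = counts.merge(weather, on="date", how="left")
```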
We also scraped newyorkgasprices.com for a complete list of local gas stations and their prices. The site was a bit cumbersome to scrape, but we accomplished it with the following script:
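The sketch below shows the shape of such a scraper with requests and BeautifulSoup; the URL scheme and CSS selectors are assumptions about the site's markup, not its actual structure:

```python
import requests
from bs4 import BeautifulSoup

# Illustrative only: the site's real URL scheme and markup may differ.
BASE_URL = "http://www.newyorkgasprices.com/GasPriceSearch.aspx?page={}"

def scrape_page(page):
    """Return (station, address, price) tuples parsed from one results page."""
    resp = requests.get(BASE_URL.format(page))
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    for row in soup.select("tr.station_row"):  # assumed CSS class
        station = row.select_one(".station_name").get_text(strip=True)
        address = row.select_one(".station_address").get_text(strip=True)
        price = float(row.select_one(".price").get_text(strip=True).lstrip("$"))
        rows.append((station, address, price))
    return rows

# The listing spanned many pages, so loop until we have them all.
stations = []
for page in range(1, 20):
    stations.extend(scrape_page(page))
```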
Model
Now that the data is collected and cleaned, it's time to implement a model to do the actual predicting. We tested two different models: linear regression and random forests.
Linear Regression
We implemented linear regression using the sklearn package in Python. While linear regression is not the most predictive model, we thought it would be useful to test, since it returns results quickly and efficiently compared with more expensive machine learning methods. When we tested the model, it returned an R^2 value of .72 and an RMSE of 112.1. Here is our final code for its implementation:
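The snippet below is a minimal sklearn sketch of that setup; the feature columns and the 70/30 split are illustrative assumptions, not necessarily the exact configuration used:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Feature names are illustrative; the real design matrix would also encode
# neighborhood, day of week, and the other weather variables.
data = pd.read_csv("model_data.csv")
feature_cols = ["hour", "day_of_week", "mean_temp", "precipitation"]
X, y = data[feature_cols].values, data["total_rides"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

lm = LinearRegression().fit(X_train, y_train)
preds = lm.predict(X_test)
print("R^2: ", r2_score(y_test, preds))
print("RMSE:", np.sqrt(mean_squared_error(y_test, preds)))
```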
Random Forests
We also tested random forests, a model that is significantly more predictive, but also more computationally expensive, than linear regression. We saw much better test results from random forests, with an R^2 of .93 and an RMSE of 10.5. While they take a bit longer to return results than linear regression, we opted to use random forests in our model for their greater predictive power.
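A matching sketch with sklearn's RandomForestRegressor, reusing the train/test split from the previous snippet (the number of trees is a typical choice, not necessarily the value used here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Reuses X_train, X_test, y_train, y_test from the linear regression snippet.
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

preds = rf.predict(X_test)
print("R^2: ", r2_score(y_test, preds))
print("RMSE:", np.sqrt(mean_squared_error(y_test, preds)))
```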
App
All of these predictions wouldn't be very useful without an interface to display them, so we built a Flask app for drivers to use. We wanted to limit the amount of information that drivers have to input into the app, so we only ask for the distance they're willing to drive and whether they'd like to get gas.
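A minimal sketch of what such a Flask route could look like; the form field names and the recommend() helper are hypothetical stand-ins:

```python
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        # The only two user inputs: search radius and a "need gas" checkbox.
        radius = float(request.form["radius"])
        needs_gas = "needs_gas" in request.form
        # recommend() is a hypothetical wrapper around the model and the
        # Google Maps calls described below.
        result = recommend(radius, needs_gas)
        return render_template("result.html", result=result)
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
```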
Visualization Graphic
The visualization on the front page is meant to give drivers a rough idea of where most fares happen throughout the day. It was built using R and ggplot2, with the data grouped by latitude and longitude in half-hour buckets. We looped through every half hour, made a separate graph for that window, and output it to a .png file. We then used gifmaker.com to turn those images into a finished gif.
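The original frames were generated in R with ggplot2; as a rough Python analogue with matplotlib, the frame loop might look like this (column names are illustrative):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumes rides.csv has pickup latitude/longitude plus a pickup timestamp.
rides = pd.read_csv("rides.csv", parse_dates=["pickup_datetime"])
rides["half_hour"] = rides["pickup_datetime"].dt.floor("30min").dt.time

# One scatter frame per half hour; a gif tool then stitches the .png files.
for i, (t, frame) in enumerate(rides.groupby("half_hour")):
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.scatter(frame["longitude"], frame["latitude"], s=1, alpha=0.3)
    ax.set_title("Pickups at {}".format(t))
    fig.savefig("frame_{:02d}.png".format(i), dpi=100)
    plt.close(fig)
```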
Retrieving Current Conditions
As we previously mentioned, we wanted to limit the number of fields the user has to fill in when using our app. Our algorithm requires quite a bit of information to generate its prediction, however, so the app uses APIs to retrieve the relevant data. The user's current location is retrieved from their IP address. Weather conditions are taken from Dark Sky, which returns the current temperature and the precipitation rate. Travel times and traffic conditions are calculated using the Google Maps API, and the current date and time are taken from the user's computer.
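A sketch of those lookups, using ipinfo.io as one possible IP-geolocation service (an illustrative choice) and Dark Sky's forecast endpoint; the API key handling and error checking are omitted:

```python
import requests

def current_conditions(darksky_key):
    """Look up the user's location from their IP, then fetch live weather."""
    # IP geolocation; ipinfo.io is one such service.
    loc = requests.get("https://ipinfo.io/json").json()["loc"]
    lat, lng = loc.split(",")

    # Dark Sky forecast call: "currently" holds temperature and precip rate.
    url = "https://api.darksky.net/forecast/{}/{},{}".format(darksky_key, lat, lng)
    now = requests.get(url).json()["currently"]
    return {
        "lat": float(lat),
        "lng": float(lng),
        "temperature": now["temperature"],
        "precip_rate": now["precipIntensity"],
    }
```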
Running the Data Algorithm
After the current conditions are collected, the app takes the radius the user has entered and excludes any neighborhoods that are too far away. This saves a great deal of computation, so we can return results faster and more efficiently. For each remaining neighborhood, the app feeds in the current conditions and calculates the predicted demand. It then returns the best neighborhood, along with directions to it from Google Maps.
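A sketch of that filtering-and-scoring loop. Haversine distance stands in for whatever distance measure the app actually uses, make_features() is a hypothetical helper that assembles a model input row, and rf is the random forest fitted earlier:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in miles."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2)
    return 2 * 3956 * asin(sqrt(a))

def recommend_neighborhood(user_lat, user_lng, radius, conditions, neighborhoods):
    """Score only the neighborhoods within the driver's radius."""
    best_name, best_demand = None, float("-inf")
    for name, (lat, lng) in neighborhoods.items():
        if haversine_miles(user_lat, user_lng, lat, lng) > radius:
            continue  # too far away -- skip to save computation
        # make_features() is a hypothetical helper building the model row.
        demand = rf.predict([make_features(name, conditions)])[0]
        if demand > best_demand:
            best_name, best_demand = name, demand
    return best_name
```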
Gas?
If the user checks the box indicating that they need gas, directions to the nearest gas station are returned. This is done by calculating the distance to all available gas stations using Google Maps and returning directions to the closest one.
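A sketch using the googlemaps Python client's Distance Matrix call; for brevity it ignores the API's per-request destination limit, and the station list format is assumed from the scraping step:

```python
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")

def nearest_station(user_latlng, stations):
    """Pick the station with the shortest driving distance from the user.

    `stations` is assumed to be a list of (name, "lat,lng") pairs built
    from the scraped gas price data.
    """
    dests = [latlng for _, latlng in stations]
    matrix = gmaps.distance_matrix([user_latlng], dests, mode="driving")
    elements = matrix["rows"][0]["elements"]
    dists = [e["distance"]["value"] for e in elements]  # meters
    best = min(range(len(stations)), key=dists.__getitem__)
    return stations[best][0], gmaps.directions(user_latlng, dests[best])
```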
Conclusions
This was a fascinating project, and we both greatly enjoyed working on predicting Uber rides in New York. This is clearly an important area, and there is much more work that can be done. With better data from Uber, we could learn a tremendous amount about where and when the next fare is coming. We sincerely hope Uber makes more data available soon, as it could lead to even richer explorations.