Dataset: Creating an Accurate Pricing Model Based on UberEats

Posted on May 17, 2020
The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

 

Introduction

ShinyApp | LinkedIn

With the rich dataset gathered from the web scraping project, I decided to explore some of the features in the dataset and formulate a simple model to price restaurant sub-menus. Due to the sheer size of the dataset (restaurant data for all cities in California consumed ~1/2 GB of my hard drive), I limited the work and analysis to cities in California. The analysis can, however, be extended to all cities in the US.

The basic question I seek to answer in this project is: Can I extract useful features from the UberEats site to create an accurate pricing model for sub-menus? 

The Shiny app contains the visuals that were useful in the exploratory data analysis, as well as a prototype of the pricing model and its performance on a toy menu uploaded for illustrative purposes.

The intended audience is a restaurant owner looking to introduce a new sub-menu item in a particular location who needs a good gauge of how to price that menu competitively against comparables.

Data:

As mentioned in the intro, the analysis was limited to the state of California. The dataset included 357 cities and 28k+ unique restaurants. The main features scraped from UberEats are:

• restaurant name and location
• restaurant ratings
• sub-menu items: dish names, prices, and descriptions
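
As a rough sketch of how the scraped data can be organized for analysis, the snippet below builds a toy pandas frame; the column names and values are assumptions for illustration, not the actual schema from the scraping project.

```python
import pandas as pd

# Toy stand-in for the scraped table; real columns and values will differ.
df = pd.DataFrame({
    "city":        ["Los Angeles", "Los Angeles", "San Diego"],
    "restaurant":  ["Taco Spot", "Burger Barn", "Pho Place"],
    "rating":      [4.5, 4.2, 4.7],
    "sub_menu":    ["Sides", "Beverages", "Salads"],
    "dish":        ["Fries", "Lemonade", "House Salad"],
    "price":       [3.99, 2.49, 8.99],
    "description": ["Crispy golden fries", "Fresh squeezed", "Mixed greens"],
})

# Unique restaurants per city, the kind of count behind the "first take" below.
restaurants_per_city = (
    df.groupby("city")["restaurant"].nunique().sort_values(ascending=False)
)
print(restaurants_per_city)
```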

First take:

Out of the 357 cities with a listed restaurant, Los Angeles was by far the most active city, with ~4,000 restaurants listed. Out of the ~24,000 sub-menus listed across the state, the most popular items were beverages (or soft drinks), sides, salads, desserts, and sandwiches, as shown below.

[Figure: restaurants listed per city and the most popular sub-menu items]

It can be seen from the above chart, as well as the Shiny app, that ~15% of the listed restaurants are found in Los Angeles alone. Still, I wanted to explore the other cities, and in particular the price distributions of their sub-menus. For this project, I limited the analysis to the top cities and top sub-menus as shown above.

City views:

[Figure: histograms and violin plots of sub-menu prices for Los Angeles, from the Shiny app]

After removing outliers and doing basic pre-processing to correct misspellings of sub-menu names and to lump together sub-menu names with slightly varying spellings, I generated histograms and violin plots of prices for the sub-menus of the top 10 cities in California. The above is a snapshot for Los Angeles taken from the Shiny app; a drop-down menu lets users select any of the other nine cities used in this analysis.
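
A minimal sketch of that pre-processing step, assuming the pandas layout shown earlier (the spelling map and the percentile cutoffs are illustrative choices, not the values actually used):

```python
import pandas as pd

def clean_sub_menus(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize sub-menu names and trim price outliers (illustrative)."""
    out = df.copy()
    # Lump spelling variants of the same sub-menu name together.
    out["sub_menu"] = (
        out["sub_menu"]
        .str.lower()
        .str.strip()
        .replace({"beverage": "beverages", "dessert": "desserts"})  # example map
    )
    # Drop price outliers per sub-menu, here outside the 1st-99th percentiles.
    lo = out.groupby("sub_menu")["price"].transform(lambda s: s.quantile(0.01))
    hi = out.groupby("sub_menu")["price"].transform(lambda s: s.quantile(0.99))
    return out[(out["price"] >= lo) & (out["price"] <= hi)]
```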

Visually, the same sub-menus showed similar price distributions across cities. So I thought to myself: why not use the most actively listed city (Los Angeles) as the benchmark city, and use the sub-menu prices in Los Angeles as the feature to predict like-for-like sub-menu prices in each of the other cities? That way we reduce the size of the dataset drastically, as we don't need to gather data from all cities to model sub-menu price ranges in each city.

Data Analysis

To do this, though, I had to assume that the sub-menu prices are drawn from the same distribution. To simplify the analysis, I assumed that their variances are the same and focused on analyzing the means of the sub-menu prices using a two-sample t-test. The null hypothesis: the mean prices for the same sub-menu in Los Angeles and in another California city are equal.

For each two-sample t-test, if the null hypothesis cannot be rejected at the assumed confidence level, then we can use the average price in Los Angeles to represent the average price for that sub-menu in that city. However, if the null hypothesis is rejected, a transformation is applied to shift the mean of the Los Angeles sub-menu prices to match the mean observed in the actual data for the city being analyzed.
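
A minimal sketch of that decision rule in Python using scipy (the original analysis may well have been done in R given the Shiny app; the function name, alpha default, and return convention are assumptions):

```python
import numpy as np
from scipy import stats

def la_benchmark_prices(la_prices, city_prices, alpha=0.05):
    """Return LA prices usable as a benchmark for one sub-menu in one city."""
    la = np.asarray(la_prices, dtype=float)
    city = np.asarray(city_prices, dtype=float)
    # Two-sample t-test of equal means, equal variances assumed as in the post.
    _, p_value = stats.ttest_ind(la, city, equal_var=True)
    if p_value >= alpha:
        return la                          # null not rejected: use LA as-is
    return la + (city.mean() - la.mean())  # shift the LA mean onto the city mean
```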

So are prices really different?

As mentioned above, I ran two-sample t-tests for all top 10 cities, with Los Angeles as the control city. Testing the means of sub-menu prices in Los Angeles against themselves served as a sanity check of the analysis: we expect the means of all Los Angeles sub-menus, compared with themselves, to be the same, as shown in the graph below:

[Figure: t-test results for Los Angeles sub-menu prices tested against themselves]

However, the means of the same sub-menu items are expected to vary across different cities:

[Figure: t-test results comparing sub-menu price means between Los Angeles and the other top cities]

While I thought geographical proximity to Los Angeles would determine whether average prices were statistically similar, this does not appear to be the case, at least for the top 10 cities included in the analysis; it would be an interesting topic to explore further. Nonetheless, the results identify which sub-menu prices need to be transformed, and which do not, in our sub-menu pricing model.

What are ratings worth?

At this point, I've identified one feature to use in the pricing model: the mean of sub-menu prices in LA, along with a flag for each of the other cities marking which sub-menu prices need to be transformed, based on the sub-menu and city combination.

One other potential feature is the restaurant rating. Most restaurants are reviewed and rated online by their customers, and given how much focus the industry places on ratings, I wanted to find out whether there was any significant correlation between ratings and menu prices. If there is, we can incorporate ratings into our pricing model.

Regression

Running a simple linear regression of mean prices against ratings shows a significant regression beta for some cities, especially Los Angeles, and for the top 10 cities as a whole, at the 95% confidence level:

[Figures: regressions of mean sub-menu price on restaurant rating, per city and for the top 10 cities combined]

So the good news is that there appears to be a linear relationship between ratings and prices, even if the R-squared for the top 10 cities as a whole is fairly low. Ratings are therefore included as the second feature in our model.
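
As a toy illustration of that regression (the column names and data below are made up; the real analysis ran per city and across the combined top 10):

```python
import pandas as pd
import statsmodels.api as sm

# Toy stand-in: mean sub-menu price vs. restaurant rating for one city.
city_df = pd.DataFrame({
    "rating":     [3.9, 4.1, 4.3, 4.5, 4.7, 4.8],
    "mean_price": [8.5, 9.0, 9.8, 10.5, 11.2, 11.0],
})

X = sm.add_constant(city_df["rating"])           # intercept plus rating
model = sm.OLS(city_df["mean_price"], X).fit()

print(model.params)    # the "rating" coefficient is the regression beta
print(model.pvalues)   # significance, judged at the 95% level in the post
print(model.rsquared)  # fairly low for the top 10 cities combined
```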

What else that's available to us in the scraped data can we add to the model?

Key words and prices?

I wanted to find out if certain words or phrases were common in higher-priced meal items. Consequently, I created a word cloud from meal descriptions after removing stop words and punctuation. Unfortunately, the words and phrases that dominated were related to "party size" items on the menu. What I needed to do was normalize prices by quantity to deduce the unit price of each meal.
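
A sketch of that word cloud step using the Python `wordcloud` package (the author's actual tooling isn't stated; the descriptions here are stand-ins for the scraped column):

```python
from wordcloud import WordCloud, STOPWORDS

# Stand-in dish descriptions; the real input is the scraped description column.
descriptions = [
    "Party size tray of chicken wings, serves 10-12",
    "Fresh garden salad with house dressing",
]
text = " ".join(descriptions).lower()

# WordCloud drops punctuation during tokenization; stop words are removed
# via the package's built-in STOPWORDS set.
wc = WordCloud(stopwords=STOPWORDS, background_color="white").generate(text)
wc.to_file("meal_descriptions_wordcloud.png")
```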

However, I do not have quantities in my scraped data, so I couldn't proceed further with this analysis. While keywords did not make it into the final pricing model, there is still information that can be gleaned from them. This warrants further investigation, something I intend to do using machine learning techniques as I pick them up over the course of this bootcamp.

[Figure: word cloud of meal descriptions, dominated by party-size terms]

Can I price your menu?

With knowledge of how average sub-menu prices are distributed relative to their Los Angeles equivalents, and of how menu prices vary on average with restaurant ratings, I created a simple linear model. It accepts as input a table of sub-menu items, a restaurant rating, and a location, and outputs a range of prices for each sub-menu item as a recommendation for a restaurant owner looking to price such menu items. Model results were compared visually to the actual price ranges for those sub-menu items and the restaurant's location, as shown below:

[Figure: model-recommended price ranges vs. actual price ranges for the toy menu]

The simple model didn't perform too well, especially for sub-menu items whose skew differed significantly from that of the equivalent price series in Los Angeles. This was expected: the pricing assumptions ignored variance and skewness and focused solely on the first moment of the price distribution, the mean, and so didn't have enough information to accurately model the price ranges for sub-menus that differed significantly from their Los Angeles equivalents. More generally, more features will need to be introduced into this model to improve accuracy.
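
To make the shape of the model concrete, here is a toy sketch under the same assumptions: the center of the recommended range is the LA mean (shifted to the city mean where the t-test flagged a difference) plus a rating adjustment, and the spread is borrowed from the LA price series. The band width, base rating, and all names are illustrative, not the actual implementation:

```python
import numpy as np

def recommend_price_range(la_prices, needs_shift, city_mean,
                          rating, rating_beta,
                          base_rating=4.0, band=1.0):
    """Toy recommendation for one sub-menu item in one city."""
    la = np.asarray(la_prices, dtype=float)
    # Center: LA mean, or the city mean where the t-test rejected equality,
    # tilted by the rating regression beta.
    center = city_mean if needs_shift else la.mean()
    center += rating_beta * (rating - base_rating)
    # Spread: borrow the LA standard deviation as a proxy for the range.
    spread = band * la.std()
    return center - spread, center + spread
```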

Back to the Question

Coming back to my initial research question: there aren't enough features in the scraped data to accurately price a restaurant's sub-menu items, but this work serves as a building block for further, more advanced work on this pricing topic. Future work will be heavy on feature generation and selection using sources outside UberEats' public website. Features would then be fitted using multi-linear regression with penalization, or non-linear regression techniques, to ascertain the best pricing model given the data.

Stay tuned. 

About Author

Robert Atuahene

Financial services executive with extensive experience in trading and risk management of listed derivatives. Excited about opportunities to apply data science to the financial services domain and beyond!
