YelpQuest - Begin your Journey Here!

Abstract

Starting a restaurant is a daunting task for small business owners: roughly 60% of new restaurants fail within their first three years. For many of these businesses, Yelp exposure is a key factor in whether they make it past year three. The goal of our capstone project is to determine the key attributes and features that drive high ratings on Yelp. We used data visualizations in conjunction with machine learning algorithms to identify attributes and features correlated with high business ratings, and natural language processing to extract a wealth of information from the review texts. We, the Fantastic Five, built an R Shiny application to help prospective restaurant owners assess the marketplace in three ways:

  1. Map: Geolocation analysis of restaurant success
  2. Topic Modeling: Understanding negative reviews within the market for a specified category
  3. Cuisine Gallery: Understanding frequent cuisine topics within positive reviews

Contributed by Jiaxu Luo, Charles Leung, Danli Zeng and Samriddhi Shakya. They are currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp taking place between July 4th and September 23rd, 2016. This post is based on their final capstone project, due in the 12th week of the program.

 

Introduction

According to research conducted at Cornell University, 27% of new restaurants fail within the first year and nearly 60% close by year three. One of the main barriers to success is Yelp exposure: well-established restaurants with regular clientele have a base of high ratings and review counts that pushes them to the top of searches and provides a cushion for any mishaps. Yelp restaurant ratings are trusted globally as a metric, and they are one of the best examples of putting crowdsourced experiences and opinions to use. The question we seek to answer is: what are the key features that drive a restaurant's success? We believe the public data provided by Yelp contains identifiable features that correlate with review ratings. These factors could be inherent attributes of the business, such as opening hours or noise level, or more subjective factors driven by customers. We explored the attributes associated with high ratings through exploratory data visualization and predictive modeling, and we mined the review text with a topic modeling algorithm known as Latent Dirichlet Allocation (LDA). Our final product, the culmination of our findings, is an R Shiny application: YelpQuest.

 

Data Processing

Our application draws its data from the Yelp 2016 Dataset Challenge, which provides the following tables: businesses, reviews, tips (shorter reviews), user information, and check-ins. The business table lists each restaurant's name, location, opening hours, categories, average star rating, number of reviews, and a series of attributes such as noise level or reservations policy. The review table lists the star rating, the review text, the review date, and the number of votes the review has received. We limited the scope of our dataset to the Greater Phoenix Metropolitan Area, then filtered the businesses by category to keep only restaurants and their reviews (622,446 in total). The text of those restaurant reviews forms the corpus of this project.
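
For reference, here is a minimal sketch of how the raw Yelp files could be read and subset in R, assuming the standard Yelp Dataset Challenge file names; the exact filtering rules used in the project may differ.

    library(jsonlite)
    library(dplyr)

    # Each raw file is newline-delimited JSON, one record per line
    business <- stream_in(file("yelp_academic_dataset_business.json"))
    review   <- stream_in(file("yelp_academic_dataset_review.json"))

    # Keep only businesses tagged as restaurants in Arizona (the Greater Phoenix area)
    is_restaurant <- sapply(business$categories,
                            function(cats) any(grepl("Restaurants", cats)))
    restaurants <- business[is_restaurant & business$state == "AZ", ]

    # Keep only the reviews written about those restaurants
    restaurant_reviews <- review %>%
      semi_join(restaurants, by = "business_id")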

[Figure: the Yelp Dataset Challenge data tables]

Challenges

  1. For prediction, the number of features derived from the attributes is enormous, while there are only 9,427 restaurants. With so few observations relative to the number of features, the data cannot reliably support the results, so feature reduction or a more sophisticated model is necessary.
  2. The standard for a positive or negative review differs by restaurant category, so review stars alone cannot be taken as a generalization of customers' opinions. For example, fast food restaurants generally receive lower ratings; as a result, a 4-star rating is more significant for a fast food restaurant than for an Italian restaurant.

Business Table

  1. Identify the restaurants in the business file
  2. Create a subset of the business file that includes only restaurants
  3. Create corresponding subsets of the reviews, check-ins, and tips files
  4. Summarize the data from the reviews, check-ins, and tips files (e.g., sum the number of check-ins/tips/reviews for each restaurant) into a table containing only the business ID and the summary fields, so it can be appended back to the restaurant file
  5. Merge the summary table back into the restaurant/business file to produce the final modeling dataset (see the dplyr sketch below)
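
A rough dplyr sketch of steps 4 and 5, assuming the tables loaded earlier plus hypothetical tips and checkins data frames that each carry a business_id column:

    library(dplyr)

    # Step 4: one summary row per restaurant from each auxiliary table
    review_counts <- restaurant_reviews %>%
      group_by(business_id) %>%
      summarise(n_reviews = n())

    tip_counts <- tips %>%
      group_by(business_id) %>%
      summarise(n_tips = n())

    checkin_counts <- checkins %>%
      group_by(business_id) %>%
      summarise(n_checkins = n())

    # Step 5: append the summaries back onto the restaurant table
    model_data <- restaurants %>%
      left_join(review_counts,  by = "business_id") %>%
      left_join(tip_counts,     by = "business_id") %>%
      left_join(checkin_counts, by = "business_id")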

Reviews Table

  1. Identify restaurants that are above or below the average overall business rating for their category (e.g., above average corresponds to roughly 4.5 stars for Thai but 3.5 stars for Fast Food)
  2. Create a subset of positive reviews from above-average restaurants
  3. Create a subset of negative reviews from below-average restaurants
  4. Concatenate all the reviews within each subset from steps (2) and (3)
  5. From the concatenated subsets in step (4), create review subsets for the top cuisines to serve as the modeling datasets for positive and negative review topics (a rough sketch follows below)
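
A sketch of that split in dplyr, assuming a single category column per restaurant (in the raw data, categories are a list) and illustrative 4-star/2-star cutoffs for what counts as a positive or negative review:

    library(dplyr)

    # Category-level average rating, then flag each restaurant against it
    category_avg <- restaurants %>%
      group_by(category) %>%
      summarise(category_avg_stars = mean(stars))

    restaurants_flagged <- restaurants %>%
      left_join(category_avg, by = "category") %>%
      mutate(above_avg = stars >= category_avg_stars)

    # Positive reviews from above-average restaurants,
    # negative reviews from below-average ones
    positive_reviews <- restaurant_reviews %>%
      semi_join(filter(restaurants_flagged, above_avg), by = "business_id") %>%
      filter(stars >= 4)

    negative_reviews <- restaurant_reviews %>%
      semi_join(filter(restaurants_flagged, !above_avg), by = "business_id") %>%
      filter(stars <= 2)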

Exploratory Data Analysis

We first wanted to test our preconceived notions with exploratory analysis. When we think of a 5-star restaurant, we don't exactly picture fast food or pizza; we imagine lavish European cuisines such as Italian or French. The chart below plots the average restaurant rating by category. Some categories, such as Thai or Greek, tend to have higher ratings, while Buffets, Fast Food, and Chicken Wings receive lower ones. The data seems to support our hypothesis that restaurant ratings are correlated with certain restaurant categories.

[Figure: average restaurant rating by category]
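
A ggplot2 sketch of how a chart like the one above could be produced, again assuming a single category column per restaurant:

    library(dplyr)
    library(ggplot2)

    restaurants %>%
      group_by(category) %>%
      summarise(avg_rating = mean(stars), n = n()) %>%
      filter(n >= 30) %>%                  # drop rarely occurring categories
      ggplot(aes(x = reorder(category, avg_rating), y = avg_rating)) +
      geom_col() +
      coord_flip() +
      labs(x = "Category", y = "Average star rating")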

We also don't picture cheap, divey eateries when we imagine 5-star restaurants. To test whether price range and restaurant rating show some level of dependence, we built the mosaic plot below and ran a chi-square test of price range against rating level, which returned a highly significant p-value (< 2.2e-16).

[Figure: mosaic plot of price range vs. rating level]

A mosaic plot uses color to compare the observed frequency of each combination of the categorical variables against its expected frequency: blue means there are more observations for that combination than expected, and red means fewer. In this case, we can see that the two variables are not independent, which is reinforced by the result of the χ² test of independence.
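
A base-R sketch of the test and plot, with an assumed price_range column and an illustrative binning of the star ratings into rating levels:

    # Bin business star ratings into rating levels (cutoffs are illustrative)
    rating_level <- cut(restaurants$stars,
                        breaks = c(0, 2.5, 3.5, 5),
                        labels = c("low", "medium", "high"))

    tab <- table(restaurants$price_range, rating_level)

    # Chi-square test of independence (the reported p-value was < 2.2e-16)
    chisq.test(tab)

    # shade = TRUE colors cells by Pearson residuals
    # (blue = more observations than expected, red = fewer)
    mosaicplot(tab, shade = TRUE,
               xlab = "Price range", ylab = "Rating level",
               main = "Price range vs. rating level")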

 

Predictive Modeling

To determine the important features within the data, we fit tree-based models. With so many attributes and features relative to the number of observations, our data table is sparse; tree-based models can handle sparsity, and XGBoost in particular excels at it. We fit three different models: Random Forest, XGBoost, and Gradient Boosted Trees. Below we show the results of XGBoost, which, as expected, had the most stable performance.

[Figure: XGBoost feature importance with all features included]

First we passed all the available features into the model as predictors and obtained a fit with R² = 0.936. The feature importance plot showed one overwhelmingly strong predictor: the average user star rating. That piece of information isn't very insightful, however; the overall business rating is an average of all the user ratings, so of course it tops the plot. We therefore reran XGBoost with all review-related features removed:

[Figure: XGBoost feature importance with review-related features removed]

The second time, the R² dropped drastically to 0.318. Without review-related information, we cannot predict ratings very well; this time, location, check-in count, and price range were the most important predictors.
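
For reference, a minimal xgboost sketch of how a fit and importance plot like those above could be produced, assuming a numeric feature matrix X of business attributes and a target vector y of business star ratings; the parameter values are illustrative rather than the project's tuned settings:

    library(xgboost)

    dtrain <- xgb.DMatrix(data = as.matrix(X), label = y)

    fit <- xgboost(data = dtrain,
                   nrounds = 200,
                   max_depth = 6,
                   eta = 0.1,
                   objective = "reg:squarederror",  # "reg:linear" in older releases
                   verbose = 0)

    # Gain-based feature importance, plotted for the top 15 features
    imp <- xgb.importance(feature_names = colnames(X), model = fit)
    xgb.plot.importance(imp, top_n = 15)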

 

Topic Modeling

Pre-Processing

Before any modeling, we need to preprocess the review text:

  1. Remove commonly used "stop words" (e.g., the, and, but)
  2. Remove punctuation and escape non-text characters
  3. Remove numbers
  4. Remove non-printable characters
  5. Strip extra whitespace
  6. Convert the text to lowercase
  7. Tokenize the text into bigrams and trigrams

We tokenized the review text into bigrams and trigrams (two- and three-word combinations) because some words change their meaning when combined with another. For example, "not good" would otherwise be split into the unigrams "not" and "good", which would be misleading in the text analysis.
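
A sketch of this pipeline using the tm package for cleaning and tidytext for n-gram tokenization; the project's exact implementation may differ:

    library(tm)
    library(dplyr)
    library(tidytext)

    corpus <- VCorpus(VectorSource(positive_reviews$text))

    corpus <- corpus %>%
      tm_map(content_transformer(tolower)) %>%
      tm_map(removeWords, stopwords("english")) %>%
      tm_map(removePunctuation) %>%
      tm_map(removeNumbers) %>%
      tm_map(stripWhitespace)

    # Back to a data frame, then tokenize into bigrams
    # (the same call with n = 3 yields trigrams)
    cleaned <- data.frame(doc_id = seq_along(corpus),
                          text   = sapply(corpus, function(d)
                                     paste(content(d), collapse = " ")),
                          stringsAsFactors = FALSE)

    bigrams <- cleaned %>%
      unnest_tokens(term, text, token = "ngrams", n = 2)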

 

LDA and Data Visualization

To understand the key topics within the review data, we used the LDA topic modeling algorithm to extract 20 topics for each category and rating level (a fitting sketch follows the questions below). We then used the R package "LDAvis" for interactive topic model visualization, which helps answer three questions:

  1. What is the meaning of each topic?
  2. How prevalent is each topic?
  3. How do the topics relate to each other?
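
The sketch below shows one way this step could look in R: build a document-term matrix from the tokenized reviews, fit a 20-topic LDA with the topicmodels package, and pass the pieces that LDAvis needs to createJSON(). Object names follow the earlier sketches, and the details are assumptions rather than the project's exact code:

    library(dplyr)
    library(tidytext)
    library(topicmodels)
    library(LDAvis)

    # Document-term matrix of bigram counts
    dtm <- bigrams %>%
      count(doc_id, term) %>%
      cast_dtm(doc_id, term, n)

    lda_fit <- LDA(dtm, k = 20, method = "Gibbs", control = list(seed = 2016))

    # LDAvis needs the topic-term (phi) and document-topic (theta) distributions,
    # the vocabulary, the document lengths, and the overall term frequencies
    phi        <- exp(lda_fit@beta)     # beta is stored on the log scale
    theta      <- lda_fit@gamma
    vocab      <- colnames(dtm)
    doc_length <- slam::row_sums(dtm)
    term_freq  <- slam::col_sums(dtm)

    json <- createJSON(phi = phi, theta = theta, vocab = vocab,
                       doc.length = doc_length, term.frequency = term_freq)
    serVis(json)   # opens the interactive visualization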

[Figure: the LDAvis interactive topic model visualization]

The first part of LDAvis answers the first question. The bar graph visualizes term frequencies per topic within our review text, with terms listed on the y-axis and frequency on the x-axis. A special metric, saliency, is used to rank the most important terms within a topic: it weighs a term's estimated frequency within a single topic against the term's overall frequency in the review texts. The higher a term sits on the list, the more distinctive and important it is for that topic.
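
For reference, the saliency measure described in the LDAvis documentation (following Chuang et al., 2012) can be written, with P(w) the overall probability of term w and P(t | w) the probability of topic t given the term, roughly as:

    \[ \mathrm{saliency}(w) \;=\; P(w) \sum_{t} P(t \mid w)\, \log\frac{P(t \mid w)}{P(t)} \]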

The second part of LDAvis is the intertopic distance map. It provides a global view of the topic model and answers the latter two questions: the prevalence of each topic is shown by the diameter of its circle, and the relationship between topics is computed with the Jensen-Shannon divergence, a popular measure of the similarity between two probability distributions, and then projected onto two principal components. In short, the closer two bubbles are to each other, the more similar their topics. The visualization is so descriptive that we implemented it as a feature of our application.
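
For reference, the Jensen-Shannon divergence between two topic-term distributions P and Q is the symmetrized Kullback-Leibler divergence to their average M:

    \[ \mathrm{JSD}(P \,\|\, Q) \;=\; \tfrac{1}{2}\,\mathrm{KL}(P \,\|\, M) \;+\; \tfrac{1}{2}\,\mathrm{KL}(Q \,\|\, M), \qquad M = \tfrac{1}{2}(P + Q) \]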

 

Application

Our final product is an R Shiny application, comprising the following main features:

  1. Map: Geolocation analysis of restaurant success
  2. Topic Modeling: Understanding negative reviews within the market for a specified category
  3. Cuisine Gallery: Understanding frequent cuisine topics within positive reviews

The main user will be the small business owner who intends to start a new restaurant or expand an existing one.

[Figure: animated demo of the interactive restaurant map]

With the map, the user can identify the best place to start a restaurant or get a bird's-eye view of the competition. Restaurants are displayed on an interactive map of Arizona, classified as "above average" or "below average" within the specified restaurant category, and the user can get details about a restaurant by clicking its marker. Now say the user wants to start a two-dollar Italian restaurant:

  1. Select the filters (category: Italian, price range: two-dollar)
  2. Identify the large clusters on the map

The largest cluster in an area where people generally favor Italian food is probably a good place to start a restaurant.
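
A leaflet sketch of that map view, assuming the restaurants_flagged table from earlier with longitude/latitude columns, a price range, and the above/below-average flag; in the Shiny app the filters would drive this subsetting reactively:

    library(dplyr)
    library(leaflet)

    italian_two_dollar <- restaurants_flagged %>%
      filter(category == "Italian", price_range == 2)

    leaflet(italian_two_dollar) %>%
      addTiles() %>%
      addCircleMarkers(lng = ~longitude, lat = ~latitude,
                       color = ~ifelse(above_avg, "green", "red"),
                       popup = ~name,
                       clusterOptions = markerClusterOptions())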

[Figure: animated demo of the topic modeling view]

The next feature is the topic model. Previously, the only way to understand other restaurants' negative reviews was to browse each page individually; topic modeling is a much faster way to aggregate that information. A user can quickly explore the different topic bubbles and identify common problems from the frequent terms in the reviews. For example, if time emerges as a large topic, perhaps the user can take advantage of that when starting his or her restaurant.

The final big feature is the cuisine gallery. The gallery displays the food items talked about most positively in the reviews, so a user can build their own restaurant's menu around the most popular terms.

Through predictive modeling and EDA, we identified the key features to incorporate as predictors and as filters for our application, YelpQuest. Together with the topic models built from the negative and positive reviews, we hope the product will help future small business owners grow and succeed.

 

 


