YelpQuest - Begin your Journey Here!
Abstract
It is a daunting task for small business owners to open their own restaurant: 60% of new restaurants fail within their first three years. For many small businesses, Yelp exposure is a key factor in determining whether they make it past year three. The goal of our capstone project is to determine the key attributes and features that drive high ratings on Yelp. We used data visualizations in conjunction with machine learning algorithms to identify attributes and features correlated with high business ratings, and natural language processing to extract a wealth of information from review texts. We, the Fantastic Five, created an application using R Shiny to help small business owners assess the marketplace by three means:
- Map: Geolocation analysis of restaurant success
- Topic Modeling: Understanding negative reviews within market of specified category
- Cuisine Gallery: Understanding frequent cuisine topics within positive reviews
Contributed by Jiaxu Luo, Charles Leung, Danli Zeng and Samriddhi Shakya. They are currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from July 4th to September 23rd, 2016. This post is based on their final capstone project (due in the 12th week of the program).
Introduction
According to research conducted by Cornell University, 27% of new restaurants fail within the first year and nearly 60% close by year three. One of the main barriers to beating those odds is Yelp exposure: well-established restaurants with regular clientele have a base of high ratings and review counts that pushes them to the top of searches and provides a fallback for any mishaps. Restaurant ratings from Yelp are trusted globally as a metric; Yelp is one of the best examples of utilizing crowdsourced experiences and opinions. The question we seek to answer is: what are the key features that drive a restaurant's success? We believe there are identifiable features in the public data provided by Yelp that correlate with review ratings. These important factors could be either inherent attributes of the business, like opening hours or noise level, or subjective factors expressed by the customers. We explored the attributes important to high ratings with exploratory data visualization and predictive modeling, and mined the review text with a topic modeling algorithm known as Latent Dirichlet Allocation (LDA). Our final product, the culmination of our findings, is an R Shiny application, YelpQuest.
Data Processing
Our application sources its data from the Yelp 2016 Dataset Challenge, which provides five data tables: businesses, reviews, tips (shorter reviews), user information, and check-ins. The business table lists a restaurant's name, location, opening hours, category, average star rating, the number of reviews about the business, and a series of attributes such as noise level or reservations policy. The review table lists a restaurant's star rating, the review text, the review date, and the number of votes the review has received. We limited the scope of our dataset to the Greater Phoenix metropolitan area, then filtered the businesses by category to keep only restaurants and their 622,446 reviews. The text from those restaurant reviews forms the corpus of this project.
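The sketch below illustrates how such a subset could be assembled in R. It is not our exact pipeline: the file names follow the Yelp Dataset Challenge convention, and the column names (`state`, `categories`, `business_id`) are assumptions based on that schema.

```r
library(jsonlite)
library(dplyr)

# The challenge files are newline-delimited JSON; stream_in() reads them
# into data frames, one record per line.
business <- stream_in(file("yelp_academic_dataset_business.json"))
reviews  <- stream_in(file("yelp_academic_dataset_review.json"))

# Keep only Arizona businesses whose category list contains "Restaurants".
restaurants <- business %>%
  filter(state == "AZ",
         sapply(categories, function(x) "Restaurants" %in% x))

# Keep only the reviews written about those restaurants.
restaurant_reviews <- reviews %>%
  filter(business_id %in% restaurants$business_id)
```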
Challenges
- For prediction, the number of features derived from attributes is enormous, while there are only 9,427 restaurants. The amount of data we have is insufficient to support reliable results, so feature reduction or a more sophisticated model is necessary.
- The positive/negative standards of reviews differ by restaurant category, so review stars alone cannot be treated as a generalization of customers' opinions. For example, fast food restaurants generally receive lower ratings; as a result, 4 stars mean more for a fast food restaurant than for an Italian restaurant.
Business Table
- Identify restaurants in the business file
- Create a subset of the business file that includes only restaurants
- Create subsets of the reviews, check-ins, and tips files for those restaurants
- Summarize the data from the reviews, check-ins, and tips files (e.g., sum the number of check-ins/tips/reviews for each restaurant) into a table containing only the business ID and the summary fields, so it can be appended back to the restaurant file
- Merge the summary table back onto the restaurant/business file to serve as the final modeling dataset (see the sketch below)
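As a rough illustration, the summarize-and-merge steps might look like the following in dplyr; the data frame and column names (`checkins`, `tips`, `checkin_count`) are placeholders, not our production code.

```r
library(dplyr)

# Per-restaurant counts from the reviews and tips tables.
review_summary <- restaurant_reviews %>%
  group_by(business_id) %>%
  summarise(n_reviews = n())

tip_summary <- tips %>%
  group_by(business_id) %>%
  summarise(n_tips = n())

# checkin_count is a placeholder for however the check-in totals are stored.
checkin_summary <- checkins %>%
  group_by(business_id) %>%
  summarise(n_checkins = sum(checkin_count))

# Append the summaries back onto the restaurant table to form the
# final modeling dataset.
model_data <- restaurants %>%
  left_join(review_summary,  by = "business_id") %>%
  left_join(tip_summary,     by = "business_id") %>%
  left_join(checkin_summary, by = "business_id")
```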
Reviews Table
- Identify restaurants whose overall business rating is above or below the average for their category (e.g., the "above average" threshold might be 4.5 stars for Thai but 3.5 stars for fast food)
- Create a subset of positive reviews from above average restaurants
- Create a subset of negative reviews from below average restaurants
- Concatenate all the reviews within each of the positive and negative subsets
- From the concatenated reviews, create subsets for the top cuisines to serve as the modeling datasets for positive and negative review topics (see the sketch below)
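A simplified sketch of this splitting logic is shown below. It assumes a single primary `category` per restaurant and uses 4-star and 2-star cutoffs to define positive and negative reviews; both are simplifying assumptions for illustration.

```r
library(dplyr)

# Flag each restaurant as above or below its category's average rating.
restaurants_flagged <- model_data %>%
  group_by(category) %>%
  mutate(above_avg = stars > mean(stars)) %>%
  ungroup()

reviews_flagged <- restaurant_reviews %>%
  left_join(select(restaurants_flagged, business_id, category, above_avg),
            by = "business_id")

# Positive reviews (4+ stars) from above-average restaurants,
# negative reviews (2 stars or fewer) from below-average restaurants.
positive_reviews <- filter(reviews_flagged,  above_avg, stars >= 4)
negative_reviews <- filter(reviews_flagged, !above_avg, stars <= 2)

# Concatenate the review text per category to form the modeling corpora.
positive_corpus <- positive_reviews %>%
  group_by(category) %>%
  summarise(text = paste(text, collapse = " "))
```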
Exploratory Data Analysis
We first wanted to test our preconceived notions with exploratory analysis. When we think of a 5-star restaurant, we don't exactly think of fast food or pizza; we imagine lavish European cuisines like Italian or French. In the chart above, we plotted the average restaurant rating by category. Some categories, such as Thai or Greek, tend to have higher ratings, while buffets, fast food, and chicken wings receive lower ratings. The data seem to support our hypothesis that restaurant ratings are correlated with certain restaurant categories.
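A chart like this can be reproduced in a few lines of ggplot2; `category` and `stars` below are the assumed column names from the earlier sketches.

```r
library(dplyr)
library(ggplot2)

model_data %>%
  group_by(category) %>%
  summarise(avg_stars = mean(stars)) %>%
  ggplot(aes(x = reorder(category, avg_stars), y = avg_stars)) +
  geom_col() +
  coord_flip() +
  labs(x = "Restaurant category", y = "Average star rating")
```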
Also, we generally don’t think of divey, cheap eateries when we imagine 5-star restaurants either. To test whether price range has some level of dependence on restaurant ratings, we plotted the mosaic plot below, which reflects a chi-square test of price range against rating level (it gave us a highly significant p-value of 2.2e-16).
A mosaic plot uses color to compare the observed frequency of each combination of categorical variables against its expected frequency: blue means there are more observations for that combination than would be expected under independence, and red means fewer. In this particular case, we can see that the two variables are not completely independent, as reinforced by the χ² test of independence.
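Base R is enough to reproduce both the test and the plot; `price_range` below is an assumed column name for Yelp's price-range attribute.

```r
# Cross-tabulate price range against (rounded) star rating.
price_vs_stars <- table(model_data$price_range, round(model_data$stars))

# Chi-square test of independence; the post reports a p-value of 2.2e-16.
chisq.test(price_vs_stars)

# shade = TRUE colors cells blue or red according to how far the observed
# counts deviate from the counts expected under independence.
mosaicplot(price_vs_stars, shade = TRUE,
           xlab = "Price range", ylab = "Star rating",
           main = "Price range vs. rating level")
```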
Predictive Modeling
To determine the important features within the data, we decided to fit tree-based models. With so many attributes and features relative to the number of observations, our data table is sparse. Tree-based models can handle sparsity, and XGBoost excels at it. We fit three different models: Random Forest, XGBoost, and Gradient Boosted Trees. Below we show the results of XGBoost since, as expected, it had the most stable performance.
First, we passed all the available features as predictors into the model and obtained a model fit of R² = 0.936. The feature importance plot gave us one extremely strong predictor: average user star rating. However, this piece of information isn’t very insightful; the overall business rating is an average of all the user ratings, so it is obvious that it would be the most important feature on the plot. We decided to rerun XGBoost with all review-related features removed:
The second time, the R² dropped drastically to 0.318. Without review-related information, we cannot predict ratings very well. This time, location, check-in count and price range are the most important predictors.
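A minimal xgboost sketch of this second run (review-related features removed) is shown below; the excluded columns and the tuning values are illustrative assumptions, not the settings we tuned for the project.

```r
library(xgboost)

# Drop the response and the review-related predictors (names assumed).
feature_cols <- setdiff(names(model_data),
                        c("stars", "n_reviews", "avg_user_stars"))

# One-hot encode factors into a numeric design matrix.
X <- model.matrix(~ . - 1, data = model_data[, feature_cols])
y <- model_data$stars

fit <- xgboost(data = X, label = y,
               nrounds = 200, max_depth = 6, eta = 0.1,
               objective = "reg:squarederror", verbose = 0)

# Rank predictors by gain, mirroring the feature importance plot above.
importance <- xgb.importance(feature_names = colnames(X), model = fit)
xgb.plot.importance(importance)
```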
Topic Modeling
Pre-Processing
Before any modeling, we need to preprocess the review text:
- Remove commonly used "stop words", e.g., the, and, but ...
- Remove punctuation and escaped non-text characters
- Remove numbers
- Remove non-writable characters
- Strip whitespace
- Convert the text to lowercase
- Tokenize text into bigrams and trigrams
We decided to tokenize the review text into bigrams and trigrams (two- and three-word combinations) because some words change their meaning when paired with others. For example, "not good" would otherwise be split into the unigrams "not" and "good", which would be misleading in text analysis.
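One way to implement this pipeline is with the tm and RWeka packages, as sketched below. This is a generic recipe rather than our exact code; it assumes the review documents live in a character vector called `review_texts`, and it lowercases before removing stop words so the stop word matching works.

```r
library(tm)
library(RWeka)

# review_texts is assumed to be a character vector of review documents.
corpus <- VCorpus(VectorSource(review_texts))

# The cleaning steps listed above.
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, stripWhitespace)

# Tokenize into bigrams and trigrams with RWeka's n-gram tokenizer.
ngram_tokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 3))
dtm <- DocumentTermMatrix(corpus, control = list(tokenize = ngram_tokenizer))
```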
LDA and Data Visualization
To understand the key topics within the review data, we used the LDA topic modeling algorithm to extract 20 topics for each category and rating level. We used the R package "LDAvis" for interactive topic model visualization, which helps answer three questions:
- What is the meaning of each topic?
- How prevalent is each topic?
- How do the topics relate to each other?
The first part of LDAvis answers the first question. The bar graph visualizes term frequency per topic within our review text, with the terms listed on the y-axis and frequency on the x-axis. A special metric, saliency, is used to determine the most important terms within a topic: the estimated frequency of a term within a single topic relative to the term’s overall frequency in the review texts. The higher a term appears on the list, the more unique and important it is for that topic.
The second part of LDAvis is the intertopic distance map. It provides a global view of the topic model and answers the latter two questions. The prevalence of each topic is displayed by the diameter of its circle. The relation of topics to each other is calculated through the Jensen-Shannon divergence, a popular method of measuring similarity between two probability distributions, and then scaled onto two principal components. In short, the closer two bubbles are to each other, the more similar the topics are. The visualization is so descriptive that we implemented it as a feature of our application.
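Below is a hedged sketch of how the topic models and the LDAvis view can be produced with the topicmodels and LDAvis packages; k = 20 matches the post, while the Gibbs sampler and seed are illustrative choices and `dtm` is the document-term matrix from the preprocessing sketch.

```r
library(topicmodels)
library(LDAvis)
library(slam)

# Fit a 20-topic model on the document-term matrix from the previous step.
lda_fit <- LDA(dtm, k = 20, method = "Gibbs", control = list(seed = 42))
post    <- posterior(lda_fit)

# Assemble the inputs LDAvis needs and launch the interactive view.
json <- createJSON(
  phi            = post$terms,      # term distribution for each topic
  theta          = post$topics,     # topic distribution for each review
  doc.length     = row_sums(dtm),   # tokens per document
  vocab          = colnames(dtm),
  term.frequency = col_sums(dtm)
)
serVis(json)  # opens the term bars and intertopic distance map described above
```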
Application
Our final product is an R Shiny application, comprising the following main features:
- Map: Geolocation analysis of restaurant success
- Topic Modeling: Understanding negative reviews within market of specified category
- Cuisine Gallery: Understanding frequent cuisine topics within positive reviews
The main user will be the small business owner who intends either to start a new restaurant or to expand a current one.
With the map, the user can identify the best place to start a restaurant, or get a bird’s-eye view of the competition. The restaurants are displayed on an interactive map of Arizona, classified as “above average” or “below average” within the specified restaurant category. The user can get specific information about a restaurant by clicking one of the markers. Now say the user wants to start an Italian restaurant in the two-dollar ($$) price range:
- Select the filters – category: Italian, price range: two-dollar
- Identify on the map large clusters
The largest cluster where people generally favor Italian food is probably a good place for one to start a restaurant.
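The map view can be approximated with the leaflet package, as sketched below; the column names (`longitude`, `latitude`, `above_avg`, `price_range`) and the marker clustering option are assumptions for illustration, not the exact YelpQuest implementation.

```r
library(dplyr)
library(leaflet)

# Apply the user's filters: Italian restaurants in the $$ price range.
italian_two_dollar <- model_data %>%
  filter(category == "Italian", price_range == 2)

leaflet(italian_two_dollar) %>%
  addTiles() %>%
  addCircleMarkers(
    lng = ~longitude, lat = ~latitude,
    color = ~ifelse(above_avg, "green", "red"),  # above vs. below category average
    popup = ~paste0(name, " (", stars, " stars)"),
    clusterOptions = markerClusterOptions()      # groups nearby markers into clusters
  )
```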
The next feature is the topic model. Previously, the only way to understand other restaurants’ negative reviews was to browse each individual page; topic modeling is a much faster way to aggregate that information. A user can quickly explore the different topic bubbles and identify problems based on the frequent terms in the reviews. For example, if time seems to be a large topic, perhaps the user can take advantage of that when starting his or her restaurant.
The final major feature is the cuisine gallery. The gallery displays the food items most positively talked about in reviews. A user can then build their own restaurant’s menu by looking at the most popular terms.
Through predictive modeling and EDA, we identified key features to incorporate as predictors and as filters for our application, YelpQuest. Topic models built from the negative and positive reviews round out the product, which we hope will assist future small business owners in growing and succeeding.