Web Scraping and Sentiment Analysis of Yelp Reviews
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Contributed by Frank Wang. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place between April 11th and July 1st, 2016. This post is based on his third class project, Web Scraping, due in the 6th week of the program.
Human language data is among the most valuable data available, and much of it is free. In the age of the internet and social media, people's opinions, reviews, and recommendations have become a valuable resource for political science and for business. Thanks to modern technologies, we can now collect and analyze such data efficiently. In this project, I scraped Yelp review data and then applied sentiment analysis to classify documents by their polarity: the attitude of the writer.
Yelp Data Web Scraping
Yelp reviews can be downloaded for free; an API key is required to download the full data set. What I am interested in are the review text, the rating score, and the author's name. BeautifulSoup is used to scrape the web data; the extracted information is then saved to a data frame. Data for different categories, such as restaurants and parks, are saved to separate folders for analysis.
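The extraction step can be sketched as follows. The HTML snippet and class names below are illustrative placeholders, not Yelp's actual markup, which changes over time:

```python
from bs4 import BeautifulSoup

# Hypothetical snippet mimicking the shape of a Yelp review block
# (the class names here are made up for illustration).
html = """
<div class="review">
  <span class="user-name">Alice</span>
  <div class="rating" title="5.0 star rating"></div>
  <p class="review-text">Great food and friendly staff.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
review = soup.find("div", class_="review")

# Pull out the three fields of interest: author, rating, review text.
record = {
    "author": review.find("span", class_="user-name").get_text(strip=True),
    "rating": float(review.find("div", class_="rating")["title"].split()[0]),
    "text": review.find("p", class_="review-text").get_text(strip=True),
}
print(record)
```

In practice, one such record per review would be appended to a list and converted to a data frame before saving.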
FIG.1 Example of an ABC Kitchen review
FIG.2 Author information data (left) and the histogram of review rating scores
Sentiment Data Analysis
TextBlob is first used to check prediction accuracy. Although TextBlob provides an option to train on our own data using Naive Bayes, training is very slow, so the pre-trained model is used for simplicity.
FIG.3 shows the predicted scores for ABC Kitchen, rescaled to 1-5 so they can be compared with the real scores. The distribution is roughly symmetric with long tails at both ends. The predicted average score is 3.6, lower than the real average of 4.2. If we divide the scores into two categories, positive and negative, the prediction accuracy is about 96%. The accuracy drops sharply (below 50%) when the full 1-5 ratings are compared. The predictor is less generous than human reviewers: it assigns far fewer 5-star ratings. Other data sets, such as the park reviews, were also tried, and the overall conclusions are similar.
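The rescaling and the two-category comparison can be expressed as below. The linear mapping is an assumption for illustration; the post does not state the exact formula used:

```python
def polarity_to_stars(polarity):
    """Map a polarity in [-1, 1] linearly onto a 1-5 star scale:
    -1 -> 1 star, 0 -> 3 stars, +1 -> 5 stars."""
    return 1 + 2 * (polarity + 1)

def binary_accuracy(predicted, actual, threshold=3.0):
    """Fraction of reviews where the predicted and real scores fall on
    the same side of the positive/negative threshold."""
    hits = sum((p > threshold) == (a > threshold)
               for p, a in zip(predicted, actual))
    return hits / len(actual)

print(polarity_to_stars(0.6))
print(binary_accuracy([4.5, 2.0], [5.0, 1.0]))
```

Collapsing to two classes hides the fine-grained disagreement, which is why the binary accuracy (about 96%) is so much higher than the exact 1-5 match rate.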
FIG.3 Predicted rating score for restaurant review
FIG.4 Predicted rating score for park review
Training model using real review data
The NLTK toolkit is used in this study. The review data is transformed into a list of dictionaries: a bag-of-words representation. A simple unigram model is used for the first try, trained with Naive Bayes. The data set includes about 6,000 restaurant reviews. It is important to shuffle the data randomly before using it. The data is split into 70% training and 30% test sets. With this simple model, the prediction accuracy on the test data is about 72%; it drops to 62% with only 2,000 reviews. Apparently more data is needed.
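A toy version of this unigram bag-of-words plus Naive Bayes pipeline, with a few made-up mini-reviews standing in for the 6,000 scraped ones:

```python
import nltk

def word_feats(words):
    """Unigram bag-of-words: each word becomes a boolean feature."""
    return {w: True for w in words}

# Tiny labeled corpus standing in for the real shuffled review data.
train = [
    (word_feats(["great", "food", "loved", "it"]), "pos"),
    (word_feats(["amazing", "service", "great"]), "pos"),
    (word_feats(["terrible", "slow", "service"]), "neg"),
    (word_feats(["awful", "food", "hated", "it"]), "neg"),
]

clf = nltk.NaiveBayesClassifier.train(train)
print(clf.classify(word_feats(["great", "service"])))
clf.show_most_informative_features(3)
```

With the real data, the same `train`/`classify` calls are applied to the 70%/30% split, and `show_most_informative_features` produces the word list behind the word cloud.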
Fig. 5 shows the most informative words. Interestingly, some "positive" words, such as "greeting" and "politely", turn out to be informative for the negative class. These are not errors: since we use a unigram model, sentences are broken into a bag of single words, so negations and context are lost. We will improve on this with a bigram model in the future.
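As a sketch of that planned improvement, bigram features keep adjacent-word context, so a phrase like "no one politely greeting" is no longer reduced to the lone word "greeting":

```python
def bigram_feats(words):
    """Bag-of-bigrams: adjacent word pairs become features, so
    'politely greeting' and 'warm greeting' stay distinct."""
    return {" ".join(pair): True for pair in zip(words, words[1:])}

feats = bigram_feats(["no", "one", "politely", "greeting", "us"])
print(feats)
```

These dictionaries drop into the same Naive Bayes training step in place of the unigram features.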
FIG.5 Word cloud of the most informative words
Conclusion
Web reviews were scraped. Besides the review text, the pictures can contain useful information, so we also downloaded the review photos for future study.
A preliminary sentiment study was done with a simple Naive Bayes model. Its prediction accuracy is about 72%, which is low because of the simple model; the TextBlob package gives a higher binary accuracy of about 96%. We will continue the study with better models: word cleaning, a bigram bag-of-words model, a tf-idf transform, word stemming, and cross-validation.