Exploratory Data visualization of Amazon fine food reviews

Posted on Apr 29, 2016
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Contributed by Rob Castellano. He is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from April 11th to July 1st, 2016. This post is based on his first class project, R visualization (due in the 2nd week of the program).

Amazon reviews are often the most publicly visible reviews of consumer products. As a frequent Amazon user, I was interested in examining the structure of a large database of Amazon reviews and visualizing this data so as to be a smarter consumer and reviewer.

Note: All data analysis and visualizations here were produced in R. The code can be found on GitHub.

I. Introduction

Data

An example of an Amazon review is pictured below. It consists of the following information:

  1. Rating (1 - 5 stars)
  2. The review
  3. A summary of the review
  4. The number of people who have voted on whether the review is helpful.
  5. The number of people who found the review helpful.
  6. User ID
  7. Product ID

[Image: an example Amazon review]

I used a database of over 500,000 reviews of Amazon fine foods that is available via Kaggle and can be found here. This database contains each of the elements of a review listed above, as well as the time of the review and the user's nickname, neither of which I used.

It should be emphasized that all reviews considered are food reviews, so the reader should not consider the data representative of all Amazon products.

Initial goals

  1. Perform some basic exploratory analysis to better understand reviews.
  2. Identify the properties of helpful reviews.

II. Exploratory data analysis

Distribution of ratings

I first looked at the distribution of ratings among all of the reviews. We see that 5-star reviews constitute a large proportion (64%) of all reviews. The next most prevalent rating is 4-star (14%), followed by 1-star (9%), 3-star (8%), and finally 2-star reviews (5%).
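
These proportions can be computed directly from the score column. A minimal base-R sketch, using a toy data frame in place of the full Kaggle table (the column name `Score` follows the Kaggle schema; the sample values are illustrative only):

```r
# Toy stand-in for the Kaggle reviews table; the real data has ~568k rows.
reviews <- data.frame(Score = c(5, 5, 5, 4, 1, 3, 5, 2, 4, 5))

# Share of each star rating among all reviews, as percentages.
rating_dist <- prop.table(table(reviews$Score)) * 100
round(rating_dist, 1)
```

On the full dataset the same two lines reproduce the 64% / 14% / 9% / 8% / 5% breakdown above.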

[Figure: distribution of ratings among all reviews]

Popular words in reviews

A look at the most popular words in positive (4-5 stars) and negative (1-2 stars) reviews shows that both groups share many popular words, such as "like", "taste", "flavor", "one", "just", and "product". The words "good", "great", "love", "favorite", and "find" are indicative of positive reviews, while negative reviews contain distinguishing words such as "didn't" and "disappointed"; however, the distinguishing words in negative reviews appear less frequently than those in positive reviews.

The code to produce these word clouds is included below.
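
A minimal sketch of the word-frequency computation behind such clouds, in base R with illustrative texts (the author's exact code is not reproduced here, and the `wordcloud` package call shown in the comment is an assumption):

```r
# Toy review texts standing in for the full corpus.
texts <- c("Great product, love the taste",
           "Disappointed, didn't like the flavor",
           "Great flavor, just like I hoped")

# Lowercase, strip punctuation, split on whitespace, and count word frequencies.
# (A real pipeline would also drop stopwords such as "the".)
words <- unlist(strsplit(tolower(gsub("[[:punct:]]", "", texts)), "\\s+"))
freq  <- sort(table(words), decreasing = TRUE)
head(freq)

# With the wordcloud package installed, the cloud itself could be rendered as:
# wordcloud::wordcloud(names(freq), as.numeric(freq), min.freq = 1)
```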

 

[Word cloud: common words in positive reviews]

[Word cloud: common words in negative reviews]

III. Helpfulness

Reviews are voted upon based on how helpful other reviewers find them. The most helpful reviews appear near the top of the list of reviews and are hence more visible. As such, I was interested in exploring the properties of helpful reviews.

How many reviews are helpful?

Among all reviews, almost half (48%) are not voted on at all. I divided the reviews that were voted upon into three categories: helpful reviews had more than 75% of voters find the review helpful, unhelpful reviews had less than 25% of voters find the review helpful, and an intermediate group fell between 25% and 75% helpfulness. This choice of division did not seem to have a large impact on the results; we will henceforth use this terminology to describe the helpfulness of reviews. Among reviews that are voted on, helpful reviews are the most common.
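
This three-way split can be expressed with `cut()` on the helpfulness ratio. A sketch under the assumption that, as in the Kaggle schema, `HelpfulnessNumerator` counts helpful votes and `HelpfulnessDenominator` counts total votes (toy values shown):

```r
# Toy vote counts; column names follow the Kaggle schema.
votes <- data.frame(HelpfulnessNumerator   = c(9, 1, 5, 0),
                    HelpfulnessDenominator = c(10, 10, 10, 0))

# Fraction of voters who found each review helpful (0/0 gives NaN for unvoted reviews).
ratio <- with(votes, HelpfulnessNumerator / HelpfulnessDenominator)

# Bin into the three categories used in the analysis; unvoted reviews become NA.
category <- cut(ratio,
                breaks = c(0, 0.25, 0.75, 1),
                labels = c("not helpful", "intermediate", "helpful"),
                include.lowest = TRUE)
as.character(category)
```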

[Figure: distribution of review helpfulness]

How do ratings affect helpfulness?

For each rating, I looked at the reviews that were voted on and the percent of those reviews that users found helpful or not helpful. As the rating becomes more positive, the reviews become more helpful (and less unhelpful). For 1-star reviews voted upon, 34% were voted helpful, while 27% were found not helpful. For 5-star reviews, 81% were found helpful and 7% not helpful.

[Figure: percent of reviews found helpful, by rating]

IV. Word count

One of the most basic characteristics of a review is the number of words it contains. I wanted to see how word count related to the other properties of reviews already discussed, including rating and helpfulness.

How does word count vary by rating?

The first question I had regarding word count was how it varied with rating. 5-star reviews had the lowest median word count (53 words), while 3-star reviews had the largest median word count (71 words).
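
Word counts here can be obtained by splitting each review on whitespace. A base-R sketch with illustrative texts (on the full data, the grouping column would be the Kaggle `Score` field):

```r
# Toy reviews with a rating and free text.
reviews <- data.frame(
  Score = c(5, 5, 3),
  Text  = c("Love it",
            "Great snack, will buy again",
            "It was okay, not what I expected from the pictures"),
  stringsAsFactors = FALSE
)

# Number of whitespace-separated words in each review.
reviews$WordCount <- lengths(strsplit(reviews$Text, "\\s+"))

# Median word count per star rating.
med_by_rating <- tapply(reviews$WordCount, reviews$Score, median)
med_by_rating
```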

 

[Figure: word count distribution by rating]

 

How does word count relate to helpfulness?

The word counts for helpful and not helpful reviews have similar distributions, with the greatest concentration of reviews at approximately 25 words. However, not helpful reviews have a larger concentration of low-word-count reviews, while helpful reviews include more long reviews. Helpful reviews have a higher median word count (67 words) than not helpful reviews (54 words).

[Figure: word count distribution by helpfulness]

V. Frequency of reviewers

Using User IDs, one can recognize repeat reviewers. Reviewers that have reviewed over 50 products account for over 5% of all reviews in the database; we will call such reviewers frequent reviewers. (The cutoff choice of 50, as opposed to another choice, did not seem to have a large impact on the results.) I asked: does the behavior of frequent reviewers differ from that of infrequent reviewers?
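
Identifying frequent reviewers is a count over the User ID column. A sketch with toy IDs (the over-50-reviews cutoff comes from the analysis above; the IDs are made up):

```r
# Toy user IDs; each element is one review's author.
user_ids <- c(rep("A1", 60), rep("B2", 3), rep("C3", 55), "D4")

# Reviews per user, and the users with more than 50 reviews.
counts   <- table(user_ids)
frequent <- names(counts)[counts > 50]

# Share of all reviews written by frequent reviewers.
share <- mean(user_ids %in% frequent)
frequent
round(share * 100, 1)
```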

Are frequent reviewers more discerning?

The distribution of ratings among frequent reviewers is similar to that of all reviews. However, we can see that frequent reviewers give fewer 5-star reviews and fewer 1-star reviews. Frequent reviewers appear to be more discerning in the sense that they give less extreme ratings than infrequent reviewers.

[Figure: rating distribution for frequent reviewers]

Are frequent reviewers more helpful?

The distribution of helpfulness for frequent reviewers is similar to that of all reviews. However, frequent reviewers are more likely to have their reviews voted on and, when voted on, are more likely to be voted helpful and less likely to be voted unhelpful.

[Figure: helpfulness distribution for frequent reviewers]

 

Are frequent reviewers more verbose?

The distributions of word counts for frequent and infrequent reviewers show that infrequent reviewers have a large number of reviews with low word counts. On the other hand, the largest concentration of word counts is higher for frequent reviewers than for infrequent reviewers. Moreover, the median word count for frequent reviewers is higher than the median for infrequent reviewers.

[Figure: word count distribution by reviewer frequency]

 

VI. Conclusions

Findings

  • Positive reviews are very common.
  • Positive reviews are shorter.
  • Longer reviews are more helpful.
  • Despite being more common and shorter, positive reviews are found more helpful.
  • Frequent reviewers are more discerning in their ratings, write longer reviews, and write more helpful reviews.

Further analysis

  • Analyze by category of product: The data I used contains product IDs, but not a categorization of the products. Obtaining more information on product categories, or using the review text to infer product information, and then doing analysis across product categories would be interesting.
  • Develop a model for predicting a review's helpfulness based on user, rating, and text of the review.
  • Investigate the relationship between the products and the reviewers. For example, do certain groups of reviewers review similar groups of products?

About Author

Rob Castellano

Rob recently received his Ph.D. in Mathematics from Columbia. His training as a pure mathematician has given him strong quantitative skills and experience in using creative problem solving techniques. He has experience conveying abstract concepts to both experts...
