Exploratory visualization of Amazon fine food reviews

Rob Castellano
Posted on Apr 29, 2016


Contributed by Rob Castellano. He is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from April 11th to July 1st, 2016. This post is based on his first class project, R visualization, due in the 2nd week of the program.

Amazon reviews are often the most publicly visible reviews of consumer products. As a frequent Amazon user, I was interested in examining the structure of a large database of Amazon reviews and visualizing this information so as to be a smarter consumer and reviewer.

Note: All data analysis and visualizations here were produced in R. The code can be found on GitHub.

I. Introduction

Data

An example of an Amazon review is pictured below. It consists of the following information:

  1. Rating (1 - 5 stars)
  2. The review
  3. A summary of the review
  4. The number of people who have voted on whether the review is helpful.
  5. The number of those voters who found the review helpful.
  6. User ID
  7. Product ID

[Figure: an example Amazon review]

I used a database of over 500,000 reviews of Amazon fine foods that is available via Kaggle and can be found here. The database contains each of the elements of a review listed above, as well as the time of the review and the user's nickname, neither of which I used.

It should be emphasized that all reviews considered are food reviews, so the reader should not consider the data representative of all Amazon products.

Initial goals

  1. Perform some basic exploratory analysis to better understand reviews.
  2. Determine the properties of helpful reviews.

II. Exploratory analysis

Distribution of ratings

I first looked at the distribution of ratings among all of the reviews. 5-star reviews constitute a large proportion (64%) of all reviews. The next most prevalent rating is 4 stars (14%), followed by 1-star (9%), 3-star (8%), and finally 2-star reviews (5%).
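For illustration, the percentage breakdown above reduces to a frequency count over the ratings column. The original analysis was done in R; this is a rough Python sketch on toy data (function and variable names are hypothetical, not from the author's code):

```python
from collections import Counter

def rating_distribution(scores):
    """Return the share of each star rating (1-5) as a fraction of all reviews."""
    counts = Counter(scores)
    total = len(scores)
    return {star: counts.get(star, 0) / total for star in range(1, 6)}

# Toy example; the real dataset has over 500,000 rows.
scores = [5, 5, 5, 4, 1, 3, 5, 2, 5, 4]
dist = rating_distribution(scores)
```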

[Figure: distribution of ratings]

Popular words in reviews

A look at the most popular words in positive (4-5 star) and negative (1-2 star) reviews shows that the two groups share many popular words, such as "like", "taste", "flavor", "one", "just", and "product". The words "good", "great", "love", "favorite", and "find" are indicative of positive reviews, while negative reviews are marked by words such as "didn't" and "disappointed"; however, these distinguishing words appear less frequently in negative reviews than their counterparts do in positive reviews.

The code to produce these word clouds is included below.

 

[Figure: word cloud of common words in positive reviews]

[Figure: word cloud of common words in negative reviews]

https://gist.github.com/rtcastellano/52550e34f912328e376436fefed6c62c
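The author's R code is in the gist linked above. Purely as an illustration of the word-frequency step that underlies a word cloud, here is a rough Python sketch; the tokenization and stopword list are simplified assumptions, not the original preprocessing:

```python
import re
from collections import Counter

# A deliberately tiny stopword list for illustration.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "it", "this", "i", "my"}

def top_words(reviews, n=10):
    """Tokenize reviews, drop stopwords, and return the n most common words."""
    words = []
    for text in reviews:
        words.extend(w for w in re.findall(r"[a-z']+", text.lower())
                     if w not in STOPWORDS)
    return Counter(words).most_common(n)

# Hypothetical toy reviews echoing the words highlighted in the text.
positive = ["Great taste, I love this product", "Good flavor, great product"]
negative = ["Disappointed, didn't like the taste", "Bad flavor, disappointed"]
```

A word-cloud library would then size each word by its count from `top_words`.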

III. Helpfulness

Reviews are voted upon based on how helpful other reviewers find them. The most helpful reviews appear near the top of the list of reviews and are hence more visible. As such, I was interested in exploring the properties of helpful reviews.

How many reviews are helpful?

Among all reviews, almost half (48%) are not voted on at all. I divided the reviews that were voted on into three categories: helpful reviews, where more than 75% of voters found the review helpful; unhelpful reviews, where fewer than 25% did; and an intermediate group with 25-75% helpfulness. This choice of cutoffs did not have a large impact on the results; we will use this terminology for helpfulness from here on. Among reviews that are voted on, helpful reviews are the most common.
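The three-way split is a simple bucketing rule on the vote counts; a minimal Python sketch of the thresholds described above (the original analysis was in R, and the function name is hypothetical):

```python
def helpfulness_category(helpful_votes, total_votes):
    """Bucket a review by the fraction of voters who found it helpful.

    Thresholds (strictly above 75%, strictly below 25%) follow the write-up;
    reviews with no votes form their own category.
    """
    if total_votes == 0:
        return "not voted on"
    ratio = helpful_votes / total_votes
    if ratio > 0.75:
        return "helpful"
    if ratio < 0.25:
        return "not helpful"
    return "intermediate"
```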

[Figure: distribution of review helpfulness]

How do ratings affect helpfulness?

For each rating, I looked at the reviews that were voted on and the percentage of those reviews that voters found helpful or not helpful. As the rating becomes more positive, reviews become more helpful (and less unhelpful): of 1-star reviews that were voted on, 34% were voted helpful and 27% not helpful, while for 5-star reviews, 81% were found helpful and only 7% not helpful.

[Figure: percent of reviews found helpful and not helpful, by rating]

IV. Word count

One of the most basic characteristics of a review is the number of words it contains. I wanted to see how word count related to the other properties of reviews already discussed, including rating and helpfulness.

How does word count vary by rating?

The first question I had regarding word count was how it varied with rating. 5-star reviews had the lowest median word count (53 words), while 3-star reviews had the highest (71 words).
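Computing this is a group-by-median over word counts; a minimal Python sketch, assuming each review is a (rating, text) pair (the original analysis was in R, and all names here are hypothetical):

```python
from statistics import median

def word_count(text):
    """Count words by whitespace splitting, a simplifying assumption."""
    return len(text.split())

def median_words_by_rating(reviews):
    """reviews: iterable of (rating, text) pairs.

    Returns the median word count per rating.
    """
    by_rating = {}
    for rating, text in reviews:
        by_rating.setdefault(rating, []).append(word_count(text))
    return {rating: median(counts) for rating, counts in by_rating.items()}
```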

 

[Figure: word count distribution by rating]

 

How does word count relate to helpfulness?

The word counts for helpful and not helpful reviews have similar distributions, with the greatest concentration at approximately 25 words. However, not helpful reviews have a larger concentration at low word counts, while helpful reviews skew longer: helpful reviews have a higher median word count (67 words) than not helpful reviews (54 words).

[Figure: word count distributions for helpful and not helpful reviews]

V. Frequency of reviewers

Using user IDs, one can identify repeat reviewers. Reviewers who have reviewed over 50 products account for over 5% of all reviews in the database; we will call them frequent reviewers. (The cutoff of 50, as opposed to another choice, did not have a large impact on the results.) I asked: does the behavior of frequent reviewers differ from that of infrequent reviewers?
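Identifying frequent reviewers amounts to counting reviews per user ID and applying the cutoff; a minimal Python sketch of the rule above (the original analysis was in R, and the names are hypothetical):

```python
from collections import Counter

def frequent_reviewers(user_ids, cutoff=50):
    """Return the set of user IDs with more than `cutoff` reviews.

    `user_ids` is one entry per review; the default cutoff of 50
    follows the write-up ("reviewed over 50 products").
    """
    counts = Counter(user_ids)
    return {uid for uid, n in counts.items() if n > cutoff}
```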

Are frequent reviewers more discerning?

The distribution of ratings among frequent reviewers is similar to that of all reviews. However, frequent reviewers give fewer 5-star and fewer 1-star reviews. They appear to be more discerning in the sense that they give less extreme ratings than infrequent reviewers.

[Figure: rating distribution for frequent reviewers]

Are frequent reviewers more helpful?

The distribution of helpfulness for frequent reviewers is similar to that of all reviews. However, frequent reviewers are more likely to have their reviews voted on and, when voted on, more likely to be voted helpful and less likely to be voted unhelpful.

[Figure: helpfulness distribution for frequent reviewers]

 

Are frequent reviewers more verbose?

The distributions of word counts for frequent and infrequent reviewers show that infrequent reviewers write a large number of short reviews. The largest concentration of word counts is higher for frequent reviewers than for infrequent reviewers, and the median word count for frequent reviewers is higher as well.

[Figure: word count distributions for frequent and infrequent reviewers]

 

VI. Conclusions

Findings

  • Positive reviews are very common.
  • Positive reviews are shorter.
  • Longer reviews are more helpful.
  • Despite being more common and shorter, positive reviews are found more helpful.
  • Frequent reviewers are more discerning in their ratings, write longer reviews, and write more helpful reviews.

Further analysis

  • Analyze by product category: the data I used contains product IDs, but not a categorization of the products. It would be interesting to obtain product categories (or infer them from the review text) and repeat this analysis by category.
  • Develop a model for predicting a review's helpfulness based on user, rating, and text of the review.
  • Investigate the relationship between the products and the reviewers. For example, do certain groups of reviewers review similar groups of products?

About Author

Rob Castellano


Rob recently received his Ph.D. in Mathematics from Columbia. His training as a pure mathematician has given him strong quantitative skills and experience in using creative problem solving techniques. He has experience conveying abstract concepts to both experts...
