Exploratory visualization of Amazon fine food reviews
Contributed by Rob Castellano. He is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from April 11th to July 1st, 2016. This post is based on his first class project, R visualization (due in the 2nd week of the program).
Amazon reviews are often the most publicly visible reviews of consumer products. As a frequent Amazon user, I was interested in examining the structure of a large database of Amazon reviews and visualizing this information so as to be a smarter consumer and reviewer.
Note: All data analysis and visualizations here were produced in R. The code can be found on GitHub.
An example of an Amazon review is pictured below. It consists of the following information:
- Rating (1 - 5 stars)
- The review
- A summary of the review
- The number of people who voted on whether the review is helpful.
- The number of people who voted that the review is helpful.
- User ID
- Product ID
I used a database of over 500,000 reviews of Amazon fine foods that is available via Kaggle and can be found here. This database contains each of the elements of a review listed above, as well as the time of the review and the user's nickname, neither of which I used.
It should be emphasized that all reviews considered are food reviews, so the reader should not consider the data representative of all Amazon products.
I. Goals
- Perform some basic exploratory analysis to better understand reviews.
- What are the properties of helpful reviews?
II. Exploratory analysis
Distribution of ratings
I first looked at the distribution of ratings among all of the reviews. We see that 5-star reviews constitute a large proportion (64%) of all reviews. The next most prevalent rating is 4-star (14%), followed by 1-star (9%), 3-star (8%), and finally 2-star reviews (5%).
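These proportions can be computed directly from the data. Below is a minimal sketch in R, assuming the Kaggle file is named "Reviews.csv" and stores the star rating in a `Score` column (both names are taken from the Kaggle dataset, not from this post):

```r
# Sketch: distribution of ratings. Assumes the Kaggle file "Reviews.csv"
# with star ratings in a Score column.
library(dplyr)
library(ggplot2)

reviews <- read.csv("Reviews.csv", stringsAsFactors = FALSE)

# Count reviews per star rating and convert to proportions.
rating_dist <- reviews %>%
  count(Score) %>%
  mutate(prop = n / sum(n))

# Bar chart of the rating distribution.
ggplot(rating_dist, aes(x = factor(Score), y = prop)) +
  geom_col() +
  scale_y_continuous(labels = scales::percent) +
  labs(x = "Rating (stars)", y = "Percent of reviews")
```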
Popular words in reviews
A look at the most popular words in positive (4-5 star) and negative (1-2 star) reviews shows that both share many popular words, such as "like", "taste", "flavor", "one", "just", and "product." The words "good", "great", "love", "favorite", and "find" are indicative of positive reviews, while negative reviews contain words such as "didn't" and "disappointed"; these distinguishing words, however, appear less frequently than the distinguishing words in positive reviews.
The code to produce these word clouds is included below.
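A sketch of such word-cloud code, using the `tm` and `wordcloud` packages and assuming the review text and rating live in `Text` and `Score` columns (column names assumed from the Kaggle dataset):

```r
# Sketch of the word clouds. Assumes `reviews` holds the Kaggle data,
# with review text in Text and star ratings in Score.
library(tm)
library(wordcloud)

make_cloud <- function(text) {
  # Build a corpus and apply standard text cleaning.
  corpus <- Corpus(VectorSource(text))
  corpus <- tm_map(corpus, content_transformer(tolower))
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, removeWords, stopwords("english"))
  # wordcloud accepts a corpus directly when frequencies are omitted.
  wordcloud(corpus, max.words = 100, random.order = FALSE)
}

# Sample to keep the corpus manageable on 500,000+ reviews.
set.seed(1)
make_cloud(sample(reviews$Text[reviews$Score >= 4], 10000))  # positive reviews
make_cloud(sample(reviews$Text[reviews$Score <= 2], 10000))  # negative reviews
```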
III. Helpfulness
Reviews are voted upon based on how helpful other reviewers find them. The most helpful reviews appear near the top of the list of reviews and are hence more visible. As such, I was interested in exploring the properties of helpful reviews.
How many reviews are helpful?
Among all reviews, almost half (48%) received no votes at all. I divided the reviews that were voted upon into three categories: helpful reviews, for which more than 75% of voters found the review helpful; unhelpful reviews, for which fewer than 25% did; and an intermediate group (25-75% helpfulness). This choice of cutoffs did not seem to have a large impact on the results; we will use this terminology throughout to describe the helpfulness of reviews. Among reviews that are voted on, helpful reviews are the most common.
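The categorization above can be sketched with `cut()`, assuming the Kaggle columns `HelpfulnessNumerator` (helpful votes) and `HelpfulnessDenominator` (total votes); these column names come from the Kaggle dataset, not from this post:

```r
# Sketch of the helpfulness categories. Reviews with no votes get NA.
library(dplyr)

reviews <- reviews %>%
  mutate(
    help_ratio = ifelse(HelpfulnessDenominator > 0,
                        HelpfulnessNumerator / HelpfulnessDenominator,
                        NA),
    # Bin the ratio into the three groups described in the text.
    helpfulness = cut(help_ratio,
                      breaks = c(-Inf, 0.25, 0.75, Inf),
                      labels = c("Not helpful", "Intermediate", "Helpful"))
  )

# Share of reviews that were never voted on (about 48% in this data).
mean(is.na(reviews$help_ratio))
```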
How do ratings affect helpfulness?
For each rating, I looked at the reviews that were voted on and the percent of those reviews that users found helpful or not helpful. As the rating becomes more positive, the reviews become more helpful (and less unhelpful). For 1-star reviews voted upon, 34% were voted helpful, while 27% were found not helpful. For 5-star reviews, 81% were found helpful and 7% not helpful.
IV. Word count
One of the most basic characteristics of a review is the number of words it contains. I wanted to see how word count related to the other properties of reviews already discussed, including rating and helpfulness.
How does word count vary by rating?
The first question I had regarding word count was how it varied with rating. 5-star reviews had the lowest median word count (53 words), while 3-star reviews had the highest (71 words).
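A sketch of this word-count calculation, assuming the review text is in a `Text` column (a Kaggle column name) and counting a "word" as a run of non-whitespace characters:

```r
# Sketch: median word count by star rating.
library(dplyr)
library(stringr)

# str_count with "\\S+" counts whitespace-separated tokens.
reviews <- reviews %>%
  mutate(word_count = str_count(Text, "\\S+"))

reviews %>%
  group_by(Score) %>%
  summarise(median_words = median(word_count))
```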
How does word count relate to helpfulness?
The word counts for helpful and not helpful reviews have similar distributions, with the greatest concentration at approximately 25 words. However, not helpful reviews have a larger concentration of low word counts, while helpful reviews include more long reviews. Helpful reviews have a higher median word count (67 words) than not helpful reviews (54 words).
V. Frequency of reviewers
Using user IDs, one can identify repeat reviewers. Reviewers who have reviewed over 50 products account for over 5% of all reviews in the database; we will call such reviewers frequent reviewers. (The cutoff of 50, as opposed to another choice, did not seem to have a large impact on the results.) I asked: does the behavior of frequent reviewers differ from that of infrequent reviewers?
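Flagging frequent reviewers can be sketched as a count-and-join, assuming reviewer IDs are in a `UserId` column (a Kaggle column name):

```r
# Sketch: mark reviews written by frequent reviewers (> 50 reviews).
library(dplyr)

# Number of reviews per reviewer.
review_counts <- reviews %>% count(UserId, name = "n_reviews")

# Attach the count to each review and flag frequent reviewers.
reviews <- reviews %>%
  left_join(review_counts, by = "UserId") %>%
  mutate(frequent = n_reviews > 50)

# Share of all reviews written by frequent reviewers.
mean(reviews$frequent)
```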
Are frequent reviewers more discerning?
The distribution of ratings among frequent reviewers is similar to that of all reviews. However, frequent reviewers give fewer 5-star and fewer 1-star reviews. Frequent reviewers appear to be more discerning in the sense that they give less extreme ratings than infrequent reviewers.
Are frequent reviewers more helpful?
The distribution of helpfulness for frequent reviewers is similar to that of all reviews. However, frequent reviewers are more likely to have their reviews voted on and, when voted on, more likely to be voted helpful and less likely to be voted unhelpful.
Are frequent reviewers more verbose?
The distributions of word counts for frequent and infrequent reviewers show that infrequent reviewers write a large number of short reviews. On the other hand, the largest concentration of word counts is higher for frequent reviewers than for infrequent reviewers. Moreover, the median word count for frequent reviewers is higher than the median for infrequent reviewers.
VI. Conclusions
- Positive reviews are very common.
- Positive reviews are shorter.
- Longer reviews are more helpful.
- Despite being more common and shorter, positive reviews are found more helpful.
- Frequent reviewers are more discerning in their ratings, write longer reviews, and write more helpful reviews.
VII. Future work
- Analyze by category of product: The data I used contain product IDs, but not a categorization of the products. Obtaining product-category information, or using the review text to infer it, and repeating the analysis across product categories would be interesting.
- Develop a model for predicting a review's helpfulness based on user, rating, and text of the review.
- Investigate the relationship between the products and the reviewers. For example, do certain groups of reviewers review similar groups of products?