Scraping millions of reviews from Amazon.com

QUENTIN PICARD
Posted on Oct 28, 2017

Introduction

As part of a web scraping project I wanted to answer the following question: could I use the number of product reviews on Amazon.com as a proxy for product sales? Could I then analyze market share for the companies in a product category?

The project

Scraping Amazon obviously comes with challenges. I had to modify my code several times before I was able to capture all of the reviews (99%+) in a given category. I used the following tools for my project:

  • Scrapy: I implemented 2 separate instances of the tool as the base for my scraping program: one instance to retrieve all the products in a category ('Networking & Wireless') or sub-category ('Modems'), and another to retrieve all of the reviews for that list of products.
    Note: I could have created 2 spiders within the same Scrapy project, but found it easier to manage the settings and outputs separately.
  • Redis: I stored intermediate values (product codes, URLs to scrape) in a Redis set key accessible remotely. This allowed me to resume interrupted programs easily without losing cached information. It also allowed me to run concurrent instances all feeding from the same queues.
  • Amazon Web Services: I used one AWS RDS server with a PostgreSQL database (free tier) to store the results, and one EC2 server (also free tier) to run my Scrapy remotely. The database had a 'products' table and a 'reviews' table. I used the Amazon Standard Identification Number (ASIN) as the common foreign key.
  • Jupyter Notebooks: I analyzed the data in a Jupyter Notebook.
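The two-table layout with ASIN as the common foreign key can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 in place of the PostgreSQL instance described above; the column names and the ASIN value are assumptions, not the project's actual schema.

```python
import sqlite3

# In-memory stand-in for the AWS RDS PostgreSQL database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (
        asin     TEXT PRIMARY KEY,   -- Amazon Standard Identification Number
        title    TEXT,
        category TEXT
    );
    CREATE TABLE reviews (
        review_id   TEXT PRIMARY KEY,
        asin        TEXT REFERENCES products(asin),  -- common foreign key
        rating      INTEGER,
        review_date TEXT
    );
""")

# Hypothetical rows (placeholder ASIN):
conn.execute("INSERT INTO products VALUES ('B000TEST01', 'SB6120 cable modem', 'Modems')")
conn.execute("INSERT INTO reviews VALUES ('R1', 'B000TEST01', 5, '2017-01-15')")

# Reviews join back to their product through the ASIN:
row = conn.execute("""
    SELECT p.title, COUNT(r.review_id)
    FROM products p JOIN reviews r ON p.asin = r.asin
    GROUP BY p.asin
""").fetchone()
print(row)  # ('SB6120 cable modem', 1)
```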

I want to acknowledge Hartley Brody's amazing blog, where I found much inspiration for this project.

My code and notebook are accessible here.

Lessons Learned

  • Managing Runs
    In order to scrape large volumes of data, you need to be able to track your program runs. I could watch the logs live (directly on the console or with tail -f on the log file), but I also monitored the Redis queues to check on progress. Rather than just hoping the program would complete in a reasonable time, I could always estimate the run duration from the pace and the number of elements left in the queue. I also only launched big runs after extensive testing, so as not to waste resources.
  • Avoiding Detection
    Retrieving millions of reviews would quickly get a single IP banned, so I rotated IPs rented on sharedproxy.com. This website worked best for me as they don't have bandwidth limits (I spent $27 for 300 IPs for 5 days). I used an existing proxy middleware called scrapy.proxies for the rotation. In hindsight I wish I had implemented it myself, as I ended up spending a relatively large amount of time debugging it. I used a generic header/user agent and generated a new cookie session with each connection; however, the real game changer was the next point.
  • Explicit vs. Recursive
    Well into my project, I found myself being banned faster and faster by Amazon. Somehow they were picking up on my crawler clicking 'next' continuously to access all the review pages. So I decided to understand how URLs were constructed and access them directly. I created a queue with the URLs of all the review pages for all of my target products and accessed them in random order. After that, I stopped being detected completely. The additional benefit from this strategy was that I had much faster code with a single loop vs. nested function calls. Finally, as explained above, the queue of URLs could be processed by several instances of the programs in parallel (from the EC2 server or my machine).
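The proxy rotation described above boils down to a small Scrapy downloader middleware. The sketch below is not the actual scrapy.proxies code, just a minimal illustration of the mechanism; Scrapy middlewares are duck-typed, so the class needs no base class, and the demonstration uses a stand-in request object (with placeholder proxy addresses) so it runs without Scrapy installed.

```python
import random

class RotatingProxyMiddleware:
    """Minimal sketch of a rotating-proxy downloader middleware.

    Scrapy calls process_request() for every outgoing request; setting
    request.meta['proxy'] routes that single request through the chosen proxy.
    """

    def __init__(self, proxies):
        self.proxies = list(proxies)

    def process_request(self, request, spider):
        # Pick one of the rented IPs at random for each request.
        request.meta["proxy"] = random.choice(self.proxies)
        return None  # let Scrapy continue handling the request


# Demonstration with a stand-in for scrapy.Request:
class FakeRequest:
    def __init__(self):
        self.meta = {}

pool = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]  # placeholder proxies
mw = RotatingProxyMiddleware(pool)
req = FakeRequest()
mw.process_request(req, spider=None)
print(req.meta["proxy"])  # one of the placeholder proxies
```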
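The explicit-URL strategy can be sketched as below: given each product's displayed review count, build every review-page URL up front and shuffle the whole list. The URL pattern shown is an assumption about how Amazon paginated reviews at the time and should be verified against the live site; the ASINs are placeholders.

```python
import random

def review_page_urls(asins_with_counts, reviews_per_page=10):
    """Build direct URLs for every review page of every product,
    instead of recursively following 'next' links.

    asins_with_counts: mapping of ASIN -> total review count
    (the count shown on the main product page).
    """
    urls = []
    for asin, n_reviews in asins_with_counts.items():
        n_pages = -(-n_reviews // reviews_per_page)  # ceiling division
        for page in range(1, n_pages + 1):
            # Assumed URL pattern -- check against the live site.
            urls.append(
                f"https://www.amazon.com/product-reviews/{asin}/"
                f"?pageNumber={page}"
            )
    # Visiting pages in random order avoids the sequential-crawl signature.
    random.shuffle(urls)
    return urls

queue = review_page_urls({"B000TEST01": 25, "B000TEST02": 8})
print(len(queue))  # 3 pages + 1 page = 4 URLs
```

In the project, a list like this was pushed into a Redis set so that several Scrapy instances (on EC2 or a local machine) could pop URLs from the same queue in parallel.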

Results

I ended up scraping ~2 million reviews over 2 days. To verify I had retrieved everything, I compared the number of rows in my database with the number of reviews displayed by Amazon on the main product page. I had scraped 99-100% of the reviews for many products, yet for others only ~50%. After investigation, it turned out that Amazon displays the sum of all reviews across the different models available (example here), meaning I had indeed captured all the unique reviews. Although sharing reviews across models seems justified in some cases (e.g. the same product under different codes when an extra cable is included), in others the products were quite different. Showing the sum of all reviews makes the individual products look more successful than they really are. In the categories I parsed, ~20% of the reviews counted toward the review totals of several products. Note that the specific model being reviewed is indicated on the detail review page.

Case Study

There are obviously many things one can do with Amazon reviews; I will give just one example. Imagine that an investor's goal is to study the evolution of Zoom Telephonics' (ticker: ZMTP) market share in the US cable modem market. In the spring of 2015, Zoom signed an exclusive license agreement with Motorola Mobility LLC for the Motorola brand (going into effect January 1st, 2016). The Motorola brand was licensed to Arris at the time. How would the market shares of Arris, Zoom Telephonics, and the two other main players, Netgear and TP-Link, shake out afterwards?

I plotted the number of reviews for the top modems (trailing 12 months):
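As an illustration of how such a series can be derived from the reviews table, here is a small standard-library-only sketch that counts each product's reviews over a trailing 12-month window; the ASINs and dates are made-up data points, not the scraped results.

```python
from datetime import date, timedelta

def trailing_12m_counts(reviews, as_of):
    """Count reviews per ASIN dated within the 365 days before as_of.

    reviews: iterable of (asin, review_date) tuples.
    """
    window_start = as_of - timedelta(days=365)
    counts = {}
    for asin, review_date in reviews:
        if window_start < review_date <= as_of:
            counts[asin] = counts.get(asin, 0) + 1
    return counts

# Hypothetical data points:
reviews = [
    ("B000TEST01", date(2017, 9, 1)),
    ("B000TEST01", date(2017, 3, 15)),
    ("B000TEST02", date(2016, 1, 1)),   # falls outside the window
]
print(trailing_12m_counts(reviews, as_of=date(2017, 10, 15)))
# {'B000TEST01': 2}
```

Sliding the as_of date month by month yields the trailing-12-month curves plotted above.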

A couple of comments on this graph:

  • As of late October 2017, the Arris Motorola modem (SB6120) still shows in the charts. It hasn't been produced for a while, since Arris can no longer use the Motorola name, but it keeps its reviews. In my experience, mapping products to companies is not trivial and would probably have to be verified manually; it only needs to be done once per new product, though.
  • The new Motorola modems manufactured by Zoom didn't make it to the top. It appears they had a hard time competing with established models with tens of thousands of reviews (more on this in the conclusion).

Note: Clearly there are many other factors influencing market share in addition to the number of reviews on Amazon.com, e.g. advertising or brick-and-mortar sales. Covering all of those is outside the scope of this document, but needs to be kept in mind when drawing conclusions.

To circumvent this problem, I looked at the new generation of modems supporting the DOCSIS 3.1 standard. It provided a cleaner starting point, as all major brands released new products within a short timeframe.

Here, the Motorola modem is indeed the one manufactured by Zoom. Despite a few months' delayed start, we can see that it is catching up in number of reviews. The tentative conclusion is that Zoom will grab a significant share of the cable modem market thanks to their Motorola licensing deal.

Note: This example is for illustration and not meant as investment advice.

Conclusion and Next Steps

Overall, I accomplished my initial goals for this project. Using review counts to predict quarterly sales is best suited to companies with a manageable number of SKUs. The mapping between companies and products is not always consistent and needs monitoring: as we have seen, sometimes the brand 'Motorola' is used instead of the manufacturer 'Zoom Telephonics'; sometimes different names are used for the company, or the field is left blank.

Review counts are good for spotting trends via the rate of growth, but it is difficult to derive market share from absolute review numbers. In the 18 months to mid-October 2017, Zoom Telephonics' market share on Amazon.com grew from 3% to 27% while Arris' fell from 57% to 38%, yet the review count for Zoom's top cable modem is still a fraction of that of Arris' top one (~700 vs. ~25k).

No project is ever truly complete and I can think of many ways to improve the current version. One promising area would be to include the rating in the analysis, e.g. how does the rating explain subsequent review count growth?

About Author

QUENTIN PICARD

Quentin holds a BS in Electrical Engineering with a minor in Computer Science from Telecom ParisTech in France.

Comments
Daniel Armstrong July 4, 2018
I would first like to say I really enjoyed your post, and the comments in your code were extremely helpful in aiding in my understanding of how your project worked. I am wondering if you could help me understand the type of problems you were having with scrapy.proxies. What would you have done differently in your implementation? Do you have any tips for using scrapy.proxies?
QUENTIN PICARD February 5, 2018
Hi Marc, I didn't research tools such as proxycrawl as I wanted to keep full control for my project. I looked it up and found a few positive comments so I might try it in the future.
Marc February 4, 2018
Hey Quentin, you said that you used sharedproxy to get proxies; why didn't you use something like proxycrawl.com? In my experience it's much better to use a service like the one they provide than to handle your own proxies.
