Scraping millions of reviews from Amazon.com
Introduction
As part of a web scraping project I wanted to answer the following question: could I use the number of product reviews on Amazon.com as a proxy for product sales? Could I then analyze market share for the companies in a product category?
The Project
Scraping Amazon obviously comes with challenges. I had to modify my code several times before I was able to capture all of the reviews (99%+) in a given category. I used the following tools for my project:
- Scrapy: I implemented 2 separate instances of the tool as the base for my scraping program: one instance to retrieve all the products in a category ('Networking & Wireless') or sub-category ('Modems'), and another to retrieve all of the reviews for that list of products.
Note: I could have created 2 spiders within the same Scrapy project, but found it easier to manage the settings and outputs separately.
- Redis: I stored intermediate values (product codes, URLs to scrape) in a Redis set accessible remotely. This allowed me to resume interrupted programs easily without losing cached information. It also allowed me to run concurrent instances all feeding from the same queues (a minimal spider sketch follows this list).
- Amazon Web Services: I used one AWS RDS server with a PostgreSQL database (free tier) to store the results, and one EC2 server (also free tier) to run my Scrapy spiders remotely. The database had a 'products' table and a 'reviews' table, with the Amazon Standard Identification Number (ASIN) as the common foreign key (a schema sketch follows this list).
- Jupyter Notebooks: I analyzed the data in a Jupyter Notebook.
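To make this setup more concrete, here is a minimal sketch of a review spider feeding from a shared Redis set. The host, key name and CSS selectors are illustrative placeholders rather than my actual production code, and Amazon's markup changes regularly.

```python
import redis
import scrapy


class ReviewSpider(scrapy.Spider):
    """Pops review-page URLs from a shared Redis set until it is empty.

    Because the queue lives in Redis, an interrupted run can simply be
    restarted, and several instances (on EC2 or locally) can consume
    from the same set concurrently.
    """
    name = "reviews"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Hypothetical connection details and key name.
        self.queue = redis.StrictRedis(host="my-redis-host", port=6379, db=0)
        self.queue_key = "review_urls"

    def start_requests(self):
        # SPOP removes and returns a random member, so no URL is ever
        # handed to two concurrent instances.
        url = self.queue.spop(self.queue_key)
        while url:
            yield scrapy.Request(url.decode("utf-8"), callback=self.parse)
            url = self.queue.spop(self.queue_key)

    def parse(self, response):
        # Placeholder selectors -- Amazon's review markup changes regularly.
        asin = response.url.split("/product-reviews/")[1].split("/")[0]
        for review in response.css("div.review"):
            yield {
                "asin": asin,
                "rating": review.css("i.review-rating span::text").extract_first(),
                "title": review.css("a.review-title span::text").extract_first(),
                "date": review.css("span.review-date::text").extract_first(),
            }
```

The items yielded here were then written to the 'reviews' table in PostgreSQL (in Scrapy, typically via an item pipeline).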
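And a sketch of the two tables on the RDS instance, with the ASIN linking them; column names other than the ASIN are illustrative, not my exact schema.

```python
import psycopg2

# Illustrative connection details for the RDS instance.
conn = psycopg2.connect(host="my-rds-host", dbname="amazon",
                        user="user", password="password")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS products (
            asin TEXT PRIMARY KEY,          -- Amazon Standard Identification Number
            title TEXT,
            brand TEXT,
            displayed_review_count INTEGER  -- review count shown on the product page
        );
        CREATE TABLE IF NOT EXISTS reviews (
            review_id TEXT PRIMARY KEY,
            asin TEXT REFERENCES products (asin),  -- common foreign key
            rating INTEGER,
            title TEXT,
            review_date DATE
        );
    """)
```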
I want to acknowledge Hartley Brody's amazing blog, where I found much inspiration for this project.
My code and notebook are accessible here.
Lessons Learned
- Managing Runs
To scrape large volumes of data, you need to be able to track your program runs. I could watch the logs live (either directly on the console or with tail -f on the log file), but I also monitored the Redis queues to check on progress. Rather than just hoping a run would complete in a reasonable time, I could always estimate its duration from the scraping pace and the number of elements left in the queue. I also only launched big runs after extensive testing, so as not to waste resources.
- Avoiding Detection
Retrieving millions of reviews would quickly get a single IP banned, so I rotated IPs rented from sharedproxy.com. This service worked best for me because it has no bandwidth limits (I spent $27 for 300 IPs for 5 days). I used an existing proxy middleware, scrapy-proxies, for the rotation; in hindsight I wish I had implemented it myself, as I ended up spending a relatively large amount of time debugging it. I used a generic header/user agent and generated a new cookie session with each connection (a settings sketch follows this list); however, the real game changer was the next point.
- Explicit vs. Recursive
Well into my project, I found myself being banned faster and faster by Amazon. Somehow they were picking up on my crawler clicking 'next' continuously to access all the review pages, so I decided to understand how the URLs were constructed and access them directly. I created a queue with the URLs of all the review pages for all of my target products and accessed them in random order (see the URL-queue sketch after this list). After that, I stopped being detected completely. The additional benefit of this strategy was much faster code, with a single loop instead of nested function calls. Finally, as explained above, the queue of URLs could be processed by several instances of the program in parallel (from the EC2 server or my machine).
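For reference, the proxy rotation described under 'Avoiding Detection' boils down to a few entries in Scrapy's settings.py. This is a sketch based on the scrapy-proxies package; the file path and retry values are illustrative, and the exact setting names should be checked against that package's documentation.

```python
# settings.py (sketch) -- rotating proxies with the scrapy-proxies middleware

# Retry failed pages, since individual proxies can be flaky or get blocked.
RETRY_TIMES = 5
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.retry.RetryMiddleware": 90,
    "scrapy_proxies.RandomProxy": 100,
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": 110,
}

# One proxy per line, e.g. http://user:password@1.2.3.4:8080 (path is illustrative).
PROXY_LIST = "/path/to/proxy_list.txt"
PROXY_MODE = 0  # 0 = pick a random proxy for every request

# A single generic user agent; disabling cookie persistence is one way to make
# every request look like a fresh session.
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
COOKIES_ENABLED = False
```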
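The 'explicit' strategy itself can be sketched as follows: compute how many review pages each product has and push every page URL into the Redis set that the spider above pops from. The URL pattern is the one Amazon used at the time of the project (/product-reviews/&lt;ASIN&gt;/?pageNumber=N), and the table and key names are the illustrative ones from earlier.

```python
import math

import redis

REVIEWS_PER_PAGE = 10  # Amazon displayed 10 reviews per page at the time
r = redis.StrictRedis(host="my-redis-host", port=6379, db=0)


def queue_review_pages(products):
    """products: iterable of (asin, review_count) pairs, e.g. pulled from
    the 'products' table. Pushes every review-page URL into the Redis set."""
    urls = []
    for asin, review_count in products:
        n_pages = math.ceil(review_count / REVIEWS_PER_PAGE)
        for page in range(1, n_pages + 1):
            urls.append(
                f"https://www.amazon.com/product-reviews/{asin}/"
                f"?pageNumber={page}"
            )
    # A Redis set is unordered and SPOP returns random members, so pages end
    # up being visited in random order across all products.
    r.sadd("review_urls", *urls)
    return len(urls)
```

Any number of spider instances can then be pointed at the same key and will split the work between them.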
Results
I ended up scraping ~2 million reviews over 2 days. To verify I had retrieved everything, I compared the number of rows in my database with the number of reviews displayed by Amazon on the main product page (a sketch of this check is shown below). I scraped 99-100% of the reviews for many products, yet for others I only scraped ~50%. After investigation, it turned out that Amazon displays the sum of the reviews for all the different models available (example here), meaning that I had indeed captured all the unique reviews.
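The verification itself is a simple join between the two tables. A rough sketch, assuming the displayed count was stored in a hypothetical displayed_review_count column of the 'products' table:

```python
import pandas as pd
import sqlalchemy

# Illustrative connection string to the RDS instance.
engine = sqlalchemy.create_engine("postgresql://user:password@my-rds-host/amazon")

scraped = pd.read_sql(
    "SELECT asin, COUNT(*) AS scraped_reviews FROM reviews GROUP BY asin",
    engine,
)
products = pd.read_sql("SELECT asin, displayed_review_count FROM products", engine)

check = products.merge(scraped, on="asin", how="left").fillna(0)
check["coverage"] = check["scraped_reviews"] / check["displayed_review_count"]
print(check.sort_values("coverage").head(20))  # products with the lowest coverage
```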
Although the practice of sharing reviews across models seems justified enough in some cases (e.g. the same product under different codes when an extra cable is included), in others the products are quite different. Showing the sum of all reviews makes the individual products look more successful than they really are. In the categories I parsed, I found that ~20% of the reviews were counted towards several products. Note that the specific model being reviewed is indicated on the detailed review page.
Case Study
There are obviously many things one can do with Amazon reviews; I will just give one example. Imagine that an investor's goal is to study the market share evolution of Zoom Telephonics (ticker: ZMTP) in the US cable modem market. In the spring of 2015, Zoom signed an exclusive license agreement with Motorola Mobility LLC for the Motorola brand (going into effect January 1st, 2016). The Motorola brand was licensed to Arris at the time. How would the market shares of Arris, Zoom Telephonics and the two other main players, Netgear and TP-Link, shake out afterwards?
I plotted the number of reviews for the top modems (trailing 12 months):
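As a sketch of how the underlying numbers can be computed (the ASIN-to-brand mapping is a hypothetical, manually verified dictionary, as discussed in the comments on the graph below):

```python
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:password@my-rds-host/amazon")

# Hypothetical mapping from ASIN to brand; in practice it has to be curated by hand.
ASIN_TO_BRAND = {
    "B000EXAMPLE1": "Arris",
    "B000EXAMPLE2": "Motorola (Zoom)",
    "B000EXAMPLE3": "Netgear",
    "B000EXAMPLE4": "TP-Link",
}

reviews = pd.read_sql("SELECT asin, review_date FROM reviews", engine,
                      parse_dates=["review_date"])
reviews["brand"] = reviews["asin"].map(ASIN_TO_BRAND)

# Trailing 12 months up to mid-October 2017.
end = pd.Timestamp("2017-10-15")
window = reviews[(reviews["review_date"] > end - pd.DateOffset(months=12))
                 & (reviews["review_date"] <= end)]

counts = (window.dropna(subset=["brand"])
                .groupby(["brand", "asin"])
                .size()
                .sort_values(ascending=False))
print(counts.head(10))  # review counts for the top modems
```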
A couple of comments on this graph:
- As of late October 2017, the Arris Motorola modem (SB6120) still shows up in the chart. It hasn't been produced for a while, since Arris can no longer use the Motorola name, but it keeps its reviews. In my experience, the mapping of products to companies is not trivial and would probably have to be verified manually, though it only needs to be done once for each new product.
- The new Motorola modems manufactured by Zoom didn't make it to the top. It appears they had a hard time competing with established models with tens of thousands of reviews (more on this in the conclusion).
Note: Clearly there are many other factors influencing market share in addition to the number of reviews on Amazon.com, e.g. advertising or brick-and-mortar sales. Covering all of those is outside the scope of this document, but needs to be kept in mind when drawing conclusions.
To circumvent this problem, I looked at the new generation of modems supporting the DOCSIS 3.1 standard. It provided a cleaner start, as all major brands released new products within a short timeframe.
Here, the Motorola modem is indeed the one manufactured by Zoom. Despite a start delayed by a few months, we can see that it is catching up in number of reviews. The tentative conclusion is that Zoom will grab a significant share of the cable modem market thanks to its Motorola licensing deal.
Note: This example is for illustration and not meant as investment advice.
Conclusion and Next Steps
Overall, I accomplished my initial goals for this project. Using the count of reviews to predict quarterly sales is best suited to situations where a company has a manageable number of SKUs. The mapping between companies and products is not always consistent and needs monitoring: as we have seen, sometimes the brand 'Motorola' is used instead of the manufacturer 'Zoom Telephonics'; sometimes different names are used for the company, or the field is left blank.
Review counts are good for spotting trends via their rate of growth, but it is difficult to derive market share from the absolute number of reviews. In the 18 months to mid-October 2017, Zoom Telephonics' market share on Amazon.com grew from 3% to 27% while Arris' fell from 57% to 38%, yet the number of reviews for Zoom's top cable modem is still a fraction of Arris' top one (700 vs. 25k).
No project is ever truly complete and I can think of many ways to improve the current version. One promising area would be to include the rating in the analysis, e.g. how does the rating explain subsequent review count growth?