Data Scraping Amazon: The Good, the Bad, and the Ugly
The skills the author demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.
Introduction
Data shows e-commerce is a fast-growing market, with annual growth of 10% over the past five years. Even more significantly, many industry analysts expect the future to bode well for the e-commerce sector. Advances in technology and in the effective use of large amounts of data leave the industry poised to make serious inroads against traditional retail firms.
E-commerce also has another key advantage over traditional retail: data collection. In retail, you can learn a lot about a customer from just their credit card number, and even more from their email address, but that pales in comparison to the data e-commerce companies collect on their customers. From personal information to product browsing history, you name it and e-commerce collects it.
Amazon is one of the most important players in the e-commerce industry, if not the most important. From pioneering online retail in the 1990s to changing what consumers expect out of an online shopping experience, Amazon has been at the forefront of e-commerce. Unsurprisingly, the company is also on the cutting edge of both storing and analyzing large quantities of data. It is no secret that Amazon uses that data to its advantage.
Objective
Given the premises that i) e-commerce sales are increasingly data driven and ii) Amazon collects vast amounts of data on its product listings, I wanted to answer a fundamental research question for my second project here at NYC Data Science Academy. Specifically:
Can Amazon's product listing data be used to predict product sales?
This question ultimately proved surprisingly hard to crack in a limited time frame. As I describe below, I was unable to collect enough data to give a satisfying answer to my original research question. However, I am confident that I have the groundwork in place to continue working towards answering this question.
In the following post, I will detail how I attempted to answer this research question: the technology stack I used; things that worked and what I did well (a.k.a. the good); things that didn't work but were not total deal breakers in terms of collecting enough data (a.k.a. the bad); and things that totally failed and prevented me from collecting the data I needed to answer my original research question (a.k.a. the ugly).
Technology Stack Used
To perform the scraping I used the following technologies:
- Python 3.5
- Scrapy 1.1.1
- Shiny dashboard
I chose Python 3.5 over 2.7 mostly for its superior handling of any potential Unicode characters in the product listings. I also used a Shiny dashboard to quickly visualize, explore, and understand the data obtained via the scraping process.
The Good: What worked and what I did well
The best outcome of this project was that I learned how to use Scrapy to crawl a website for systematic data extraction. I began by extracting a list of all the top-level categories on Amazon (as seen below).
After I had the top-level categories and the URLs to each of their respective pages, I extracted the URLs for the top 100 items in each of those categories and attempted to scrape those items. Once one of those items was scraped, I crawled through Amazon by selecting products from the "Customers Who Bought This Item Also Bought" section (as seen below):
This process was repeated until the crawler ran out of products to pull information on. Overall, I was able to collect over 300,000 records. As we will see, though, all of this data was simply a mirage in the desert, giving false promise to my hopes of answering my research question.
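To make the crawl concrete, here is a minimal sketch of the category-to-product-to-recommendations flow described above. The start URL, CSS classes, and XPath selectors are illustrative assumptions (Amazon's markup changes frequently), not the exact paths used in the project.

```python
import scrapy


class AmazonProductSpider(scrapy.Spider):
    """Sketch of the crawl: categories -> top items -> 'Also Bought' links."""
    name = "amazon_products"
    start_urls = ["https://www.amazon.com/gp/site-directory"]  # assumed entry point

    def parse(self, response):
        # Step 1: follow each top-level category link (placeholder selector).
        for href in response.xpath("//a[contains(@class, 'category')]/@href").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_category)

    def parse_category(self, response):
        # Step 2: follow the product links listed on the category page.
        for href in response.xpath("//a[contains(@href, '/dp/')]/@href").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_product)

    def parse_product(self, response):
        # Step 3: pull the fields of interest from the product page.
        title = response.xpath("//span[@id='productTitle']/text()").extract_first()
        yield {
            "url": response.url,
            "title": title.strip() if title else None,
        }
        # Step 4: keep crawling via the "Also Bought" recommendation links.
        for href in response.xpath(
            "//div[contains(@id, 'sims')]//a[contains(@href, '/dp/')]/@href"
        ).extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_product)
```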
The Bad: Things that were sub-optimal but were not deal breakers
As I was scraping the data, I quickly noticed that many of my requests were coming back with an HTTP response code of 200 but were not returning any data. Upon further inspection, it seemed that some of the product pages have different structures from the ones I originally built my spider around. This makes sense intuitively, as each type of product has different key attributes and features that need to be highlighted. Overall, I believed this was not a huge issue: I wouldn't necessarily get a representative sample of products across all of Amazon, but I could still collect enough data to work from and make some predictions.
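One way to soften this problem is to try several known locations for a field before giving up. The following helper is a sketch under that assumption; the candidate XPaths are placeholders for the layout variants actually encountered.

```python
def extract_price(response):
    """Try several candidate price locations on a Scrapy response, in order.

    The XPath list is an illustrative assumption -- different product
    templates place the price in different elements. Returns None when
    every candidate misses, signalling a layout the spider does not yet
    understand (rather than silently yielding an empty record).
    """
    candidate_xpaths = [
        "//span[@id='priceblock_ourprice']/text()",
        "//span[@id='priceblock_dealprice']/text()",
        "//span[contains(@class, 'a-price')]//text()",
    ]
    for xpath in candidate_xpaths:
        value = response.xpath(xpath).extract_first()
        if value and value.strip():
            return value.strip()
    return None
```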
While I was able to scrape the Best Seller Rank (BSR) for some products, I could not get it for every product I scraped. I originally intended to supplement this with data from Amazon's official API, but as of this writing I have not been able to do so.
Furthermore, due to differences in product page layouts, I quickly determined that the BSR data I had collected was not consistent across the products I scraped. On some products, I was grabbing the overall BSR across the entire Amazon website. On other products, I was getting the BSR within the product's top-level category.
Worse still, on some products I was scraping the BSR within a sub-level category. Needless to say, this inconsistency made the BSR data I was able to scrape essentially worthless. However, since I still had the unique ASIN for each product I scraped, it is theoretically possible to get the BSR for each of those products from the official API, so all is not lost.
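In hindsight, recording the category alongside each rank would at least have made the inconsistency explicit. A small sketch of that idea, assuming the raw BSR strings follow the "#rank in Category" pattern seen on product pages:

```python
import re

# Matches strings like "#1,024 in Electronics" or "#37 in Headphones".
BSR_PATTERN = re.compile(r"#([\d,]+)\s+in\s+([^(\n]+)")


def parse_bsr(raw_text):
    """Extract (rank, category) pairs from a raw Best Seller Rank string.

    Keeping the category makes it obvious that a rank of #12 in "Books"
    is not comparable to #12 in a narrow sub-category.
    """
    results = []
    for rank, category in BSR_PATTERN.findall(raw_text or ""):
        results.append((int(rank.replace(",", "")), category.strip()))
    return results


# Example:
# parse_bsr("#1,024 in Electronics (See Top 100) #37 in Headphones")
# -> [(1024, 'Electronics'), (37, 'Headphones')]
```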
The Ugly: The total failures
Unfortunately, my goal of answering my original research question fell off the rails for one primary reason: simply put, I was not able to get enough reliable data. This was mostly caused by the captchas and IP bans Amazon deployed to thwart my scraping efforts.
After I began scraping, I collected data relatively easily, and I thought I was well on my way to answering my original research question. However, after a handful of minutes, my IP address would be blocked by Amazon. To get around this, I deployed a consumer-grade VPN service I subscribe to that offers IP rotation, but it was slow and largely ineffective, as many of the IP addresses it assigned were almost instantly blocked.
Failures and Next Steps
After many futile hours of attempting to break through the captchas and IP bans that Amazon deployed to prevent my scraping, I gave up on answering my original research question in discouragement and frustration. However, after some more thought and consideration, I think there are ways I can improve my scraping techniques to achieve more efficiency and consistency during the scraping process.
First and foremost, I need to rewrite large pieces of my Scrapy scripts to handle differing DOM structures across different product categories. I began by implementing a few different varieties of spiders to call depending on the HTML loaded, but this was still too simplistic and needs to be refined further. Careful analysis of when and where specific HTML sections are loaded on product pages is needed so that the correct spider logic can be invoked, as sketched below.
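One possible shape for that refinement, expressed as a method on the spider sketched earlier: inspect which markers are actually present in the response and route to a layout-specific parser. The marker selectors and the layout-specific parse methods are hypothetical.

```python
def parse_product(self, response):
    """Route the response to a layout-specific parser.

    The marker XPaths and the per-layout parse methods are assumptions;
    the point is to decide how to parse only after checking which
    sections of the page actually loaded.
    """
    if response.xpath("//div[@id='dp-container']"):
        return self.parse_standard_layout(response)   # hypothetical parser
    if response.xpath("//div[@id='booksTitle']"):
        return self.parse_book_layout(response)       # hypothetical parser
    # Unknown layout: log it so a new parser can be written later,
    # instead of emitting an empty record.
    self.logger.warning("Unrecognized product layout: %s", response.url)
    return None
```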
Second, to address the massive problem of IP bans, I need a proxy service that rotates my scraper's IP address in a smart and sophisticated way. Services like Crawlera and Scrapinghub are specifically designed to help solve these sorts of problems, and I intend to research them further for my use case.
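As a rough idea of what even a home-grown version looks like, here is a sketch of a rotating-proxy downloader middleware plus the related Scrapy settings. The proxy URLs and the module path are placeholders; a managed service such as Crawlera supplies its own endpoint and middleware instead.

```python
import random

# Placeholder proxy pool -- a real deployment would load these from a
# provider or a managed rotation service.
PROXIES = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
]


class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        # Assign a different proxy to each outgoing request; Scrapy's
        # built-in HttpProxyMiddleware honors request.meta["proxy"].
        request.meta["proxy"] = random.choice(PROXIES)


# settings.py (module path is a placeholder)
DOWNLOADER_MIDDLEWARES = {
    "myproject.middlewares.RandomProxyMiddleware": 750,
}
DOWNLOAD_DELAY = 2               # throttle requests to look less bot-like
RANDOMIZE_DOWNLOAD_DELAY = True  # add jitter between requests
```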
Lastly, solving Amazon's systematic Captcha checks is extremely important. Once Amazon decides to verify your humanity and issues a Captcha check, all scraping activity stops in the interim. This likely leads to the NA/no-data problem mentioned in "The Bad" section above. With the use of services like Death By Captcha, I believe circumvention of these counter-scraping measures is feasible. Unfortunately, the trade-off is on the cost side of things, as Death By Captcha is a for-profit business.
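A first step, independent of any solving service, is simply to recognize the Captcha interstitial instead of treating it as product data. The sketch below uses an assumed detection heuristic (the markers in the page body) and re-queues the request; a solving service such as Death By Captcha could be plugged in where the retry currently happens.

```python
class CaptchaDetectionMiddleware(object):
    """Detect Amazon's 'Robot Check' interstitial instead of storing it as data.

    The body markers are an assumption; a real implementation would also cap
    the number of retries and/or hand the Captcha image to a solving service.
    """

    def process_response(self, request, response, spider):
        if b"Robot Check" in response.body or b"validateCaptcha" in response.body:
            spider.logger.info("Captcha served for %s; re-queueing request", request.url)
            # Returning a Request from process_response re-schedules it;
            # dont_filter lets the duplicate URL through the dupe filter.
            return request.replace(dont_filter=True)
        return response
```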