Data Scraping Amazon: The Good, the Bad, and the Ugly

Posted on Feb 20, 2017
The skills the author demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Introduction

Data shows e-commerce is a fast-growing market, with annual growth of 10% over the past five years. Even more significantly, many industry analysts expect this strong growth to continue. Technology and advances in effectively using large amounts of data leave the industry poised to make serious inroads against traditional retail firms.

E-commerce also holds another key advantage over traditional retail: data collection. In retail, you can learn a lot about a customer from just their credit card number, and even more from their email address, but that pales in comparison to the data e-commerce companies collect on their customers. From personal information to product browsing history, you name it and e-commerce collects it.

Amazon is one of the most important players in the e-commerce industry, if not the most important. From pioneering online retail in the 1990s to changing what consumers expect out of an online shopping experience, Amazon has been at the forefront of the industry. Unsurprisingly, it is also on the cutting edge of both storing and analyzing large quantities of data. It is no secret that Amazon uses that data to drive nearly every aspect of its business.

Objective

Given the premises that i) e-commerce sales are increasingly data driven and ii) Amazon collects and analyzes enormous amounts of product data, I wanted to answer a fundamental research question for my second project here at NYC Data Science Academy. Specifically:

Can Amazon's product listing data be used to predict product sales?

This question ultimately proved surprisingly hard to crack in a limited time frame. As I describe below, I was unable to collect enough data to give a satisfying answer to my original research question. However, I am confident that I have the groundwork in place to continue working towards answering this question.

In the following post, I will detail how I attempted to answer this research question and the technology stack I used. Along the way, I will cover the things that worked and that I did well (a.k.a. the good), the things that didn't work but were not total deal breakers in terms of collecting enough data (a.k.a. the bad), and the things that failed outright and prevented me from collecting the data I needed to answer my original research question (a.k.a. the ugly).

Technology Stack Used

To perform the scraping I used the following technologies:

  • Python 3.5
  • Scrapy 1.1.1
  • Shiny dashboard

I chose Python 3.5 over 2.7 mostly for its superior handling of any unicode characters in the product listings. I also chose to build a Shiny dashboard to quickly visualize, explore, and understand the data obtained via the scraping process.

The Good: What worked and what I did well

The best outcome of this project was that I learned how to use Scrapy to crawl a website for systematic data extraction. I began by extracting a list of all the top-level categories on Amazon (as seen below).

[Image: Amazon's top-level category listing]

After I had the top-level categories and the URLs to each of their respective pages, I extracted the URLs for the top 100 items in each of those categories and attempted to scrape those items. Once one of those items was scraped, I crawled onward through Amazon by selecting products from its "Customers Who Bought This Item Also Bought" section.

This process was repeated until the crawler ran out of products to pull information on. Overall, I was able to collect over 300,000 records. As we will see, though, all of this data was simply a mirage in the desert, offering false promise in my hopes of answering my research question.
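To make the crawl pattern concrete, here is a minimal sketch of a Scrapy spider following that category-then-"Also Bought" strategy. It is not my actual spider: the start URL and every CSS selector are illustrative assumptions, and Amazon's real markup differs and changes frequently.

import scrapy


class AmazonProductSpider(scrapy.Spider):
    name = "amazon_products"
    # Assumed entry point; the real category directory URL may differ.
    start_urls = ["https://www.amazon.com/gp/site-directory"]

    def parse(self, response):
        # Step 1: follow each top-level category link.
        for href in response.css("a.category-link::attr(href)").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_category)

    def parse_category(self, response):
        # Step 2: follow the top 100 product listings within the category.
        for href in response.css("a.product-link::attr(href)").extract()[:100]:
            yield scrapy.Request(response.urljoin(href), callback=self.parse_product)

    def parse_product(self, response):
        # Step 3: emit a product record...
        yield {
            "url": response.url,
            "title": response.css("#productTitle::text").extract_first(default="").strip(),
        }
        # ...then keep crawling through the "Customers Who Bought This Item
        # Also Bought" carousel. Scrapy's built-in duplicate filter drops
        # already-seen URLs, which is why the crawl eventually runs out of
        # new products on its own.
        for href in response.css("#purchase-sims a::attr(href)").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_product)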


The Bad: Things that were sub-optimal but were not deal breakers

As I was scraping the data, I quickly noticed that many of my requests were coming back with an HTTP response code of 200 but were not returning any data. Upon further inspection, it seems that some of the product pages have structures different from the ones I originally built against. This makes sense intuitively, as each type of product has different key attributes and features that need to be highlighted. Overall, I believe this is not a huge issue: I won't necessarily get a fully representative sample of products across all of Amazon, but I could still get enough data to make some predictions and work from.
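One cheap diagnostic for those silent 200s is to have the spider's parse method count how many of the expected fields actually extracted and log the URL when any came back empty. A sketch, with assumed field names and selectors:

def parse_product(self, response):
    # Selectors and field names here are illustrative assumptions.
    item = {
        "title": response.css("#productTitle::text").extract_first(),
        "price": response.css("#priceblock_ourprice::text").extract_first(),
        "sales_rank": response.xpath("//li[@id='SalesRank']/text()").extract_first(),
    }
    missing = [field for field, value in item.items() if not value]
    if missing:
        # HTTP 200, but the page layout didn't match our selectors.
        self.logger.warning("Empty fields %s on %s", missing, response.url)
    yield item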

While I was able to scrape the Best Seller Rank (BSR) for some products, I could not get it for every product I scraped. I originally intended to supplement this with data from Amazon's official API, but as of this writing I have not been able to do so.

Furthermore, due to differences in product page layouts, I quickly determined that the BSR data I had collected was not consistent across the products I scraped. On some products, I was grabbing the overall BSR for the entire Amazon website. On others, I was getting the BSR within the product's top-level category.

Worse still, on some products I was scraping the BSR within a sub-level category. Needless to say, this inconsistency made the BSR data I was able to scrape essentially worthless. However, since I still had the unique ASIN for each product I scraped, it is theoretically possible to get the BSR for each of those products from the official API, so all is not lost.
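In the meantime, the scraped rank strings can at least be normalized into (rank, category) pairs so that only comparable rows are kept. A sketch, assuming the raw text looks like "#1,234 in Toys & Games"; the regex is mine, not the project's actual parser:

import re

# Matches strings like "#1,234 in Toys & Games"; an assumed format.
BSR_PATTERN = re.compile(r"#([\d,]+)\s+in\s+([^(#]+)")

def parse_bsr(raw_text):
    """Return a (rank, category) tuple, or None if the text doesn't match."""
    match = BSR_PATTERN.search(raw_text or "")
    if not match:
        return None
    rank = int(match.group(1).replace(",", ""))
    category = match.group(2).strip()
    # Ranks are only comparable within the same scope: an overall site rank,
    # a top-level category rank, and a sub-category rank are three scales.
    return rank, category

# parse_bsr("#1,234 in Toys & Games") -> (1234, "Toys & Games")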


The Ugly: The total failures

Unfortunately, my goal of answering my original research question fell off the rails along the way for one primary reason: simply put, I was not able to get enough reliable data. This was mostly due to the captchas and IP bans Amazon deployed to thwart my scraping efforts.

After I began scraping, I collected data relatively easily, and I thought I was well on my way to answering my original research question. However, after a handful of minutes, Amazon would block my IP address. To get around this defense, I deployed a consumer-grade VPN service I subscribe to that has IP rotation capabilities, but it was slow and largely ineffective, as many of the assigned IP addresses were blocked almost instantly.


Failures and Next Steps

After many futile hours of attempting to break through the captchas and IP bans that Amazon deployed to prevent my scraping, I gave up on trying to answer my original research question in discouragement and frustration. However, after some more thought and consideration, I think there are ways I can improve my scraping techniques to achieve more efficiency and consistency during the scraping process.

First and foremost, I need to re-write large pieces of my Scrapy scripts to handle differing DOM structures across product categories. I began by implementing a few different varieties of spiders to call depending on the HTML loaded, but this was still too simplistic and needs to be refined further. Careful analysis of when and where specific HTML sections are loaded on product pages is needed so that the correct parsing logic can be invoked, as in the sketch below.
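The refinement I have in mind looks roughly like this: sniff for a layout-specific marker element in the response and dispatch to the matching extraction routine. The marker selectors and both helper methods are hypothetical stand-ins:

def parse_product(self, response):
    # Dispatch on whichever layout Amazon actually served. The marker
    # selectors and the two helpers below are hypothetical stand-ins.
    if response.css("#booksTitle"):
        yield from self.parse_book_page(response)
    elif response.css("#titleSection"):
        yield from self.parse_standard_page(response)
    else:
        # Unknown layout: log it so a parser can be written for it later.
        self.logger.warning("Unrecognized product layout: %s", response.url)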

Second, to solve the massive problem with IP bans, I need to use a proxy service that rotates my scraper's IP address in a smart and sophisticated way. Services like ScrapingHub's Crawlera are specially designed to solve these sorts of problems, and I intend to research them further for my use case.
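From what I can tell, wiring Crawlera into a Scrapy project is mostly configuration. Sketching from the scrapy-crawlera plugin's documented settings (worth verifying against the current docs before relying on this):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    "scrapy_crawlera.CrawleraMiddleware": 610,
}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = "<your-api-key>"

# With the proxy pool absorbing bans, local throttling can be relaxed.
CONCURRENT_REQUESTS = 32
AUTOTHROTTLE_ENABLED = False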

Lastly, the need to solve systematic captcha checks is extremely important. Once Amazon decides to verify your humanity and issues a captcha check, all scraping activity stops in the interim. This likely leads to the NA/no-data problem mentioned in "The Bad" section above. With the use of services like Death By Captcha, I believe circumvention of these counter-scraping measures is feasible. Unfortunately, the trade-off is cost, as Death By Captcha is a for-profit business.
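Even before paying for a solver, the crawler should at least detect the captcha wall and re-queue the request, ideally through a fresh proxy, rather than parse a useless page. A minimal downloader-middleware sketch, assuming Amazon's captcha page contains its usual "Type the characters you see in this image" prompt; a Death By Captcha client would slot into the flagged branch:

class CaptchaRetryMiddleware:
    def process_response(self, request, response, spider):
        if b"Type the characters you see in this image" in response.body:
            spider.logger.info("Captcha wall at %s; re-queueing", request.url)
            # Returning a Request re-schedules it; dont_filter bypasses the
            # duplicate filter so the retry isn't silently dropped. A solver
            # service would be invoked here instead of blindly retrying.
            return request.replace(dont_filter=True)
        return response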
