A Comedy of Errors: Or How to Identify Pattern Issues Using Selenium

Devon Blumenthal
Posted on Nov 10, 2018

Introduction:

The original purpose of this project was to demonstrate scraping data from a beer review website with Scrapy in Python, in order to build a personalized beer recommendation system that helps people discover new beers suited to their tastes.

However, due to major inconsistencies in how the first website was coded, this project morphed into a demonstration of how to recreate the Scrapy shell using Selenium. The Shiny app created from the data can be found here, and the code for the Shiny app and the Python scraping can be found here. If you wish to see the issues encountered and how they were fixed, please see the section below entitled "The Nitty Gritty".

The Data:

The data for this project came from two separate websites. The first website I scraped was beerconnoisseur.com. An example page from this website can be seen in the photo above. Some of the variables that can be found on each webpage are:

  • Name
  • Brewer
  • Type of beer
  • Where the brewer is located
  • Description of the beer
  • Scores for the beer
  • A review by a professional

However, while the website was very uniform in how it was coded, it did not provide enough detail for the intended beer recommendation system. The information from this website was useful, but as with all data sets, the more data you have, the better the end product you can produce!
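For concreteness, the variables listed above might be modeled as a Scrapy Item along the following lines. This is a minimal sketch; the field names are my own illustrative choices, not taken from the project's actual code.

```python
# A minimal sketch of a Scrapy Item for the fields listed above.
# Field names are illustrative assumptions, not the project's real code.
import scrapy

class BeerItem(scrapy.Item):
    name = scrapy.Field()          # name of the beer
    brewer = scrapy.Field()        # brewery that produced it
    style = scrapy.Field()         # type of beer
    location = scrapy.Field()      # where the brewer is located
    description = scrapy.Field()   # description of the beer
    scores = scrapy.Field()        # scores for the beer
    review = scrapy.Field()        # review by a professional
```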

As a result, the second website I scraped was beerandbrewing.com. This website provided much more information including:

  • Descriptions of the beers provided by the brewer
  • Separate reviews for the aroma, the flavor, and the overall score by a tasting panel

The App:

The app contains two tabs:

  • About
  • Beer Menu

The Beer Menu tab contains an interactive table of all the beer reviews scraped from beerandbrewing.com. If a user is interested in a particular brewer or style of beer, they can use the search box to type in a query (e.g. pale ale). This will filter the entire table. In the photo above, a search for "pale ale" narrowed the total number of beers displayed from 1000 beers of different styles to 78 pale ales.

To the right of the table, there is a box with three tabs containing information about the selected beer. The first tab, "Description", contains the ABV (alcohol by volume) and the IBU (International Bitterness Units), as well as the brewer's description of the beer.

The second tab, "Reviews", provides the tasting panel's views on the selected beer's aroma and flavor, along with a general review of the product.

The final tab, "Picture", displays a picture of the beer, including the label, to help users find the selected beer in their local bar or grocery store.

The Nitty Gritty:

As mentioned above, the original project was met with quite a few issues.

The first website, beerconnoisseur.com, worked very well with Scrapy. The code was about 100 lines long, the script took about two minutes to run, and data from approximately 2000 web pages were captured. However, despite the ease with which the data were scraped, the resulting data set was incomplete for the recommendation system's needs.
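As a sketch of what such a Scrapy spider looks like, the snippet below follows review links from a listing page and yields the fields of interest. The start URL and CSS selectors are placeholders of my own, not the project's actual 100-line script.

```python
# A minimal Scrapy spider sketch. The listing URL and CSS selectors
# are assumptions for illustration; the real site's markup will differ.
import scrapy

class BeerConnoisseurSpider(scrapy.Spider):
    name = "beerconnoisseur"
    start_urls = ["https://beerconnoisseur.com/beer-reviews"]  # assumed listing page

    def parse(self, response):
        # Follow every review link found on the listing page.
        for href in response.css("a.review-link::attr(href)").getall():
            yield response.follow(href, callback=self.parse_review)

    def parse_review(self, response):
        # Capture the variables described in "The Data" section.
        yield {
            "url": response.url,
            "name": response.css("h1::text").get(),
            "brewer": response.css(".brewer::text").get(),
            "style": response.css(".style::text").get(),
            "score": response.css(".score::text").get(),
        }
```

Running something like `scrapy runspider spider.py -o beers.csv` collects the yielded items into a csv file.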

As a result, the second website, beerandbrewing.com, was used. The Scrapy script for the second website is shown above. Like the previous script, it is fairly brief at only 60 lines of code; however, when I ran it, several problems occurred. First, I discovered that the second website uses AJAX, which is not fully compatible with Scrapy. While some of the web pages were scraped without incident, many failed due to inconsistencies in the website code. Scrapy was therefore not a useful module for this website, and switching to Selenium was required. This meant the project would take longer than anticipated, because Selenium takes far longer to run through a website than Scrapy does; however, it was ultimately an effective solution.
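To show what the switch looks like in practice, here is a minimal Selenium sketch, assuming a hypothetical entry page and placeholder selectors. The key difference from Scrapy is that Selenium drives a real browser, so it can wait for AJAX-rendered content to appear before extracting anything.

```python
# A sketch of the Selenium approach for an AJAX-heavy page.
# The URL and selector are placeholder assumptions, not the project's own.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://beerandbrewing.com/beer-reviews/")  # assumed entry page
    # Wait up to 10 seconds for the AJAX-loaded review elements to render.
    reviews = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".review-title"))
    )
    for review in reviews:
        print(review.text)
finally:
    driver.quit()
```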

The new challenge was to create a Selenium script that could identify pattern inconsistencies, fix them, and rerun without having to constantly run through all 1000 web pages, thereby shortening the amount of time Selenium takes.

The solution was twofold. The first part was to build fails into the Selenium script: whenever a field could not be extracted, a fail marker was recorded instead of crashing the run. Combined with logging each webpage's URL in the csv file, this made it possible to pinpoint pattern inconsistencies and see exactly which pages were having issues. Using the initial csv file, you can filter results based on whether they had a fail in a given category.
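A sketch of this fail-logging idea, with field names and selectors of my own invention, might look like the following: each extraction is wrapped so a broken pattern records a "FAIL" marker instead of stopping the run, and the page URL is written into every row of the csv.

```python
# Sketch of building fails into the scraper: broken patterns are logged
# as "FAIL" rather than stopping the run, and every row carries its URL.
# Selectors and field names are illustrative assumptions.
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By

FIELDS = ["url", "name", "aroma", "flavor"]

def safe_extract(driver, selector):
    """Return the element's text, or a 'FAIL' marker if the pattern breaks."""
    try:
        return driver.find_element(By.CSS_SELECTOR, selector).text
    except Exception:
        return "FAIL"

def scrape_page(driver, url, writer):
    driver.get(url)
    writer.writerow({
        "url": url,  # logged in every row, so failed pages can be traced
        "name": safe_extract(driver, "h1.beer-name"),
        "aroma": safe_extract(driver, ".panel-aroma"),
        "flavor": safe_extract(driver, ".panel-flavor"),
    })

with open("beers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    driver = webdriver.Chrome()
    for url in ["https://beerandbrewing.com/example-review"]:  # placeholder URLs
        scrape_page(driver, url, writer)
    driver.quit()
```

With the csv in hand, filtering for rows where any column equals "FAIL" yields exactly the list of problem pages to revisit.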

The second part of the solution was to determine what the pattern inconsistencies were so they could be fixed. To do so, I created a separate debugging script. This allowed me not only to reduce the number of web pages scraped in any given run, but also to identify the pattern inconsistencies that needed to be corrected. But what did this debugging script do that made it so convenient?

It allowed me to create a pseudo-Scrapy shell! Because the page source was printed out in the command prompt, I could tackle specific pattern issues directly. Once I determined what an issue was, I updated the code and reintroduced it into the original Selenium script. This saved me a great deal of time and energy throughout the process.
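A minimal sketch of such a pseudo-Scrapy shell, under my own assumptions about the workflow, is below: it loads a single problem URL, prints the rendered page source to the command prompt, and then drops into an interactive Python prompt where selectors can be tried against the live driver, much as scrapy shell allows.

```python
# Sketch of a pseudo-Scrapy shell built on Selenium: load one failing
# page, dump its rendered HTML, then experiment with selectors
# interactively. The default URL is a placeholder.
import code
import sys
from selenium import webdriver
from selenium.webdriver.common.by import By

url = sys.argv[1] if len(sys.argv) > 1 else "https://beerandbrewing.com/"
driver = webdriver.Chrome()
driver.get(url)
print(driver.page_source)  # inspect the rendered HTML in the command prompt
# Try selectors interactively, e.g.:
#   driver.find_element(By.CSS_SELECTOR, ".panel-aroma").text
code.interact(local={"driver": driver, "By": By})
driver.quit()
```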

Future Directions:

While I was able to scrape both websites to full effect, time constraints kept me from doing what I originally set out to do: create a recommendation system. With the data now scraped, that remains the end goal: a fully fledged app that recommends beers based on personal preferences.

I hope this post was informative. Happy beer hunting!

About Author

Devon Blumenthal

Prior to joining NYC Data Science Academy, Devon received his Masters in Measurement, Statistics, and Evaluation at the University of Maryland, College Park. As an instructor, he taught Introduction to Statistics and as a teaching assistant, he helped...