A Comedy of Errors: Or How to Identify Pattern Issues Using Selenium

Posted on Nov 10, 2018


The original purpose of this project was to demonstrate the scraping of data from a beer review website using Scrapy in Python in order to create a personalized beer recommendation system for people to discover new beers that would suit their tastes.

However, due to major inconsistencies in how the first website was coded, this project morphed into a demonstration on how to recreate the Scrapy shell using Selenium. The shiny app created from the data can be found here, and the code for the shiny app and the python scraping can be found here. If you wish to see the issues encountered and how they were fixed, please see the section below entitled "The Nitty Gritty".

The Data:

The data for this project came from two separate websites. The first website I scraped was beerconnoisseur.com.  An example page from this website can be seen in the photo above. Some of the variables that can be found on each webpage are:

  • Name
  • Brewer
  • Type of beer
  • Where the brewer is located
  • Description of the beer
  • Scores for the beer
  • A review by a professional

However, while the website was very uniform in how it was coded, it did not provide enough detail for the intended beer recommendation system. The information it did offer was still useful, but as with any data set, the more data you have, the better the end product you can produce!

As a result, the second website I scraped was beerandbrewing.com. This website provided much more information including:

  • Descriptions of the beers provided by the brewer
  • Separate reviews for the aroma, the flavor, and the overall score by a tasting panel

The app:

The app contains two tabs:

  • About
  • Beer Menu

The Beer Menu tab contains an interactive table of all the beer reviews scraped from beerandbrewing.com. If a user is interested in a particular brewer or style of beer, they can use the search box to type in a query (e.g. pale ale). This will filter the entire table. In the photo above, a search for "pale ale" narrowed the total number of beers displayed from 1000 beers of different styles to 78 pale ales.

To the right of the table, there is a box with three different tabs, containing information about the selected beer. The first tab, "Description", contains the ABV (alcohol by volume) and the IBU (international bittering unit) as well as the brewer's description of the beer.

The second tab, "Reviews", provides the tasting panel's views on the selected beer's aroma and flavor, along with a general review of the product.

The final tab, "Picture", displays a picture of the beer, including the label to aid users in searching for the selected beer in their local bar or grocery store.

The Nitty Gritty:

As mentioned above, the original project was met with quite a few issues.

The first website beerconnoisseur.com worked very well with Scrapy. The code was about 100 lines long, the script took about two minutes to run, and data from approximately 2000 web pages were captured. However, despite the ease with which the data were scraped, the product was incomplete.

As a result, the second website, beerandbrewing.com, was used. The Scrapy script for the second website is shown above. Like the previous script, it is fairly brief at only 60 lines of code. However, when I ran it, several problems occurred. First, I discovered that the second website uses AJAX, which is not fully compatible with Scrapy: because much of the content is loaded dynamically by JavaScript, Scrapy's plain HTTP requests can receive pages before the data has rendered. While some of the web pages were scraped without incident, many failed due to inconsistencies in the website code. Scrapy was therefore not a useful module for this website, and switching to Selenium was required. This meant the project would take longer than anticipated, since Selenium runs through a website far more slowly than Scrapy does, but it was ultimately an effective solution.
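The reason Selenium handles AJAX pages is that it drives a real browser and can poll until the dynamic content actually appears (its "explicit waits"). As a minimal sketch of that poll-until-timeout idea using only the standard library (the names `wait_for`, `timeout`, and `poll` here are illustrative, not Selenium's actual API):

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the core idea behind Selenium's explicit waits: keep
    re-checking the page instead of assuming the content loaded instantly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()  # e.g. "is the review element in the DOM yet?"
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```

In the real script the condition would be something like looking up an element on the rendered page; here it can be any callable that eventually returns a truthy value.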

The new challenge was to create a Selenium script that could identify pattern inconsistencies, edit them, and rerun the script without having to constantly run through all 1000 web pages, thereby shortening the amount of time Selenium takes.

The solution was twofold. The first part was to build fails into the Selenium script: whenever an extraction failed, the script recorded a fail marker for that field rather than crashing. Because each webpage's URL was also written to the csv file, I could pinpoint pattern inconsistencies and log exactly which pages were having issues. Using the initial csv file, you can then filter results based on whether they had a fail in a given category.
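A minimal sketch of that fail-logging pattern, assuming hypothetical field extractors (`broken_abv` stands in for a selector that breaks on an inconsistent page; the real script pulled these values out of the Selenium driver):

```python
import csv

def broken_abv():
    raise ValueError("pattern mismatch")  # simulates an inconsistent page

def safe_extract(extract, default="FAIL"):
    """Run one field's extraction; on any error, record a FAIL marker."""
    try:
        return extract()
    except Exception:
        return default

def scrape_row(url, extractors):
    """Build one csv row per page; the URL rides along so failing pages
    can be filtered and revisited later."""
    row = {"url": url}
    for field, extract in extractors.items():
        row[field] = safe_extract(extract)
    return row

# Illustrative usage with stand-in extractors:
rows = [scrape_row("https://example.com/beer/1",
                   {"name": lambda: "Hypothetical Pale Ale",
                    "abv": broken_abv})]

with open("beers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "name", "abv"])
    writer.writeheader()
    writer.writerows(rows)
```

Filtering the csv for rows containing "FAIL" in a given column then gives exactly the list of URLs whose page structure needs a closer look.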

The second part was to determine what the pattern inconsistencies actually were so they could be fixed. To do so, I created a separate debugging script. This let me not only reduce the number of web pages scraped in any given run, but also identify the pattern inconsistencies that needed to be corrected. But what did this debugging script do that made it so convenient?

It allowed me to create a pseudo-Scrapy shell! Because the page source was printed out in the command prompt, I could tackle specific pattern issues one at a time. Once I determined what an issue was, I updated the code and reintroduced it to the original Selenium script. This saved me lots of time and energy throughout the entire process.

Future Directions:

While I was able to scrape both websites to full effect, because of time constraints I was not able to do what I originally set out to do: create a recommendation system. With the data now scraped, this remains the end goal: a fully fledged app that recommends beers based on personal preferences.

I hope this post was informative. Happy beer hunting!

About Author

Devon Blumenthal

Prior to joining NYC Data Science Academy, Devon received his Masters in Measurement, Statistics, and Evaluation at the University of Maryland, College Park. As an instructor, he taught Introduction to Statistics and as a teaching assistant, he helped...