An Iterative Approach to Data Science

Posted on Aug 4, 2017

It is the nature of a boot camp.  We drink from the firehose because we have only 12 weeks to learn what university programs would spread out over two to four years.  The program is more rigorous, and it can be kept up to date in a way that conventional programs, which must submit their curricula to central committees, cannot.  But there is a price: how do you get done what you need under very pressing time constraints?

You will see many posts on here that end with some flavor of the words "I wanted to do more."  I've decided to begin mine that way.  This post is not so much about the Consumer Reports (CR) data on computers that I chose to analyze as it is about a process that, when the unexpected happens again and again, at least allows for the completion of something useful.  Here's how it works ...

A Modular Approach To Screen Scraping

Selenium -> Scrapy -> Selenium

My presentation slides include some images illustrating the organization of the Consumer Reports site, which you are welcome to check out.  At a high level, this project targeted product specification and review data for the three computer-related product classes available on the site:

  • desktop computers,
  • laptop computers, and
  • Chromebooks.

Each product had its own unique URL, and on initial inspection it looked as though the job could be done most simply with Selenium to scrape the pages of links for each product URL, followed by a Scrapy spider to go after all of the data.  This proved not to be the case.

While specifications data loaded immediately after clicking on a page link, review data only fully loaded after clicking the "Reviews" tab.  It turned out to be dynamically generated by JavaScript.  A conventional approach might have been to build one giant Selenium script, where you would not know whether you were truly going to get all of your data until the entire extraction had finished.  If something went wrong, you could lose a lot of time in test iterations.

I took a somewhat different approach.  Selenium scripts were run to scrape the URLs for each product, and the results were saved to a CSV file.  That file was then loaded into the other scripts.  First, a Scrapy spider proved the path of least resistance for obtaining specification data.  Then a Selenium script loaded the same CSV file, iterating over it to capture reviews for each product that had them.  This time the results were saved to two files:  basic fields for products with no reviews, and many fields full of data for products that had reviews.
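The glue between the stages was nothing fancier than a CSV file of URLs.  A minimal sketch of that handoff, assuming a single "url" column (the actual file layout and column names in my project may have differed):

```python
import csv

def save_urls(urls, path):
    """Persist scraped product links so later scripts can pick up from here."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url"])          # one column is all the handoff needs
        writer.writerows([u] for u in urls)

def load_urls(path):
    """Re-load the product links in the Scrapy and review-scraping stages."""
    with open(path, newline="") as f:
        return [row["url"] for row in csv.DictReader(f)]
```

Because each downstream script depends only on the file, a crash in the review scraper never costs you the URL list.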

As the reviews were more challenging to capture, this approach ensured that all of the specification data was already safely stored in a CSV, ready for analysis, before I even began work on capturing review data.  It also made debugging easier:  since each script focused on only one piece of the puzzle, it was easier to see where something was going wrong and fix it.

Two points of interest from this process:  (1) how specification data was captured, and (2) the hard-learned lesson that what works in one part of a website does not always work in another.

1) For the specifications data, a pattern was identified in the source HTML tagging that made it possible to extract 50+ variables using only two XPath coding rules:

  • one for a "spec_label" and
  • another for a "spec_value"
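In the project this pairing was done with Scrapy's XPath selectors; here is a standard-library sketch of the same two-rule idea against a made-up HTML fragment (the "spec_label" / "spec_value" class names follow this post's naming, not CR's actual markup):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<div>
  <span class="spec_label">RAM</span><span class="spec_value">8 GB</span>
  <span class="spec_label">Storage</span><span class="spec_value">256 GB</span>
</div>"""

def extract_specs(html_fragment):
    """Pair every spec label with its value using just two XPath rules."""
    root = ET.fromstring(html_fragment)
    labels = [e.text for e in root.findall(".//span[@class='spec_label']")]
    values = [e.text for e in root.findall(".//span[@class='spec_value']")]
    return list(zip(labels, values))
```

The payoff of the pattern is that dozens of fields come out of two rules, instead of one hand-written rule per field.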

The data was saved to CSV in a long stack, with a plan to then bring it into R and use the dplyr "spread" function to convert the two-column stack to a wide format with about 50 column variables per observation.
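The reshape itself was done in R with dplyr's spread; for illustration, a rough Python equivalent of the same long-to-wide pivot (column names here are invented):

```python
from collections import defaultdict

def spread(long_rows):
    """Turn (product, spec_label, spec_value) rows into one wide record per product."""
    wide = defaultdict(dict)
    for product, label, value in long_rows:
        wide[product][label] = value   # each label becomes a column of the wide row
    return dict(wide)
```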

The review data could not be approached in this manner and had to be obtained with one coding rule per field.

2) While the first Selenium script, which captured page URLs, was able to "see" the data with a simple "sleep" time delay (though much trial and error was required to get the timing right), the second script required experimentation with other commands in the Selenium arsenal.  Both strategies had to be complemented by trial-and-error experiments to understand the random errors (unique to each process) and to build the error handling each script needed.
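Much of that error handling boils down to a retry pattern around each flaky step.  A generic sketch, with placeholder attempt counts and delays (in Selenium proper, the fixed sleep would often give way to one of the library's explicit wait commands):

```python
import time

def retry(action, attempts=3, delay=1.0, exceptions=(Exception,)):
    """Re-run a flaky scraping step a few times before giving up for good."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except exceptions:
            if attempt == attempts:
                raise              # out of attempts; surface the real error
            time.sleep(delay)      # back off before retrying
```

Wrapping each page fetch in something like this turns a random failure into a minor delay instead of a lost run.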

An Iterative Approach to Data Munging and Analysis

At a high level, if you try to do "everything" up front, in what project managers might call a classic waterfall approach, you run the risk of running out of time without delivering anything useful.  Although I was just one guy doing lone research, I found that an iterative approach to the project helped ensure there was something to deliver before the clock ran down.  A quick summary of how to do this would look something like this:

  • Get all of the data up front
  • Integrate some data cleaning into your web scraping code for known patterns
  • Create CSV spreadsheets each step of the way to preserve what you have so far
  • Load the sheets into R or Jupyter/iPython – whichever one “feels faster” for data cleaning and preparation steps
  • Generate new csvs of the results
  • Use cleaned and transformed data in your analysis generating visualizations in R / iPython
  • Go back to the source, identify more fields that can yield data insights and do it all again ...
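The "CSV at each step" idea in the list above amounts to a small checkpointing pattern, sketched here with the standard library (the cleaning step and column names are invented for illustration):

```python
import csv

def checkpoint_stage(in_path, out_path, transform):
    """Read the previous checkpoint, apply one cleaning step, write the next."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    cleaned = [transform(row) for row in rows]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(cleaned[0]))
        writer.writeheader()
        writer.writerows(cleaned)
```

If an analysis step blows up, you restart from the last checkpoint file rather than re-running the scrape.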

You don't build everything you want this way, but it guarantees that each iteration results in actual finished analysis.  Each data sheet along the way allows you to pick up where you left off without having to re-run code.

Note:  It is important during the first step to "edit" yourself.  "Get all the data up front" could leave you writing complex extraction rules in your scraper, with too little time left for actual data analysis before the deadline.  Set reasonable goals:  get enough data that you have choices, but don't try to "scrape the universe" on the first go-round.

Resulting Data Analysis and Lessons Learned

I went into this with a number of ideas about what I wanted to explore.  I was able to give the most coverage to the initial question of support for new features and standards in the products reviewed by Consumer Reports.  This was followed by a brief analysis of RAM (Random Access Memory) by brand, and a look at RAM's potential impact on prices (in the models under review by CR).  The full details of the research using Consumer Reports specification data are provided in this Jupyter Notebook on my GitHub:

TheMitchWorksPro (on Github) -> NYCDSA_CR_WebScrape (Repo) -> CR Data Analysis Jupyter Notebook

User Review Word Clouds

With respect to the data collected on user reviews, several word clouds were developed, but not much time was left to explore more detailed configuration or to find better R or Python libraries for the job.  The library I selected proved finicky to configure.  Word clouds with none of the original words suppressed by the algorithm are provided near the end of these presentation slides.  After much tinkering, I added an exclusion list to the word cloud for positive reviews.  The intent of this kind of filtering is to eliminate words that do not reflect true sentiment, so that the words that do are brought more into focus.  The revised result is presented here.
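At its core, exclusion-list filtering just drops non-sentiment words before counting.  A minimal sketch with `collections.Counter` (the excluded words below are examples, not my actual list):

```python
import re
from collections import Counter

def word_frequencies(text, exclude=()):
    """Count words in review text, skipping an exclusion list of non-sentiment terms."""
    excluded = {w.lower() for w in exclude}
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in excluded)
```

Frequencies built this way (e.g. `word_frequencies(reviews, exclude=["computer", "laptop"]).most_common(25)`) can then be fed to whichever word-cloud library you settle on.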

WordCloud - Positive Reviews on Consumer Reports

The top 25 words (by frequency) shown in this word cloud are provided here:

Note:  Even now, if you look closely, you can probably see many words that might be worth considering for exclusion.  It's a word game you can spend hours or days on.

It should also be noted that as you remove words from the frequency list to bring more relevant words into focus, you start getting more and more ties among words with the same frequency near the bottom.  A max-words limit prevents the diagram from becoming too cluttered by randomly choosing which "tie words" to exclude from the word cloud.  You then start seeing warnings like this for what got left out:

Final Thoughts

The data used in this research is clearly a modest sample of the much larger population that is the computer market.  As more expensive model configurations do not appear to be included in what Consumer Reports is given to review, findings on this blog and in my project should be treated as specific to the market segments and price levels covered, and cannot be generalized to the whole of the computer marketplace.  Given what Consumer Reports data is and how it is used, the analysis can still be useful.  With more time to pursue this avenue of research, I might collect samples from other, disparate sources and blend them together to see what they could tell us.

About Author


Mitch Abramson has served as: Business Writer/Techwriter, Researcher, Editor, Problem Solver, Tinkerer, Communicator, Code Hacker, System Administrator, Trainer, Content Strategist, XML Reuse Architect and facilitator of projects and initiatives. He jokes that his career took him from being...