Slickdeals: What Deals are Users Interested In?

Posted on May 15, 2017

You may have heard of a site named SlickDeals. With more than ten million monthly users, this deal-sharing site is a hot spot for people to share and pass judgment on offers and discounts for a huge variety of things. Ever since my early days of college, I have visited the site almost daily to keep up with prices for items of interest. When our boot camp cohort at NYC Data Science learned about web scraping, it seemed like a great opportunity to play around and see what more I could learn about this popular deal-sharing website.

Note: If you are uninterested in the programming aspect and more interested in the findings, feel free to skip the data, Scrapy, and cleaning portions of this post.

The Data

Preliminary Variable Seeking

Since SlickDeals is largely a community-driven website, what better question to ask than what is popular with its users? To measure popularity, I needed numerical values that could capture it.

Sample deal post on SlickDeals

Taking a look at a random deal page, I found two such variables: view count and deal score. Now that I had my dependent variables, I needed independent variables to compare against them. Including view count and deal score, I ended up with a total of 15 variables that I wanted in my data set (see the Scrapy section). But how would I get all of this into a table format that is easy to work with?

Scrapy

This is where the Python-based Scrapy comes in handy. As described by its official GitHub repository:

Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

Now that I had figured out what information I wanted to extract from the website, I needed to tell Scrapy how to approach it. If you are interested in learning how to use Scrapy, I recommend checking out this tutorial. Here is a summary of what my spider had to do:

Workflow

In general, the Scrapy Spider needs to know how you want to approach scraping each element. Mine needed to do the following:

1. Login Authorization

SlickDeals uses a forum structure for its deals, which came with one major problem: only members could see all posts. After going through a myriad of suggested solutions, I ended up finding a working solution from the example in the tutorial I provided earlier.

2. Main Parse

Each deal has its own thread/post on the forums. I wanted information from the Hot Deals section, so I needed to tell the Spider to make a request for each of these thread pages. When it finishes collecting information from every thread on a page, the Spider then finds the next page and extracts the deals there.

3.Β Parsing Elements in Each Deal Page

Now for the meat of the entire process. To get each element or variable of interest, the Spider stores the results of XPath selectors run against each thread page.

Now that the Spider was set up, I needed Scrapy to output a file for me. Using an item pipeline, I had Scrapy dump a .csv file with these columns. It took a lot of trial and error, but after many hours I was rewarded with an output data set.
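A minimal pipeline along those lines can be written with just the standard library's csv module; the class name and column set below are illustrative, not the original pipeline:

```python
import csv


class CsvExportPipeline:
    """Sketch of an item pipeline that appends each scraped item to a CSV
    file; a Scrapy pipeline is just a class with these three hooks."""

    fields = ["DealTitle", "DealPrice", "ViewCount", "DealScore"]

    def open_spider(self, spider):
        self.file = open("deals.csv", "w", newline="")
        self.writer = csv.DictWriter(self.file, fieldnames=self.fields)
        self.writer.writeheader()

    def process_item(self, item, spider):
        # Missing keys become empty cells rather than raising
        self.writer.writerow({k: item.get(k, "") for k in self.fields})
        return item

    def close_spider(self, spider):
        self.file.close()


# The hooks can be exercised directly, without a running crawl:
pipeline = CsvExportPipeline()
pipeline.open_spider(None)
pipeline.process_item({"DealTitle": "Example Widget", "DealPrice": "$19.99",
                      "ViewCount": "12,345", "DealScore": "+42"}, None)
pipeline.close_spider(None)
```

Registering the class under `ITEM_PIPELINES` in the project settings is what makes Scrapy route every yielded item through `process_item`.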

Cleaning the Data

In order to use the data, I needed to change variable formats so that packages such as Pandas, NumPy, Matplotlib, and Seaborn could be used for data exploration and visualization. This is what I did with the output from Scrapy:

  • Change columns to appropriate data types (e.g., strftime, pandas functions)
  • Strip whitespace from DealTitle
  • Remove nonsensical rows (e.g., stickied posts, rules, "Delete")
  • Remove unwanted substrings (e.g., '$' and ',' in DealPrice)
  • Remove duplicates
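On a toy frame standing in for the raw Scrapy output, those cleaning steps might look like this (column names follow the post; the rows are invented):

```python
import pandas as pd

# Invented rows standing in for the raw scraped table
raw = pd.DataFrame({
    "DealTitle": ["  Example Widget  ", "Forum Rules", "  Example Widget  "],
    "DealPrice": ["$1,299.99", "", "$1,299.99"],
    "ViewCount": ["12345", "10", "12345"],
})

df = raw.copy()
df["DealTitle"] = df["DealTitle"].str.strip()              # strip whitespace
df = df[~df["DealTitle"].isin(["Forum Rules", "Delete"])]  # drop nonsense rows
df["DealPrice"] = (df["DealPrice"]
                   .str.replace("$", "", regex=False)      # remove '$'
                   .str.replace(",", "", regex=False))     # remove ','
df["ViewCount"] = pd.to_numeric(df["ViewCount"])           # fix the dtype
df = df.drop_duplicates()                                  # remove duplicates
print(df)
```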

However, there were several problems I could not resolve, which led to fewer variables being used in the final analysis:

  • The DealPrice column included many non-numerical entries. Beyond the trouble of stripping the '$' and ',' characters from the numerical observations, many entries listed text such as "Buy One Get One Free" or "50% Off" instead of a nominal price. Because of this, I decided to drop DealPrice.
  • Some posts displayed additional information on the user who posted the deal. However, I was not able to figure out when this appeared, so I ended up excluding the user reputation and deals-posted columns that I had scraped.
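The DealPrice problem is easy to see with a coercing conversion on a toy sample: the text entries have no numeric interpretation and come back as NaN, so a naive conversion silently discards them.

```python
import pandas as pd

# Invented mix of nominal prices and text-only offers, as described above
prices = pd.Series(["19.99", "1299.00", "Buy One Get One Free", "50% Off"])

# errors="coerce" turns anything non-numeric into NaN instead of raising
numeric = pd.to_numeric(prices, errors="coerce")
print(numeric)
print("unusable entries:", numeric.isna().sum())
```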

Visualization and Analysis with Python Packages

Using a combination of Pandas and Matplotlib, I arrived at the following findings:

Both ViewCount and DealScore show a right skew, implying that a handful of posts generate the lion's share of the views and deal scores. This is likely due to the scarcity of genuinely good deals and an abundance of marginal or unattractive deals posted by the community.
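Right skew of this kind can also be checked numerically with pandas' sample skewness statistic; on an invented series with one runaway post, the statistic comes out strongly positive:

```python
import pandas as pd

# Invented view counts: a few ordinary posts and one viral outlier
views = pd.Series([100, 120, 150, 200, 300, 50000])

# A clearly positive skewness confirms the long right tail
print(views.skew())
```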

Below are some findings on which categories and stores receive a high number of views and deal scores:

View Count

Deal Score
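Rankings like these come from a grouped aggregation; a toy version with invented categories and numbers might look like:

```python
import pandas as pd

# Invented cleaned data; the real analysis used the scraped columns
df = pd.DataFrame({
    "Category":  ["Tech", "Tech", "Grocery", "Grocery"],
    "ViewCount": [5000, 7000, 300, 500],
    "DealScore": [40, 55, 5, 8],
})

# Average views and deal score per category, highest-viewed first
by_cat = (df.groupby("Category")[["ViewCount", "DealScore"]]
            .mean()
            .sort_values("ViewCount", ascending=False))
print(by_cat)
```

The same pattern, grouped on a store column instead, produces the store-level rankings.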

To see all my findings for this project, please see my Python notebook upload on my GitHub.

The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

About Author

Kai Chan

Kai Chan received his bachelor's degree in economics from the University of California, San Diego. Kai believes that with the increasing importance and prevalence of various programming tools in the world of data analysis, a career in data...

