Slickdeals: What Deals are Users Interested In?
You may have heard of a site named SlickDeals. With more than ten million monthly users, this deal-sharing site is a hot spot for people to share and pass judgment on offers and discounts for a huge variety of things. Ever since the early days of college, I have been visiting this site almost daily to keep up with prices for items of interest. As our boot camp cohort at NYC Data Science learned about web scraping, I felt that it would be a great idea to play around and see what more I could learn about this popular deal-sharing website.
Note: If you are uninterested in the programming aspect and care more about the findings, feel free to skip the Data, Scrapy, and Cleaning portions of this post.
The Data
Preliminary Variable Seeking
Since SlickDeals is largely a community-driven website, what better question to ask than what is popular with its users? To measure popularity, I wanted numerical values that could capture it.
Sample deal post on SlickDeals
Taking a look at a random deal page, I found two such variables: view count and deal score. Now that I had my dependent variables, I needed independent variables to compare against them. Including view count and deal score, I ended up with a total of 15 variables that I wanted in my data set (see the Scrapy section). But how would I get all of this into a table format that is easy to work with?
Scrapy
This is where the Python-based Scrapy comes in handy. As described by its official GitHub repository:
Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
Now that I had figured out the information I wanted to extract from the website, I needed to tell Scrapy how to approach it. If you are interested in learning how to use Scrapy, I recommend checking out this tutorial. Here is a summary of what I had to do with my spider:
Workflow
In general, the Scrapy Spider needs to know how you want to approach scraping each element. Mine needed to do the following:
1. Login Authorization
SlickDeals uses a forum structure for its deals, which came with one major problem: only members could see all posts. After going through a myriad of suggested solutions, I ended up finding a working solution from the example in the tutorial I provided earlier.
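The code itself is not reproduced here, but a minimal sketch of that kind of login step looks something like the following. The form field names, credentials, and URLs are placeholder assumptions rather than Slickdeals' actual markup:

```python
import scrapy


class SlickdealsSpider(scrapy.Spider):
    name = "slickdeals"
    # Placeholder login URL; the real page may differ
    start_urls = ["https://slickdeals.net/forums/login.php"]

    def parse(self, response):
        # Fill in and submit the login form found on the start page
        return scrapy.FormRequest.from_response(
            response,
            formdata={"username": "my_user", "password": "my_pass"},
            callback=self.after_login,
        )

    def after_login(self, response):
        # Crude sanity check that authentication succeeded
        if b"incorrect" in response.body.lower():
            self.logger.error("Login failed")
            return
        # Placeholder URL for the Hot Deals forum
        yield scrapy.Request(
            "https://slickdeals.net/forums/forumdisplay.php?f=9",
            callback=self.parse_forum,
        )
```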
2. Main Parse
Each deal has its own thread/post on the forums. I wanted information from the Hot Deals section, so I needed to tell the Spider to make a request for each of these thread pages. When it is done collecting information from every thread on a page, the Spider needs to find the next page and extract the deals there. The general workflow instruction was written as shown below:
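Since the original snippet is not reproduced here, below is an illustrative version of that workflow, continuing the spider sketched above. The XPath expressions are made-up stand-ins for the real selectors:

```python
class SlickdealsSpider(scrapy.Spider):
    # ... login methods from the previous sketch ...

    def parse_forum(self, response):
        # Request each deal thread listed on the current forum page
        for href in response.xpath('//a[contains(@id, "thread_title")]/@href').getall():
            yield response.follow(href, callback=self.parse_deal)

        # Once every thread on this page is queued, move on to the next page
        next_page = response.xpath('//a[@rel="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse_forum)
```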
3. Parsing Elements in Each Deal Page
Now for the meat of this entire process. To get each element or variable of interest, the Spider needs to store the results of XPath Selectors:
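Again, the original selectors are not shown here; an illustrative version with invented XPath expressions, yielding one item per deal, might look like this:

```python
    # Method of SlickdealsSpider, continuing from the sketches above
    def parse_deal(self, response):
        # Each key becomes one column of the output; XPaths are placeholders
        yield {
            "DealTitle": response.xpath('//h1[@class="deal-title"]/text()').get(),
            "DealPrice": response.xpath('//span[@class="deal-price"]/text()').get(),
            "Store": response.xpath('//a[@class="deal-store"]/text()').get(),
            "Category": response.xpath('//a[@class="crumb"][last()]/text()').get(),
            "PostDate": response.xpath('//span[@class="post-date"]/text()').get(),
            "ViewCount": response.xpath('//span[@class="view-count"]/text()').get(),
            "DealScore": response.xpath('//span[@class="deal-score"]/text()').get(),
            # ...the remaining variables follow the same pattern
        }
```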
Now that the Spider was set up, I needed Scrapy to output a file for me. Using an item pipeline, I had Scrapy dump a .csv output file with these columns. It took a lot of trial and error, but after many hours I was rewarded with an output data set.
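A bare-bones version of such a pipeline might look like the sketch below (the column list is abbreviated and the file name is arbitrary). Note that Scrapy's built-in feed exports (running scrapy crawl slickdeals -o output.csv) can achieve the same result with no pipeline code at all:

```python
import csv


class CsvWriterPipeline:
    # Abbreviated column list; the real data set had 15 columns
    FIELDS = ["DealTitle", "DealPrice", "Store", "ViewCount", "DealScore"]

    def open_spider(self, spider):
        self.file = open("slickdeals.csv", "w", newline="")
        self.writer = csv.DictWriter(self.file, fieldnames=self.FIELDS,
                                     extrasaction="ignore")
        self.writer.writeheader()

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # Write one row per scraped deal, keeping only the known columns
        self.writer.writerow(dict(item))
        return item
```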
Cleaning the Data
In order to use the data with packages such as Pandas, NumPy, Matplotlib, and Seaborn for data exploration and visualization, I needed to change the variable formats. This is what I set out to do with the output from Scrapy (a condensed sketch follows the list):
- Change columns to appropriate data types (e.g. strftime, pandas functions, etc.)
- Strip whitespace from DealTitle
- Remove nonsensical rows (e.g. stickied posts, forum rules, "Delete", etc.)
- Remove unwanted substrings (e.g. '$' and ',' in DealPrice)
- Remove duplicates
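As a condensed sketch, those steps translate to pandas roughly as follows (column names assume the ones used above):

```python
import pandas as pd

df = pd.read_csv("slickdeals.csv")

# Coerce columns to appropriate types; bad values become NaN
df["PostDate"] = pd.to_datetime(df["PostDate"], errors="coerce")
df["ViewCount"] = pd.to_numeric(df["ViewCount"], errors="coerce")

# Strip whitespace from titles and drop junk rows such as forum rules
df["DealTitle"] = df["DealTitle"].str.strip()
df = df[~df["DealTitle"].str.contains("Rules|Delete", na=False)]

# Remove '$' and ',' from prices, then drop exact duplicates
df["DealPrice"] = df["DealPrice"].str.replace(r"[$,]", "", regex=True)
df = df.drop_duplicates()
```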
However, there were several problems I could not resolve, which led to fewer variables being used in the final analysis:
- The DealPrice column included a lot of non-numerical entries. Beyond the difficulty of stripping '$' and ',' characters from the numerical observations, many entries listed text such as "Buy One Get One Free" or "50% Off" instead of a nominal price. I decided to drop DealPrice because of this.
- On some posts, there was additional information about the user who posted the deal. However, I was not able to figure out when this was displayed, so I ended up excluding the user reputation and deals-posted columns that I scraped.
Visualization and Analysis with Python Packages
Using a combination of Pandas and Matplotlib, I arrived at the following findings:
Both ViewCount and DealScore show a right skew, implying that a handful of posts generate the lion's share of views and deal scores. This is likely due to the scarcity of genuinely good deals and an abundance of marginal or unattractive deals posted by the community.
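A quick pair of histograms is enough to reveal this shape; a sketch, assuming the cleaned DataFrame df from the previous section:

```python
import matplotlib.pyplot as plt

# Side-by-side histograms of the two dependent variables
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
df["ViewCount"].plot.hist(bins=50, ax=axes[0], title="ViewCount")
df["DealScore"].plot.hist(bins=50, ax=axes[1], title="DealScore")
plt.tight_layout()
plt.show()
```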
Below are some findings on what categories and stores are getting a high amount of views and deal scores:
View Count
Deal Score
To see all my findings for this particular project, please see my Python Notebook upload on my GitHub.