Now Trending: Data Analysis of Top 10 Reddit Posts

Posted on Nov 20, 2016
The skills the author demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Introduction

Reddit is home to a diverse community of users who come for a wide variety of content. Billing itself as "the front page of the internet," the site serves up a slew of content: news articles, funny pictures, memes, serious discussions, you name it! For those who are unfamiliar, Reddit lets users post content to sub-communities called 'subreddits.' Once something is posted, other users can vote for the content they believe is good.

Over time, the 'best' content, as deemed by the communities, is pushed to the top; if a post gains enough points quickly enough, it reaches the front page of Reddit. While some people might use Reddit to find the next cute cat picture, to a data scientist Reddit is a playground of data for detecting interesting trends.

Objective

To explore those trends, I decided to scrape Reddit to examine what appears on the front page and answer the following questions:

  • What subreddits produce the most front-page content?
  • What are people talking about on Reddit?
  • When is the best time to submit content to get to the front page?

However, there was a big problem with executing this project. Here in the bootcamp, we had only about two weeks to work on the scraping project. After subtracting the time needed to get the scraping script up and running, as well as the time required for other bootcamp activities, I felt there would not be enough data points to really get a sense of the trends on Reddit. Unless I built a time machine to go back a couple of months and tell myself to learn web scraping and start scraping immediately, this project idea was doomed to fail.


Hey kid, you better learn some web scraping; it's going to be useful. Also, go learn some machine learning while you're at it!

Luckily, I did not need to travel back in time to do this project. Thinking outside the box, I remembered a site called the Wayback Machine, which periodically crawls and takes snapshots of websites. Through it, one can view snapshots of websites from as far back as the early 2000s. Since Reddit appeared to have many snapshots per hour, I decided to take as much data as I could from these snapshots and then cut back and clean up later down the line.

Data Scraping Workflow

I decided to use the Scrapy library in Python for this project, as I felt that traversing a webpage via XPaths would help with these deeply nested pages. Below is a picture of a standard Wayback Machine page depicting the list of snapshots for a website (in my case, Reddit).

[Screenshot: Wayback Machine calendar page listing snapshots of Reddit]

While there are a lot of elements on the page, my idea for tackling this was to locate the calendar element and grab each month element within it. From each month element, I could find the list that appears on hovering over each day and extract the anchor tag for each entry. This would let me build up a list of Reddit snapshot URLs to request for scraping, as sketched below:
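As a rough illustration of that idea, here is a minimal Scrapy spider sketch. The start URL and XPath expressions are assumptions about the Wayback Machine's calendar markup (which has changed over the years and is partly JavaScript-rendered today), not the exact selectors used in the project.

```python
import scrapy


class WaybackCalendarSpider(scrapy.Spider):
    """Sketch: collect Reddit snapshot URLs from a Wayback Machine calendar page."""
    name = "wayback_calendar"
    # Hypothetical entry point: the calendar view of reddit.com captures for 2016
    start_urls = ["https://web.archive.org/web/2016*/https://www.reddit.com/"]

    def parse(self, response):
        # Walk the calendar -> month -> day structure described above
        for month in response.xpath('//div[contains(@class, "calendar")]'
                                    '//div[contains(@class, "month")]'):
            # Days with captures expose anchors pointing at timestamped snapshots
            for href in month.xpath('.//a/@href').getall():
                snapshot_url = response.urljoin(href)
                # Hand each snapshot of the Reddit front page to a second callback
                yield scrapy.Request(snapshot_url, callback=self.parse_snapshot)

    def parse_snapshot(self, response):
        # The per-post extraction is sketched after the feature list below
        pass
```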

After leaving the scraping script running overnight, I was able to scrape roughly 211,000 rows of top-10 Reddit posts, covering snapshots from 06/30/2016 to 11/11/2016. The script extracted a variety of features per post that I thought would be useful for analysis:

  • titles: title of the post
  • upvotes: number of upvotes on the post
  • comments: number of comments on the post
  • subreddit: subreddit of the post
  • url: URL of the snapshot
  • submit datetime: submission time of the post
  • snapshot datetime: datetime of the snapshot
  • submitter: submitter of the post
  • rank: rank of the post on the front page
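Continuing the spider sketch above, a hedged version of its parse_snapshot callback might pull these fields out of an archived, old-style Reddit front page. The CSS classes below ('thing', 'title', 'score', 'author', and so on) reflect Reddit's 2016 markup and are assumptions, not the exact selectors the project used.

```python
def parse_snapshot(response):
    """Sketch of the spider's second callback: extract the top-10 posts from one snapshot."""
    for rank, post in enumerate(response.css('div.thing')[:10], start=1):
        yield {
            'title': post.css('a.title::text').get(),
            'upvotes': post.css('div.score.unvoted::text').get(),
            'comments': post.css('a.comments::text').re_first(r'\d+'),
            'subreddit': post.css('a.subreddit::text').get(),
            'submitter': post.css('a.author::text').get(),
            'submit_datetime': post.css('time::attr(datetime)').get(),
            'snapshot_url': response.url,  # the snapshot datetime can be parsed from this URL
            'rank': rank,
        }
```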

 

Exploratory Data Analysis

With the data scraped, let's perform some EDA:

Subreddit Topics

First, I wanted to see what sort of topics were getting onto the front page. To do this, I looked at the breakdown of subreddits behind front-page posts:

[Chart: breakdown of front-page posts by subreddit]

 

After inspecting the top subreddits that got posts to the front page, I noticed that a majority of them were boards built around graphical content (r/funny, r/pics, r/gifs, r/aww, etc.). This suggests that the typical Reddit user prefers graphical content over text content. While these types of posts seem to routinely make it to the front page, they do not appear to be the posts that generate the most discussion. To look at that, I examined the ratio of comments to upvotes for these posts:

Comments vs Upvotes

[Chart: ratio of comments to upvotes by subreddit]

After calculating the ratio and sorting by the highest scorers, we can see that r/AskReddit consistently generates the most-discussed posts. While it is not surprising that a discussion board like r/AskReddit appears here, it is interesting that other discussion-oriented boards such as r/IAmA or r/news do not appear as consistently at the top of Reddit.
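For reference, here is a minimal pandas sketch of that ratio calculation. It assumes the scraped rows have been loaded into a DataFrame called posts with the columns listed earlier; the file name is hypothetical.

```python
import pandas as pd

# Hypothetical export of the scraped rows
posts = pd.read_csv('reddit_top10_snapshots.csv')

# Total comments and upvotes per subreddit, then the ratio between them
ratio = (posts.groupby('subreddit')[['comments', 'upvotes']]
              .sum()
              .assign(comment_upvote_ratio=lambda d: d['comments'] / d['upvotes'])
              .sort_values('comment_upvote_ratio', ascending=False))

print(ratio.head(10))  # r/AskReddit tops this ranking in the scraped data
```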

OK, great: we have a sense of the type of content on the front page of Reddit, but what if I wanted to submit my own content and get to the top? What would be the best time to submit a post? To get a macro view of the upvote distribution in my dataset, I compared the average Reddit score per day over the snapshot dates I scraped.

Best Upvote Times by Date

[Chart: average upvotes by snapshot date, with fitted regression line]

Peculiarly, there was an ascending trend in the number of upvotes over the days and months of data. After double-checking the diagnostics of the regression line (which checked out), I started to speculate about possible reasons for this trend. My first intuition was that this was a local trend and that I would want more data points over a longer stretch of the year to confirm whether upvote counts are really rising. I felt this was a scenario where the regression line's diagnostics might all check out, yet the model does not make sense and should not be used.
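A sketch of that daily-average trend check, again assuming the posts DataFrame from above with a parseable snapshot datetime column (here called snapshot_datetime); this reproduces the idea, not the exact figure.

```python
import pandas as pd
import statsmodels.api as sm

posts['snapshot_datetime'] = pd.to_datetime(posts['snapshot_datetime'])

# Average upvotes per snapshot day
daily = (posts.set_index('snapshot_datetime')['upvotes']
              .resample('D')
              .mean()
              .dropna())

# Simple OLS of daily average upvotes against a day index to eyeball the upward trend
X = sm.add_constant(range(len(daily)))
fit = sm.OLS(daily.values, X).fit()
print(fit.summary())  # diagnostics can look fine even if the trend is only local
```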

Luckily, the data for upvotes by hour was more intuitive and interpretable. As seen in the chart below, posts have the highest average upvote scores from 17:00 to 20:00 UTC, which translates to 12:00 PM-3:00 PM EST or 9:00 AM-12:00 PM PST.

Best Upvote Time by Snapshot Hr

[Chart: average upvotes by snapshot hour (UTC)]

Looking at the data from a post-submission perspective, we notice a similar pattern; it is essentially the snapshot chart shifted to the left. This chart tells us that the best time to submit your content to Reddit is from 13:00 to 19:00 UTC, which translates to 8:00 AM to 2:00 PM EST. Thinking critically about this, it makes sense: this is when Europe is in full force through its day and the Americas are just waking up.

[Chart: average upvotes by submission hour (UTC)]
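The hourly comparison can be sketched the same way, still assuming the posts DataFrame; the column names and time-zone handling below are illustrative.

```python
# Treat submission times as UTC, then compare average upvotes by hour
posts['submit_datetime'] = pd.to_datetime(posts['submit_datetime'], utc=True)

by_submit_hour = (posts.assign(hour_utc=posts['submit_datetime'].dt.hour)
                       .groupby('hour_utc')['upvotes']
                       .mean())

print(by_submit_hour.sort_values(ascending=False).head())  # peaks around 13:00-19:00 UTC

# The same timestamps converted to US Eastern time for the local-time reading
print(posts['submit_datetime'].dt.tz_convert('US/Eastern').dt.hour.head())
```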

Conclusion and Retrospective

Although some of the insights from this dataset reaffirmed my preconceived notions of Reddit, I was definitely surprised by some findings. In particular, the increasing trend in upvotes over the year was interesting and warrants a deeper and broader look in future work on this data. The dichotomy between discussion boards such as r/AskReddit and the image-oriented subreddits was also an interesting find, as I expected other discussion subreddits to be more prevalent. Finally, the breakdown of upvotes by hour is especially useful for anyone looking to become a content submitter on Reddit.

While the insights from the data are interesting, I found that the biggest takeaway came from working with Scrapy to scrape historical time-series data. There are definite limitations to scraping historical data from the Wayback Machine. First, the Wayback Machine needs to have snapshots of the site you are interested in. Even if there are snapshots, they need to be frequent enough to provide consistent data throughout your time series.

I was lucky with Reddit, as the snapshots were very frequent, but there are sites that get fewer than 24 snapshots a day. Another major issue is that websites change their layouts over time, so your scraping code needs to handle the different layouts and subtle changes to a site over the period you are covering. Lastly, many web-scraping bots will run into issues with too many requests (e.g., HTTP 429 errors). Although I set a long wait time, the bot still ran into numerous 429 errors. I was lucky again: there were enough Reddit snapshots that if I dropped any data points due to a 429, I would still be fine. A few Scrapy settings that help with this kind of rate limiting are shown below.
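As a hedged example, these are standard Scrapy settings for slowing down and retrying rate-limited requests; the specific values are illustrative, not the ones used in the project.

```python
# settings.py
DOWNLOAD_DELAY = 5                 # long fixed wait between requests
CONCURRENT_REQUESTS = 1            # avoid hammering the Wayback Machine
AUTOTHROTTLE_ENABLED = True        # back off automatically when responses slow down
RETRY_ENABLED = True
RETRY_TIMES = 3
RETRY_HTTP_CODES = [429, 500, 502, 503, 504]  # retry rate-limited requests too
```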

