Scraping the Scrapers - Web Scraped NYC Data Science Academy Blog Posts

Posted on Oct 30, 2018

Web Scraping Project - This project is a practice in scraping information from the web, cleaning it, and gathering insights from it through visualization or machine learning techniques where appropriate.

Visit my project here: https://taipeifx.shinyapps.io/nycdsa_blog_data/

INTRO: For my second project I had to search for a website to web scrape. It had to be a website with meaningful information that I wanted to parse data from. I gave it quite a bit of thought but sometimes the answer is just staring you in the face, so I chose to scrape the NYC Data Science Academy Blog. I would add a link but this is it right here.

For this project I chose to web scrape with Scrapy, which is written in Python. Taking a first look at the main blog page, I knew I had to get the fundamental items: author name, blog title, date published, topic category, and the excerpt. Then, clicking into a post, I thought that I wanted to obtain the number of times each project was shared on social media. This could tell me what topics were widely shared, perhaps also suggesting that the project was well made. Then I realized that although the number of shares could be a fun fact, it wasn't what I wanted to focus on.

The blog had much more content to offer, and so the real scope of the project took shape. After all, this would be the first time anyone had scraped the NYC Data Science Academy blog for a project.

SCRAPING: Scraping the fundamental information from the blog took me several tries, but it was fairly straightforward once I manually found the XPaths. Aside from the first page of the blog, https://nycdatascience.edu/blog/, the rest of the pages had similar URLs and XPaths (e.g.: https://nycdatascience.edu/blog/page/2, https://nycdatascience.edu/blog/page/35). So I had my Scrapy spider do the work, and it grabbed all the fundamental information for me. In the end it scraped a grand total of 1,215 usable posts out of the 1,221 public blog posts available on the website at the time of the scrape on Oct 26, 2018. The omitted posts were test posts and password-protected posts (these lacked at least one fundamental item).
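A stripped-down sketch of what such a listing spider could look like; the spider name and the per-post selectors here are illustrative assumptions, not the exact ones used:

import scrapy

class BlogSpider(scrapy.Spider):
    # Hypothetical listing spider; all selectors below are assumptions.
    name = 'nycdsa_blog'
    # Page 1 has its own URL; later pages follow the /page/N pattern.
    start_urls = ['https://nycdatascience.edu/blog/'] + \
        ['https://nycdatascience.edu/blog/page/%d' % n for n in range(2, 36)]

    def parse(self, response):
        # One result block per post on the listing page (assumed markup).
        for post in response.xpath('//article'):
            yield {
                'author': post.xpath('.//a[@rel="author"]/text()').extract_first(),
                'title': post.xpath('.//h2/a/text()').extract_first(),
                'date': post.xpath('.//time/text()').extract_first(),
                'category': post.xpath('.//a[@rel="category"]/text()').extract_first(),
                'excerpt': post.xpath('.//p/text()').extract_first(),
            }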

scraping the fundamentals

With this completed, I proceeded to create another Scrapy spider. This second spider's job was to scrape the actual content of each individual post, so I had it grab all the text it could. While it was busy doing that, I built a timeline with the fundamental data that I had already acquired.

TIMELINE: With timevis() in RStudio I created a timeline, or more specifically a Gantt chart, showing when each post was created, grouped by post category. There was a ton of data.

a portion of the posts which had R as the main topic category

A visitor to the project's Shiny app can select multiple categories to see how they compare on a timeline. There are two versions of this chart: version 1, shown above, displays the title of each post; in version 2, below, the frequency of posts by category is shown with tallies.

version 2: alumni, student works, meetup

We can see that the earliest posts were Meetup posts created in mid-2013, which was perhaps how NYC Data Science started out. Holding meetings and reaching out to the community could definitely garner interest. Student Works posts then started appearing in 2014, with Alumni posts following in late 2015.

Underneath the timeline I added a searchable table with links to the posts, so any that catch your attention can be visited. You can search across all of the posts or within a specific category by selecting the category from a list.

Natural Language Processing (NLP): WORD CLOUD: Once the second Scrapy spider finished grabbing all the text from the blog posts, I rushed to see what I could do with this acquired data. What I found was row upon row of missing data and posts that were under 100 characters in length. There are 97 characters in the last sentence I just wrote. There is no way that nchar() < 100 could constitute an actual blog post. I had to go back into the HTML and re-scrape the data that wasn't captured.

I found that some posts were formatted to contain a span tag. I ended up with two response.xpath calls that grabbed the vast majority of the 1,215 blog posts:

response.xpath('//div[contains(@class, "the-content")]/p/text()').extract() # 1,168 posts
response.xpath('//div[contains(@class, "the-content")]/p/span/text()').extract() # 481 posts

Some posts had their content entirely in one xpath, while other posts had partial information in both. I omitted content shorter than 100 characters from an extract() because those were usually snippets, maybe captions or side-notes. There was also one post written entirely in bullet points, whose text required response.xpath('//div[contains(@class, "the-content")]/ul/li/text()').extract(). It was as if they were expecting someone to come along one day to scrape the blog and made it an uphill battle to do so.
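Putting it together, the content spider's parse method could combine both extracts and drop the short snippets. A sketch along these lines, where everything except the two quoted xpaths is an assumption:

def parse(self, response):
    # The two xpaths quoted above; some posts fill one, some fill both.
    paragraphs = response.xpath(
        '//div[contains(@class, "the-content")]/p/text()').extract()
    spans = response.xpath(
        '//div[contains(@class, "the-content")]/p/span/text()').extract()
    # Drop extracts under 100 characters, usually captions or side-notes.
    chunks = [t for t in paragraphs + spans if len(t) >= 100]
    yield {'url': response.url, 'text': ' '.join(chunks)}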

After combining the separate scrapes and cleaning all of the posts of weird punctuation and "\u00A0"s, I used the wordcloud package to create this word cloud:

a word cloud created from the text of 1,215 NYC Data Science Academy blog posts

*A note on Stop Words: the most common words found were "use" and all of its variations ("user", "using", "used"), but I added them to the stop-word list for the word cloud, so they do not appear.
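In Python terms, the cleaning plus the extra stop words might look roughly like this (the actual word cloud was built with the wordcloud package in R, so this is only an approximation of the steps described above):

import re

EXTRA_STOP_WORDS = {'use', 'user', 'using', 'used'}  # per the note above

def clean_post(text):
    text = text.replace(u'\u00A0', ' ')   # non-breaking spaces
    text = re.sub(r'[^\w\s]', ' ', text)  # weird punctuation
    words = text.lower().split()
    return [w for w in words if w not in EXTRA_STOP_WORDS]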

NLP: LDA: With all the data at hand, I had one more objective for this project: to run some Latent Dirichlet Allocation (LDA) on the posts. LDA is an unsupervised method that allows sets of observations to be explained by unobserved groups that capture why some parts of the data are similar. LDA would allow me to extract the most commonly used words from all posts, then create 10 groups of 20 words each to form the major topics. These 10 topics could then be used to categorize each post based on its similarity to each topic.
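A hedged sketch of that pipeline with scikit-learn (the original analysis was done in Python 2, and the exact vectorizer settings aren't given, so everything beyond "10 topics, 20 words each" is an assumption):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [  # toy stand-ins for the 1,215 cleaned post texts
    'shiny app map voting data visualization',
    'machine learning regression features kaggle model',
    'web scraping scrapy python spider xpath',
]

vectorizer = CountVectorizer(stop_words='english')
dtm = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=10, random_state=42)
doc_topics = lda.fit_transform(dtm)  # one row of 10 topic scores per post

# Print the 20 highest-weighted words for each topic.
words = vectorizer.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-20:][::-1]]
    print('Topic #%d: %s' % (topic_idx, ' '.join(top)))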

As an example, I tested my first project, "Taiwan Voting Data". The numbers that I got were

[[1.36030417e-01 1.96982420e-02 7.70709540e-02 7.65025867e-01
  3.62418627e-04 3.62425841e-04 3.62396375e-04 3.62411829e-04
  3.62429431e-04 3.62438249e-04]]

with each number corresponding to the project's similarity to the ten topics (Topics #0 - 9) in sequential order. The top match for my post was

  • Topic #3 (with a score of 0.765) : data app shiny user time information number map code based different tab health used project average job application chart salary

Indeed, my project was a Shiny app which contained a map. Other topics that were far off the mark of what my project was about showed low scores. While it's not exact, it was still fun to see LDA in action. So as a final idea, I wanted to add a page to my app that allowed users to test out LDA on their own and see how well it worked. There was just one problem, however: I had done the LDA analysis in Python (and Python 2, for that matter), while the Shiny app that contained my project was in R.
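On the Python side, producing a score vector like the one above comes down to a single transform call. Continuing the sketch from before, with a hypothetical cleaned text:

new_post = ['taiwan voting data shiny app map']  # hypothetical cleaned text
scores = lda.transform(vectorizer.transform(new_post))[0]
best = scores.argmax()
print('Top match: Topic #%d (score %.3f)' % (best, scores[best]))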

INTERACTIVE: LDA: With this last part of my project, I brought the fun of Natural Language Processing via Latent Dirichlet Allocation into my Shiny app.

Predicting the Baseball Hall of Fame had a top match with Topic #7

FINAL NOTE: I hope that this app can help students and readers choose a project topic or find a post that interests them. If I have time, I'll re-scrape the blog with this "Scraping the Scrapers" post in it and add it as post #1,216 to this project. I wonder how that would work?

The actual project can be visited at https://taipeifx.shinyapps.io/nycdsa_blog_data/

For the actual code, all my work is stored on my GitHub: https://github.com/taipeifx/the_scrapers

Thanks!

About Author

Daniel Chen

Daniel Chen is the founder of multiple startups, including foreign exchange brokerages. He has managed cross-border development with international companies from Europe and Asia, and now has even more experience with data science and coding. Born in 1987 in Los Angeles, California...
