Scraping the Scrapers - Web Scraped NYC Data Science Academy Blog Posts
Web Scraping Project - This project is an exercise in scraping information from the web, cleaning it, and gathering insights from it through visualization or machine learning techniques where appropriate.
Visit my project here: https://taipeifx.shinyapps.io/nycdsa_blog_data/
INTRO: For my second project I had to find a website to web scrape: one with meaningful information that I wanted to parse data from. I gave it quite a bit of thought, but sometimes the answer is just staring you in the face, so I chose to scrape the NYC Data Science Academy Blog. I would add a link, but this is it right here.
For this project I chose to web scrape with Scrapy, a framework written in Python. Taking a first look at the main blog page, I knew I had to get the fundamental items: author name, blog title, date published, topic category, and the excerpt. Then, clicking into a post, I thought I would also grab the number of times each post was shared on social media. This could tell me which topics were widely shared, perhaps suggesting that the project was well made. Then I realized that although the number of shares could be a fun fact, it wasn't what I wanted to focus on.
The blog had much more content to offer and so the real scope of the project took shape. After all, this would be the first time anyone has scraped the NYC Data Science Academy blog for a project.
SCRAPING: Scraping the fundamental information from the blog took me several tries, but it was fairly straightforward once I manually found the XPaths. Aside from the first page of the blog, https://nycdatascience.com/blog/, the rest of the pages had similar URLs and XPaths (e.g.: https://nycdatascience.com/blog/page/2, https://nycdatascience.com/blog/page/35). So I had my Scrapy spider do the work, and it grabbed all the fundamental information for me. In the end it scraped a grand total of 1,215 usable posts out of the 1,221 public blog posts available on the website at the time of the scrape, Oct 26, 2018. The omitted posts were test posts and password-protected posts (these lacked at least one fundamental item).
With this completed, I proceeded to create a second Scrapy spider, whose job was to scrape the actual content of each individual post. I had it grab all the text it could. While it was busy doing that, I built a timeline with the fundamental data I had already acquired.
TIMELINE: With timevis() in RStudio I created a timeline, or more specifically a Gantt chart. It would show when each post was created, grouped by post category. There was a ton of data.
A visitor to the project's Shiny app can select multiple categories to see how they compare on a timeline. There are two versions of this chart: version 1, shown above, displays the title of each post; version 2, below, shows the frequency of posts by category with tallies.
We can see that the earliest posts were Meetup posts created in mid-2013, which is perhaps how NYC Data Science started out: holding meetings and reaching out to the community could definitely garner interest. Posts of Student Works then started appearing in 2014, with Alumni posts following in late 2015.
Underneath the timeline I added a searchable table with links to the posts, so that the eye-catching ones can be visited. You can search across all of the posts or within a specific category by selecting it from a list.
Natural Language Processing (NLP): WORD CLOUD: Once the second Scrapy spider finished grabbing all the text from the blog posts, I rushed to see what I could do with the acquired data. What I found was row upon row of missing data and posts that were under 100 characters in length. There are 97 characters in the last sentence I just wrote. There is no way that nchar() < 100 could constitute an actual blog post. I had to go back into the HTML and re-scrape the data that hadn't been captured.
I found that some posts were formatted with a span tag. I ended up with two response.xpath() calls that together grabbed the vast majority of the 1,215 blog posts:
response.xpath('//div[contains(@class, "the-content")]/p/text()').extract() # 1,168 posts
response.xpath('//div[contains(@class, "the-content")]/p/span/text()').extract() # 481 posts
Some posts had their content entirely in one XPath, while others had partial information in both. I omitted content shorter than 100 characters from an extract() because those were usually snippets, maybe captions or side-notes. There was also one post written entirely in bullet points, whose text had the XPath response.xpath('//div[contains(@class, "the-content")]/ul/li/text()').extract(). It was as if they were expecting someone to come along one day and scrape the blog, and they made it an uphill battle to do so.
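The merge-and-filter step can be sketched in plain Python. The 100-character cutoff comes from the post itself; the function and variable names are mine:

```python
def combine_extracts(p_texts, span_texts, min_chars=100):
    """Merge the strings returned by the two extract() calls,
    dropping fragments under min_chars (likely captions or side-notes)."""
    fragments = [t.strip() for t in p_texts + span_texts]
    return " ".join(t for t in fragments if len(t) >= min_chars)
```

Running this per post keeps real paragraphs from either XPath while discarding the short snippets.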
After combining the separate scrapes and cleaning all of the posts of stray punctuation and "\u00A0" (non-breaking space) characters, the wordcloud package was used to create this word cloud:
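A cleaning pass along these lines can be sketched as follows. The post doesn't show the exact substitution rules used, so the regexes here are assumptions:

```python
import re


def clean_text(text):
    """Normalize scraped post text before word counting."""
    text = text.replace("\u00A0", " ")      # non-breaking spaces -> plain spaces
    text = re.sub(r"[^\w\s']", " ", text)   # drop punctuation, keep apostrophes
    return re.sub(r"\s+", " ", text).strip().lower()
```

Lower-casing and collapsing whitespace ensures "Data" and "data" are counted as one token downstream.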
*A note on stop words: the most common words found were "use" and all of its variations ("user", "using", "used"), but I added them to the stop-word list for the word cloud, so they do not appear in it.
NLP: LDA: With all the data at hand, I had one more objective for this project: to run Latent Dirichlet Allocation (LDA) on the posts. LDA is an unsupervised method that explains sets of observations through unobserved groups that account for why some parts of the data are similar. LDA would let me extract the most commonly used words across all posts, then form 10 groups of 20 words each, which would constitute the major topics. These 10 topics could then be used to categorize each post based on its similarity to each topic.
As an example, I tested it on my first project, "Taiwan Voting Data". The numbers I got were
[[1.36030417e-01 1.96982420e-02 7.70709540e-02 7.65025867e-01 3.62418627e-04 3.62425841e-04 3.62396375e-04 3.62411829e-04 3.62429431e-04 3.62438249e-04]]
with each number corresponding to the post's similarity to the ten topics (Topics #0 - 9) in sequential order. The top match for my post was
- Topic #3 (with a score of 0.765): data app shiny user time information number map code based different tab health used project average job application chart salary
Indeed, my project was a Shiny app that contained a map. Other topics that were far off the mark of what my project was about showed low scores. While it's not exact, it was still fun to see LDA in action. So, as a final idea, I wanted to add a page to my app that lets users test out LDA on their own and see how well it works. There was just one problem, however: I had done the LDA analysis in Python (Python 2, for that matter), while the Shiny app containing my project was written in R.
INTERACTIVE: LDA: For this last part of the project, I brought the fun of Natural Language Processing via Latent Dirichlet Allocation into my Shiny app.
FINAL NOTE: I hope this app can help students and readers choose a project topic, or even find a post they are interested in. If I have time, I'll scrape the blog again with this "Scraping the Scrapers" post in it and add it as post #1,216 to the project. I wonder how that works?
The actual project can be visited at https://taipeifx.shinyapps.io/nycdsa_blog_data/
For actual code, all my work is stored at my GitHub https://github.com/taipeifx/the_scrapers