Making sense of scraped Reddit commentary using NLP techniques.
Why would I do this?
Any institution's lifeblood flows from the outside. Like any organism, it needs external inputs to maintain any sort of equilibrium. Because of this, it pays to keep a watchful eye on the opinions of those discussing the organization and its competitors. What are these individuals most interested in? What do people really care about? By answering these questions, an organization can market its services more effectively.
There are different ways of learning the opinions of others. One is conversation. Another, more modern approach is collecting structured and unstructured survey data. A third is simply reading what people happen to be discussing. Even in 2017, it is rather unusual for someone to pursue a career in data science, programming, or a related field. The small sample of people interested in these fields likely skews young and tech/internet savvy, and may have developed that interest in relative isolation. Commentary on certain internet pages should therefore have at least some value.
Because there is so much sentiment on the internet, even for such a specific topic, I felt that web scraping made a lot of sense for exploring this issue. The website Reddit is home to a wide array of topics, including those already mentioned. Its users usually submit their questions and comments anonymously in various threads. Why scrape the data rather than read it myself? One issue with reading all the comments is that it is time-consuming. Another is that my summaries of such opinion may fall prey to certain cognitive biases (the recency effect, for instance). Scraping alone provides no real advantage, but getting a large body of comments onto your own laptop lets you conduct whatever analysis you find useful.
Process
The specific process involved scraping over 300,000 words from Reddit threads related to these two sets of conversation, by selecting the appropriate tag when inspecting the elements on the page. I selected the threads by searching Reddit for "programming boot camps" and "data science boot camps".
response.xpath('//div[@class="md"]/p/text()')[5:400].extract()

I ran this line in Python (written in Sublime Text) and used the same path for every page I scraped.
This XPath gives me the comments on each page of commentary I selected. The first 5 nodes were related to the side panel rather than the comments themselves, so I skipped them. I capped the selection at 400 lines, which limited the amount of text pulled from each page and avoided issues related to page loading.
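For concreteness, here is a minimal sketch of the kind of Scrapy spider this implies. The class name, the URL, and the output field are placeholders, not my actual scraping setup:

import scrapy

class BootcampCommentsSpider(scrapy.Spider):
    name = 'bootcamp_comments'
    # Placeholder thread; the real list came from the Reddit searches above
    start_urls = ['https://www.reddit.com/r/datascience/comments/example/']

    def parse(self, response):
        # Comment bodies sit in <div class="md"> elements; skip the first
        # 5 nodes (side panel) and cap at 400 to sidestep page-loading issues
        for text in response.xpath('//div[@class="md"]/p/text()')[5:400].extract():
            yield {'comment': text}

Running it with scrapy runspider spider.py -o comments.json dumps the extracted comments to a file for the analysis below.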
Instead of spending hours upon hours reading through those words to find emerging themes, I chose to spend less time cleaning the data so that an algorithm could discover the essence of the documents itself. That allowed me to avoid my own biases to a certain degree and was a far more efficient use of time.
I chose Latent Dirichlet allocation (LDA) as my NLP algorithm because I felt it was best to use an unsupervised approach. The idea is that we make no assumptions about what people are discussing on the various threads and let the algorithm discover the essence of those documents by uncovering topics in a probabilistic fashion. The "topics" discovered are merely word-frequency clusters, or "bags of words". I labeled each emergent topic with its highest-frequency word. I first searched "data science bootcamps" and "programming bootcamps" on Reddit. One document was defined as the data collected from a set of Reddit threads on data science bootcamps, and the other as the data collected from a set of threads on programming bootcamps.
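As a rough sketch of that setup in Python's scikit-learn (the filenames, topic count, and other parameters here are illustrative assumptions, not my exact configuration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# The two "documents": all comments scraped from the data science bootcamp
# threads, and all comments scraped from the programming bootcamp threads
docs = [open('ds_bootcamp_comments.txt', encoding='utf-8').read(),
        open('prog_bootcamp_comments.txt', encoding='utf-8').read()]

# LDA works on raw word counts (bags of words), not tf-idf weights
vectorizer = CountVectorizer(stop_words='english')
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=7, random_state=0)
lda.fit(counts)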
Results and Interpretation
WordCloud: Year, Stats, and Level seem to be quite relevant.
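Producing a word cloud like this takes only a few lines. One way to do it, assuming the comments were saved to a single text file (the filename is a placeholder):

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Build the cloud from the raw scraped text; word size tracks frequency
text = open('all_comments.txt', encoding='utf-8').read()
cloud = WordCloud(width=800, height=400, background_color='white').generate(text)

plt.imshow(cloud, interpolation='bilinear')
plt.axis('off')
plt.show()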
The following graphs represent the topics that LDA uncovered. The beta value represents the probability of a term being drawn from a given topic. Each topic itself is labeled with a number; I named each topic after its highest-probability term, which you can easily spot by looking at the longest bar on each graph.
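If you want the numbers behind the bars, the beta values can be read straight off the fitted model. A sketch continuing the scikit-learn code above (lda and vectorizer are the fitted objects from earlier):

import numpy as np

terms = vectorizer.get_feature_names_out()

# Normalize each topic's word weights into probabilities (the beta values),
# then print the highest-probability terms per topic
beta = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
for k, topic in enumerate(beta):
    top = np.argsort(topic)[::-1][:8]
    print(f'Topic {k}:', ', '.join(f'{terms[i]} ({topic[i]:.3f})' for i in top))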
As you can see, the term "job" is exceedingly probable given the topics within the two documents. Increasing the number of topics didn't change that fact: "job" appeared in every topic and was the dominant term for 5 of the 7 topics. The term "masters" was dominant in one other topic and "long" in another. You can see some related clustering: data-science terms gather in the "masters" topic, which also contains "statistics", "phd", "analysis", and "field".
Since the term "job" was so dominant within these topics, one can conclude that it is a concern across the Reddit threads examined for either topic. I decided to take the term out of the analysis by adding it to the stop words. Stop words are simply words that carry no relevance to the meaning of either document in the analysis. Articles such as "the" are nearly always stop words, because even if "the" were the most probable term, it would add no meaning to the analysis.

The post-"job" graphs below were produced after "job" and a few other stop words were removed. The first two topics don't reveal much, since they don't really distinguish the two documents. Once I again increased the number of topics to 7, more interesting trends appear. The last two topics are dominated by terms related to data science, while topics 2 and 4 seem to relate to the programming bootcamps, with terms such as "web" and "javascript" appearing. Terms related to time are scattered throughout the topics but are central to the first topic; terms such as "day", "months", "hours", and "long" frequently appear.
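In the scikit-learn sketch above, that removal is just an expanded stop word list. The extra words shown here are examples; I haven't reproduced the full list I actually removed:

from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

# Extend the built-in English stop words with terms that dominated every
# topic without distinguishing the two documents
custom_stop_words = list(ENGLISH_STOP_WORDS) + ['job', 'jobs']

vectorizer = CountVectorizer(stop_words=custom_stop_words)
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=7, random_state=0).fit(counts)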
This is the last run of LDA I did, with only 5 topics. I think it most accurately captured the topics within these two documents (programming bootcamps, data science bootcamps) because there is the least overlap between the topics. The 1st and 3rd topics are dominated by data-science-related terms, while the 2nd and 4th topics focus more on interests related to programming bootcamps. The last topic seems to contain interests shared between the two documents, with the vague term "report" being far more probable for some reason.
I think the most interesting insight one can draw from this analysis is that "masters" is among the most probable terms in the data-science-focused clusters, while it does not appear at all in the programming bootcamp clusters. This contrast may point to differences in the interests and backgrounds of those drawn to one kind of camp or the other. Master's programs may be a central topic for those interested in a data science career, or those finishing master's programs may be the ones most interested in attending such a bootcamp. Those interested in programming bootcamps seem more interested in online and free material, and less interested in a master's degree. Time also seems to be a major center of conversation in both documents.