Internet forums, or message boards, are online discussion sites. They're used for a variety of purposes, including general socialization, asking specific questions, and sharing experiences.
Forums can be an abundant source of unstructured text data, often generated by specific communities united by shared interests. Text analysis can be applied to these online discussions to uncover hidden patterns and to organize, summarize, and understand these communities -- both the members themselves and what they talk about.
In internet parlance, the participants are referred to as 'users', the conversations are called 'threads', and the unit of interaction is called a 'post'.
In this project, we use topic modeling to infer a structure -- a set of common topics -- behind a collection of threads. Additionally, we group users to see whether they fall into identifiable archetypes.
We used Python's Scrapy library to gather user posts from www.veggieboards.com, an internet community for vegetarians and vegans. The final dataset consists of ~450,000 posts from Oct. 2001 to Oct. 2017, divided into ~22,000 threads and written by ~14,600 users. We stored the individual posts as Python dictionaries in a MongoDB database -- an example can be seen below:
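To give a concrete sense of the storage format, here is a hypothetical sketch of one stored post -- the field names and values are illustrative assumptions, not the project's actual schema:

```python
from datetime import datetime

# Hypothetical shape of a single scraped post -- field names and values
# are illustrative, not the project's actual schema.
post = {
    "thread_id": "12345",
    "thread_title": "Favorite vegan recipes?",
    "username": "example_user",
    "post_time": datetime(2017, 10, 1, 12, 30),
    "text": "I love making lentil soup on cold days.",
}

# With a MongoDB instance running locally, pymongo could store it:
# from pymongo import MongoClient
# MongoClient("localhost", 27017).veggieboards.posts.insert_one(post)
```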
We followed the workflow set forth by Patrick Harrison in his Modern NLP in Python tutorial, relying heavily on the spaCy and Gensim Python libraries for natural language processing.
- Text Normalization: transforming all text into a consistent format in preparation for processing
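As a rough illustration of what this step involves (the project itself used spaCy's tokenizer and lemmatizer, not this regex sketch):

```python
import re

def normalize(text):
    """Minimal normalization: lowercase, strip most punctuation,
    and collapse whitespace. A stand-in for spaCy's richer pipeline."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s'-]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("I LOVE   tofu!!  (Don't you?)"))  # i love tofu don't you
```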
- Phrase Modeling: combining commonly co-occurring tokens into phrases. Below is what our text looks like after the phrase modeling step; we only modeled phrases of up to four tokens. Additionally, spaCy replaces all pronouns with a generic -PRON- token. The phrases below are outlined in red.
- Dictionary Creation: defining a vocabulary of valid tokens
- In this step, we remove stopwords and very common words (those occurring in over 40% of our documents), which are unlikely to provide explanatory information
- We also remove very rare words (those occurring fewer than 10 times) out of practicality, because the processing time of subsequent steps depends on dictionary size
- Latent Dirichlet Allocation:
- We use the Gensim library to perform LDA, first treating all posts in a thread as our unit of analysis (referred to as a document in natural language processing) and then a second time treating all posts by a user as our documents.
- LDA is a probabilistic model that tries to uncover latent "topics" across all our documents (collectively referred to as a corpus). Unlike clustering, which would assign each document to a single cluster, LDA assumes that each document is a mixture of several topics.
We can inspect the generated LDA models using the pyLDAvis library. The tool reduces the model's dimensionality from K, the number of topics, down to two dimensions for visualization. The conceit is that topics more similar to each other appear closer together, while less similar topics appear farther apart.
We chose 40 topics for the thread-centric approach and 25 topics for the user-centric approach, based on human judgment -- looking at the topics and seeing whether they made sense.
The topics generated by LDA aren't always human-interpretable, and they aren't always easy to distinguish from one another. Some topics are vague, and many topics share the same words. The clarity of the divisions depends on the underlying documents themselves; LDA may not be wholly appropriate for sub-forums, which are already narrowed down to specific categories. Even so, we were able to extract some promising topics.
The modeling process is fairly labor-intensive, because the models are difficult to evaluate without human intervention -- especially when the model's goal is human understanding.
For modeling on users, the results are much worse, perhaps because users generally participate in a variety of topics across many different threads. Much of the preprocessing that helps topic modeling -- removing stopwords, stemming, and lemmatization -- may actually be counterproductive for differentiating users, because those processes remove stylistic information unique to users or user groups. A user-centric study may therefore benefit from a different type of dictionary.
Potential avenues for further investigation include using LDA as an intermediary step for authorship attribution, troll detection, and deceptive opinion spam.
References:
- Analyzing Internet Forums
- Modern NLP in Python
- Improving LDA Topic Models
- Authorship Attribution with LDA