Scraping the Partisan Divide: Sentiment, Text, & Network Analysis of an online political discussion forum using Python.
In this post I apply data science techniques to investigate partisan division on scraped user post data from the online political discussion forum Liberalforum.net.
My process:
- Scrape Text and user partisan identity for each post.
- Consolidate Partisanship from all self-reported identities into general 'liberal' and 'conservative' buckets.
- Distinguish Top Phrases for each bucket.
- Analyze Sentiment between partisan identities on posts matching political keywords.
- Identify Communities via network analysis to investigate partisan division.
Demonstrated Python Skills:
- Web-scraping with Scrapy
- Text Analysis with NLTK and Sklearn.CountVectorizer
- Sentiment Analysis with TextBlob
- Network Analysis with NetworkX and Community
- Visualization with Seaborn
Check out my Github gists for both my scraping and analyses.
The Data
I chose to web-scrape this online political discussion forum specifically because:
- Social-based text data - apply NLP & Sentiment Analysis
- Website data not available via API
- A high percentage of users self-identify their partisan identity, an interesting factor for analysis.
I must also be careful not to extrapolate the results gleaned from these heavy forum users to the general public.
Web-Scraping
I performed my scraping using the open-source Python package Scrapy, identifying and pulling text via HTML tags.
- # of Posts: 249,749
- Date Range: 1/1/16 - 5/12/18
Structure of the scraped data:
Defined # of Threads -> X Pages per Thread -> X Posts per Page
My Scrapy spider crawls through each thread chronologically, identifies the total # of posts per thread, and crawls through each post on each page (a sketch follows the list of scraped fields below).
Scraped data includes:
- post date and text
- thread title and first poster
- user name, total number of posts, and political ideology
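A minimal sketch of what such a spider might look like; the start URL, CSS selectors, and field names below are hypothetical placeholders rather than the forum's actual markup:

```python
import scrapy

class ForumSpider(scrapy.Spider):
    # Hypothetical sketch -- selectors and URL are placeholders, not the forum's real markup.
    name = "liberalforum"
    start_urls = ["https://www.liberalforum.net/forum/"]  # assumed entry point

    def parse(self, response):
        # Follow every thread link found on the forum index page.
        for href in response.css("a.thread-title::attr(href)").getall():
            yield response.follow(href, callback=self.parse_thread)

    def parse_thread(self, response):
        thread_title = response.css("h1.thread-title::text").get()
        # One item per post on the current page of the thread.
        for post in response.css("div.post"):
            yield {
                "threadTitle": thread_title,
                "user": post.css("span.username::text").get(),
                "userTotalPosts": post.css("span.post-count::text").get(),
                "ideology": post.css("span.ideology::text").get(),
                "postDate": post.css("time::attr(datetime)").get(),
                "postText": " ".join(post.css("div.post-body ::text").getall()),
            }
        # Follow the thread's pagination until the last page.
        next_page = response.css("a.pagination-next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse_thread)
```

Running something like `scrapy runspider forum_spider.py -o posts.json` would then dump one JSON record per post.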
Political Ideology
Of the 253 users who posted on the forum, 129 (51%) self-identified politically; these users accounted for 62% of the total posts.
I next consolidated the ideologies with n > 5 along the conventional liberal-to-conservative US ideological spectrum, excluding 'independents'.
| Conservative | Liberal |
| --- | --- |
| Conservative; Republican; Libertarian; Capitalist | Liberal; Progressive; Socialist; Democratic; Anarchist; Green |
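A sketch of how that consolidation might be applied to the scraped records (the column and file names are assumptions based on the fields described above):

```python
import pandas as pd

# Map each self-reported ideology (n > 5) to a consolidated bucket.
ideology_map = {
    "Conservative": "conservative", "Republican": "conservative",
    "Libertarian": "conservative", "Capitalist": "conservative",
    "Liberal": "liberal", "Progressive": "liberal", "Socialist": "liberal",
    "Democratic": "liberal", "Anarchist": "liberal", "Green": "liberal",
}

posts = pd.read_json("posts.json")  # output of the Scrapy spider (assumed filename)
posts["LeftRight_binary"] = posts["ideology"].map(ideology_map)  # unmapped labels (e.g. 'Independent') become NaN
```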
Interestingly enough, despite equal numbers of liberal and conservative users on Liberalforum.net, conservative users author a large majority of the posts.
Top Words
Using the CountVectorizer class from sklearn.feature_extraction.text, I created a bigram (2-gram) corpus of phrases from the NLP-processed user-post text data.
I split the data between liberal and conservative users, and sorted the output to obtain the top 20 two-word phrases for each group.
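A sketch of the bigram counting, using the same assumed column names as above:

```python
from sklearn.feature_extraction.text import CountVectorizer

def top_bigrams(texts, n=20):
    # Count every two-word phrase across the documents and return the n most frequent.
    vectorizer = CountVectorizer(ngram_range=(2, 2), stop_words="english")
    counts = vectorizer.fit_transform(texts).sum(axis=0).A1
    ranked = sorted(zip(vectorizer.get_feature_names_out(), counts),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

liberal_top = top_bigrams(posts.loc[posts["LeftRight_binary"] == "liberal", "postText"].dropna())
conservative_top = top_bigrams(posts.loc[posts["LeftRight_binary"] == "conservative", "postText"].dropna())
```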
Findings:
- Politicians
- Conservatives had a higher count of 'hilary clinton' than 'donald trump', and Liberals vice versa.
- This supports the idea that users more frequently discuss topics they are disparaging.
- Policy topics
- Much more frequent in the Liberals' top list than the Conservatives' top list.
- Conservatives - 'tax cut'
- Liberals - 'tax cut', 'health care', 'wall street', 'foreign polici'
- 'Gibberish Nonsense'
- Emotive or sarcastic phrases expressing the style of users
- Indicates that more sophisticated sentiment analysis will be necessary down the line.
Analyzing Sentiment
Using the Python package TextBlob, I ran a sentiment analysis on the text of each of the 250k+ scraped posts.
I first applied standard NLP cleaning to the postText data, removing stopwords and stemming the text. I then fed the simplified outputs into the TextBlob sentiment analyzer.
The default analyzer is lexicon-based, built on the pattern library; TextBlob also offers a Naive Bayes classifier trained on NLTK's movie-review corpus.
TextBlob's sentiment analysis outputs two scores:
- Subjectivity
- 0 (highly objective) to 1 (highly subjective)
- Polarity
- -1 (highly negative attitude) to +1 (highly positive attitude)
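A sketch of the cleaning and scoring step; the default analyzer exposes both scores on the `.sentiment` property:

```python
import pandas as pd
from textblob import TextBlob
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.tokenize import word_tokenize
# nltk.download('punkt') and nltk.download('stopwords') are required once.

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def clean(text):
    # Lowercase, drop stopwords and punctuation, stem what remains.
    tokens = word_tokenize(str(text).lower())
    return " ".join(stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words)

def sentiment_scores(text):
    sentiment = TextBlob(clean(text)).sentiment
    return pd.Series({"polarity": sentiment.polarity, "subjectivity": sentiment.subjectivity})

posts[["polarity", "subjectivity"]] = posts["postText"].apply(sentiment_scores)
```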
Political Conversations
I next look to identify partisan differences in sentiment regarding known polarized political keywords.
For each keyword, I segmented out all posts containing it, along with the # of self-identifying users in each partisan bucket.
| keywords | # posts | # users - liberal | # users - conservative |
| --- | --- | --- | --- |
| 'trump' | 13,731 | 25 | 28 |
| 'hilary'; 'clinton' | 3,425 | 18 | 22 |
| 'obama' | 2,454 | 24 | 15 |
| 'russia' | 1,240 | 11 | 18 |
Since the posts-per-user distribution is so skewed, I chose the distribution of each user's mean sentiment values as the unit of study.
This prevents the scores of a few high-posting users from drowning out other users and thus keeps the analysis focused on the conversation occurring between users.
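A sketch of the per-keyword, per-user aggregation, continuing from the columns built in the earlier sketches; the keyword lists mirror the table above:

```python
import seaborn as sns

keyword_groups = {"trump": ["trump"], "hilary/clinton": ["hilary", "clinton"],
                  "obama": ["obama"], "russia": ["russia"]}

per_user = {}
for label, terms in keyword_groups.items():
    mask = posts["postText"].str.lower().str.contains("|".join(terms), na=False)
    # Mean polarity/subjectivity per user, so prolific posters count only once.
    per_user[label] = (posts[mask]
                       .groupby(["user", "LeftRight_binary"])[["polarity", "subjectivity"]]
                       .mean()
                       .reset_index())

# Box plots of the per-user means, split by partisan bucket.
sns.boxplot(data=per_user["trump"], x="LeftRight_binary", y="subjectivity")
```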
Subjectivity and Polarity box-plot distributions for each political keyword:
Findings:
- Subjectivity
- Each side tends to have lower subjectivity toward political keywords they support, and higher subjectivity toward political keywords they oppose.
- Possibly users are being more emotional when railing against the other side, while being more rational when defending their own side.
- The "Russia" keyword has the largest difference between the two views.
- Do lower subjectivity ratings among liberals suggest they structure their arguments more objectively than conservatives? What does this signify?
- Polarity
- Each side is more positive toward political figures they support!
- On 'Russia', Conservatives have a higher mean and a wider distribution than Liberals.
- Across all keywords, means are centered around 0.
- Further Work: Limit text analyzed to single sentence containing keyword rather than the entire post.
Social Network Analysis
To what extent are political discussions on the forum crossing the ideological divide?
To find out, I implemented a network analysis via the Python package NetworkX.
A network analysis is made up of nodes, in this case users, linked by edges, in this case posting on another user's thread.
I color my nodes by the partisan identification of the users (grey = no partisan identification):
The network visually consists of:
- one main cluster
- inner core layer - users all highly connected to one-another
- outer layer - users connected to one or a few users in the inner region
- a second minuscule cluster of 3 users connected together via only one user.
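A sketch of how such a graph might be assembled from the scraped records, assuming a 'threadFirstPoster' column for the user who started each thread:

```python
import networkx as nx
import pandas as pd

G = nx.Graph()
# One weighted edge per (poster, thread starter) pair; repeat interactions bump the weight.
for poster, starter in zip(posts["user"], posts["threadFirstPoster"]):
    if pd.isna(poster) or pd.isna(starter) or poster == starter:
        continue
    if G.has_edge(poster, starter):
        G[poster][starter]["weight"] += 1
    else:
        G.add_edge(poster, starter, weight=1)
```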
Next, I used the best_partition() function of the community package (python-louvain) to partition the nodes into the grouping that maximizes modularity, the ratio of connection density within each group relative to the connection density between groups, via the Louvain heuristic. I also weighted the partition by the total number of posts by each user. In this case we get back 5 segments.
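A sketch of the partitioning step with the python-louvain community package, using the post-count edge weights built above:

```python
import community  # the python-louvain package
from collections import Counter

# Louvain partition that maximizes modularity; edge weights bias the grouping toward heavy posters.
partition = community.best_partition(G, weight="weight")        # {user: segment id}
print(community.modularity(partition, G, weight="weight"))      # quality of the resulting partition
print(Counter(partition.values()))                              # number of users per segment
```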
Now to resolve my question, I analyze the makeup of each of these 5 segments:
With the caveat that partisan identity data is available for only about half of all users, analyzing the breakdown among those users (the difference in count between 'LeftRight_binary' and 'conservative_binary' for each row) shows that the communities within the main cluster are all fairly evenly balanced between conservative and liberal users.
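A sketch of that breakdown, cross-tabulating each segment against the consolidated partisan buckets (users without a partisan identification drop out of the groupby):

```python
# One row per user with their consolidated identity and Louvain segment.
users = posts.drop_duplicates("user")[["user", "LeftRight_binary"]]
users["segment"] = users["user"].map(partition)

# Count of liberal vs. conservative users in each segment.
breakdown = users.groupby(["segment", "LeftRight_binary"]).size().unstack(fill_value=0)
print(breakdown)
```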
Thus, there is evidence that users are socializing across the partisan divide and the forum is serving as a space for political debate.
Future Work
- Build out more sophisticated NLP tagging and sentiment analysis: entities, key phrases and multi-dimensional sentiment score.
- Create a Naive Bayes classifier to predict whether posts were written by a 'Liberal' or 'Conservative' user based on their text.
Check out my code for this project via my Jupyter notebook Gist on Github.