Data Analysis on Characterization of Tweets
The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Background and Motivation
For any company that provides a product or service, understanding its audience is essential. Listening to what that audience is saying, whether positive or negative, can guide companies toward changes that keep their customers happy. There are a number of ways companies can get feedback from their audience, including focus groups, surveys, and customer reviews.
Another source of feedback is the popular social networking site Twitter, a platform with over 100 million users where people can freely express their opinions on a multitude of topics.
For companies, being able to analyze the overall sentiment and understand what people are tweeting about in regard to their service can lead to a variety of insights. With this in mind, I decided to build both a topic model and a sentiment classification model and use them to analyze tweets returned by the Twitter API.
Building the Sentiment Classification Model
Dataset
The dataset I used to build the sentiment classifier came from Kaggle.com. It contained 1.6 million tweets from 2009, each labeled as positive or negative.
Transfer Learning and BERT
Transfer learning is the idea of taking knowledge from a task that was already learned and using that knowledge as a starting point for a new task. I utilized this concept to obtain knowledge from a model known as BERTweet, which had already been trained to model the language of English tweets.
BERTweet was trained on 850 million English tweets, the majority of which were streamed from January 2012 to August 2019. There are other pre-trained language models, such as BERT, the model BERTweet is based on, that were trained on other text corpora like Wikipedia. Since tweets can contain a lot of slang and are often not grammatically well-formed, I chose the BERTweet language model because it would likely give more accurate representations of the text I was trying to classify.
Data Preparation
The labels in the dataset contained two values, where 0 represented tweets with a negative sentiment and 4 represented tweets with a positive sentiment. In order to get these labels into a format suitable for TensorFlow, I simply converted all labels with a value of 4 to 1, where 1 now represented tweets with a positive sentiment.
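The relabeling step is a simple mapping; a minimal sketch:

```python
def relabel(labels):
    """Convert the dataset's {0: negative, 4: positive} labels
    to the {0, 1} format expected for binary classification."""
    return [1 if label == 4 else 0 for label in labels]
```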
The natural language processing library Hugging Face provides the pre-trained BERTweet model along with a data preprocessor designed specifically for it. The preprocessor automatically splits tweets into tokens and converts user mentions and URL links into @USER and HTTPURL, respectively.
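The normalization step can be illustrated with a minimal regex sketch (BERTweet's actual preprocessor does considerably more, including tokenization; the replacement tokens below follow the BERTweet convention):

```python
import re

def normalize_tweet(text):
    # Replace URL links with HTTPURL and user mentions with @USER,
    # mirroring the normalization BERTweet's preprocessor applies.
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    text = re.sub(r"@\w+", "@USER", text)
    return text
```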
Model Building
A summary of the deep learning model I built is shown below.
The first hidden layer in the network is the embedding layer from the BERTweet model. This embedding layer converts input tokens into embedding vectors that capture the contextual meaning of tokens in a tweet. The output of the model is a single value that represents the probability of a tweet being positive.
The weights from the embedding layer are not updated during the training process. These weights are the "pre-trained" or "already-learned" knowledge that we are transferring over from the BERTweet model to solve this problem. While these weights will not change, the weights in the hidden layers that follow the embedding layer will be updated.
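As a rough sketch of the trainable part of this architecture (the frozen BERTweet embedding layer is omitted; a 768-dimensional pooled embedding, the size BERTweet produces, stands in as the input, and the layer sizes here are illustrative, not the exact ones used in the project):

```python
import tensorflow as tf

EMBED_DIM = 768  # size of BERTweet's pooled output embedding

# Hypothetical classification head: the frozen embedding layer would
# feed its pooled output into these trainable dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(EMBED_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(tweet is positive)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```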
Model Results
The model, applied to the test set, produced an accuracy of around 84%. Using the model on individual texts yields results like the following.
The first two example tweets in the table above are fairly straightforward and likely easy for most sentiment classification models to classify correctly. The last two example tweets, however, may be more difficult to classify.
A major advantage of BERT models is that a word's embedding is generated based on its context in a sentence. An embedding generated by BERT takes into account the position of the word in a tweet as well as the words that precede and follow it. Thus, the same word can have a different embedding depending on how it is used in a sentence. For the last two example tweets, even though they contain the exact same words, they are given different classifications.
While the last two example tweets can actually be classified as both positive and negative in different situations, the model classifies the tweet as negative when the word "hate" comes before "love", and classifies the tweet as positive when "love" comes before "hate". This is an indication to us that the model is taking the order of words into account when making a classification decision.
Using this model on tweets returned by the Twitter API, we can gauge the overall sentiment surrounding a specific subject. For example, if we search for 1000 tweets related to the term "Olympics", we get back the counts of positive and negative tweets shown below.
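Tallying the sentiment over a batch of tweets is then a matter of thresholding the model's predicted probabilities. A small helper sketch (the 0.5 cutoff is an assumption):

```python
def sentiment_counts(probs, threshold=0.5):
    """Count positive and negative tweets from predicted P(positive) values."""
    positive = sum(1 for p in probs if p >= threshold)
    return {"positive": positive, "negative": len(probs) - positive}
```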
Building the Topic Model
In addition to using BERT for the sentiment classifier, I also utilized BERT to build a topic model. To do so, I followed the methods discussed by Maarten Grootendorst in his article on topic modeling with BERT (https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6).
The model I used is known as Sentence-BERT, a modification of a pre-trained BERT model that creates semantically meaningful sentence embeddings. I used Sentence-BERT to convert tweets into vectors that captured their semantic meaning. Tweets that are similar to each other would correspond to vectors that are close to one another in vector space. If I could cluster vectors close to one another in vector space, then I could use these clusters to define topics.
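"Close to one another in vector space" is typically measured with cosine similarity; a minimal numpy sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors
    (1 = same direction, 0 = orthogonal)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```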
One problem, however, is that the embeddings returned by the sentence transformer were very high-dimensional, each being 768 dimensions in length. Because many clustering algorithms do not work well with high-dimensional data, I had to reduce the dimensionality of these embedding vectors.
UMAP
To reduce the dimension of these embedding vectors, I used a dimensionality-reduction algorithm known as UMAP. At a very high-level, UMAP creates a high-dimensional graph representation of the data and converts that graph representation into a low-dimensional representation, all while trying to preserve the structure of the high-dimensional graph as closely as possible.
There are two hyper-parameters that UMAP uses to control the balance between the local and global structure of the data. The first parameter, n_neighbors, is used when the high-dimensional graph is constructed. When this parameter is lower, the construction of the high-dimensional graph focuses more on the local structure, or finer details, of the data. This leads to more clusters and therefore more topics. When this parameter is higher, the construction of the high-dimensional graph focuses more on the global structure of the data, resulting in fewer clusters and fewer topics.
The other parameter, min_dist, represents the minimum distance between points in the low-dimensional space. Lower values lead to more tightly-packed embeddings, while larger values lead to more loosely-packed embeddings. After applying UMAP, every vector representing a tweet was reduced from 768 dimensions to 5. Since the data was now represented in a lower dimension, a clustering algorithm could be used to group these vectors into clusters.
HDBSCAN
HDBSCAN is a density-based clustering algorithm. At a high level, HDBSCAN finds regions of high density in the data and clusters the points in those regions. One major advantage of HDBSCAN is that it does not force points into clusters. We can imagine that many tweets will not fall within a specific topic, so rather than forcing them into a topic, HDBSCAN labels them as outliers.
Results
Applying the topic model to 1000 tweets for the search term "Olympics" produces the following table, which shows ten of the 32 generated topics and the number of tweets in each. Topic -1 contains all the tweets labeled as outliers.
A plot of all tweets, colored by the topic to which they belong, is shown below.
The table below shows some of the tweets that fall under topic 24.
Looking at the tweets above, we can see that tweets that fall under this topic are related to the British weightlifter Emily Campbell. We can use our sentiment classifier to get the overall sentiment for tweets that fall under this topic. Most tweets under this topic appear to be positive as shown below.
Since the numbered topic labels do not tell us anything about a topic's content, we can utilize scikit-learn's TF-IDF vectorizer to obtain keywords for each topic. The TF-IDF vectorizer computes an importance score for each token in a document.
Words that frequently appear in a document will be scored higher, but if those words appear frequently across all documents in a collection, they will be scored lower. For our case, a word like "Olympics" will have a low importance score because it appears in all of the returned tweets.
Using this idea, if we consider all tweets in a topic as being one document, we can use the TF-IDF vectorizer across all of these "topic" documents to get the most important words for each topic. Applying this idea to the topics we generated for the term "Olympics", we get the most important words for topic 24 shown below.
Conclusion
Topic modeling and classifying the sentiment of tweets can give information regarding what is being said about a topic and the public sentiment around that topic. While the examples above just use the search term "Olympics" for illustration purposes, these models can be used for many other applications as well. As already mentioned in the introduction, one such application where these models can be useful is in customer analysis.
References
Grootendorst, Maarten. “Topic Modeling with Bert.” Medium, Towards Data Science, 6 Oct. 2020, towardsdatascience.com/topic-modeling-with-bert-779f7db187e6.
Briggs, James. “Build a Natural Language Classifier with Bert and Tensorflow.” Medium, Better Programming, 2 June 2021, betterprogramming.pub/build-a-natural-language-classifier-with-bert-and-tensorflow-4770d4442d41.
Djaja, Ferry. “Multi Class Text Classification with Keras and LSTM.” Medium, Medium, 9 June 2020, djajafer.medium.com/multi-class-text-classification-with-keras-and-lstm-4c5525bef592.