Identifying "Fake News" With NLP
Introduction
What is fake news? We’ve all heard of it, but it is not always easy to identify. Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation. It has traditionally been spread through print and broadcast media, but with the rise of social media it can now be disseminated virally. As a result, large technology companies have begun to take steps to address the trend. For example, Google has adjusted its news rankings to prioritize well-known sites and has banned sites with a history of spreading fake news, and Facebook has integrated fact-checking organizations into its platform.
How significant is this issue? BuzzFeed analyzed the 20 most-shared fake and real news articles leading up to and relating to the 2016 presidential election, and found that the top fake stories generated more engagement on Facebook than the top real stories.
Workflow
Our goal for this project was to find a way to utilize Natural Language Processing (NLP) to identify and classify fake articles. We gathered our data, preprocessed the text, and converted our articles into features for use in both supervised and unsupervised models.
Data Collection
We knew from the start that categorizing an article as “fake news” could be somewhat of a gray area. For that reason, we utilized an existing Kaggle dataset that had already collected and classified fake news. The fake articles were identified using B.S. Detector, a browser extension that searches all links on a page for references to unreliable sources and checks them against a third-party list of domains. Since these fake articles were gathered during November 2016 from webhose.io, a news aggregation site, we collected our real news data from the same site and timeframe. To ensure we did not include articles from questionable sources in that dataset, we manually identified and filtered on a list of reliable organizations (e.g., The New York Times, The Washington Post, Forbes). In the end, our final dataset included over 23,000 real articles and 11,000 fake articles.
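As a rough illustration, the two sources can be combined into a single labeled dataset along the lines of the sketch below; the file names and column names are placeholders rather than the exact exports we used.

```python
import pandas as pd

# Placeholder file names; the actual Kaggle and webhose.io exports may be named differently.
fake = pd.read_csv("fake_news_kaggle.csv")        # articles flagged by B.S. Detector
real = pd.read_csv("webhose_november_2016.csv")   # webhose.io crawl from the same timeframe

# Keep only real-news articles from a manually curated list of reliable outlets.
reliable_sites = ["nytimes.com", "washingtonpost.com", "forbes.com"]
real = real[real["site_url"].isin(reliable_sites)]

fake["label"] = 1   # 1 = fake
real["label"] = 0   # 0 = real
articles = pd.concat([fake, real], ignore_index=True)
print(articles["label"].value_counts())
```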
Preprocessing the Text
The performance of a text classification model is highly dependent on the words in a corpus and the features created from those words. Common words (otherwise known as stopwords) and other “noisy” elements increase feature dimensionality but do not usually help to differentiate between documents. We used the spaCy and gensim packages in Python to tokenize our text and carry out our preprocessing steps: removing stopwords and other noisy elements, lemmatizing each token, and combining nearby words into n-grams.
These steps helped reduce the size of our corpus and add context prior to feature conversion. In particular, lemmatization converts each word to its root form, collapsing different inflections of the same word into a single representation. N-grams combine nearby words into single features, which helps give context to words that may have little meaning on their own. For our project, we tested both bigrams and trigrams.
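The sketch below illustrates these steps with spaCy and gensim; the spaCy model name, the phrase-detection thresholds, and the "text" column name are illustrative stand-ins rather than our exact settings.

```python
import spacy
from gensim.models.phrases import Phrases, Phraser

nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])  # illustrative model choice

def tokenize(text):
    """Lowercase the text, drop stopwords/punctuation, and lemmatize each token."""
    doc = nlp(text.lower())
    return [tok.lemma_ for tok in doc
            if not tok.is_stop and not tok.is_punct and not tok.is_space]

# "text" is assumed to be the column holding the raw article body.
docs = [tokenize(text) for text in articles["text"]]

# Combine frequently co-occurring words into bigram, then trigram, features.
bigram = Phraser(Phrases(docs, min_count=5, threshold=10))
trigram = Phraser(Phrases(bigram[docs], min_count=5, threshold=10))
docs = [trigram[bigram[doc]] for doc in docs]
```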
Converting Text to Features
After it has been preprocessed, text must be converted into numerical features before it can be analyzed and modeled. Two common techniques, described below, are TF-IDF and Word2Vec.
Term Frequency - Inverse Document Frequency (TF-IDF)
TF-IDF is a statistic that aims to reflect how important a word is to a document in a corpus. It increases proportionally with the number of times a word appears in a document, but is offset by its frequency in the overall corpus. While TF-IDF is a good basic metric for extracting descriptive terms, it does not take into consideration a word’s position or context.
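As a quick illustration, scikit-learn’s TfidfVectorizer can produce this representation from the preprocessed text; the parameter values below are placeholders, not the settings we ultimately tuned.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Rejoin the preprocessed token lists into strings for the vectorizer.
texts = [" ".join(doc) for doc in docs]

# tf-idf(t, d) is roughly tf(t, d) * log(N / df(t)), with scikit-learn's smoothing applied.
vectorizer = TfidfVectorizer(max_features=50000, min_df=5)
X_tfidf = vectorizer.fit_transform(texts)   # sparse matrix: documents x terms
print(X_tfidf.shape)
```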
Word2Vec
The Word2Vec technique converts text to features while preserving the relationships between words in a corpus. Word2Vec is not a single algorithm but a pair of related architectures: CBOW (continuous bag of words) and the skip-gram model. Both are shallow neural networks that map words to words: CBOW predicts a target word from its surrounding context, while skip-gram predicts the surrounding context from a target word. In both cases, the learned weights serve as the word vector representations.
The quality of word vectors increases significantly with the amount of data used to train them, so we used pre-trained vectors trained on the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. We averaged the word vectors within each article to get a single vector representation for every document.
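A minimal sketch of this averaging step with gensim is shown below, assuming the pre-trained Google News binary has been downloaded locally.

```python
import numpy as np
from gensim.models import KeyedVectors

# Pre-trained 300-dimensional Google News vectors (the binary file must be downloaded separately).
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def document_vector(tokens, dim=300):
    """Average the vectors of the tokens that appear in the Word2Vec vocabulary."""
    vectors = [w2v[tok] for tok in tokens if tok in w2v]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

X_w2v = np.vstack([document_vector(doc) for doc in docs])
```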
Classification
Scoring with Imbalanced Data
Since our data was not split evenly across both classes, we chose metrics that would not overstate our results when evaluating our models. A confusion matrix is useful for gauging outcomes in classification problems. Since our goal was to recognize fake news articles, the ones we correctly classified as fake are our True Positives, and the fake articles we incorrectly classified as real are our False Negatives (Type II error). Real articles that we correctly classified are our True Negatives and incorrectly classified real articles are our False Positives (Type I error).
To build an effective model, our goal was to minimize both the False Negatives and False Positives. The F1 score helps strike a balance between precision (fake articles classified correctly over the total number of articles predicted as fake) and sensitivity/recall (the proportion of fake articles classified correctly). For that reason, we used the F1 metric as our optimization parameter when using cross-validation to tune our hyperparameters.
Finally, we used the ROC curve and its AUC score to visualize and compare our model results. The ROC curve plots the True Positive Rate (sensitivity/recall) on the y-axis against the False Positive Rate (the proportion of real news articles incorrectly classified as fake) on the x-axis.
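All three scores are straightforward to compute with scikit-learn once a model has produced predictions; in the sketch below, y_test, y_pred, and y_prob are assumed to come from a held-out test set and a fitted classifier.

```python
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

# Labels: 1 = fake, 0 = real. y_prob is the predicted probability of the fake class.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"True Positives (fake caught): {tp}   False Negatives (fake missed): {fn}")
print(f"True Negatives (real kept):   {tn}   False Positives (real flagged): {fp}")

print("F1 score:", f1_score(y_test, y_pred))
print("ROC AUC: ", roc_auc_score(y_test, y_prob))
```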
Model Results – TF-IDF
We utilized cross-validation and a grid search to find the best parameters for the TF-IDF vectorizer and each individual model. The Logistic Regression and Support Vector Machine models produced the best results when using TF-IDF to convert our text to features. However, the Logistic Regression model was much faster to train, which matters from a time-complexity standpoint when many candidate models must be evaluated. As seen in the ROC graph below, the Logistic Regression model had high sensitivity (it identified fake news articles quite well) and a low False Positive rate (it did not misclassify a large portion of real news as fake).
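A sketch of this setup as a scikit-learn pipeline is shown below; the search grid shown is illustrative rather than the exact grid we tuned.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# texts and the fake/real labels come from the earlier sketches.
X_train, X_test, y_train, y_test = train_test_split(
    texts, articles["label"], test_size=0.2, stratify=articles["label"], random_state=42)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])

param_grid = {
    "tfidf__min_df": [2, 5],
    "tfidf__max_features": [20000, 50000],
    "clf__C": [0.1, 1, 10],
}

# Optimize the F1 score, as discussed in the scoring section above.
search = GridSearchCV(pipeline, param_grid, scoring="f1", cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```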
We also trained each model using Word2Vec to convert our text to features, but the results were worse across all model types.
Topic Modeling
Since our articles covered a wide range of topics, we utilized unsupervised learning to better understand our data. Topic modeling allows us to describe and summarize the documents in a corpus without having to read each individual article. It works by finding patterns in the co-occurrence of words using the frequency of words in each document.
Latent Dirichlet Allocation (LDA) is one of the most popular models used in NLP to describe documents. LDA assumes that documents are produced from a mixture of topics and topics are generated from a mixture of words associated with that topic. Additionally, LDA assumes that these mixtures follow a Dirichlet probability distribution. This means that for each document, we can assume there should only be a handful of topics covered and that for each topic, only a handful of words are associated with that topic. For example, in an article about sports, we would not expect to find many different topics covered.
The graphic above illustrates this process. For each document, the model will select a topic from a distribution of topics and then a word from a distribution based on the topic. The model will initialize randomly and update topics and words as it iterates through every document to find a certain number of topics and associated words. The hyper-parameters alpha and beta can be adjusted to control the topic distribution per document and word distribution per topic, respectively. A high alpha means that every document is likely to contain a mixture of most topics (documents will appear more similar to one another) and a high beta means that each topic is likely to contain a mixture of most words (topics will appear more similar to one another).
LDA is completely unsupervised, but the user must provide the model with a specific number of topics to describe the entire set of documents. For our dataset, we chose 20 topics. As shown below, topics are not named, but we can get a better understanding of each topic by looking at the words associated with each.
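A minimal gensim sketch of this setup is shown below; the dictionary filtering thresholds and number of passes are illustrative, and the alpha and eta arguments correspond to the alpha and beta hyperparameters described above.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# docs is the list of preprocessed token lists from the earlier sketch.
dictionary = Dictionary(docs)
dictionary.filter_extremes(no_below=5, no_above=0.5)   # drop very rare and very common terms
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(
    corpus=corpus,
    id2word=dictionary,
    num_topics=20,       # the number of topics we chose for this dataset
    alpha="auto",        # document-topic prior (the "alpha" discussed above)
    eta="auto",          # topic-word prior (the "beta" discussed above)
    passes=10,
    random_state=42,
)

# Inspect the highest-weighted words for a few topics.
for topic_id, words in lda.print_topics(num_topics=5, num_words=8):
    print(topic_id, words)
```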
Based on the words associated with Topic 2, it seems to be related to the election. The distance between topics directly relates to how similar topics are to one another. As we can see, Topic 15 is far from Topic 2 and likely relates to the arts.
Stacked Model Results
For our final model, we generated a stacked model using the predictions from our original seven models. Stacked models often outperform individual models because the meta-learner can learn where each base model performs well and where it performs poorly. We also added our topic modeling results as new features, along with the length of each article and whether it listed an author.
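Since the seven base models are not listed here, the sketch below uses a few representative classifiers with scikit-learn’s StackingClassifier, and doc_topics, article_lengths, and has_author stand in for the additional features described above; it illustrates the idea rather than our exact stacked model.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# doc_topics, article_lengths, and has_author are assumed to be arrays built earlier
# (LDA topic proportions, word counts, and a 0/1 author flag, respectively).
extra_features = np.column_stack([doc_topics, article_lengths, has_author])
X_full = hstack([X_tfidf, csr_matrix(extra_features)]).tocsr()
y = articles["label"].values

stack = StackingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,   # base-model predictions are generated out-of-fold before the meta-learner sees them
)
print(cross_val_score(stack, X_full, y, scoring="f1", cv=5).mean())
```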
Conclusion
The rise of fake news has become a global problem that even major tech companies like Facebook and Google are struggling to solve. It can be difficult to determine whether a text is factual without additional context and human judgement. Although our stacked model performed well on our test data, it would likely not perform as well on new data from a different time period and topic distribution. The chart below displays the “most fake” words in our dataset, determined by looking at words that were proportionally used much more often in fake news than real news.
Writing style is also crucial to separating real news from fake. With more time, we would revisit our text preprocessing strategy to retain some of the stylistic elements of our articles (e.g., capitalization, punctuation) and improve performance.
Link to our GitHub.