Identifying "Fake News" With NLP

Posted on Sep 18, 2017


What is fake news? We’ve all heard of it, but it is not always easy to identify. Fake news is a type of yellow journalism or propaganda that consists of purposeful misinformation. It has traditionally been spread through print and broadcast media, but with the rise of social media, it can now be disseminated virally. As a result, large technology companies have begun to take steps to address this trend. For example, Google has adjusted its news rankings to prioritize well-known sites and has banned sites with a history of spreading fake news. Facebook has integrated fact-checking organizations into its platform.

How significant is this issue? Buzzfeed analyzed the 20 most-shared fake and real news articles leading up to and relating to the 2016 presidential election. They found that the top fake stories had more engagement on Facebook than the top real stories.


Our goal for this project was to find a way to utilize Natural Language Processing (NLP) to identify and classify fake articles. We gathered our data, preprocessed the text, and converted our articles into features for use in both supervised and unsupervised models.

Data Collection

We knew from the start that categorizing an article as “fake news” could be somewhat of a gray area. For that reason, we utilized an existing Kaggle dataset that had already collected and classified fake news. The articles were flagged using the B.S. Detector, a browser extension that searches all links on a page for references to unreliable sources and checks them against a third-party list of domains. Since these fake articles were gathered during November 2016 from a news aggregation site, we collected our real news data from that same site and timeframe. To ensure we did not include articles from questionable sources in that dataset, we manually identified and filtered against a list of reliable organizations (e.g., The New York Times, Washington Post, Forbes). In the end, our final dataset included over 23,000 real articles and 11,000 fake articles.

Preprocessing the Text

The performance of a text classification model is highly dependent on the words in a corpus and the features created from those words. Common words (otherwise known as stopwords) and other “noisy” elements increase feature dimensionality but do not usually help to differentiate between documents. We used the spaCy and gensim packages in Python to tokenize our text and perform preprocessing steps such as removing stopwords, lemmatizing words, and creating n-grams.

These steps helped reduce the size of our corpus and add context prior to feature conversion. In particular, lemmatization converts each word to its root form, turning different words into a single representation. N-grams combine nearby words into single features, which helps give context to words that may have little meaning on their own. For our project, we tested both bigrams and trigrams.
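As a toy illustration of these steps (our actual pipeline used spaCy and gensim; the stopword list and lemma mapping below are small invented stand-ins):

```python
# Small stand-ins for spaCy's stopword list and lemmatizer.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in"}
LEMMAS = {"voters": "voter", "said": "say", "emails": "email"}

def preprocess(text):
    """Lowercase, tokenize, drop stopwords, and lemmatize."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    tokens = [t for t in tokens if t not in STOPWORDS]
    return [LEMMAS.get(t, t) for t in tokens]

def bigrams(tokens):
    """Combine adjacent tokens into single bigram features."""
    return [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]

tokens = preprocess("The voters said the emails mattered")
print(tokens)           # ['voter', 'say', 'email', 'mattered']
print(bigrams(tokens))  # ['voter_say', 'say_email', 'email_mattered']
```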

Converting Text to Features

To analyze and model text after it has been preprocessed, it must first be converted into numeric features. We explored two techniques: TF-IDF and Word2Vec.

Term Frequency - Inverse Document Frequency (TF-IDF)

TF-IDF is a statistic that aims to reflect how important a word is to a document in a corpus. It increases proportionally with the number of times a word appears in a document, but is offset by the word’s frequency in the overall corpus. While TF-IDF is a good basic metric for extracting descriptive terms, it does not take into consideration a word’s position or context.

Using TF-IDF, we found the relative importance of words in both our fake news and real news datasets. There was significant overlap between the two: “trump” was the most important word in both types of articles, and words like “clinton”, “fbi”, and “email” also ranked highly.
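The statistic itself can be computed by hand on a toy corpus (this sketch uses the plain logarithmic formulation; library implementations apply smoothing and normalization variants):

```python
import math

def tf_idf(term, doc, corpus):
    """Term frequency in the document, damped by how many documents contain the term."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

corpus = [
    ["trump", "clinton", "email", "fbi"],
    ["trump", "rally", "crowd", "speech"],
    ["market", "stocks", "trump", "economy"],
]
# A word that appears in every document gets an idf of zero,
# while "email" is distinctive to the first document.
print(tf_idf("trump", corpus[0], corpus))            # 0.0
print(round(tf_idf("email", corpus[0], corpus), 3))  # 0.275
```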


Word2Vec

The Word2Vec technique converts text to features while maintaining the original relationships between words in a corpus. Word2Vec is not a single algorithm but a combination of two techniques – CBOW (Continuous Bag of Words) and the skip-gram model. Both are shallow neural networks that map one or more input words to a target word, and both learn weights that act as word vector representations.

The quality of word vectors increases significantly with the amount of data used to train them, so we used pre-trained vectors trained on the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. We averaged the word vectors within each article to get a single vector representation for every document.
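The averaging step can be sketched as follows; the toy 3-dimensional vectors below stand in for the 300-dimensional pretrained Google News embeddings:

```python
# Toy 3-dimensional vectors standing in for the 300-dimensional
# pretrained Google News embeddings.
VECTORS = {
    "trump":   [0.9, 0.1, 0.0],
    "clinton": [0.8, 0.2, 0.1],
    "email":   [0.1, 0.9, 0.3],
}

def doc_vector(tokens):
    """Average the word vectors of all in-vocabulary tokens
    into a single document vector."""
    vecs = [VECTORS[t] for t in tokens if t in VECTORS]
    n = len(vecs)
    return [sum(dim) / n for dim in zip(*vecs)]

# Out-of-vocabulary tokens are simply skipped.
vec = doc_vector(["trump", "clinton", "unknown"])
print([round(x, 2) for x in vec])  # [0.85, 0.15, 0.05]
```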


Scoring with Imbalanced Data

Since our data was not split evenly across both classes, we chose metrics that would not overstate our results when evaluating our models. A confusion matrix is useful for gauging outcomes in classification problems. Since our goal was to recognize fake news articles, the ones we correctly classified as fake are our True Positives, and the fake articles we incorrectly classified as real are our False Negatives (Type II error). Real articles that we correctly classified are our True Negatives and incorrectly classified real articles are our False Positives (Type I error).

To build an effective model, our goal was to minimize both the False Negatives and False Positives. The F1 score helps strike a balance between precision (fake articles classified correctly over the total number of articles predicted as fake) and sensitivity/recall (the proportion of fake articles classified correctly). For that reason, we used the F1 metric as our optimization parameter when using cross-validation to tune our hyperparameters.
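These metrics follow directly from the confusion-matrix counts; a quick sketch (the counts here are made up for illustration):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)  # predicted-fake articles that really are fake
    recall = tp / (tp + fn)     # fake articles we actually caught
    return 2 * precision * recall / (precision + recall)

# e.g. 90 fake articles caught, 10 real articles flagged as fake,
# 30 fake articles missed: precision 0.9, recall 0.75.
print(round(f1_score(tp=90, fp=10, fn=30), 3))  # 0.818
```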

Finally, we used ROC curves and the AUC score to visualize and summarize our model results. The ROC curve plots the True Positive rate (sensitivity/recall) on the y-axis and the False Positive rate (real news articles that we classified incorrectly) on the x-axis.
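One way to build intuition for the AUC is its rank interpretation: it equals the probability that a randomly chosen fake article receives a higher model score than a randomly chosen real one. A toy sketch (the scores are invented):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive (fake)
    example scores higher than a randomly chosen negative (real) one,
    counting ties as half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

fake_scores = [0.9, 0.8, 0.4]  # model scores for fake articles
real_scores = [0.7, 0.3, 0.2]  # model scores for real articles
print(round(auc(fake_scores, real_scores), 3))  # 0.889
```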

Model Results – TF-IDF

We utilized cross-validation and a grid search to find the best parameters for the TF-IDF algorithm and each individual model. The Logistic Regression and Support Vector Machine models produced the best results using TF-IDF features. However, the Logistic Regression model was much faster to train, an important consideration when fitting and evaluating many models. As seen in the ROC graph below, the Logistic Regression model had a high sensitivity (it predicted fake news articles quite well) and a low False Positive rate (it did not misclassify a large portion of real news as fake).

Model Results – Word2Vec

We also trained each model using Word2Vec to convert our text to features, but the results were worse across all model types.

Topic Modeling

Since our articles covered a wide range of topics, we utilized unsupervised learning to better understand our data. Topic modeling allows us to describe and summarize the documents in a corpus without having to read each individual article. It works by finding patterns in the co-occurrence of words using the frequency of words in each document.

Latent Dirichlet Allocation (LDA) is one of the most popular models used in NLP to describe documents. LDA assumes that each document is produced from a mixture of topics, and that each topic is in turn a mixture of associated words. Additionally, LDA assumes that these mixtures follow a Dirichlet probability distribution. This means that for each document, we can assume only a handful of topics are covered, and that for each topic, only a handful of words carry most of the weight. For example, in an article about sports, we would not expect to find many different topics covered.

The graphic above illustrates this process. For each document, the model will select a topic from a distribution of topics and then a word from a distribution based on the topic. The model will initialize randomly and update topics and words as it iterates through every document to find a certain number of topics and associated words. The hyperparameters alpha and beta can be adjusted to control the topic distribution per document and word distribution per topic, respectively. A high alpha means that every document is likely to contain a mixture of most topics (documents will appear more similar to one another) and a high beta means that each topic is likely to contain a mixture of most words (topics will appear more similar to one another).
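The generative process LDA assumes can be sketched with toy distributions (the topic names and word probabilities below are invented for illustration; real LDA infers these from the corpus):

```python
import random

# Toy topic-word distributions standing in for what LDA would learn.
TOPICS = {
    "election": {"trump": 0.5, "clinton": 0.3, "vote": 0.2},
    "arts":     {"film": 0.4, "music": 0.4, "gallery": 0.2},
}

def generate_document(topic_mix, n_words, seed=0):
    """Generate a document the way LDA assumes one is produced:
    pick a topic for each word slot, then a word from that topic."""
    rng = random.Random(seed)
    topics, weights = zip(*topic_mix.items())
    doc = []
    for _ in range(n_words):
        topic = rng.choices(topics, weights=weights)[0]
        words, probs = zip(*TOPICS[topic].items())
        doc.append(rng.choices(words, weights=probs)[0])
    return doc

# A document that is mostly about the election, slightly about the arts.
print(generate_document({"election": 0.9, "arts": 0.1}, n_words=8))
```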

LDA is completely unsupervised, but the user must provide the model with a specific number of topics to describe the entire set of documents. For our dataset, we chose 20 topics. As shown below, topics are not named, but we can get a better understanding of each topic by looking at the words associated with each.

Based on the words associated with Topic 2, it seems to be related to the election. The distance between topics directly relates to how similar topics are to one another. As we can see, Topic 15 is far from Topic 2 and likely relates to the arts.

Stacked Model Results

For our final model, we generated a stacked model using the predictions from our original seven models. Stacked models often outperform individual models because they can discern where each performs well and where each performs poorly. We also added our topic modeling results as new features, along with the length of each article and whether it had an author. These features, in combination with logistic regression, gave us quite good results: only 34 of our fake articles were misclassified from a test set of 2,208, and our AUC score was 0.9876.
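The meta-model step can be sketched as a logistic-regression-style combination of the base models' probabilities; the weights and probabilities below are illustrative, not our fitted coefficients:

```python
import math

def stack_predict(base_probs, weights, bias):
    """Combine base-model probabilities into one prediction with a
    logistic function (weights here are illustrative, not fitted)."""
    z = bias + sum(w * p for w, p in zip(weights, base_probs))
    return 1 / (1 + math.exp(-z))  # probability the article is fake

# Seven base-model probabilities for one article; extra features
# (topic scores, article length, has-author flag) would be appended
# to this vector before fitting the real meta-model.
probs = [0.8, 0.9, 0.7, 0.85, 0.6, 0.95, 0.75]
print(round(stack_predict(probs, weights=[1.0] * 7, bias=-3.5), 3))  # 0.886
```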


The rise of fake news has become a global problem that even major tech companies like Facebook and Google are struggling to solve. It can be difficult to determine whether a text is factual without additional context and human judgement. Although our stacked model performed well on our test data, it would likely not perform as well on new data from a different time period and topic distribution. The chart below displays the “most fake” words in our dataset, determined by looking at words that were proportionally used much more often in fake news than real news. Words like “hillary”, “clinton”, and “email” were used much more frequently in fake news, with a ratio of almost 2 to 1. Therefore, our model might have trouble classifying new real articles about those subjects correctly because they are so prevalent in fake news.

Writing style is also crucial to separating real news from fake. With more time, we would revisit our text preprocessing strategy to maintain some of the style elements of our articles (i.e., capitalization, punctuation) and improve performance.

Link to our Github.


About Authors

Julia Goldstein

Julia has over five years of experience delivering business insight through data analysis and visualization. As an analytics and management consultant, she was responsible for managing projects, identifying solutions, and developing support among senior-level stakeholders. Moving forward, Julia...

Mike Ghoul

Mike is a strategic analyst with 5 years of financial services experience coupled with data science skills and an insatiable drive to solve problems. While at Morgan Stanley, he built predictive compensation models forecasting future costs and presented...
