Predicting Behavior to Retain Customers Through Marketing

Posted on Dec 31, 2019

Introduction

Marketing is the business of promoting and selling products or services, including market research and advertising. Businesses are often inundated with large amounts of data across different departments. Marketing is one of the most important of these departments, since it deals both with generating new customers and with retaining current ones.

This capstone project is based on the Kaggle dataset "Telco Customer Churn". The goal of the project is to predict whether customers will leave or stay with their telecom provider based on relevant customer data. In the marketing industry this is known as "customer churn", defined as the loss of clients or customers.

Dataset

The dataset was obtained from Kaggle; the data originates from the "IBM Watson Analytics Community." The raw data contains 7043 rows (customers) and 21 columns (features), covering attributes such as "gender", "senior citizen", "tenure", and "monthly charges". The last column, "churn", is the primary target of this exercise.
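As a minimal sketch of the setup, the data can be loaded with pandas. The file name and the "Churn" column name are assumed from the standard Kaggle download and are not spelled out in the post.

```python
import pandas as pd

# Load the Telco Customer Churn data (file name assumed from the Kaggle download)
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")

print(df.shape)                    # expected: (7043, 21) -- 7043 customers, 21 columns
print(df["Churn"].value_counts())  # target column with "Yes"/"No" labels
```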

Preprocessing

The first step of this project was to examine the data and gain an understanding of which features would be important.

Of the 21 columns, two (2) were integers (int64), one (1) was floating point (float64), and the remaining eighteen (18) were "object" (string) columns. The three numerical columns were "senior citizen", "tenure", and "monthly charges". One more column, "total charges", had to be converted from an "object" to a numeric data type.
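A minimal sketch of this step, assuming the column names from the Kaggle file. In that file "TotalCharges" is read as an object because a few rows contain blank strings; filling the unparsable rows with 0 below is just one possible choice, as the post does not say how they were handled.

```python
# Inspect the column data types: 2 int64, 1 float64, 18 object columns
print(df.dtypes.value_counts())

# "TotalCharges" arrives as an object column; coerce it to numeric and
# fill the handful of rows that fail to parse (blank strings become NaN)
df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
df["TotalCharges"] = df["TotalCharges"].fillna(0)
```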

Dummy variables were then created for the eighteen (18) "object" columns, since these were categorical in nature.
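One way to do this with pandas is shown below. Dropping the "customerID" column and mapping the target to 0/1 are assumptions for illustration; the post does not describe how the identifier or the target were handled.

```python
# Drop the customer identifier before encoding (assumed column name)
features = df.drop(columns=["customerID"])

# One-hot encode every remaining object column except the target
cat_cols = features.select_dtypes(include="object").columns.drop("Churn")
features = pd.get_dummies(features, columns=cat_cols, drop_first=True)

# Encode the target as 0/1 for modelling
features["Churn"] = features["Churn"].map({"No": 0, "Yes": 1})
```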

Data Exploration

The dataset was explored with a few visualization tools to gain a general understanding of the customer data. Here are some examples of the questions that were examined (a plotting sketch follows the list):

  • A pie chart was used to analyze our target column "Churn".  

73.5% (5,174) of customers did not churn, i.e. stayed with the telecom company, whereas 26.5% (1,869) of customers did churn and leave.

  • A bar chart was used to compare the number of customers that churned with respect to the length of the customer contracts:

Customers on month-to-month contracts account for a large share of the churn.

  • Another bar chart was used to compare churn across gender.

Churn rates for males and females appear approximately identical.
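A rough sketch of how these plots could be produced with pandas and matplotlib, continuing from the DataFrame loaded earlier. The "Churn" and "Contract" column names are assumed from the Kaggle file; the original post's exact plotting code is not shown.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Pie chart of the target: share of customers who churned vs. stayed
df["Churn"].value_counts().plot.pie(autopct="%.1f%%")
plt.title("Customer Churn")
plt.ylabel("")
plt.show()

# Grouped bar chart: churn counts by contract length
pd.crosstab(df["Contract"], df["Churn"]).plot.bar()
plt.title("Churn by Contract Type")
plt.ylabel("Number of customers")
plt.show()
```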

Feature Exploration

A correlation matrix was used to get a sense of how each feature correlates to other features.

The top 5 features showing the most correlation are below:

The length of a customer's tenure shows the strongest correlation with churn.
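As a sketch, the correlations with the target can be ranked directly from the one-hot encoded frame built during preprocessing. The variable name "features" is carried over from the earlier snippet and is an assumption, not the post's own code.

```python
# Correlation of every encoded feature with the 0/1 churn target
corr_with_churn = features.corr()["Churn"].drop("Churn")

# Five features with the strongest absolute correlation to churn
print(corr_with_churn.abs().sort_values(ascending=False).head(5))
```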

Modelling

All the features were used in several different models to determine which one best predicts customer churn. To test the models, the data was split into a 70% / 30% train-test split. Each model was instantiated, fitted on "X_train" and "y_train", and then scored on "X_test" and "y_test".
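A minimal sketch of this workflow with scikit-learn, showing only a few of the models as examples. The 70/30 split and the fit/score pattern follow the description above; the random_state and the exact model settings are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X = features.drop(columns=["Churn"])
y = features["Churn"]

# 70% train / 30% test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(),
    "K Nearest Neighbors": KNeighborsClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)                # fit on the training split
    print(name, model.score(X_test, y_test))   # score on the held-out test split
```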

"Cross validation" with "grid search" was used to select the best hyperparameters for the model that seemed to perform the best. The best hyperparameters were then used on the full train set, and tested on the model test set. 

The models evaluated were: "Logistic Regression", "Random Forest", "Decision Trees", "K Nearest Neighbors", "Neural Network", and "Support Vector Machines". Model performance was evaluated using the r-squared value; a higher r-squared value indicates higher accuracy.

The top 3 performers based on the "test dataset" were:

  • Logistic Regression
  • Neural Network
  • Decision Tree - Bagging Classifier

Logistic regression had the best performance on the test dataset when measured with r-squared.

"Cross validation" with "grid search" was used to further tune the hyperparameters. The best parameter was an "L2" logistic regression with a "C" value of 0.0061. The r-squared results are shown below for the logistic regression:  

  • Logistic Regression

R-squared for training dataset - 0.8049

R-squared for test dataset - 0.8012
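A sketch of this tuning step using scikit-learn's GridSearchCV. The exact grid searched is not given in the post; the logspace below is an illustrative guess around the reported best value of C, and the fold count is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Grid search over the regularization strength C for an L2-penalized
# logistic regression, with 5-fold cross-validation on the training set
param_grid = {"penalty": ["l2"], "C": np.logspace(-4, 1, 50)}
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)                             # e.g. a small C with an L2 penalty
print(grid.best_estimator_.score(X_train, y_train))  # training score
print(grid.best_estimator_.score(X_test, y_test))    # test score
```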

Conclusion

In conclusion, logistic regression was the best predictor of customer churn on this dataset. Greater accuracy may be achievable by exploring whether any features can be removed or engineered to add more predictive value.

About Author


Steven Owusu

Steven Owusu has several years' experience working as a credit analyst. He holds a Master of Business Administration from Columbia Business School. Steven loves applying data science techniques to solve real-world business problems.
