Data Analysis on Employee Attrition

Posted on Sep 11, 2017
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Introduction

Attrition is a common issue that every company has to deal with. The goal of this HR data analytics project is to build a model that can help the company predict whether or not a given employee will leave, as well as identify the important factors behind attrition. This information can be vital for future recruitment and for reducing employee attrition.

Data Analysis

The data set has 14,999 instances and 10 features, with no missing values. As for data types, some features are numeric (floating-point numbers and integers), and the others are class values read in as objects.

Since the summary of the distribution of each feature showed that the data are not on the same scale, I needed to rescale the features before applying machine learning algorithms, so that the features with larger variance would not dominate some algorithms. For a classification problem I also need to know how balanced the class values are. It turns out that the data is unbalanced: there are over three times as many observations with class 0 (stayed) as with class 1 (left).
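A quick sketch of checking that class balance with pandas; the file name HR_comma_sep.csv and the target column name left are assumptions for illustration:

```python
import pandas as pd

# Load the HR data set; the file name is an assumption for illustration.
df = pd.read_csv("HR_comma_sep.csv")

# Class balance of the assumed target column 'left' (0 = stayed, 1 = left).
print(df["left"].value_counts())
print(df["left"].value_counts(normalize=True))
```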


Although accuracy is the most common evaluation metric for classification problems, it is only suitable when there is an equal number of observations in each class and all predictions and prediction errors are equally important, which is not the case here. As a result, I used accuracy only to get a quick idea of model performance.

Skewness in input variables may impact the performance of machine learning techniques. A skewness test detected strong positive skewness in the feature "time". Correcting the skew may improve model performance; a power transform such as the Box-Cox transform might be useful.
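A minimal sketch of such a check and transform, assuming the tenure feature is stored in a column named time_spend_company (an illustrative name); note that SciPy's boxcox requires strictly positive input:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("HR_comma_sep.csv")  # as in the earlier sketch

# Skewness before transforming; a large positive value indicates right skew.
print(df["time_spend_company"].skew())

# Box-Cox requires strictly positive values, which holds for tenure in years.
transformed, lam = stats.boxcox(df["time_spend_company"])
print(pd.Series(transformed).skew(), lam)
```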


Histograms

To understand the distribution of each feature of the data set independently, I visualized the data with histograms. I could also use density plots to smooth them out a bit.
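A minimal sketch of producing these histograms with pandas, under the same loading assumptions as above:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("HR_comma_sep.csv")  # as in the earlier sketch

# One histogram per numeric feature; density plots would smooth these out.
df.hist(bins=20, figsize=(12, 8))
plt.tight_layout()
plt.show()
```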


The histograms show that some features, such as Evaluation and AverageMonthlyHours, have bimodal distributions.

Correlation gives an indication of how related the changes between two variables are. Neither the Pearson correlation coefficients nor the correlation matrix revealed any variables that are highly correlated with each other, which is good to know, because some machine learning algorithms do not perform well when highly correlated features are present.

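A minimal sketch of computing such a correlation matrix with pandas (Pearson by default):

```python
import pandas as pd

df = pd.read_csv("HR_comma_sep.csv")  # as in the earlier sketch

# Pairwise Pearson correlations between the numeric features; values
# near +/-1 would flag highly correlated pairs.
corr = df.corr(numeric_only=True)
print(corr.round(2))
```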

Feature Preprocessing

I chose to standardize the data to put the features on a common scale. I could also normalize the data or leave it unchanged and compare cross-validation scores before making a choice. As mentioned before, none of the continuous variables are highly correlated with each other, and with only ten features, I chose not to do any dimensionality reduction such as PCA. Before standardizing the continuous features, I made changes to the ordinal and nominal features for modeling purposes.
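A minimal sketch of the standardization step with scikit-learn's StandardScaler; in practice the scaler should be fit on the training split only, to avoid leakage into the validation data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("HR_comma_sep.csv")  # as in the earlier sketch
num_cols = df.select_dtypes(include="number").columns

# Rescale each numeric feature to zero mean and unit variance.
# (Ideally, fit on the training split only and transform both splits.)
scaler = StandardScaler()
df[num_cols] = scaler.fit_transform(df[num_cols])
```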

First I dummified the nominal feature, Department. To avoid the dummy variable trap I dropped one dummy variable, arbitrarily choosing "accounting", so the coefficients on the other dummy variables show their effect relative to "accounting". As for the ordinal feature, "salary", I used a mapping function to convert it to numeric values. Then I split the data, keeping 80% of the data set for training and the other 20% for validation. The data set isn't small, so I also used 10-fold cross-validation, which is a good standard test-harness configuration.
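A minimal sketch of these preprocessing and splitting steps; the column names Department, salary, and left are taken from the text, and the exact salary mapping values are an assumption:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("HR_comma_sep.csv")  # as in the earlier sketch

# Dummify the nominal feature and drop "accounting" as the baseline
# to avoid the dummy variable trap.
dummies = pd.get_dummies(df["Department"], prefix="dept")
dummies = dummies.drop(columns=["dept_accounting"])
df = pd.concat([df.drop(columns=["Department"]), dummies], axis=1)

# Map the ordinal feature 'salary' to numeric values (assumed ordering).
df["salary"] = df["salary"].map({"low": 0, "medium": 1, "high": 2})

# 80/20 train/validation split, stratified on the imbalanced target.
X = df.drop(columns=["left"])
y = df["left"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=7)
```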

Algorithms Evaluation

I didn't yet know which algorithms would do well on the data set; however, I did know I needed to predict whether or not an employee leaves and to find out which features are indicative of leaving, so this is a typical binary classification problem. I tried out a set of classification algorithms and, to begin with, used the accuracy metric to get a quick idea of how each model performs. The algorithms I selected are listed below, followed by a sketch of the comparison harness:

Logistic Regression (logis)

Linear Discriminant Analysis (lda)

Naïve Bayes (nb)

K-nearest Neighbours (knn)

Support Vector Machines (svm)

Random Forest (rf)

Gradient Boosting (gdb)

AdaBoost (adb)
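A minimal sketch of a 10-fold cross-validation harness for these eight algorithms, reusing X_train and y_train from the preprocessing sketch; the hyperparameters shown are scikit-learn defaults, not necessarily those the author used:

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)

models = {
    "logis": LogisticRegression(max_iter=1000),
    "lda": LinearDiscriminantAnalysis(),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
    "rf": RandomForestClassifier(),
    "gdb": GradientBoostingClassifier(),
    "adb": AdaBoostClassifier(),
}

# 10-fold cross-validation accuracy on the training split from the
# earlier sketch (X_train, y_train).
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```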


Data Results and Summary

As shown below, rf achieved the highest accuracy (99%) on the training data set, followed by gdb with the second highest accuracy of 98%.

[Figure: 10-fold cross-validation accuracy for each algorithm]

To finalize the model, I ran both rf and gdb on the testing data set and summarized the results as a final test score, a confusion matrix, and a classification report. Rf achieved an accuracy of almost 99% on the holdout data set. The confusion matrix showed that rf made fewer errors, and the classification report also showed better performance by rf in terms of precision, recall, F1-score, and support.
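A minimal sketch of that final evaluation on the 20% holdout set, reusing the split from the earlier sketch:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report)

# Fit the finalist on the training split and score it on the holdout.
rf = RandomForestClassifier(random_state=7)
rf.fit(X_train, y_train)
pred = rf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
```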

As this is a binary classification problem, I could also use the Area Under the ROC Curve (AUC) as an alternative performance metric.
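For example, a sketch reusing the rf model defined above; AUC handles the class imbalance better than raw accuracy:

```python
from sklearn.model_selection import cross_val_score

# 10-fold cross-validated AUC for the random forest on the training split.
auc = cross_val_score(rf, X_train, y_train, cv=10, scoring="roc_auc")
print("AUC: %.3f" % auc.mean())
```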

Last, I used rf to score each feature, where the larger the score, the more important the feature. The scores suggest that the most important factor in employee attrition is employee satisfaction, followed by time at the company and the number of projects the employee was involved in.
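A minimal sketch of extracting those importance scores from the random forest fitted above:

```python
import pandas as pd

# Impurity-based feature importances from the fitted random forest;
# higher scores indicate more important features.
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False))
```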

Conclusion

If time allowed, I would grid search algorithm parameters for both gdb and rf to find the settings that yield the best results, reliably and quickly. Of course, the model and the numbers it provides are not solution deciders; however, they are very important and can help the company make better decisions about recruitment and attrition reduction.
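A minimal sketch of such a search with scikit-learn's GridSearchCV; the parameter grid below is purely illustrative, as the post does not specify ranges:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Illustrative grid only; real ranges would be tuned to the problem.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=7),
                      param_grid, cv=10, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```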

About Author

Maggie Zou

Maggie Zou earned a master's degree in Math education and has been teaching math and science in a local school district for the past ten years. She also works as an interpreter on the side for the governments...
View all posts by Maggie Zou >
