Data Analysis on Employee Attrition
Introduction
Attrition is a common issue that every company has to deal with. The goal of this HR data analytics project is to build a model that can help the company predict whether a given employee will leave, as well as identify the important factors behind leaving. This information can be vital for future recruitment and for reducing employee attrition.
Data Analysis
The data set has 14,999 instances and 10 features, with no missing values. As for data types, some features are numeric (floating-point numbers and integers), and the others are class values read in as objects.
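A minimal sketch of this first look with pandas (the file name HR_data.csv is an assumption, not a detail from the original post):

```python
import pandas as pd

# Load the HR data set; the file name here is an assumption.
df = pd.read_csv("HR_data.csv")

print(df.shape)           # expect (14999, 10)
print(df.isnull().sum())  # confirm there are no missing values
print(df.dtypes)          # numeric columns plus object-typed class values
```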
Since the summary of the distribution of each feature showed that the data are not on the same scale, I needed to rescale it before applying machine learning algorithms, so that features with a larger order of variance would not dominate the performance of some algorithms. For a classification problem I also need to know how balanced the class values are. It turns out the data is imbalanced: there are over three times as many observations with class 0 (stayed) as with class 1 (left).
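A quick way to check both points, assuming the target column is named left (0 = stayed, 1 = left):

```python
# Summarize the distribution of each numeric feature to compare scales.
print(df.describe())

# Check class balance on the target; the column name "left" is an assumption.
print(df["left"].value_counts(normalize=True))
# Per the post, class 0 outnumbers class 1 by more than three to one.
```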
Although accuracy is the most common evaluation metric for classification problems, it is only suitable when there is an equal number of observations in each class and all predictions and prediction errors are equally important, which is not the case here. As a result, I used accuracy only to get a quick idea of model performance.
Skewness in input variables may impact the performance of machine learning techniques. The skewness test detected strong positive skewness in the feature "time". Correcting the skew may improve model performance; a power transform like the Box-Cox transform might be useful.
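A sketch of how that transform could be applied with SciPy, assuming "time" is the column name and its values are strictly positive (a requirement of Box-Cox):

```python
from scipy.stats import boxcox, skew

# "time" (years at the company) is an assumed column name.
print("skew before:", skew(df["time"]))

# boxcox returns the transformed values and the fitted lambda.
transformed, lam = boxcox(df["time"])
print("skew after :", skew(transformed), "with lambda =", lam)
```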
Histograms
To understand the distribution of each feature of the data set independently, I visualized the data with histograms. I could also use density plots to smooth them out a bit.
The graph above shows that some features, such as Evaluation and AverageMonthlyHours, have bimodal distributions.
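A minimal way to reproduce such a panel of histograms with pandas and matplotlib, assuming the data is in a DataFrame df:

```python
import matplotlib.pyplot as plt

# One histogram per feature; object-typed columns are skipped automatically.
df.hist(bins=20, figsize=(12, 8))
plt.tight_layout()
plt.show()

# Density plots are an alternative that smooths the bars out, e.g.:
# df.plot(kind="density", subplots=True, layout=(4, 3), sharex=False)
```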
Correlation gives an indication of how related the changes in two variables are. Neither the Pearson correlation nor the correlation matrix found any variables that are highly correlated with each other, which is good to know, because some machine learning algorithms do not perform well when such variables exist.
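One way to run that check, keeping only the numeric columns:

```python
import matplotlib.pyplot as plt

# Pearson correlation matrix over the numeric features.
corr = df.select_dtypes(include="number").corr()
print(corr.round(2))

# Visual check: any off-diagonal cell approaching +/-1 would flag
# a highly correlated pair; per the post, none appear.
plt.matshow(corr)
plt.colorbar()
plt.show()
```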
Feature Preprocessing
I chose to standardize the data to rescale it. I could also normalize the data, or leave it unchanged and check cross-validation scores before making a choice. As mentioned before, none of the continuous variables are highly correlated with each other, and with only ten features, I chose not to do any dimensionality reduction such as PCA. Before standardizing the continuous features, I made changes to the ordinal and nominal features for modeling purposes.
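A sketch of the standardization step with scikit-learn; the list of continuous column names is an assumption. Strictly speaking, fitting the scaler on the training split alone would avoid leakage; scaling the full frame here simply mirrors the order described in the post.

```python
from sklearn.preprocessing import StandardScaler

# Assumed names for the continuous columns; adjust to the actual data set.
continuous_cols = ["satisfaction", "Evaluation", "AverageMonthlyHours", "time"]

scaler = StandardScaler()  # zero mean, unit variance
df[continuous_cols] = scaler.fit_transform(df[continuous_cols])
# MinMaxScaler from the same module would normalize to [0, 1] instead.
```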
First I dummified the nominal feature, Department. To avoid the dummy variable trap I dropped one dummy variable, arbitrarily choosing "accounting", so the coefficients on the other dummified variables show their effect relative to "accounting". As for the ordinal feature, "salary", I used a mapping function to convert it to numeric values. Then I split the data, keeping 80% of the data set for training and the other 20% for validation. The data set isn't small, so I also used 10-fold cross-validation, which is a good standard test-harness configuration.
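A sketch of these preprocessing and splitting steps; the salary level names and the target column left are assumptions:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Dummify Department, then drop "accounting" as the reference level.
df = pd.get_dummies(df, columns=["Department"])
df = df.drop("Department_accounting", axis=1)

# Map the ordinal salary levels to numeric values (assumed level names).
df["salary"] = df["salary"].map({"low": 0, "medium": 1, "high": 2})

# 80/20 train/validation split; "left" is the assumed target column.
X = df.drop("left", axis=1)
y = df["left"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```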
Algorithms Evaluation
I didn't yet know which algorithms would do well on this data set; however, I did know that I needed to find out which features are indicative of an employee leaving or not, so this is a typical binary classification problem. I tried out classification algorithms and, to begin with, used the accuracy metric to get a quick idea of how each model performs (a spot-check sketch follows the list). The algorithms I selected are:
Logistic Regression (logis)
Linear Discriminant Analysis (lda)
Naïve Bayes (nb)
K-nearest Neighbours (knn)
Support Vector Machines (svm)
Random Forest (rf)
Gradient Boosting (gdb)
AdaBoost (adb)
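A minimal spot-check harness for these eight algorithms with scikit-learn; GaussianNB stands in for Naïve Bayes, and default hyperparameters are assumed throughout:

```python
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)

models = {
    "logis": LogisticRegression(max_iter=1000),
    "lda": LinearDiscriminantAnalysis(),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
    "rf": RandomForestClassifier(),
    "gdb": GradientBoostingClassifier(),
    "adb": AdaBoostClassifier(),
}

# 10-fold cross-validation on the training split, scored on accuracy.
kfold = KFold(n_splits=10, shuffle=True, random_state=42)
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train,
                             cv=kfold, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```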
Data Results and Summary
As shown below, rf achieved the highest accuracy, 99%, on the training data set, followed by gdb with the second-highest accuracy of 98%.
To finalize the model, I ran both rf and gdb on the testing data set and summarized the results as a final test score, a confusion matrix, and a classification report. Rf achieved an accuracy of almost 99% on the holdout data set. The confusion matrix showed that rf made fewer errors, and the classification report also showed better performance by rf in terms of precision, recall, f1-score, and support.
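A sketch of that final evaluation for rf, reusing the split from earlier:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report)

# Fit the finalist on the training split and score the 20% holdout.
rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

print("holdout accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```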
As this is a binary classification problem, I could also use the Area Under the ROC Curve (AUC) as an alternative test performance metric.
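For reference, AUC can be computed from the predicted probabilities of the positive class:

```python
from sklearn.metrics import roc_auc_score

# AUC scores the ranking of predicted probabilities for class 1 (left).
y_proba = rf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, y_proba))
```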
Last, I used rf to score each feature, where the larger the score, the more important the feature. The scores suggest that the most important factor in employee attrition is employee satisfaction, followed by time at the company and the number of projects the employee was involved in.
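A sketch of how those importance scores can be pulled out of the fitted forest:

```python
import pandas as pd

# Rank features by the random forest's impurity-based importance scores.
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False))
```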
Conclusion
If time allowed, I would grid search the algorithm parameters for both gdb and rf to find the combination that yields the best results, reliably and fast. Of course, the model and the numbers it provides are not solution deciders; however, they are very important and can help the company make better decisions in recruitment and attrition reduction.
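A sketch of what that grid search could look like for rf; the parameter grid is illustrative, not an actual tuned search space:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Illustrative parameter grid; the values are assumptions.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 5],
}

grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid, cv=10, scoring="accuracy", n_jobs=-1)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```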