Utilizing Data to Model Credit Risks
The skills I demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Source Code, LinkedIn
Purpose
Credit risk on a debt instrument is the risk of default that arises when a borrower fails to repay or otherwise breaks the debt agreement. Credit risk plays a central role in investment risk assessment because an investor can lose part or all of a loan when the borrower defaults or stops repaying. Understanding credit risk data helps an investor measure risk exposure and, to some extent, limit it.
The goal of this project is to use classification techniques to build a credit risk model that predicts loan default risk. The methodology could be used by investment firms or peer-to-peer investors looking to invest in individual loans.
Data Summary
The dataset used for model training comes from LendingClub, a financial technology firm that offers peer-to-peer lending services. The collection contains roughly 22 million records: about 20 million rejected loan applications and about 2 million accepted loans. Because the rejected-loan records are irrelevant to the goal of this project, only the accepted-loans data was used to train the model.
In the dataset, there are two categories of data: application data and behavioral data. Application data includes loan amounts, loan terms, and interest rates, whereas behavioral data includes credit limits, debt-to-income ratio, and credit balance.
Exploratory Data Analysis
The target variable is loan status. Figure 1.1 shows a total of six categories; among them, loans explicitly labeled Default account for only 0.0029 percent of the dataset. To simplify the modeling, I consolidated loans whose status is either Default or Charged Off into a single default class, since charged-off loans are simply loans that were declared uncollectible. After this consolidation, about 20% of loans fall into the default class, while nearly 80% do not.
Note: The loan status classes are imbalanced; defaults make up only about 20% of the data. As a result, upsampling will be used to address the problem.
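As a rough sketch of these two steps, assuming the accepted-loans data is already loaded into a pandas DataFrame named df with LendingClub's loan_status column, the consolidation and upsampling could look like this (status labels and column names should be verified against your copy of the data):

```python
import pandas as pd
from sklearn.utils import resample

# Consolidate "Default" and "Charged Off" into a single positive class.
df["is_default"] = df["loan_status"].isin(["Default", "Charged Off"]).astype(int)

# Upsample the minority (default) class so both classes are the same size.
# In practice this should be done on the training split only, to avoid leaking
# duplicated rows into the test set.
majority = df[df["is_default"] == 0]
minority = df[df["is_default"] == 1]
minority_upsampled = resample(
    minority,
    replace=True,                # sample with replacement
    n_samples=len(majority),     # match the majority class size
    random_state=42,
)
balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=42)
```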
There are a total of 150 features in the dataset. Among them, interest rate, loan amount, and grade look relevant for predicting default risk because they are generated during the loan application and reflect the applicant's financial standing from the bank or lender's perspective. As Figure 1.2 shows, the cloud of default applicants (blue dots) is mostly scattered in the higher range of interest rates. A similar pattern appears with grade in Figure 1.3: the lower the grade, the higher the interest rate.
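Figures 1.2 and 1.3 can be roughly reproduced with a sketch like the one below; it assumes the DataFrame df from above, the LendingClub columns int_rate, loan_amnt, and grade, and that int_rate has already been cast to a numeric type.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Figure 1.2 (approximate): interest rate vs. loan amount, colored by default flag.
sample = df.sample(n=20_000, random_state=0)  # subsample for a readable scatter
sns.scatterplot(data=sample, x="loan_amnt", y="int_rate",
                hue="is_default", alpha=0.3, s=10)
plt.xlabel("Loan amount")
plt.ylabel("Interest rate (%)")
plt.title("Default vs. non-default loans")
plt.show()

# Figure 1.3 (approximate): interest rate by grade -- lower grades carry higher rates.
sns.boxplot(data=sample, x="grade", y="int_rate",
            order=sorted(sample["grade"].dropna().unique()))
plt.show()
```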
Debt and Income
For the behavioral features, debt-to-income ratio and annual income were plotted against interest rate and loan grade. As the figures show, default applicants appear to cluster around low debt-to-income ratios paired with high interest rates. The plot of annual income versus loan grade shows that higher incomes tend to receive higher grades, yet the odds of default appear similar across income levels.
Note: Features with more than 80% missing values were eliminated during the EDA process, simply because that degree of missingness would introduce unwanted noise. The figures above also show signs of outliers, which will be removed during preprocessing for logistic regression.
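A minimal sketch of the missingness filter, assuming the data is in the pandas DataFrame df:

```python
# Drop features whose share of missing values exceeds 80%.
missing_share = df.isna().mean()
high_missing_cols = missing_share[missing_share > 0.80].index
df = df.drop(columns=high_missing_cols)
print(f"Dropped {len(high_missing_cols)} features with more than 80% missing values")
```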
For the remaining missing data, zero imputation and derived calculations were the strategies I relied on most. This is because the majority of the missing values were in application-type features, and some of them have clear relationships with other columns. For example, the bankcard utilization rate is the total bankcard balance divided by the total bankcard credit limit, so imputing it with that calculation makes sense.
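A sketch of that imputation strategy is shown below. The column names bc_util, total_bc_limit, and bc_open_to_buy appear in the LendingClub export but should be verified against your copy of the data, and the implied-balance calculation is my reconstruction rather than a documented formula.

```python
import numpy as np

# Recompute bankcard utilization where it is missing but its components are present:
# implied balance = total limit - open to buy, utilization = balance / limit * 100.
implied_balance = df["total_bc_limit"] - df["bc_open_to_buy"]
can_compute = (
    df["bc_util"].isna() & df["total_bc_limit"].gt(0) & df["bc_open_to_buy"].notna()
)
df.loc[can_compute, "bc_util"] = (
    implied_balance[can_compute] / df.loc[can_compute, "total_bc_limit"] * 100
)

# Zero-impute the remaining missing numeric values.
num_cols = df.select_dtypes(include=np.number).columns
df[num_cols] = df[num_cols].fillna(0)
```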
Data Models and Results
Logistic regression, a support vector machine (SVM), random forest, and CatBoost were used for this classification task. Logistic regression and support vector machines are commonly used in banking, particularly for risk modeling. Of the two, logistic regression has consistently proven to be one of the most effective approaches for credit risk modeling; it is also widely used to build scorecard models, which are scoring frameworks built on top of a default risk model. Random forest and CatBoost are tree-based models that handle classification through ensembling techniques such as bagging (random forest) and gradient boosting (CatBoost); one advantage of using them is that they are simple to implement and need very little feature preprocessing.
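Below is a minimal setup of the four candidate models, assuming scikit-learn and the catboost package are installed; the hyperparameters shown are illustrative defaults, not the tuned values behind the results reported next.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from catboost import CatBoostClassifier

# Four candidate models. loss="log_loss" requires scikit-learn >= 1.1
# (use loss="log" on older versions).
models = {
    "logistic_regression_sgd": SGDClassifier(loss="log_loss", penalty="l2", random_state=42),
    "linear_svm_sgd": SGDClassifier(loss="hinge", penalty="l2", random_state=42),
    "random_forest": RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42),
    "catboost": CatBoostClassifier(iterations=500, verbose=0, random_state=42),
}
```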
Data Analysis
According to the plot above, the stochastic gradient descent (SGD)-based SVM has an average score of 0.66 and a running time of 1.8 seconds. SGD was chosen as the optimization strategy because a standard kernel SVM solver is slow and scales poorly on large datasets, particularly those with more than 10,000 rows. The model would still need fine-tuning of its hyperparameters and a kernel transformation.
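One possible way to follow up on that tuning, sketched under the assumption that preprocessed training data is available as X_train and y_train, is to approximate an RBF kernel with Nystroem features and search over the SGD regularization strength.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV

# SGD-based SVM with an approximate RBF kernel transformation.
svm_pipeline = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", n_components=300, random_state=42),
    SGDClassifier(loss="hinge", random_state=42),
)
search = RandomizedSearchCV(
    svm_pipeline,
    param_distributions={"sgdclassifier__alpha": [1e-5, 1e-4, 1e-3, 1e-2]},
    n_iter=4, scoring="f1", cv=3, n_jobs=-1, random_state=42,
)
# search.fit(X_train, y_train)   # X_train / y_train come from the preprocessed data
```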
Random forest scored similarly to the SVM. After 20 to 30 iterations, both models tend to overfit: the training score keeps improving while the test score stagnates. This happens because both models pick up detail and noise in the training data to the point that it hurts their performance on new data. However, training the random forest took substantially less time than the SVM; both models should perform better with further tuning.
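To see where that plateau sets in for the random forest, a validation curve over the number of trees is one option; this sketch again assumes X_train and y_train from the preprocessed data.

```python
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

# Training vs. cross-validated F1 as trees are added, to spot overfitting
# and the point where the test score stops improving.
tree_counts = [10, 20, 30, 50, 100]
train_scores, test_scores = validation_curve(
    RandomForestClassifier(n_jobs=-1, random_state=42),
    X_train, y_train,
    param_name="n_estimators", param_range=tree_counts,
    scoring="f1", cv=3, n_jobs=-1,
)
plt.plot(tree_counts, train_scores.mean(axis=1), label="train F1")
plt.plot(tree_counts, test_scores.mean(axis=1), label="cross-validated F1")
plt.xlabel("n_estimators")
plt.ylabel("F1 score")
plt.legend()
plt.show()
```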
CatBoost outperforms the other models while requiring only one-hot encoding for the categorical features; it has the highest scores of the four models. Both the F1 score and accuracy were 98 percent, precision was 94 percent, and recall was 98 percent.
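A sketch of the CatBoost setup, assuming train/test splits X_train, X_test, y_train, y_test; the categorical column names are illustrative LendingClub fields, and one_hot_max_size is raised so the listed categorical features are one-hot encoded internally.

```python
from catboost import CatBoostClassifier

# Illustrative categorical columns from the LendingClub data; they must contain
# no missing values, since CatBoost does not accept NaN in categorical features.
cat_features = ["grade", "sub_grade", "home_ownership", "purpose"]

cat_model = CatBoostClassifier(
    iterations=500,
    learning_rate=0.1,
    eval_metric="F1",
    one_hot_max_size=255,   # one-hot encode categorical features with <= 255 levels
    verbose=100,
    random_state=42,
)
cat_model.fit(X_train, y_train, cat_features=cat_features, eval_set=(X_test, y_test))
```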
Finally, with an average F1 score of 0.89 and an accuracy of 0.88, the SGD-based logistic regression beats both the SVM and the random forest. Its precision of 0.98 means that 98 percent of the loans it flags as defaults actually default, while its recall of 0.77 means it catches 77 percent of the actual defaults. Both precision and recall should improve with more tuning.
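The per-class metrics above can be reproduced with scikit-learn's classification_report; this sketch assumes scaled train/test splits, since SGD is sensitive to feature scale.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report

# Fit the SGD-based logistic regression and report precision, recall, and F1 per class.
log_reg = SGDClassifier(loss="log_loss", alpha=1e-4, random_state=42)
log_reg.fit(X_train_scaled, y_train)
print(classification_report(y_test, log_reg.predict(X_test_scaled),
                            target_names=["non-default", "default"]))
```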
Conclusion
Logistic regression and CatBoost outperform the other two models, with CatBoost ranking highest across all scoring metrics. However, if forced to choose one, logistic regression would be a solid choice: its predictive performance can be improved further with additional tuning, and it can be adapted for scorecard modeling and risk analysis.