Utilizing Data to Model Credit Risks

Posted on Jun 17, 2021
The skills I demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


Source Code, LinkedIn

Purpose

The credit risk of an investment vehicle such as debt is the risk of default that arises when a borrower fails to repay or otherwise fulfill the debt agreement. Credit risk plays a big role in investment risk assessment because the investor loses a portion, or the entirety, of the loan when the loan defaults or the borrower fails to repay. Understanding credit risk data can help you measure your risk exposure and, to some extent, limit it.

The goal of this project is to use classification techniques to develop a credit risk model for predicting loan default. The methodology could be used by investment businesses or peer-to-peer investors looking to invest in individual loans.

Data Summary

The dataset used for model training comes from LendingClub, a financial technology firm that offers peer-to-peer lending services. The collection contains a total of 22 million records, 20 million of which are rejected loans and 2 million of which are accepted loans. Because the rejected loans are irrelevant to the goal of this project, only the accepted loans were used to train the model.

In the dataset, there are two categories of data: application data and behavioral data. Application data includes loan amounts, loan terms, and interest rates, whereas behavioral data includes credit limits, debt-to-income ratio, and credit balance.

Exploratory Data Analysis

The loan status variable is the target variable. Figure 1.1 depicts a total of six categories. Among them, default accounts for only 0.0029 percent of the total dataset. To simplify the modeling process, I consolidated loans labeled either default or charged off into a single default class, since charged-off loans are simply loans that were declared uncollectible. After consolidation, about 20 percent of the loans have a default status, while nearly 80 percent do not.

Note: The loan status classes are imbalanced; defaults account for only about one-fifth of the total. As a result, upsampling will be used to address the problem.
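As a rough sketch of what the consolidation and upsampling steps could look like in code, the snippet below assumes a pandas DataFrame loaded from the accepted-loans file with a `loan_status` column; the file name and exact status labels are assumptions, not taken from the project's source code.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical file name for the LendingClub accepted-loans export
df = pd.read_csv("accepted_loans.csv", low_memory=False)

# Consolidate "Default" and "Charged Off" into a single positive class
df["is_default"] = df["loan_status"].isin(["Default", "Charged Off"]).astype(int)

# Upsample the minority (default) class until it matches the majority class
majority = df[df["is_default"] == 0]
minority = df[df["is_default"] == 1]
minority_upsampled = resample(
    minority,
    replace=True,              # sample with replacement
    n_samples=len(majority),   # match the majority class size
    random_state=42,
)
df_balanced = pd.concat([majority, minority_upsampled])
```

In practice the upsampling would normally be applied only to the training split, so that duplicated minority rows do not leak into the test set.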

Figure 1.1: Loan status categories

There are a total of 150 features in the dataset. Among them, interest rate, loan amount, and grade seem relevant for predicting default risk because they are generated from the loan application and reflect the applicant's financial status from the bank's or lender's perspective. As Figure 1.2 shows, the cloud of default applicants (blue dots) is mostly scattered in the higher range of interest rates. The same holds for grade, as Figure 1.3 shows: the lower the grade, the higher the interest rate.
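Plots along the lines of Figures 1.2 and 1.3 could be sketched as below; the column names `int_rate`, `loan_amnt`, and `grade` are assumptions based on typical LendingClub field names, and the exact axes used in the original figures are not stated in the post.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Subsample for plotting speed; 2 million rows would overwhelm a scatter plot
sample = df_balanced.sample(20_000, random_state=42)

fig, axes = plt.subplots(1, 2, figsize=(12, 5))

# Default vs. non-default applicants against loan amount and interest rate
sns.scatterplot(data=sample, x="loan_amnt", y="int_rate",
                hue="is_default", alpha=0.3, ax=axes[0])
axes[0].set(xlabel="Loan amount", ylabel="Interest rate (%)")

# Interest rate by grade: lower grades should show higher rates
sns.boxplot(data=sample, x="grade", y="int_rate",
            order=sorted(sample["grade"].dropna().unique()), ax=axes[1])
axes[1].set(xlabel="Grade", ylabel="Interest rate (%)")

plt.tight_layout()
plt.show()
```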

Debt and Income

Figure 1.2

For the behavioral features, debt-to-income ratio and annual income were plotted against interest rate and loan grade. As illustrated in the figures, default applicants appear to cluster around low debt-to-income ratios with high interest rates. The plot of annual income versus loan grade shows that the higher the income, the higher the grade, yet the odds of default appear similar in both groups.

Note: Features with more than 80% missing values were eliminated during the EDA process, simply because that degree of missingness would introduce unwanted noise. The figures above also show signs of anomalies, which will be removed during preprocessing for logistic regression.
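A minimal sketch of that screening step, continuing from the balanced DataFrame above, might look like this:

```python
# Drop any feature whose share of missing values exceeds 80%
missing_rate = df_balanced.isna().mean()
cols_to_drop = missing_rate[missing_rate > 0.8].index
df_clean = df_balanced.drop(columns=cols_to_drop)
print(f"Dropped {len(cols_to_drop)} of {df_balanced.shape[1]} features")
```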

Figure 1.3

For the remaining missing data, zero imputation and derived calculations were the strategies I relied on most. This is because the majority of the missing data was of the application type, and there are relationships between some of the features. For example, the bankcard utilization rate is computed by dividing the total bankcard balance by the total bankcard credit limit, so imputing it with that calculation makes sense.
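The derived-calculation idea could be sketched as follows; the column names `bc_util`, `bc_balance`, and `bc_limit` are hypothetical stand-ins for the actual LendingClub field names.

```python
import numpy as np

# Fill missing bankcard utilization from its definition:
# utilization = total bankcard balance / total bankcard credit limit
mask = df_clean["bc_util"].isna() & df_clean["bc_limit"].gt(0)
df_clean.loc[mask, "bc_util"] = (
    df_clean.loc[mask, "bc_balance"] / df_clean.loc[mask, "bc_limit"] * 100
)

# Any remaining numeric gaps fall back to zero imputation
numeric_cols = df_clean.select_dtypes(include=np.number).columns
df_clean[numeric_cols] = df_clean[numeric_cols].fillna(0)
```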

Data Models and Results 

Logistic regression, support vector machine (SVM), random forest, and CatBoost were used to perform this classification task. Logistic regression and support vector machines are commonly used in banking, particularly for risk modeling. Of the two, logistic regression has consistently proven to be one of the most effective strategies for credit risk modeling; it is also widely used to build scorecard models, which take a default risk model as their foundation. Random forest and CatBoost are tree-based models that handle classification through techniques such as bagging and ensembling; one advantage of using them is that they are simple to implement and require little preprocessing.
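The four model families could be set up roughly as below, assuming a numeric feature matrix `X` and the binary `is_default` target `y`. The hyperparameters are illustrative defaults rather than the tuned values from the project, and `loss="log_loss"` may need to be `"log"` on older scikit-learn versions.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from catboost import CatBoostClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    # SGD-based logistic regression (log loss) and linear SVM (hinge loss)
    "sgd_logreg": make_pipeline(StandardScaler(),
                                SGDClassifier(loss="log_loss", max_iter=1000)),
    "sgd_svm": make_pipeline(StandardScaler(),
                             SGDClassifier(loss="hinge", max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, n_jobs=-1,
                                            random_state=42),
    "catboost": CatBoostClassifier(iterations=500, verbose=0, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
```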

Data Analysis

According to the plot above, the stochastic gradient descent-based SVM has an average score of 0.66 and a running time of 1.8 seconds. SGD was chosen as the optimization strategy because MLE-based (maximum likelihood estimation) SVM is slow and imprecise on large datasets, particularly those with more than 10,000 rows. The model would need further tuning of its hyperparameters and kernel transformation.

Random forest scored similarly to SVM. After 20 to 30 iterations, both models tend to overfit: training and testing scores begin to stagnate after an initial improvement. This is because both models pick up detail and noise in the training data to an extent that hurts their performance on new data. However, training the random forest took substantially less time than the SVM; both models should perform better with further tuning.

CatBoost achieved the highest score of the four models while requiring only one-hot encoding for its categorical features. Both the F1 score and accuracy were 98 percent, precision was 94 percent, and recall was 98 percent.

Finally, with an average F1 score of 0.89 and an accuracy of 0.88, stochastic gradient descent-based logistic regression beats both SVM and random forest. A precision of 0.98 means that 98 percent of the loans the model flags as default actually defaulted, while a recall of 0.77 means the model catches 77 percent of the actual defaults. The model's precision and recall should improve with more tuning.
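The reported metrics can be reproduced for each fitted model with a sketch like the one below, continuing from the `models` dictionary above.

```python
from sklearn.metrics import classification_report

# Accuracy, precision, recall, and F1 for each model on the held-out test set
for name, model in models.items():
    y_pred = model.predict(X_test)
    print(name)
    print(classification_report(y_test, y_pred, digits=2))
```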

Conclusion

Logistic regression and CatBoost outperform the other two models, with CatBoost ranking highest across all scoring metrics. However, if forced to choose one, logistic regression would be a decent choice. Its predictability can be improved further with more tuning, and it can be adapted for use in scorecard modeling and risk analysis.

About Author

Evin

With a bachelor's degree in Finance and a bachelor's degree in Statistics, Wei (Evin) Lin is a certified data scientist. He has more than two years of finance and accounting internship experience in the area of sales and trade,...
