Lending Club Loans Payment Predictions
What is Lending Club?
Lending Club is a peer-to-peer (P2P) lending company headquartered in San Francisco, California. P2P lending is the practice of lending money to individuals or businesses through online services that match lenders with borrowers. Lending Club, like most other P2P lending companies, makes money through origination and service fees.
Why choose Lending Club?
Because its services are online, Lending Club can operate with lower overhead and thus offer its services more cheaply than traditional financial institutions. Lenders can earn higher returns than the savings and investment products offered by traditional banks, while borrowers can borrow money at lower interest rates.
As investors, there are two ways to invest in loans: 1) funding them directly as a lender, or 2) purchasing existing notes on the secondary market (a marketplace where loans and servicing rights are bought and sold between investors). The return on investment therefore depends on which method was chosen. So, as a data scientist, rather than predicting return, it is more tangible to predict the projected total payment percentage. Furthermore, simply predicting the probability of default does not give us the full picture: when someone defaults is just as important as whether someone defaults.
Since we are predicting the projected total payment percentage, we need loans with a final total_pymnt. Therefore, we will only consider loans that have either been fully paid or have gone beyond delinquency (charged off or defaulted).
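As a minimal sketch of that filter (the column names follow the public Lending Club CSV, but the rows below are invented for illustration):

```python
import pandas as pd

# Toy stand-in for the Lending Club loan book; only the two columns
# relevant to the filter are shown, with made-up rows.
df = pd.DataFrame({
    "loan_status": ["Fully Paid", "Current", "Charged Off",
                    "Default", "Late (31-120 days)"],
    "total_pymnt": [10500.0, 3200.0, 1800.0, 900.0, 4100.0],
})

# Keep only loans whose outcome is final: fully paid, or past the point
# of delinquency (charged off / defaulted). "Current" and "Late" loans
# are still in flight, so their total_pymnt is not final yet.
completed = df[df["loan_status"].isin(["Fully Paid", "Charged Off", "Default"])]
```

Anything still current or merely late gets dropped, since its total payment is still accumulating.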
As you can see, this dataset is imbalanced, but that should be expected: we do NOT want to see an even split between fully-paid loans and charged-off or defaulted ones. We shall now take a look at a few more variables/features in the dataset.
We can see that Lending Club has had tremendous growth in the dollar amount of loans issued since 2012. We can also check the growth for each credit grade.
Intuitively, the purpose of the loan is important as well. We can see what each credit grade is borrowing for.
Generally speaking, we can see that most loans are used for Credit Card payments and Debt consolidation. From here, we can also see what the most common credit grade is, as well as confirm if there's an inverse relationship between credit grade and interest rate.
We see that borrowers with credit grades B and C are the most common, and we can confirm that there is an inverse relationship between credit grade and interest rate: naturally, investors want a higher return for higher risk (in this case, a higher likelihood of default). Finally, we can see the most common loan terms for each credit grade.
Interesting: 36 months is the most common term for the higher credit grades A through D, but 60 months for E through G.
It is important that we select features that are available at loan origination. So I chose these ones: loan_amnt, term, int_rate, installment, grade, sub_grade, emp_title, emp_length, home_ownership, annual_inc, verification_status, purpose, zip_code, addr_state, dti, earliest_cr_line, fico_range_low, fico_range_high, open_acc, pub_rec, initial_list_status, total_pymnt, application_type, annual_inc_joint, dti_joint, verification_status_joint, pub_rec_bankruptcies.
Note: total_pymnt will be used to create our target variable.
Since there are no linear-regression assumptions to satisfy, we can use non-linear models that handle missing values natively, which is what I ended up doing. However, for practice, I did take a look through the features I selected. Much of the missing data involved "joint" status, likely because not all applicants apply with another person. Since the plan is to use XGBoost, which can handle missing values, the last thing we have to do is label-encode our categorical features.
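One simple way to label-encode, sketched on a toy frame (column names follow the Lending Club dataset; the values are made up): pandas category codes map each level to an integer, and assign NaN the code -1, which XGBoost can likewise treat as missing.

```python
import pandas as pd

# Toy frame standing in for a few of the selected Lending Club features.
df = pd.DataFrame({
    "grade": ["A", "B", "B", "C"],
    "home_ownership": ["RENT", "OWN", "RENT", "MORTGAGE"],
    "annual_inc": [55000, 72000, None, 48000],  # numeric NaN left as-is
})

# Label-encode every string (object-typed) column in place; numeric
# columns, including ones with NaN, are untouched.
for col in df.select_dtypes(include="object"):
    df[col] = df[col].astype("category").cat.codes
```

Categories are coded in sorted order, so "A" < "B" < "C" happens to preserve the grade ordering here; for truly unordered categoricals, a tree model like XGBoost only needs the codes to be consistent, not meaningful.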
As mentioned above, total_pymnt will be used to create our target variable. At loan origination, we can easily calculate the expected total payment: expected_total_pymnt = term * installment. From that, we create the target variable: projected_total_pymnt_percentage = total_pymnt / expected_total_pymnt.
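In pandas, the two definitions above are one-liners (toy numbers below; term is in months and installment is the monthly payment, as in the Lending Club data):

```python
import pandas as pd

# Two invented loans: one paid in full, one that stopped halfway.
df = pd.DataFrame({
    "term": [36, 60],
    "installment": [330.0, 220.0],
    "total_pymnt": [11880.0, 6600.0],
})

# Expected total payment if every installment were made on schedule.
df["expected_total_pymnt"] = df["term"] * df["installment"]

# Target: fraction of the expected total that was actually paid.
df["projected_total_pymnt_percentage"] = df["total_pymnt"] / df["expected_total_pymnt"]
```

A fully-paid loan lands at (or slightly above, with late fees and recoveries) 1.0, while a loan that defaulted partway through lands well below it.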
Model and Conclusion:
Here is the feature importance of my model. In this case, I used the "gain" metric which implies the relative contribution of the corresponding feature to the model. A higher value implies it is more important for generating a prediction.
On my test dataset, my model returned an RMSE of 0.22 and an R2 of 0.20. I calculated RMSE so that I can compare it with a different model (perhaps a linear regression) in the future. The R2 means my model explains 20% of the variance in the target variable.
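For reference, both metrics come straight from scikit-learn; the arrays below are invented stand-ins for the test-set targets and the XGBoost predictions, so the numbers they produce are not the ones reported above.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Toy projected-payment percentages: actual vs. predicted.
y_true = np.array([1.00, 0.45, 0.80, 0.30, 0.95])
y_pred = np.array([0.90, 0.55, 0.75, 0.50, 0.85])

# RMSE is in the same units as the target (a payment fraction here),
# which is what makes it comparable across different model families.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))

# R2: share of target variance explained relative to predicting the mean.
r2 = r2_score(y_true, y_pred)
```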
From here, I decided to test out two situations, both from the perspective of a lender. Therefore, the metric we are measuring, return, is in relation to the loan amount.
- Measuring against other forms of investment. For example, can I build a portfolio from Lending Club's available borrowers that outperforms the S&P 500? The S&P 500 has returned around 10.7% annually on average over the last 30 years and 13.9% over the last 10. I created a function that returns a list of borrowers with a higher projected return. That said, these returns are not risk-adjusted; more work could be done to create a more accurate representation.
- Portfolio comparison. For simplicity's sake, let's say there's a portfolio containing 25% of the B1 borrowers, chosen at random. Can our model outperform it? I created a function that selects the top 25% best-performing B1 borrowers according to my model, and compared that portfolio to the randomly selected one.
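The second comparison can be sketched in a few lines. This is not the author's actual function; the frame, the `borrower_id` and `predicted_pct` column names, and the prediction values are all invented for illustration.

```python
import pandas as pd

# Toy B1 cohort with hypothetical model predictions of the
# projected total payment percentage.
b1 = pd.DataFrame({
    "borrower_id": range(8),
    "predicted_pct": [1.05, 0.40, 0.98, 0.75, 1.10, 0.55, 0.90, 0.30],
})

n = len(b1) // 4  # 25% of the cohort

# Model portfolio: the top 25% of B1 borrowers by predicted payment pct.
model_portfolio = b1.nlargest(n, "predicted_pct")

# Baseline portfolio: 25% chosen at random (seeded for reproducibility).
random_portfolio = b1.sample(n, random_state=0)
```

Comparing the realized total_pymnt of the two groups then tells us whether the model's ranking carries real signal beyond chance.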
For the Future:
This model rests on assumptions that may not hold. For example, I pooled all the data together, i.e., treated all years as equal. That may not be accurate, since the macroeconomic environment plays a huge role in how loans/bonds perform. A time-series approach may yield a more accurate model.
Furthermore, interest rate and grade/sub-grade are all determined by Lending Club. If their method for determining these factors changes, there is no guarantee the model will keep working. Reverse-engineering these metrics ourselves and including them as inputs may produce a more robust model.