Data Integration and Loan Default Risk Analysis Using Machine Learning

Posted on Jun 23, 2019

GitHub link to the project


Marketplace lending is an alternative to traditional financial institutions: it pairs borrowers directly with lenders through online lending platforms. The concept was originally called 'peer-to-peer' lending, meaning that individuals are financed by other 'peer' investors. As more institutional investors have joined the market, the term has evolved into 'marketplace lending'.

The first company to offer marketplace lending was Zopa, a UK company founded in February 2005. In the US, Prosper, founded in February 2006, was the first marketplace lending player, and LendingClub joined the competition shortly after. LendingClub initially launched as a Facebook application; after receiving $10.26 million in Series A funding in August 2007, it became a full-scale company. LendingClub went public in 2014 and is now the largest online lending platform.

As more and more online lending platforms join the competition, one major issue for investors is how to integrate the different formats of data so that they can compare the performance of loans across companies and project the risk of the pools they finance.

Project overview

The goal of this project is to use machine learning to integrate different sources of data and predict loan default risk. The first step in data integration is to match column names that have the same meaning. Online lending platforms usually provide a data dictionary that contains the column names and their respective definitions. The column names often contain abbreviations or jargon whose meaning people from other industries cannot tell without reading the definitions, so matching columns based on their definitions is the best approach. In many companies this is still a manual process, and it is time-consuming when there are many columns to pair. Fortunately, with the advancement of NLP algorithms, it is possible to let computers decide the closest match. With that in mind, we turned to BERT, an NLP model developed by Google. Unlike traditional NLP models, which read the text input sequentially, BERT performs bidirectional training for language modeling and therefore produces better contextual representations for each word. That capability makes BERT well suited to pairing sentences with similar semantic meaning.

Project infrastructure setup

In this project, different services are containerized in separate Docker containers. The main entry point is a Flask API; the BERT encoding service and the Dash interactive plots are deployed as stand-alone services in their own containers. I used bert-as-service to map sentences to fixed-length vectors, then computed the normalized dot product as a score to rank sentence similarity. BERT provides pre-trained models that can be used directly for sentence encoding. The LendingClub data dictionary is read in first and used as the reference for finding the best match for each Prosper column. As an example, when the "Term" column of the Prosper data is chosen, which is defined as "The length of the loan expressed in months", the Flask app returns the top 10 closest matches, and the "term" column of LendingClub, defined as "The number of payments on the loan. Values are in months and can be either 36 or 60", is ranked first.
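The ranking step reduces to cosine similarity between the fixed-length sentence vectors. A minimal sketch of that scoring, with toy vectors standing in for real bert-as-service output (the column names and embeddings below are made up for illustration):

```python
import numpy as np

def rank_matches(query_vec, ref_vecs, ref_names, top_k=3):
    """Rank reference columns by normalized dot product (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    R = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    scores = R @ q
    order = np.argsort(scores)[::-1][:top_k]
    return [(ref_names[i], float(scores[i])) for i in order]

# Toy stand-ins for BERT sentence embeddings of column definitions.
ref_names = ["term", "int_rate", "loan_amnt"]
ref_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.0],
                     [0.0, 0.2, 0.9]])
query = np.array([0.8, 0.2, 0.1])  # e.g. the encoded Prosper "Term" definition

print(rank_matches(query, ref_vecs, ref_names))
```

In the real pipeline the vectors come from the bert-as-service client rather than being hard-coded, but the ranking logic is the same.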

Top 10 matches for Prosper column "Term"

I also performed EDA to assess origination and loan statistics for LendingClub loans, using Dash with Plotly Express to create interactive dashboards.
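The dashboards are driven by simple aggregations of the loan book. A minimal sketch of one such aggregation with pandas, using invented column names and toy records (the real LendingClub schema differs):

```python
import pandas as pd

# Toy loan records standing in for the LendingClub origination file.
loans = pd.DataFrame({
    "issue_year": [2016, 2016, 2017, 2017, 2017],
    "grade":      ["A", "B", "A", "B", "B"],
    "loan_amnt":  [10000, 8000, 12000, 6000, 9000],
})

# Origination volume by year and grade -- the kind of table a
# Dash / Plotly Express bar chart would be built from.
volume = (loans.groupby(["issue_year", "grade"])["loan_amnt"]
               .sum()
               .reset_index())
print(volume)
```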

To predict the default risk of loans, I trained a random forest model. The model also produces feature importances: as the results show, the top feature determining default risk is outstanding principal, followed by interest rate, debt-to-income ratio, and the borrower's credit history.

Feature importance in random forest model
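Extracting importances from a fitted random forest is a one-liner in scikit-learn. A sketch on synthetic data (the feature names and data below are invented; the real model was trained on the integrated loan table):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["out_prncp", "int_rate", "dti", "credit_hist_len"]

# Synthetic loans: default driven mostly by the first two columns.
X = rng.normal(size=(2000, 4))
y = (X[:, 0] * 1.5 + X[:, 1] * 1.0
     + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Importances sum to 1; a higher value means splits on that feature
# reduce impurity more across the forest.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```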


For the future direction of this project, I would like to fine-tune the BERT model to increase matching accuracy. Currently, the Flask API allows uploading CSV files of LendingClub loans and returns predictions of loan status. In the future, I want to integrate dashboard generation and loan status prediction, so that uploading a CSV file generates a full report.
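The post does not show the API's actual routes, so here is a minimal sketch of such an upload-and-predict endpoint, with a stub standing in for the trained random forest (the route name, form field, and scoring rule are all assumptions for illustration):

```python
import io
import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_status(row):
    """Stub in place of the trained model; the real scoring logic differs."""
    return "Default" if float(row.get("out_prncp", 0)) > 10000 else "Fully Paid"

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a CSV file of loans under the multipart form field "file".
    text = request.files["file"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(text)))
    return jsonify([predict_status(r) for r in rows])

# Exercise the endpoint without a running server via Flask's test client.
client = app.test_client()
payload = b"out_prncp\n15000\n500\n"
resp = client.post("/predict",
                   data={"file": (io.BytesIO(payload), "loans.csv")},
                   content_type="multipart/form-data")
print(resp.get_json())  # one predicted status per CSV row
```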

About Author

Jun Kui Chen

Jun obtained his Ph.D. in Immunology from Columbia University. He is currently working as a Data Analyst at a fintech startup.
