Predicting Early Readmissions in Diabetic Patients

Posted on Feb 4, 2021

Authors: Yi Cao, Matt Hope, Ava Park



The cost of healthcare in the United States is much higher than that in other developed countries, and is on track to continue growing as a portion of gross domestic product, despite efforts to rein in cost. Given the challenges in the healthcare sector, data science as a field has a role to play in producing actionable insights to bring down costs and improve outcomes for patients. In this capstone project, our team combined both data analysis and machine learning techniques to predict early hospital readmission in diabetic patients across the country.

Diabetes is a prevalent and costly disease in the United States, and recent numbers from the American Diabetes Association (ADA) have illuminated the extent of the problem. For example, an estimated 25 million adult Americans (9.8% of the population) have diagnosed diabetes, making it one of the most common medical conditions. Furthermore, in 2017, the ADA estimated that 277,000 deaths were attributable to the disease. Diabetes is a chronic condition that frequently co-occurs with other conditions, such as heart disease, kidney disease, high blood pressure, or stroke. Accordingly, only about 85,000 of those deaths listed diabetes itself as the primary cause, while the majority listed other comorbid conditions.

In addition to its impact on the health of patients, diabetes also has a tremendous impact on the cost of care. The ADA reports that in 2017, diabetes incurred approximately $237 billion in direct healthcare costs, and an additional $90 billion in lost economic productivity. Healthcare spending on diabetes has increased substantially in the last decade, and it's estimated that one out of every four healthcare dollars is spent on patients with diabetes.

As mentioned above, diabetes is a chronic condition, with some patients managing well and other patients requiring frequent trips to the hospital. Inpatient hospital visits represent a significant driver of healthcare costs, and hospitals are incentivized to prevent early and/or frequent readmission of patients. In this project, we took on the role of data scientists making recommendations to a hospital that treats diabetic patients. Since resources are typically limited, hospitals must choose which patients to target for intervention, either during their inpatient stay or through preventative measures after discharge. The goal of our project was two-fold: to explore which attributes of a patient are correlated with readmission, and to build a supervised machine-learning model to help hospitals predict whether a patient will be readmitted within 30 days.


Data description:

Our dataset is an extract from the Health Facts database representing 10 years (1999-2008) of clinical care data at 130 hospitals and integrated delivery networks throughout the United States. The data was extracted based on these criteria:

  • The record was an inpatient encounter (a hospital admission).
  • Diabetes was entered into the system as a diagnosis.
  • The length of stay was >= 1 day and <= 14 days.
  • Laboratory tests were performed during the encounter.
  • Medications were administered during the encounter.

After some cleaning, our dataset consisted of 69,970 unique encounters, each representing a distinct patient. We modeled our target as a binary variable: the patient either was or was not readmitted within 30 days. The features in the dataset include basic demographics such as age, gender, and race, as well as features describing patient diagnoses, medications, and medical history. Patient diagnoses were encoded as International Statistical Classification of Diseases (ICD) codes, which cover a multitude of different medical conditions. Given the high cardinality of these features, we focused our attention on conditions with more than 500 patients in our dataset, and similarly on medications prescribed to at least 500 patients. These choices ultimately led to a cleaned dataset with 93 distinct features, which we examined with exploratory data analysis.
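The frequency-threshold filtering described above can be sketched as follows. The column names (`diag_1`, `readmitted`) and toy values are assumptions mirroring the dataset's structure, not its exact schema, and the 500-patient threshold is lowered to 2 for the toy data:

```python
import pandas as pd

# Toy encounter table; column names and codes are illustrative only.
df = pd.DataFrame({
    "diag_1": ["250.0", "401.9", "250.0", "786.5", "250.0"],
    "readmitted": ["<30", "NO", ">30", "<30", "NO"],
})

# Binarize the target: readmitted within 30 days or not.
df["readmit_30d"] = (df["readmitted"] == "<30").astype(int)

# Keep only diagnosis codes that appear for at least `min_patients`
# encounters (the post uses 500; 2 here so the toy data has a survivor).
min_patients = 2
counts = df["diag_1"].value_counts()
common_codes = counts[counts >= min_patients].index

# One-hot encode only the sufficiently common codes.
diag_dummies = pd.get_dummies(df["diag_1"])[common_codes]
```

The same count-and-filter step would be repeated for each medication column before assembling the 93-feature design matrix.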


Exploratory Analysis (EDA):

We examined each of the patient encounter features in relation to our binarized target variable, ‘readmit_30d’: whether a patient was readmitted within 30 days or not. Below we have summarized the features for which we observed a relatively strong association with readmit_30d. The vertical bars in the plots represent 95% confidence intervals.

  • ‘age’ (age groups): patients aged 60 or above, especially those between 70 and 90, were more likely to be rehospitalized early than patients in the other age groups.

  • ‘discharge_disposition_id’ (where patients were discharged to after the encounter): patients who were discharged to another type of inpatient care institution (id 5) or to another rehab facility (id 22) had a much higher chance of being rehospitalized early.

  • ‘num_diagnoses’ (number of diagnoses entered into the system): patients who ended up rehospitalized early had had more diagnoses on average entered into the record for their prior encounter.

  • ‘num_medications’ (number of distinct, generic medications administered during the encounter): patients who ended up rehospitalized early had been put on more medications on average during their prior encounter.

  • ‘num_lab_procedures’ (number of lab tests performed during the encounter): patients who ended up rehospitalized early had been given more lab tests on average during their prior encounter.

  • ‘num_emergency’ (number of emergency visits in the year preceding the encounter): patients who ended up rehospitalized early had a higher average number of emergency visits in the previous year.

  • ‘num_inpatient’ (number of inpatient visits in the year preceding the encounter): patients who ended up rehospitalized early had a higher average number of inpatient visits in the previous year.
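Each of the group-level comparisons above boils down to a readmission rate per group with a 95% confidence interval (the vertical bars in the plots). A minimal sketch of that computation, using a normal approximation and toy counts rather than the study's data:

```python
import numpy as np

def rate_with_ci(readmit_flags):
    """Readmission rate and a 95% normal-approximation CI for one group."""
    flags = np.asarray(readmit_flags, dtype=float)
    n = len(flags)
    p = flags.mean()
    half_width = 1.96 * np.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Toy example: 30 readmissions out of 200 encounters in one age group.
flags = np.array([1] * 30 + [0] * 170)
p, lo, hi = rate_with_ci(flags)  # rate 0.15, CI roughly (0.10, 0.20)
```

Non-overlapping intervals between groups (e.g. age bands) are what justify calling an association "relatively strong" in the summaries above.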


Model testing with decile analysis

Using our findings from the EDA, we then constructed a simple logistic regression model with six input variables and no regularization or parameter tuning. The six variables were:

  • Discharged to a rehabilitation facility
  • Discharged to another type of inpatient care institution
  • Discharged to home
  • Number of inpatient visits
  • Number of emergency visits
  • Age

This model achieved a test AUC-ROC score of 0.595 and a maximum lift of 2.30 in the decile analysis. This indicated that even the simple model could distinguish the two classes to an extent; in the highest-risk patient group (the top decile stratified by predicted probability), the model captured more than twice as many actual readmissions as a random selection of the same size would.
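A minimal sketch of fitting such a model and computing the top-decile lift with scikit-learn, on synthetic stand-in data (the real six features and labels are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for six features; the outcome depends weakly on the
# first two, mimicking the weak signal seen in the post.
n = 5000
X = rng.normal(size=(n, 6))
logits = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, proba)

# Decile analysis: readmission rate among the top-scoring 10% of patients,
# divided by the overall base rate, gives the lift for the 10th decile.
order = np.argsort(-proba)
top = order[: len(order) // 10]
lift = y_te[top].mean() / y_te.mean()
```

A lift of 2.30, as reported above, means the top decile contains 2.3 times as many true readmissions as its share of the population would suggest.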

Encouraged by this result, we then ran several other supervised machine-learning models to explore whether we could improve on this score. The model test outcomes are summarized in the table below. For each model, we also conducted its own decile analysis to measure how much better readmission can be predicted with the model than without it.


| Model | # of features | Features selected via | Class imbalance remedy | AUC-ROC (train and test) | Cum lift from 10th decile |
|---|---|---|---|---|---|
| Simple Logistic Regression | 6 | EDA | class_weight='balanced' | 0.595 (test) | 2.30 |
| Regularized Logistic Regression | 43 | The top 43 features from a full model with 93 features | class_weight='balanced' | — | — |
| Decision Tree | 43 | The top 43 features from a full model with 93 features | class_weight='balanced_subsample' | — | — |
| Random Forest | 43 | The top 43 features from a full model with 93 features | class_weight='balanced_subsample' | — | — |
| Gradient Boost | 43 | The top 43 features from a full model with 93 features | Random oversampling | 0.610 (test) | 2.41 |

All of the more complex models demonstrated similar predictive capability, with AUC-ROC scores hovering around 0.60. The best test performance came from the gradient boost classifier, which yielded a test AUC-ROC score of 0.610 and a maximum lift of 2.41.
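The class-imbalance remedy used for the gradient boost model, random oversampling, can be sketched by hand (the post does not say which library performed the resampling, so this is an assumed implementation on synthetic data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Synthetic, imbalanced data standing in for the 43-feature training set.
n = 2000
X = rng.normal(size=(n, 5))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 2.0)))).astype(int)

# Random oversampling: resample the minority class (readmitted) with
# replacement until the two classes are the same size.
minority = np.flatnonzero(y == 1)
majority = np.flatnonzero(y == 0)
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
idx = np.concatenate([majority, minority, extra])

X_bal, y_bal = X[idx], y[idx]  # balanced training set
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_bal, y_bal)
```

Oversampling is applied only to the training split; evaluation still uses the untouched, imbalanced test data so that AUC-ROC and lift reflect real-world class proportions.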

In addition, the features deemed most ‘important’ (by coefficient magnitude for the linear models, or by feature importance for the tree-based models) were fairly consistent across these models. We observed that while some features, such as the number of inpatient/emergency visits, discharge to a rehabilitation facility or another inpatient care facility, and the number of lab procedures, were positively correlated with the probability of readmission, other features, such as a respiratory-symptom diagnosis or a back/neck-pain diagnosis, appeared to be ‘protective’ against early rehospitalization.
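Checking that agreement amounts to comparing rankings of coefficient magnitudes against tree-based importances. A toy sketch with hypothetical feature names (not the study's actual variables), where only the first feature carries signal, so both models should rank it first:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
# Only feature 0 drives the outcome in this toy setup.
y = (rng.random(1000) < 1 / (1 + np.exp(-2.0 * X[:, 0]))).astype(int)

names = np.array(["num_inpatient", "num_emergency", "num_lab", "age"])

lr = LogisticRegression().fit(X, y)
rf = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by |coefficient| (linear) and by impurity-based importance.
lr_rank = names[np.argsort(-np.abs(lr.coef_[0]))]
rf_rank = names[np.argsort(-rf.feature_importances_)]
```

When very different model families produce similar rankings, as reported above, it lends credibility to the features themselves rather than to any one model's quirks.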


Cost analysis:

To determine which model we would recommend a hospital use to target patients for health interventions, we conducted a cost analysis for the simple model and for the gradient boost model, using cost figures from a study by the American Diabetes Association (see figure below).

On average, the annual cost of inpatient care for a diabetes-related hospitalization was $2,820 for patients of all ages, and $4,075 for those 65 and above. The simple model correctly predicted readmission for 163 of the 289 patients in the highest-risk group (10th decile). If those 163 readmissions could be averted through an intervention, it would save approximately $460K in inpatient care costs for patients of all ages, or $664K if all the patients were 65 and above. Similarly, the gradient boost model could reduce inpatient care costs by $499K for patients of all ages, or $721K if all of them were 65 and above. Although the AUC-ROC scores of the two models were similar, the gradient boost model had a much higher potential to reduce diabetes-related inpatient-care costs, because each additional correctly flagged patient carries a high marginal cost of care.
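The savings figures above are straightforward arithmetic: averted readmissions multiplied by the cost per stay. A quick check using the post's numbers:

```python
# Cost per diabetes-related inpatient stay, from the ADA figures cited above.
COST_ALL_AGES = 2_820  # USD, patients of all ages
COST_65_PLUS = 4_075   # USD, patients 65 and older

def projected_savings(averted_readmissions, cost_per_stay):
    """Inpatient-care savings if the flagged readmissions are averted."""
    return averted_readmissions * cost_per_stay

# The simple model correctly flagged 163 top-decile patients.
simple_savings = projected_savings(163, COST_ALL_AGES)   # 459,660 ≈ $460K
simple_savings_65 = projected_savings(163, COST_65_PLUS) # 664,225 ≈ $664K
```

The same function applied to the gradient boost model's correctly flagged patients reproduces the $499K and $721K figures cited above.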



Our exploration and analysis yielded several models of varying complexity, from a very simple logistic regression model to a gradient boosted tree-based model. The AUC-ROC scores for the models were very similar, with the more complex models showing a marginally better score than the simple one. This raises an important question: should a hospital implement the simple model to track diabetic patients, or is the marginal increase in the AUC-ROC score worth implementing a more complex model? In sectors outside of healthcare, the simple model would likely be preferable; however, we found that the high cost of care for diabetic patients makes even small increases in AUC-ROC worthwhile.

Based on our estimates, the difference in cost savings between the simple and complex model was $39K per year for the general diabetic population, and $57K for a population 65 and older. The potential savings of implementing a more complex model could therefore be substantial, which underscores the importance of following through with a careful cost analysis. We note that the amount of money saved by a hospital will depend on several factors, primarily the patient population it serves, the effectiveness and cost of the intervention, and the cost of implementing and tracking the features in the more complex model. We therefore expect that the choice of model would vary from hospital to hospital, depending on its particular needs.

Lastly, we want to point out that the features deemed important were broadly consistent across the models and logically interpretable. For instance, features that indicate previous interactions with the healthcare system, like the number of inpatient hospital visits, were strongly correlated with rehospitalization. Some of the important features were more unexpected but still interpretable, such as being discharged to a rehabilitation facility, or being diagnosed with a cough and/or musculoskeletal problems. These results might suggest to the hospital how to go about implementing an intervention. For instance, our models would suggest paying special attention to patients discharged to rehabilitation facilities, to make sure their blood sugar is controlled while receiving treatment for addiction. In this way, we envision data scientists playing a useful role in both building models and implementing them in the real world, in order to bring down healthcare costs and improve care.

