Heart Disease Estimation with Logistic Regression: R Shiny App

Posted on Nov 11, 2022

https://github.com/robertjgarciaphd/Capstone-R-Shiny-Heart-Disease.git

Problem

The question behind this work is whether we can use the health information at our disposal to predict heart disease more accurately than we have been able to so far.  To provide some context and highlight the scope of the problem, here are some statistics from the CDC about heart disease in the United States:

  • Heart disease is the leading cause of death for men, women, and people of most racial and ethnic groups in the United States.
  • One person dies every 34 seconds in the United States from cardiovascular disease.
  • About 697,000 people in the United States died from heart disease in 2020—that’s 1 in every 5 deaths.
  • Heart disease cost the United States about $229 billion each year from 2017 to 2018. This includes the cost of healthcare services, medicines, and lost productivity.

https://www.cdc.gov/heartdisease/facts.htm

To sum it up, heart disease causes a tremendous annual loss of life, money, and productivity.

Not surprisingly, it turns out that this is not an easy problem to solve.  A 2020 study at UT Southwestern showed that using sophisticated genetic testing does not greatly improve predictions based on traditional risk factors like high blood pressure, cholesterol levels, diabetes, and smoking status (1). A 2019 study of about 423,000 UK Biobank records achieved an area under the ROC curve (AUC) of only .774 (2).  When I looked for research using more sophisticated methods like neural networks and other machine learning techniques, the articles reporting accuracy closer to 100% tended to be based on small datasets of only about 300 observations (3, 4). Based on these studies, unfortunately, we should not expect high accuracy even if we do all the modeling right.

Task

I challenged myself to make an app that estimates the probability of heart disease from user input.  Framing the outcome as a binary variable, heart disease versus no heart disease, pointed to logistic regression, since machine learning techniques are currently beyond the scope of my training.  To make the app interactive, I chose an R Shiny interface.

Dataset

Originally, the dataset came from the CDC and is a major part of the Behavioral Risk Factor Surveillance System (BRFSS), which conducts annual telephone surveys on the health status of U.S. residents. BRFSS collects data in all 50 states as well as the District of Columbia and three U.S. territories and completes more than 400,000 adult interviews each year, making it the largest continuously conducted health survey system worldwide.

The most recent dataset (as of February 2022) includes data from 2020. I downloaded a filtered version of it from Kaggle that contained about 320,000 rows and 17 columns. Most columns are questions asked of respondents about their health status, such as "Do you have serious difficulty walking or climbing stairs?" or "Have you smoked at least 100 cigarettes in your entire life?".
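For readers following along, here is a minimal sketch of loading and inspecting the data.  The file name heart_2020_cleaned.csv is an assumption based on the Kaggle download, so adjust it to match your local copy.

    # Load the Kaggle CSV (file name is an assumption; adjust as needed)
    df <- read.csv("heart_2020_cleaned.csv", stringsAsFactors = FALSE)
    dim(df)   # roughly 320,000 rows
    str(df)   # most columns load as character or numeric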

Data Processing

Here is a look at the raw data.  Notice that HeartDisease on the left is binary in the form of "Yes" or "No", while other variables like BMI and PhysicalHealth are numeric.  AgeCategory is a good example of something that appears numeric but is actually an ordinal categorical variable.  Most of the variables were of either numeric or character type.  I had to convert most of them to factors, often binary or ordinal, and to do some re-coding, as in the case of recategorizing gestational diabetes as "No".

  • Binary: df$Diabetic <- as.factor(df$Diabetic)
  • Ordinal: df$GenHealth = factor(df$GenHealth, levels = c("Poor", "Fair", "Good", "Very good", "Excellent"))
  • Recoding: df$Diabetic[df$Diabetic == 'Yes (during pregnancy)'] <- "No"
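Putting those pieces together, here is a sketch of the preprocessing step.  The column names match the Kaggle dataset, but the exact set of conversions shown is illustrative rather than a copy of the app's full preprocessing script.

    # Convert Yes/No survey responses to binary factors
    binary_cols <- c("HeartDisease", "Smoking", "AlcoholDrinking", "Stroke",
                     "DiffWalking", "PhysicalActivity", "Asthma",
                     "KidneyDisease", "SkinCancer")
    df[binary_cols] <- lapply(df[binary_cols], as.factor)

    # Recode gestational diabetes as "No" before converting to a factor
    df$Diabetic[df$Diabetic == "Yes (during pregnancy)"] <- "No"
    df$Diabetic <- as.factor(df$Diabetic)

    # Ordinal variables keep a meaningful level order
    df$GenHealth <- factor(df$GenHealth,
                           levels = c("Poor", "Fair", "Good",
                                      "Very good", "Excellent"))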

Model Building

The dataset was highly imbalanced, with less than 10% of the sample having heart disease, so I needed a sampling technique to balance the classes.  The two most relevant options were oversampling, which randomly resamples with replacement from the underrepresented group, and ROSE (Random Over-Sampling Examples), which generates synthetic data to balance out the underrepresented group.

I created training and testing subsets from the data, 80% and 20% of the sample respectively, and used them to determine that the oversampling approach yielded a better area under the ROC curve, suggesting higher overall predictive accuracy. As you can see, the two sampling approaches produced quite similar ROC curves, but the oversampling curve sits slightly higher, giving it slightly more area.
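Here is a sketch of that comparison using the ROSE package; the variable names are illustrative, and it assumes the preprocessed df from above.

    library(ROSE)

    # 80/20 train/test split
    set.seed(42)
    train_idx <- sample(nrow(df), size = 0.8 * nrow(df))
    train <- df[train_idx, ]
    test  <- df[-train_idx, ]

    # Oversampling: resample the minority class with replacement
    over <- ovun.sample(HeartDisease ~ ., data = train, method = "over")$data

    # ROSE: generate synthetic examples to balance the classes
    rose <- ROSE(HeartDisease ~ ., data = train)$data

    # Fit a logistic regression on each balanced set, then compare test AUC
    fit_over <- glm(HeartDisease ~ ., data = over, family = binomial)
    fit_rose <- glm(HeartDisease ~ ., data = rose, family = binomial)

    roc.curve(test$HeartDisease, predict(fit_over, test, type = "response"))
    roc.curve(test$HeartDisease, predict(fit_rose, test, type = "response"))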

The AUC (area under the ROC curve) for the ROSE method was .778

The AUC for the ROC curve for the oversampling method was .789

Using the dataset generated with oversampling yielded a McFadden pseudo-R² of .288, suggesting a good degree of model fit.
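McFadden's pseudo-R² compares the fitted model's log-likelihood to that of an intercept-only model.  Here is a minimal sketch, reusing the hypothetical fit_over and over objects from the previous snippet.

    # Intercept-only baseline on the same oversampled data
    null_fit <- glm(HeartDisease ~ 1, data = over, family = binomial)

    # McFadden pseudo-R2: 1 - logLik(full) / logLik(null)
    1 - as.numeric(logLik(fit_over)) / as.numeric(logLik(null_fit))

    # Equivalently: pscl::pR2(fit_over)["McFadden"]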

Here are the model coefficients.  I highlighted the biggest risk factors in red and the biggest protective factors in green.  A history of stroke and being over 50 are the biggest sources of increased risk for heart disease, while reporting very good or excellent general health is the biggest source of reduced risk.

Here I show the confusion matrix for the oversampled data.  You can see that the incorrect predictions from the model number in the tens of thousands, which illustrates how hard it is to make accurate predictions even from a large dataset.  Using that matrix, I can provide standard metrics of model performance.  The accuracy, or total correct predictions over total predictions, was .75.  The precision, or the proportion of positive identifications the model got correct, was only .22, so the model is essentially trigger-happy with positive identifications.  The recall, or the proportion of actual positives the model identified correctly, was .78.
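Those metrics follow directly from the confusion matrix.  Here is a sketch of how they can be computed, thresholding predicted probabilities at .5 and reusing the hypothetical fit_over and test objects from above.

    probs <- predict(fit_over, test, type = "response")
    pred  <- ifelse(probs > 0.5, "Yes", "No")
    cm <- table(Predicted = pred, Actual = test$HeartDisease)

    accuracy  <- sum(diag(cm)) / sum(cm)              # correct / total
    precision <- cm["Yes", "Yes"] / sum(cm["Yes", ])  # TP / predicted positives
    recall    <- cm["Yes", "Yes"] / sum(cm[, "Yes"])  # TP / actual positives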

App Design

Here are some key features I considered while designing the app:

  • Allow users to enter responses to health queries
  • Include a button to calculate after users make a selection
  • Have repeated button presses perform new calculations with updated menu selections (see the sketch below)
  • Display caveats about accuracy and precision
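To make the button-driven flow concrete, here is a minimal Shiny sketch with just three of the inputs.  It assumes a hypothetical model (fit_small) trained on only those predictors, so it illustrates the structure rather than reproducing the full app.

    library(shiny)

    # Hypothetical model using only the three inputs shown below
    fit_small <- glm(HeartDisease ~ Stroke + GenHealth + BMI,
                     data = over, family = binomial)

    ui <- fluidPage(
      sidebarLayout(
        sidebarPanel(
          selectInput("stroke", "Have you ever had a stroke?", c("No", "Yes")),
          selectInput("genHealth", "How is your general health?",
                      c("Poor", "Fair", "Good", "Very good", "Excellent")),
          numericInput("bmi", "What is your BMI?", value = 25),
          actionButton("calc", "Calculate")
        ),
        mainPanel(textOutput("risk"))
      )
    )

    server <- function(input, output) {
      # Recompute only when Calculate is pressed, using current selections
      result <- eventReactive(input$calc, {
        newdata <- data.frame(Stroke = input$stroke,
                              GenHealth = input$genHealth,
                              BMI = input$bmi)
        predict(fit_small, newdata, type = "response")
      })
      output$risk <- renderText({
        sprintf("Estimated probability of heart disease: %.1f%%", 100 * result())
      })
    }

    shinyApp(ui, server)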

App Interface

Here’s a look at how the app turned out.  Once you run the Preprocessing file in the GitHub repository, the use of the app is straightforward: you simply answer the health questions on the left and press the Calculate button when you are ready to view the result.

Insights

So what did we discover from this?  Unfortunately, nothing too profound.  The biggest risk factors identified by the model, a history of stroke and age over 50, are unsurprising.  The same goes for the biggest protective factors, reporting very good or excellent general health.  Perhaps more interesting is the finding that it appears difficult to surpass 75% accuracy with this approach.  This highlights how much science still does not know about the specific causes of heart disease across genetic and lifestyle variations.

Future Directions

With more time, I would like to revise the output of the app to provide a confidence interval, giving users a better idea of the range of risk associated with their specific health data.  I would like to study machine learning and neural network techniques to build more sophisticated classifiers.  I would also like to see whether details about family history, diet, and genetic markers could enhance the predictions with richer data.

Photo Credit: Kenny Eliason: https://unsplash.com/@neonbrand

About Author

Robert Garcia

Robert holds a Ph.D. in Psychology and Social Behavior, an M.A. in Social Ecology, and a B.S. in Cognitive Science. He has experience working in both the for-profit and non-profit sectors.

