Data Science on Heart Disease Prediction

Posted on Oct 25, 2021

The skills I demoed here can be learned through NYC Data Science Academy's Data Science with Machine Learning bootcamp.

Data Science Background

Data shows that cardiovascular diseases are the number one cause of death globally. They are responsible for 17.9 million deaths per year, which accounts for 31% of deaths worldwide. 80% of those cardiovascular disease deaths are due to heart attacks and strokes, and a third of these deaths occur in people under the age of 70. Heart failure is common in people with cardiovascular diseases, so the ability to make early predictions of the risk of developing a cardiovascular disease is of great interest to health care providers and their patients. Here we present an exploratory data analysis and a machine learning model to better ascertain the risk of cardiovascular disease. [1]

Kaggle Data Set

The source used for this project is a heart failure prediction dataset on Kaggle.com. [1] It combines data from five hospitals and institutes: 303 observations from the Cleveland Clinic, 294 from the Hungarian Institute of Cardiology, 123 from the University Hospitals of Zurich and Basel, 200 from the VA Medical Center Long Beach, and 270 from the Statlog (Heart) dataset, all available through the UC Irvine Machine Learning Repository. After the 272 duplicate observations were removed, a final set of 918 observations remained. This is the largest dataset currently available for research into cardiovascular disease risk.

Data Features

The original datasets contained up to 76 features, but many of them had missing data. When the dataset was consolidated and features with many missing values were removed, 11 of them remained. These features include the age of the patient, sex, chest pain type, resting blood pressure, cholesterol, fasting blood sugar, resting Electrocardiogram (ECG), maximum heart rate, exercise-induced angina, oldpeak, and slope of peak exercise ST segment. These variables will be explained in greater detail throughout this blog post. In addition to these independent variables, the data also included a dependent variable indicating whether or not the patient had developed heart disease. 

In this article we investigate the use of these 11 features to predict heart disease risk. Early detection is of interest to insurance companies and hospitals, which stand to save money, as well as to aging patients and their families.

Figure 1.

The sex variable in this dataset was somewhat skewed toward males: there were 725 males and only 193 females. This is explained by the fact that males have a greater risk of developing heart disease and that this dataset is skewed toward patients who are at risk.

Chest pain type

Chest pain type is a categorical variable assigned one of four values. Angina is a type of chest pain caused by ischemia, or reduced blood flow to the heart. The first chest pain type, atypical angina (ATA), is chest pain that lacks the classic characteristics of angina. The second, non-anginal pain (NAP), is chest pain that is not caused by reduced blood flow to the heart. Asymptomatic (ASY) means that the patient has reduced blood flow to the heart but does not report any chest pain. Typical angina (TA) means that the patient reports chest pain and it is caused by reduced blood flow to the heart. The dataset at large contained a large number of asymptomatic chest pain types, which reflects how common and problematic silent ischemia is. As Dr. Peter Stone of Harvard Medical School notes, "People with heart disease have five to ten times as many episodes of silent ischemia as symptomatic ischemia." [2] The dataset is somewhat skewed toward people who are already at risk for developing heart disease, given the high number of asymptomatic cases.

Related Data Factors

Fasting blood sugar is related to heart disease. To detect this, physicians asked patients to fast overnight. If the blood sugar was elevated after fasting, a "1" was recorded, and a "0" otherwise. The majority of patients did not have elevated fasting blood sugar.
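The encoding above can be sketched in pandas. The glucose readings below are made up for illustration, and the 120 mg/dL cutoff is the threshold stated in the dataset's documentation; the published file already contains the 0/1 column:

```python
import pandas as pd

# Hypothetical raw fasting glucose readings in mg/dL.
readings = pd.Series([95, 130, 88, 145, 110])

# 1 if fasting blood sugar is elevated (> 120 mg/dL), 0 otherwise.
fasting_bs = (readings > 120).astype(int)
```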

Resting electrocardiogram (ECG) is a categorical variable given one of three values -- Normal, ST, or LVH. LVH is left ventricular hypertrophy, meaning the left ventricle of the heart is abnormally thick. ST indicates an ST-T wave abnormality in the electrocardiogram, such as T-wave inversions or ST-segment elevation or depression. Most of the patients had a normal resting ECG.

Exercise Angina

To detect exercise angina, physicians had patients exercise. If they then experienced chest pain due to the reduced blood flow, that would be a “Yes” and a “No” otherwise. The majority of patients did not experience exercise-induced angina. 

Figure 2.

Data on Electrocardiogram

On the right of Fig. 2 is a schematic of the general features of an electrocardiogram. The ST segment is the brief period between the QRS complex and the final feature, the T wave. The ST slope variable indicates whether the ST segment is flat, downsloping, or upsloping. In healthy patients the ST segment is slightly upsloping; a flat or downsloping segment can indicate an abnormality.

Figure 3. 

The dataset contained five numerical variables -- age, resting blood pressure, cholesterol, maximum heart rate, and oldpeak. The median age was 54, the median resting blood pressure was 130, and the median cholesterol was 230. The cholesterol column contained a substantial number of zero values, which were set to the median in the final analysis.
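The median imputation for the zero cholesterol values can be sketched as follows; the values are toy data and only the column name `Cholesterol` follows the Kaggle file:

```python
import pandas as pd

# Toy cholesterol values, with zeros standing in for missing measurements.
df = pd.DataFrame({"Cholesterol": [230.0, 0.0, 210.0, 0.0, 250.0]})

# Median over the nonzero entries only, so the zeros do not drag it down.
median_chol = df.loc[df["Cholesterol"] > 0, "Cholesterol"].median()

# Replace the zero placeholders with the median.
df.loc[df["Cholesterol"] == 0, "Cholesterol"] = median_chol
```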

Maximum heart rate is the highest heart rate the patient achieved during exercise; it had a mean of 136.8 and a median of 138. Oldpeak measures how far the ST segment is depressed below the baseline. A high oldpeak value can also indicate a risk of heart disease.

Figure 4.

Next Step

In the next step of our analysis, we examined how the distributions of our features differ between patients with and without heart disease. To aid in this analysis, we calculated the Pearson correlation of each feature with the outcome and present the features in order of the absolute value of that correlation. ST slope and exercise angina had the two strongest correlations with the development of heart disease. For ST slope, the incidence of the flat category jumped from 19.3% among healthy patients to 75% among those with heart disease. Exercise angina likewise increased from 13.4% to 62.2%.
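This correlation ranking can be reproduced along the following lines. The frame below is a made-up stand-in for the real data, with column names following the Kaggle file:

```python
import pandas as pd

# Synthetic stand-in for the heart dataset (values are illustrative only).
df = pd.DataFrame({
    "Oldpeak":      [0.0, 1.5, 2.3, 0.0, 1.0, 2.8],
    "MaxHR":        [172, 120, 108, 165, 140, 99],
    "RestingBP":    [120, 140, 130, 118, 135, 128],
    "HeartDisease": [0, 1, 1, 0, 1, 1],
})

# Pearson correlation of each feature with the target,
# ranked by absolute value as in the post.
corr = df.corr(numeric_only=True)["HeartDisease"].drop("HeartDisease")
ranked = corr.abs().sort_values(ascending=False)
```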

Figure 5.

Oldpeak and Max HR

The next two features highly correlated with heart disease were oldpeak and max HR. The density plots are shown in Fig. 5 as red for healthy patients and blue for those with heart disease. Healthy patients frequently did not have a depression of the ST segment, which appears as a spike around 0 for oldpeak. Those who developed heart disease more frequently had nonzero oldpeak values, so there is a significant positive shift in the oldpeak density plot. For patients with heart disease, there was also a decrease in the maximum heart rate that could be achieved during exercise when compared to healthy patients.

Figure 6.

There was also a correlation between chest pain type and heart disease. Those who developed heart disease exhibited substantially more of the asymptomatic (ASY) chest pain type (Fig. 6, purple bar), rising from 25% to 77%. There was also an increase in the percentage of males who developed heart disease versus females. Males made up 90% of patients with heart disease, whereas they made up only 65% of the healthy patient population.
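Percentage comparisons like these can be computed with a crosstab. The small frame here is hypothetical, and only the column names (`ChestPainType`, `HeartDisease`) follow the Kaggle file:

```python
import pandas as pd

# Hypothetical sample of chest pain types and outcomes.
df = pd.DataFrame({
    "ChestPainType": ["ASY", "ATA", "ASY", "NAP", "ASY", "TA", "ATA", "ASY"],
    "HeartDisease":  [1, 0, 1, 0, 1, 0, 0, 1],
})

# Percentage of each chest pain type within each outcome group,
# the kind of comparison behind Fig. 6.
pct = pd.crosstab(df["HeartDisease"], df["ChestPainType"],
                  normalize="index") * 100
```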

Figure 7.

Age plays a somewhat muted role in heart disease risk, with a Pearson correlation of 0.28; there is a slight shift in the density plots between patients with and without heart disease. There was also an increase in the proportion of elevated fasting blood sugar among the people who developed heart disease, but again, this effect was muted compared to some of the other features.


Figure 8.


Resting Blood Pressure and Resting ECG

The resting blood pressure and resting ECG did not differ significantly between heart disease patients and healthy patients. In summary, stress-induced conditions are better indicators of whether somebody is at risk for developing heart disease; metrics taken in a resting state, like resting BP and resting ECG, were among the worst predictors of heart disease.

The exploratory data analysis revealed a significant correlation between heart disease and the exercise-induced features -- ST slope, exercise angina, oldpeak, and max HR -- as well as chest pain type. There was a moderate correlation with sex, age, and fasting blood sugar, and little to no correlation with resting BP, resting ECG, or cholesterol. The best indicators appear to be the exercise-induced metrics.

Figure 9.

Finally, we also trained a random forest classifier with scikit-learn, which lets us examine how the Pearson correlations match up with the classifier's feature importances. The ordering of the features remains largely the same, although chest pain type, maximum heart rate, and cholesterol rise to a slightly greater level of importance in the random forest. The classifier used 80% of the data for training and 20% for testing, and achieved an accuracy of 87%, a significant improvement over the null model score of 55%. This validates the use of a random forest classifier and shows that machine learning can accurately predict whether or not a patient will develop heart disease.
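The modeling step can be sketched as follows, using a synthetic 918-row stand-in for the real data. The feature names and signal strengths here are assumptions, so the resulting accuracy will not match the 87% reported above:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: two informative features plus one noise feature.
rng = np.random.default_rng(0)
n = 918
y = rng.integers(0, 2, n)
X = pd.DataFrame({
    "Oldpeak":   y * 1.5 + rng.normal(0.0, 0.8, n),
    "MaxHR":     150 - y * 25 + rng.normal(0.0, 15.0, n),
    "RestingBP": rng.normal(130.0, 15.0, n),  # pure noise
})

# 80/20 train/test split, as in the post.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# Feature importances, the quantity compared against the
# Pearson correlations in Fig. 9.
importances = pd.Series(clf.feature_importances_, index=X.columns)
```

On the real data, the 55% null model baseline corresponds to always predicting the majority class.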

GitHub

References

[1] https://www.kaggle.com/fedesoriano/heart-failure-prediction

[2] https://www.health.harvard.edu/heart-health/angina-and-its-silent-cousin

About Author

Karl Lundquist

Karl is a data scientist with nine years of performing technical data analysis and research design in an academic setting. He is highly skilled at communicating complex analytic insights to a general audience. He is currently working to...

