Prediction Model of Metabolic Syndrome in Non-obese Body Population

Posted on Jun 21, 2019

Sangwoo Lee

Introduction

Metabolic syndrome is defined as a cluster of conditions that occur together and increase the risk of certain diseases.

A person is diagnosed as metabolic syndrome positive if they have three or more of the five conditions below (Fig. 1; a short R check implementing this rule is sketched after the list):

  1. Abdominal obesity, measured by waist circumference greater than 40 inches for men, or greater than 35 inches for women
  2. Triglyceride level of 150 milligrams per deciliter of blood (mg/dL) or greater
  3. HDL(high-density lipoprotein) cholesterol of less than 40 mg/dL for men or less than 50 mg/dL for women
  4. Systolic blood pressure (top number) of 130 mmHg or greater, or diastolic blood pressure (bottom number) of 85 mmHg or greater
  5. Fasting glucose of 100 mg/dL or greater
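
As a concrete illustration of this decision rule, the short R sketch below counts how many of the five criteria a person meets and flags three or more as positive. The function and argument names (waist_in, triglycerides, hdl, sbp, dbp, glucose, sex) are illustrative, not the actual column names of our EHR data.

    # Count how many of the five criteria are met; three or more -> positive.
    # Thresholds follow the definition above (waist in inches, labs in mg/dL, blood pressure in mmHg).
    is_met_syn_positive <- function(waist_in, triglycerides, hdl, sbp, dbp, glucose, sex) {
      criteria <- c(
        abdominal_obesity = ifelse(sex == "M", waist_in > 40, waist_in > 35),
        high_triglyceride = triglycerides >= 150,
        low_hdl           = ifelse(sex == "M", hdl < 40, hdl < 50),
        high_bp           = sbp >= 130 | dbp >= 85,
        high_glucose      = glucose >= 100
      )
      sum(criteria) >= 3
    }

    # A non-obese man (waist 34 in) can still be positive on the other criteria:
    is_met_syn_positive(waist_in = 34, triglycerides = 180, hdl = 38,
                        sbp = 128, dbp = 88, glucose = 105, sex = "M")  # TRUE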

Metabolic syndrome is a serious health condition that affects about 23 percent of adults. Metabolic syndrome positive persons are at higher risk of cardiovascular disease, diabetes, stroke and other diseases.

Metabolic syndrome is typically expected among obese people. However, there is another population of people who are not obese but are metabolic syndrome positive. These non-obese but metabolic syndrome positive people are usually unaware of their condition and may believe they are healthy.

In this project, we focused on these non-obese but metabolic syndrome positive people and developed a model to predict metabolic syndrome positiveness/negativeness in the non-obese population on the basis of demographic and environmental factors.

   

Fig. 1. Diagnosis of metabolic syndrome and its effects on diseases

Data Processing

An example of our EHR (electronic health records) data is shown in Table 1. With these EHR data, we performed the data processing and machine learning classification steps shown in Fig. 2.

In Table 1, we can see that there is significant output class imbalance. We address this by applying an oversampling method called SMOTE (synthetic minority oversampling technique) to the minority class records of the training set, followed by random downsampling of the majority class records of the training set. However, forcing a positive:negative ratio of 1:1 on the resampled training set could make its distribution very different from that of the test set, and overfitting problems may occur. As a solution, we chose not to aim at a positive:negative ratio of 1:1; in other words, even after oversampling/downsampling, some class imbalance remains.
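
A minimal sketch of this resampling step, assuming the (now archived) DMwR package, a training data frame train_set, and a two-level factor outcome column met_syn; the exact percentages we used may differ, and the ones below are deliberately chosen so the classes do not end up at 1:1.

    library(DMwR)  # provides SMOTE(); met_syn must be a two-level factor

    set.seed(0)
    # perc.over synthesizes new minority-class (positive) records;
    # perc.under randomly downsamples the majority-class (negative) records.
    train_balanced <- SMOTE(met_syn ~ ., data = train_set,
                            perc.over  = 200,   # 2 synthetic positives per original positive
                            perc.under = 300)   # keep 3 negatives per synthetic positive created
    table(train_balanced$met_syn)               # still imbalanced, by design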

As in Fig. 2, since we are focusing only on the non-obese population in this research, we selected only the records satisfying BMI (body mass index) < 25 kg/m2. After handling missing categorical variables, selecting for BMI < 25 kg/m2, and oversampling/downsampling, we ended up with 30,953 records in the training set and 17,514 records in the test set.
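
The non-obese filter itself is a simple subset; a sketch assuming a dplyr pipeline over a raw data frame ehr with illustrative column names height_cm and weight_kg:

    library(dplyr)

    ehr_nonobese <- ehr %>%
      mutate(bmi = weight_kg / (height_cm / 100)^2) %>%  # BMI in kg/m^2
      filter(bmi < 25)                                   # keep only the non-obese population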

   

Table 1. An EHR example

   

Fig. 2. Overall flow from data processing to classification

Since we are considering logistic regression as one of our machine learning algorithms, we also checked whether there is a linear relationship between the logit of the outcome and each of the predictor variables, as in Fig. 3.
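
One common way to run this check is to fit the candidate logistic regression, compute the logit of the fitted probabilities, and plot it against each continuous predictor; a roughly straight loess smooth supports the assumption. The sketch below assumes the resampled training set from above and illustrative predictor names.

    library(dplyr)
    library(tidyr)
    library(ggplot2)

    fit   <- glm(met_syn ~ age + bmi, data = train_balanced, family = binomial)
    probs <- predict(fit, type = "response")

    train_balanced %>%
      select(age, bmi) %>%
      mutate(logit = log(probs / (1 - probs))) %>%
      pivot_longer(-logit, names_to = "predictor", values_to = "value") %>%
      ggplot(aes(x = value, y = logit)) +
      geom_point(alpha = 0.2) +
      geom_smooth(method = "loess") +        # should look approximately linear
      facet_wrap(~ predictor, scales = "free_x")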

   

Fig. 3. Linear relationship checked between the logit of the outcome and each of the predictor variables.

We also checked whether there is little or no multicollinearity among the predictor variables, running the checks separately for the categorical and continuous variables (Fig. 4). We found notably high multicollinearity among the continuous variables. This is understandable, since BMI is calculated from height and weight, and height and weight themselves are usually close to linearly related.
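
For the continuous variables, the check can be as simple as a correlation matrix plus variance inflation factors from the car package (a sketch with illustrative column names; the categorical-variable checks are omitted here):

    library(car)

    # Pairwise correlations among the continuous predictors; height, weight, and BMI
    # are strongly related by construction, which is the multicollinearity seen in Fig. 4.
    cont_vars <- train_balanced[, c("age", "height_cm", "weight_kg", "bmi")]
    round(cor(cont_vars), 2)

    # Variance inflation factors for a model that includes all three body-size variables
    vif(glm(met_syn ~ age + height_cm + weight_kg + bmi,
            data = train_balanced, family = binomial))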

   

Fig. 4. Multicollinearity results

To reduce the high multicollinearity among the continuous predictor variables, we adopted three different models as in Fig. 2: model A considers BMI, model B considers height and weight, and model C considers all of BMI, height, and weight. All of models A, B, and C also include age, sex, smoking, alcohol, and exercise.
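
In R formula terms, the three predictor sets look roughly like this (a sketch with illustrative column names):

    # Common demographic/lifestyle predictors plus three body-size specifications
    formula_A <- met_syn ~ age + sex + smoking + alcohol + exercise + bmi
    formula_B <- met_syn ~ age + sex + smoking + alcohol + exercise + height_cm + weight_kg
    formula_C <- met_syn ~ age + sex + smoking + alcohol + exercise + bmi + height_cm + weight_kg

    fits <- lapply(list(A = formula_A, B = formula_B, C = formula_C),
                   glm, data = train_balanced, family = binomial)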

Results

Our machine learning modeling results can be found in Table 2. Since it is common in class-imbalanced problems to get high accuracy but low sensitivity (or high accuracy but low specificity), our parameter tuning was performed on the F1-score. We also evaluated performance in terms of accuracy, precision, sensitivity, and F1-score. The results in Table 2 show that both logistic regression and random forest classification are able to flag metabolic syndrome positive persons.
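
A small helper of the kind used to score each model on the test set (a sketch; the probability threshold, class labels, and column names are illustrative):

    # Confusion-matrix based metrics; "positive" is the metabolic syndrome class.
    score_model <- function(actual, predicted_prob, threshold = 0.5) {
      predicted <- ifelse(predicted_prob >= threshold, "positive", "negative")
      tp <- sum(predicted == "positive" & actual == "positive")
      tn <- sum(predicted == "negative" & actual == "negative")
      fp <- sum(predicted == "positive" & actual == "negative")
      fn <- sum(predicted == "negative" & actual == "positive")
      precision   <- tp / (tp + fp)
      sensitivity <- tp / (tp + fn)
      c(accuracy    = (tp + tn) / length(actual),
        precision   = precision,
        sensitivity = sensitivity,
        specificity = tn / (tn + fp),
        f1          = 2 * precision * sensitivity / (precision + sensitivity))
    }

    # e.g. score_model(test_set$met_syn, predict(fits$A, test_set, type = "response"))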

   

Table 2. Results from applying machine learning algorithms

Findings and Future Work

This research shows that, using basic health checkup results and machine learning, it is possible to predict metabolic syndrome positiveness with high accuracy, sensitivity, specificity, precision, and F1-score.

As future work, we plan to build an R Shiny app that predicts metabolic syndrome positiveness/negativeness for anyone interested, based on basic information such as height and weight entered into the app.
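
A minimal skeleton of such an app could look like the following; final_model and the input fields are placeholders for the trained model and its actual predictors.

    library(shiny)

    ui <- fluidPage(
      titlePanel("Metabolic syndrome risk (non-obese population)"),
      numericInput("age",    "Age",         40),
      numericInput("height", "Height (cm)", 170),
      numericInput("weight", "Weight (kg)", 65),
      selectInput("sex", "Sex", c("M", "F")),
      actionButton("go", "Predict"),
      textOutput("result")
    )

    server <- function(input, output) {
      output$result <- renderText({
        req(input$go)  # wait until the button is clicked
        newdata <- data.frame(age = input$age, height_cm = input$height,
                              weight_kg = input$weight, sex = input$sex)
        prob <- predict(final_model, newdata, type = "response")  # final_model: fitted glm
        paste0("Predicted probability of metabolic syndrome: ", round(prob, 2))
      })
    }

    shinyApp(ui, server)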

 

* Under submission to a journal
