Finding factors influencing NICU admissions

Introduction:

Every year, more than 4 million families in the United States visit maternity wards to bring new children into the world. About ten percent of mothers experience complications that result in their newborns being admitted to an intensive care unit. Using annual data from the Centers for Disease Control and Prevention (CDC), our goal is to understand trends and relationships among pregnancy risk factors affecting delivery complications, congenital abnormalities, and admission to the neonatal intensive care unit (NICU). Through a Tableau dashboard, our analysis presents predictive models that offer insight for healthcare policymakers. This application can help systemically vulnerable populations, hospitals, and primary care providers optimize the distribution of limited resources, and it can identify preventative measures that allow expecting families to better prepare for the challenges of pregnancy.

The Dataset: Extraction and Preparation

The CDC annually publishes records of all births and pregnancy-related events for independent research and analysis. Between 2014 and 2018 there were approximately four million births per year. Our team collected data covering over 200 features characterizing the parents, the method of delivery, the health of the newborn, and admission to the NICU (or to the ICU for the mother), totaling approximately 25 GB. The data is distributed in a fixed-width format, which requires decoding each field by its character positions. Each year's release includes a PDF user guide detailing the file layout, the encoding of each feature, and technical notes explaining data collection and imputation.

Our application uses tailored parsers to extract each year from the original files into pandas DataFrames, which are saved as CSV files for further manipulation. The five years together contained approximately 20 million observations, which we down-sampled to 5%, or about 200,000 observations per year. This sampling rate was validated by comparing means and standard deviations, which showed no statistically significant differences between the sampled and full distributions for either numerical or categorical features. Within this sample, a marginal share (less than 3%) of observations had missing or unknown values in boolean features; where applicable, these values were excluded.
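The extraction step above can be sketched with pandas' fixed-width reader. This is a minimal illustration, not our full parser: the column names and character positions here are hypothetical stand-ins for the layout that each year's CDC user-guide PDF actually specifies.

```python
import io
import pandas as pd

# Illustrative layout only: real field positions come from the CDC
# user-guide PDF for each year's natality file.
colspecs = [(0, 4), (4, 6), (6, 7)]           # birth year, mother's age, NICU flag
names = ["dob_yy", "mager", "ab_nicu"]

# Tiny stand-in for one year's fixed-width natality file.
raw = io.StringIO(
    "201428Y\n"
    "201433N\n"
    "201419U\n"
)

# Decode each field by its character positions into a DataFrame.
df = pd.read_fwf(raw, colspecs=colspecs, names=names, dtype=str)

# Down-sample as described above; on the full data this would be frac=0.05.
sample = df.sample(frac=1.0, random_state=42)
```

From here each year's frame can be written out with `df.to_csv(...)` for later manipulation, as described above.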

After extraction and removal of redundant features, the remaining 109 columns were analyzed in depth with respect to NICU admittance.

Analysis:

Between 2014 and 2018, the average rate of NICU admission among live births was 8.65%. The graph below shows that NICU admissions are increasing over time. While some of this increase may reflect improvements in record keeping that reduce the number of cases labeled unknown, it remains a worrying trend for families given the emotional, physical, and financial costs involved.
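The yearly rate behind this graph can be computed with a simple groupby. This is a sketch on a toy frame; the column names `dob_yy` and `ab_nicu` (with values 'Y', 'N', 'U' for unknown) are assumptions standing in for the actual CDC fields.

```python
import pandas as pd

# Toy stand-in for the combined 2014-2018 sample.
df = pd.DataFrame({
    "dob_yy": [2014, 2014, 2015, 2015, 2016, 2016],
    "ab_nicu": ["Y", "N", "N", "Y", "N", "N"],
})

# Exclude unknowns, then take the share of 'Y' per year.
known = df[df["ab_nicu"] != "U"]
rate = (known["ab_nicu"] == "Y").groupby(known["dob_yy"]).mean()
```

Plotting `rate` over the five years yields the time series shown in Figure 1.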

Figure 1. Yearly NICU Admittance

With this in mind, the remaining features have been divided to consider factors occurring during pregnancy, during delivery, and factors measured after delivery. Each group of features is examined first for relevance to NICU admittance, and then the factors from the first group are re-examined for ties to the most relevant factors in the latter two groups. 

Figure 2 shows lower NICU admittance among women giving birth at ages 20-29, while women over the age of 40 have the highest admittance. The time series indicates that the number of mothers in their 30s is increasing while the number of teenage mothers is decreasing, suggesting that the rising rate of NICU admissions is likely to continue. Note also that the number of mothers in the highest-risk group, those over 40, is two orders of magnitude smaller than in the lowest-risk group, mothers in their 20s.

Figure 2a: NICU Admittance over time per Mother's Age

Figure 2b: NICU Admittance per Mother's Age

Following age, the next feature examined that broadly relates to overall health is the body mass index (BMI), calculated from height and weight. BMI is often used as a baseline indicator of long-term health risks. Being within the 'normal' BMI range has a slight protective effect against NICU admission, while any level of obesity is associated with up to a 3.4% increase, as shown in Figure 3b. It is worth noting that being moderately under or over the ideal range had no significant effect, and the CDC collects no data on whether dietary or nutritional needs are met.
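For reference, BMI is weight in kilograms divided by height in meters squared, and the categories discussed above follow the standard cut points. A minimal sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Standard WHO cut points for the categories used in Figure 3.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"
```

For example, a mother weighing 70 kg at 1.75 m has a BMI of about 22.9 and falls in the 'normal' range.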

Figure 3a: Count of Mother’s BMI
Figure 3b: Percentage of Mother’s BMI

In Figure 4 below, observe that smoking before becoming pregnant does not appear to pose a great risk. However, smoking during the first two trimesters of pregnancy slightly increases the risk of NICU admission to 9.78%, and smoking during the third trimester further increases it to 15.61%. While helping smokers quit in general is a desirable community healthcare goal, providing alternatives to cigarettes to women as they enter their third trimester appears to be of particular importance.

Figure 4: Smoking Risk by Trimester

Figure 5a: Count of Mother's Medical History Risk Factors
Figure 5b: Percentage of Mother's Medical History Risk Factors
Figure 6a: Count of Mother's Medical History Infections

Figure 6b: Percentage of Mother's Medical History Infections

Figure 7a: Count of Infant Congenital Factors
Figure 7b: Percentage of Infant Congenital Factors

We compared over X pre-existing conditions, and most remained roughly constant from year to year, showing little trend in either direction. One feature that did change was multiple births, which has been declining over time. Our goal is to find which features contributed most to an infant being admitted to the NICU, and to build a model that could help predict admittance. For this we used two basic classification models and compared their performance.

Regression:

Having developed some intuition from our graphical analysis, we identified features that appeared more predictive of NICU admittance. We then sought to validate these assumptions about feature importance with machine learning models.

As an initial simple predictor, we trained a logistic regression model to predict NICU admittance. Since logistic regression classifies between only two classes, we used the one-vs-rest method to create three sub-models, one for each possible outcome ('Yes', 'No', 'Unknown'), each predicting the probability that a given baby falls into its category (e.g., 'Yes' vs. 'Not Yes'). After computing the three probabilities, the combined model predicts the outcome with the highest probability; the coefficients of each sub-model then indicate which features it found most important.
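The one-vs-rest setup described above can be sketched with scikit-learn. The features and labels below are synthetic stand-ins, not the CDC data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: two numeric features and the three NICU outcomes.
X = rng.normal(size=(300, 2))
s = X[:, 0] + X[:, 1]
y = np.where(s > 1, "Yes", np.where(s < -1, "Unknown", "No"))

# One-vs-rest fits one binary logistic model per class and predicts the
# class whose sub-model reports the highest probability.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)
probs = clf.predict_proba(X)      # shape (n_samples, 3), one column per class
pred = clf.predict(X)
```

The per-class coefficients (`clf.estimators_[i].coef_`) are what we inspected for feature importance.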

However, this model proved neither very predictive nor reproducible. In Table 1, the results are reported as a matrix comparing predictions against the actual admittance data. The model's sensitivity, its ability to predict NICU admittance in cases where a baby was indeed admitted, is about 20%. This indicates that the model heavily favored the majority class 'No' regardless of the underlying data. Repeated tests on different subsets of the data yielded inconsistent feature rankings. This instability suggested that the relationships between our dependent and independent features were likely not linear, and it reinforced the suspicion that the model was biased toward the majority class.

Table 1 - Confusion Matrix for Regression Model
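Sensitivity can be recovered directly from such a confusion matrix as TP / (TP + FN). A minimal sketch with illustrative labels (1 = admitted, 0 = not admitted), chosen so the model rarely predicts admission:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

# Illustrative labels: 1 = admitted to NICU, 0 = not admitted.
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # model rarely says "admit"

cm = confusion_matrix(y_true, y_pred)   # rows: actual, columns: predicted
tn, fp, fn, tp = cm.ravel()

# Sensitivity (recall on the positive class): TP / (TP + FN).
sensitivity = tp / (tp + fn)
```

Here the model catches only 1 of 5 admitted cases, a sensitivity of 20%, mirroring the behavior we observed.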

Random Forest:

Because our regression model produced volatile results, we sought a more robust alternative. We chose a random forest classifier because the data did not show linear trends. We initially trained the random forest on a subset of our down-sampled data frame, a necessity given the size of the data and the high computational cost of a random forest. The result was a model with 92% accuracy, but the confusion matrix revealed that it simply predicted 'not admitted' in nearly every case. We therefore built a new down-sampled data frame with equal numbers of observations admitted and not admitted to the NICU. The model trained on this balanced data achieved 84% accuracy with a sensitivity of 80%, correctly predicting four out of five cases that were in fact admitted to the NICU.
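The class-balancing step and forest training can be sketched as follows. The frame, feature names, and the rule generating the labels are all synthetic assumptions for illustration; only the balancing technique itself reflects our approach:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Imbalanced toy frame: a minority of rows are NICU admissions, echoing
# the ~8.65% admission rate in the real data.
n = 2000
df = pd.DataFrame({
    "bweight": rng.normal(3300, 500, n),      # birth weight in grams
    "gest_weeks": rng.normal(39, 2, n),       # gestation length in weeks
})
df["nicu"] = ((df["bweight"] < 2500) | (df["gest_weeks"] < 35)).astype(int)

# Balance the classes: down-sample the majority ('not admitted') class
# to the size of the minority class before training.
pos = df[df["nicu"] == 1]
neg = df[df["nicu"] == 0].sample(n=len(pos), random_state=1)
balanced = pd.concat([pos, neg])

X = balanced[["bweight", "gest_weeks"]]
y = balanced["nicu"]
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Feature importances rank the predictors the forest relied on most.
importances = dict(zip(X.columns, clf.feature_importances_))
```

On the real data, the same importance ranking is what surfaced low birth weight and preterm/late-term birth as top features.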

Table 2 - Confusion Matrix for Random Forest Model

The top features selected by the model correspond to standard hospital admission practice: low birth weight, preterm and late-term birth, and serious health conditions. This model proved more robust and fairly accurate given the complexity of the features.

Conclusion:

Our time-series analysis showed that NICU admissions rose steadily between 2014 and 2018, driven in part by demographic shifts such as the growing number of mothers in their 30s and over 40. Among modifiable risk factors, obesity and especially third-trimester smoking stood out as targets for preventative care. While logistic regression proved too unstable for this data, a random forest trained on a class-balanced sample predicted NICU admission with 84% accuracy and 80% sensitivity, and its top features match standard hospital admission criteria: low birth weight, preterm and late-term birth, and serious health conditions. These findings are presented in an interactive Tableau dashboard for convenient exploration by policymakers, hospitals, and providers. Future work could extend the models with more recent years of CDC data and refine the feature set for clinical use.

About Authors

Marek Kwasnica

Marek is currently a data science fellow at NYC Data Science Academy. He has several years of experience in biomedical engineering research. He holds a Masters of Engineering in Biological Engineering from Cornell University. Marek is passionate about applying...

Mohamad Sayed

Mohamad has an MS in Operations Research Engineering from the University of Southern California. Prior to the bootcamp, he worked in a variety of roles, mainly supply chain and project management. Currently, Mohamad is a Data Science Fellow...

Connor Haas

Looking for a new opportunity, I recently graduated from a data science fellowship in Manhattan. I am an electrical engineering graduate with a computer science minor. I worked as an Inside Sales Engineer in a small company that...
