Supervised Learning With Kaggle Titanic Dataset

Keenan Burke-Pitts
Posted on Jun 11, 2018

Purpose

Why I Chose This Project

Kaggle.com offers an introduction to supervised learning through the Titanic dataset. The goal of the competition is to build an accurate classification model that predicts whether a passenger survived the sinking of the Titanic. It is a helpful exercise for reinforcing the fundamentals of machine learning, and there are plenty of resources available to fill in the gaps and deepen understanding of those fundamentals. I had no experience in programming or advanced mathematics before starting the bootcamp, so I wanted to stay focused on the basics. This competition was the most sensible choice for my needs.

Questions to Answer

I created a Jupyter notebook that is split into two distinct parts.  The first is an overview of fundamental machine learning concepts, and the second is the application of those concepts to the Titanic dataset.  As I began the process, I set out to answer the following questions:

  • What's the difference between supervised and unsupervised learning?
  • What's the difference between regression and classification in supervised learning?
  • What's the difference between a model and an algorithm?
  • What is the workflow for machine learning modeling?
  • What's the difference between feature engineering, feature importance, and feature selection?
  • What's the difference between training, validation, and testing data sets?
  • What are features and response variables?
  • What's the difference and relationship between bias and variance?
  • What's the difference between boosting and bagging?
  • What's the difference between regularization, normalization, and generalization in machine learning?
  • What is cross-validation?
  • How do I select the best model for my application?
  • What's the difference and relationship between model parameters and hyperparameters?
  • What does "tuning" parameters mean?
  • How do I determine the best model parameters?
  • What is a confusion matrix?
  • What's the difference and relationship between sensitivity and specificity?
  • What is a dummy variable?

Process

Where and How I Extracted the Data

The data came from the Kaggle website, split into training and testing CSV files.

The dataset was structured with the following features:

  • survival: Survival (0 = No, 1 = Yes)
  • pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd)
  • sex: Sex
  • age: Age in years
  • sibsp: # of siblings / spouses aboard the Titanic
  • parch: # of parents / children aboard the Titanic
  • ticket: Ticket number
  • fare: Passenger fare
  • cabin: Cabin number
  • embarked: Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)

Variable notes:

  • pclass: A proxy for socio-economic status (SES): 1st = Upper, 2nd = Middle, 3rd = Lower.
  • age: Age is fractional if less than 1. If the age is estimated, it is in the form of xx.5.
  • sibsp: The dataset defines family relations in this way: Sibling = brother, sister, stepbrother, stepsister; Spouse = husband, wife (mistresses and fiancés were ignored).
  • parch: The dataset defines family relations in this way: Parent = mother, father; Child = daughter, son, stepdaughter, stepson. Some children traveled only with a nanny, therefore parch = 0 for them.

How I Analyzed the Data

Feature Engineering

I performed some exploratory data analysis to get a feel for which features appeared to have a significant effect on survival rate and the number of missing values in the dataset.

I then filled in the missing values and engineered some new features to make the dataset more machine-readable.  This was in preparation for applying multiple algorithms within the SciKit-Learn library.
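As a rough sketch of what that cleaning step can look like, here is a minimal pandas example. The column names match the Kaggle dataset, but the tiny in-memory sample and the specific imputation choices (median age, most common port) are illustrative assumptions, not the exact code from my notebook:

```python
import pandas as pd

# Small in-memory sample with the same columns as Kaggle's train.csv
# (a stand-in for pd.read_csv("train.csv")).
train = pd.DataFrame({
    "Survived": [0, 1, 1, 0],
    "Pclass":   [3, 1, 3, 2],
    "Sex":      ["male", "female", "female", "male"],
    "Age":      [22.0, 38.0, None, 35.0],
    "Fare":     [7.25, 71.28, 7.92, 8.05],
    "Embarked": ["S", "C", "S", None],
})

# Fill missing values: median age, most common port of embarkation.
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Embarked"] = train["Embarked"].fillna(train["Embarked"].mode()[0])

# Engineer machine-readable features: encode Sex as 0/1 and
# one-hot encode Embarked into dummy variables.
train["Sex"] = train["Sex"].map({"male": 0, "female": 1})
train = pd.get_dummies(train, columns=["Embarked"], prefix="Emb")

print(train.isnull().sum().sum())  # 0 -- no missing values remain
```

The dummy-variable step is what makes categorical columns usable by SciKit-Learn estimators, which expect numeric input.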

Feature Selection

I used a random forest classifier to reduce the dimensionality of the feature set, in an attempt to distill the dataset down to only the features that had a significant correlation with survival outcome. Feature selection is useful because it reduces redundancy in the data, mitigates overfitting, and speeds up the training process.

https://gist.github.com/Kiwibp/fce38fcb4e0e55d0af51d1621cd0ba2d
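The gist above holds the notebook code; the sketch below shows the general pattern of ranking features by a random forest's importance scores and keeping those above a threshold. The synthetic data, feature names, and the 0.05 cutoff are assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the engineered Titanic features
# (in the notebook, X and y come from the cleaned training DataFrame).
X, y = make_classification(n_samples=200, n_features=8,
                           n_informative=3, random_state=42)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X, y)

# Rank features by importance and keep only those above a threshold,
# reducing dimensionality before modeling.
importances = dict(zip(feature_names, rf.feature_importances_))
selected = [name for name, imp in importances.items() if imp > 0.05]
print(sorted(importances, key=importances.get, reverse=True))
print(selected)
```

The importances sum to 1 across all features, so the threshold is relative: it keeps features that carry more than their "fair share" of the predictive signal.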

Modeling

Next, I applied cross-validation to get a more robust idea of how well each algorithm might do on the testing data.

https://gist.github.com/Kiwibp/7cc55e06faeb0e0bdb60d630beb5f92a
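In outline, that comparison looks like the following sketch: several classifiers scored with 5-fold cross-validation on the same data. The particular models and the synthetic data here are stand-ins, not the exact set from the notebook:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the cleaned Titanic training data.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# Score each candidate model with 5-fold cross-validation: a more
# robust estimate of out-of-sample accuracy than a single split.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boost": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, score in scores.items():
    print(f"{name}: {score:.3f}")
```

Averaging across the five folds smooths out the luck of any single train/validation split, which is exactly why cross-validated scores are a better guide for model selection.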

Evaluation

Finally, I evaluated the models using a confusion matrix to determine how many false positives and false negatives were predicted by each model. I also visualized the ROC curve and Area Under the Curve to determine the performance of each classification model.

https://gist.github.com/Kiwibp/e2b8f186d18836f19415f87d0a292fe4
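A minimal version of that evaluation step is sketched below, using a logistic regression on synthetic data as a stand-in for the notebook's models. It pulls false positives and false negatives out of the confusion matrix and computes AUC from predicted probabilities:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Titanic features and survival labels.
X, y = make_classification(n_samples=300, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Confusion matrix: rows are true classes, columns are predicted classes,
# so the off-diagonal entries count false positives and false negatives.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"FP={fp}, FN={fn}")

# AUC summarizes the ROC curve: the probability that a random positive
# example is ranked above a random negative one.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC={auc:.3f}")
```

An AUC of 0.5 is no better than chance and 1.0 is perfect ranking, which makes it a convenient single number for comparing classifiers.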

Results

Insights Gleaned

The features that had the most significant impact on survival rate were age, fare, and sex. The gradient boosting model achieved the best results on my test dataset and received the best score on my submissions to Kaggle.

Improvements to Be Made

This is an evolving Jupyter notebook that I will continue to refine as I practice machine learning. I wanted to create something I could share with others who are also starting their journey into the world of machine learning, something I can refer to when I need a refresher on a concept, and a reference point to mark my progress. My goal is to score in the top 20% of this competition within the next year.

You can view the project on my GitHub here: https://github.com/Kiwibp/NYC-DSA-Bootcamp--Machine-Learning.

About Author

Keenan Burke-Pitts


Keenan has over 3 years of experience communicating and delivering software and internet solutions to clients. Moving forward, Keenan plans to leverage his technical abilities, communication skills, and business understanding in the digital marketing world. Keenan graduated...
