Supervised Learning With Kaggle Titanic Dataset

Posted on Jun 11, 2018


Why I Chose This Project

Kaggle's Titanic competition offers an introduction to supervised learning. The goal is to build an accurate classification model that predicts whether a passenger survived the sinking of the Titanic. It is a helpful exercise for reinforcing the fundamentals of machine learning, and there are plenty of resources available to guide the work and deepen understanding of those fundamentals. I had no experience in programming or advanced mathematics before starting the bootcamp, so I wanted to stay focused on the basics; this competition was the most sensible choice for my needs.

Questions to Answer

I created a Jupyter notebook that is split into two distinct parts.  The first is an overview of fundamental and important concepts of machine learning, and the second is the application of those concepts on the Titanic dataset.  As I began the process, I set out to answer the following questions:

  • What's the difference between supervised and unsupervised learning?
  • What's the difference between regression and classification in supervised learning?
  • What's the difference between a model and an algorithm?
  • What is the workflow for machine learning modeling?
  • What's the difference between feature engineering, feature importance, and feature selection?
  • What's the difference between training, validation, and testing data sets?
  • What are features and response variables?
  • What's the difference and relationship between bias and variance?
  • What's the difference between boosting and bagging?
  • What's the difference between regularization, normalization, and generalization in machine learning?
  • What is cross-validation?
  • How do I select the best model for my application?
  • What's the difference and relationship between model parameters and hyperparameters?
  • What does "tuning" parameters mean?
  • How do I determine the best model parameters?
  • What is a confusion matrix?
  • What's the difference and relationship between sensitivity and specificity?
  • What is a dummy variable?
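To ground the last question above: a dummy variable is a 0/1 column that encodes one level of a categorical feature. A minimal sketch with pandas, using the Titanic's Embarked column as the example (the four sample values below are illustrative):

```python
import pandas as pd

# A categorical column such as the Titanic's port of embarkation
df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})

# get_dummies creates one 0/1 indicator column per category
dummies = pd.get_dummies(df["Embarked"], prefix="Embarked")
print(dummies.columns.tolist())  # ['Embarked_C', 'Embarked_Q', 'Embarked_S']
```

Each row then has exactly one 1 among the three indicator columns, which lets algorithms that expect numeric input consume the categorical feature.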


Where and How I Extracted the Data

The data came from the Kaggle website, split into training and testing CSV files.
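Loading the split is a one-liner per file with pandas (`pd.read_csv("train.csv")` and `pd.read_csv("test.csv")` for Kaggle's standard download). The sketch below reads a tiny in-memory sample with a subset of train.csv's columns; the two rows are illustrative, not real passenger records:

```python
import io

import pandas as pd

# Tiny stand-in for Kaggle's train.csv, with a subset of its columns
sample_csv = io.StringIO(
    "PassengerId,Survived,Pclass,Sex,Age,SibSp,Parch,Fare,Embarked\n"
    "1,0,3,male,22.0,1,0,7.25,S\n"
    "2,1,1,female,38.0,1,0,71.28,C\n"
)
train = pd.read_csv(sample_csv)
print(train.shape)  # (2, 9)
```

The real training file also carries the Survived column, while the Kaggle test file omits it; that column is the response variable the models predict.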

The dataset was structured with the following features:

  • survival: Survival (0 = No, 1 = Yes)
  • pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd)
  • sex: Sex
  • age: Age in years
  • sibsp: # of siblings / spouses aboard the Titanic
  • parch: # of parents / children aboard the Titanic
  • ticket: Ticket number
  • fare: Passenger fare
  • cabin: Cabin number
  • embarked: Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)

Variable notes:

  • pclass: a proxy for socio-economic status (SES): 1st = Upper, 2nd = Middle, 3rd = Lower
  • age: fractional if less than 1; if the age is estimated, it is in the form xx.5
  • sibsp: the dataset defines family relations this way: Sibling = brother, sister, stepbrother, stepsister; Spouse = husband, wife (mistresses and fiancés were ignored)
  • parch: the dataset defines family relations this way: Parent = mother, father; Child = daughter, son, stepdaughter, stepson. Some children traveled only with a nanny, so parch = 0 for them.

How I Analyzed the Data

Feature Engineering

I performed some exploratory data analysis to get a feel for which features appeared to have a significant effect on survival rate and the number of missing values in the dataset.

I then filled in the missing values and engineered some new features to make the dataset more machine-readable.  This was in preparation for applying multiple algorithms within the SciKit-Learn library.
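A minimal sketch of this step, using a hypothetical mini-frame in place of the Titanic training data: fill missing Age values with the median, fill missing Embarked values with the most common port, and map Sex to a numeric column so scikit-learn can consume it.

```python
import pandas as pd

# Hypothetical mini-frame standing in for the Titanic training data
df = pd.DataFrame({
    "Age": [22.0, None, 38.0, None],
    "Sex": ["male", "female", "female", "male"],
    "Embarked": ["S", "C", None, "S"],
})

# Fill missing Age with the median and Embarked with the mode
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])

# Encode Sex as 0/1 so the algorithms can use it
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})

print(df.isnull().sum().sum())  # 0 -- no missing values remain
```

Median and mode imputation are simple defaults; fancier strategies (e.g. imputing Age by passenger title or class) are common in Titanic notebooks but are not shown here.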

Feature Selection

I used a random forest classifier to decrease the dimensionality of the feature set, in an attempt to distill the dataset down to only the features with a significant correlation to the survival outcome. Feature selection is useful because it reduces data redundancy and overfitting, and it also speeds up training.
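The idea can be sketched as follows, using scikit-learn's `feature_importances_` attribute on a synthetic stand-in for the engineered Titanic features (the data and the mean-importance threshold are illustrative choices, not the notebook's exact procedure):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the engineered feature matrix and survival labels
X, y = make_classification(n_samples=200, n_features=6,
                           n_informative=3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Keep only features whose importance clears a simple threshold
importances = rf.feature_importances_
keep = importances > importances.mean()
X_reduced = X[:, keep]
print(X_reduced.shape[1], "features kept out of", X.shape[1])
```

Dropping the low-importance columns before training shrinks the search space for every downstream algorithm.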


Next, I applied cross-validation to get a more robust idea of how well each algorithm might do on the testing data.
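Cross-validation splits the training data into several folds, holds each fold out in turn for validation, and averages the resulting scores. A minimal 5-fold sketch with scikit-learn, again on synthetic stand-in data (the classifier choice here is just an example):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the prepared Titanic features and labels
X, y = make_classification(n_samples=200, random_state=0)

# cv=5 trains and validates on five different train/validation splits
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # average accuracy across the five folds
```

Comparing each algorithm's mean cross-validation score gives a more robust ranking than a single train/test split would.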


Finally, I evaluated the models using a confusion matrix to determine how many false positives and false negatives were predicted by each model. I also visualized the ROC curve and Area Under the Curve to determine the performance of each classification model.
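Both evaluations are available in scikit-learn. A sketch on synthetic stand-in data, with a gradient boosting classifier as the example model: the confusion matrix counts true/false positives and negatives, and the AUC summarizes the ROC curve as a single number.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared features and survival labels
X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rows are true classes, columns predictions: [[TN, FP], [FN, TP]]
cm = confusion_matrix(y_te, model.predict(X_te))

# AUC is computed from predicted probabilities for the positive class
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(cm)
print(round(auc, 3))
```

An AUC of 0.5 means the model is no better than chance, while 1.0 means it ranks every positive above every negative.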


Insights Gleaned

The features that had a significant impact on survival rate were age, fare, and sex. The gradient boosting model achieved the best results on my test dataset and received the best score on my submissions to Kaggle.

Improvements to Be Made

This is an evolving Jupyter notebook that I will continue to refine as I practice machine learning. I wanted to create something I could share with others starting their own journey into machine learning, something I can turn to when I need a refresher on a concept, and a reference point to mark my progress. My goal is to score in the top 20% of this competition within the next year.

You can view the project on my GitHub here:

About Author

Keenan Burke-Pitts

Keenan has over 3 years of experience communicating and assisting in software and internet solutions to clients. Moving forward, Keenan plans to leverage his technical abilities, communication skills, and business understanding in the digital marketing world. Keenan graduated...
