Supervised Learning with the Kaggle Titanic Dataset
Purpose
Why I Chose This Project
Kaggle.com offers an introduction to supervised learning with the Titanic dataset. The goal of the competition is to build a classification model that accurately predicts whether a passenger survived the sinking of the Titanic. This is a helpful exercise for reinforcing the fundamentals of machine learning, and there are plenty of resources available to help fill in gaps and deepen understanding. I had no experience in programming or advanced mathematics before starting the bootcamp, so I wanted to stay focused on the basics. This competition was the most sensible choice for my needs.
Questions to Answer
I created a Jupyter notebook split into two distinct parts: the first is an overview of fundamental machine learning concepts, and the second is the application of those concepts to the Titanic dataset. As I began the process, I set out to answer the following questions:
- What's the difference between supervised and unsupervised learning?
- What's the difference between regression and classification in supervised learning?
- What's the difference between a model and an algorithm?
- What is the workflow for machine learning modeling?
- What's the difference between feature engineering, feature importance, and feature selection?
- What's the difference between training, validation, and testing data sets?
- What are features and response variables?
- What's the difference and relationship between bias and variance?
- What's the difference between boosting and bagging?
- What's the difference between regularization, normalization, and generalization in machine learning?
- What is cross-validation?
- How do I select the best model for my application?
- What's the difference and relationship between model parameters and hyperparameters?
- What does "tuning" parameters mean?
- How do I determine the best model parameters?
- What is a confusion matrix?
- What's the difference and relationship between sensitivity and specificity?
- What is a dummy variable?
Process
Where and How I Extracted the Data
The data came from the Kaggle website. It was split into training and testing CSV files.
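Loading the files takes one line each with pandas; a minimal sketch, assuming the default Kaggle filenames train.csv and test.csv in the working directory:

```python
import pandas as pd

# Assumes train.csv and test.csv from the Kaggle download are in the working directory.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

print(train.shape, test.shape)  # the test file omits the Survived column
```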
The dataset was structured with the following features:
- survival: Survival (0 = No, 1 = Yes)
- pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd)
- sex: Sex
- age: Age in years
- sibsp: # of siblings / spouses aboard the Titanic
- parch: # of parents / children aboard the Titanic
- ticket: Ticket number
- fare: Passenger fare
- cabin: Cabin number
- embarked: Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
Variable Notes
- pclass: A proxy for socio-economic status (SES); 1st = Upper, 2nd = Middle, 3rd = Lower
- age: Age is fractional if less than 1. If the age is estimated, it is in the form of xx.5.
- sibsp: The dataset defines family relations this way: Sibling = brother, sister, stepbrother, stepsister; Spouse = husband, wife (mistresses and fiancés were ignored).
- parch: The dataset defines family relations this way: Parent = mother, father; Child = daughter, son, stepdaughter, stepson. Some children traveled only with a nanny, so parch = 0 for them.
How I Analyzed the Data
Feature Engineering
I performed some exploratory data analysis to get a feel for which features appeared to have a significant effect on survival rate, and to assess the number of missing values in the dataset.
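A quick pass with pandas can surface both signals; a minimal sketch, assuming the training data is loaded into a DataFrame named train with the column names as they appear in the Kaggle CSVs:

```python
# Count missing values per column to see what needs imputing.
print(train.isnull().sum())

# Survival rate grouped by candidate features, e.g. sex and ticket class.
print(train.groupby("Sex")["Survived"].mean())
print(train.groupby("Pclass")["Survived"].mean())
```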
I then filled in the missing values and engineered some new features to make the dataset more machine-readable. This was in preparation for applying multiple algorithms within the SciKit-Learn library.
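The exact transformations are in the notebook; the sketch below shows the general pattern, with illustrative choices (median age imputation, mode for embarkation port, a family-size feature, and dummy variables) that may differ from what I actually used:

```python
import pandas as pd

# Fill missing values with simple statistics (illustrative choices).
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Embarked"] = train["Embarked"].fillna(train["Embarked"].mode()[0])

# Engineer a family-size feature from the sibsp and parch counts.
train["FamilySize"] = train["SibSp"] + train["Parch"] + 1

# One-hot encode categorical columns so the algorithms can consume them.
train = pd.get_dummies(train, columns=["Sex", "Embarked"], drop_first=True)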
Feature Selection
I used a random forest classifier to reduce the dimensionality of the dataset, in an attempt to distill it down to only the features that had a significant correlation with survival outcome. Feature selection is useful because it reduces redundancy in the data, mitigates overfitting, and speeds up the training process.
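A sketch of ranking features by a random forest's importances, continuing from the preprocessing sketch above (the feature list is an illustrative subset):

```python
from sklearn.ensemble import RandomForestClassifier

features = ["Pclass", "Age", "Fare", "FamilySize", "Sex_male"]  # illustrative subset
X, y = train[features], train["Survived"]

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X, y)

# Rank features by the forest's impurity-based importances.
for name, score in sorted(zip(features, rf.feature_importances_),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```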
Modeling
Next, I applied cross-validation to get a more robust estimate of how well each algorithm might perform on the testing data.
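A sketch of comparing candidate classifiers with 5-fold cross-validation, assuming the X and y arrays from the feature-selection step (the model list is illustrative):

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=42),
    "gradient boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy by default
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```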
Evaluation
Finally, I evaluated the models using a confusion matrix to determine how many false positives and false negatives each model predicted. I also plotted the ROC curve and computed the area under the curve (AUC) to compare the performance of the classification models.
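A sketch of both evaluations on a held-out validation split, assuming the X and y arrays above and using the gradient boosting model as the example:

```python
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, RocCurveDisplay

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

# Rows are actual classes, columns are predicted classes.
print(confusion_matrix(y_val, model.predict(X_val)))

# AUC computed from predicted probabilities of the positive (survived) class.
print(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

RocCurveDisplay.from_estimator(model, X_val, y_val)
plt.show()
```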
Results
Insights Gleaned
The features that had a significant impact on survival rate were age, fare, and sex. The gradient boosting model achieved the best results on my test set and earned the highest score among my Kaggle submissions.
Improvements to Be Made
This is an evolving Jupyter notebook that I will continue to refine as I keep practicing machine learning. I wanted to create something I could share with others starting out on their own journey into machine learning, something I can turn to when I need a refresher on a concept, and a reference point to mark my progress. My goal is to score in the top 20% of this competition within the next year.
You can view the project on my GitHub here: https://github.com/Kiwibp/NYC-DSA-Bootcamp--Machine-Learning.