Guided Procedure to Improve Models in Kaggle Competition

Posted on May 3, 2017

The skills the author demonstrates here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

How to Improve Regression Model Accuracy

Kaggle competitions have become very popular lately, and lots of people are trying to get high scores. But the competitions are fierce, and winners don't usually reveal their approaches in detail; typically the winner writes only a brief summary of what they did. So it remains something of a mystery which approaches are available to improve regression model accuracy.

This blog post is about how to improve model accuracy in a Kaggle competition. I will share the steps one can take to get a higher score and rank relatively well (top 10%). The post is organized as follows:

  1. Data Exploration
    1. Numerical Data
    2. Categorical Data
  2. Model Building
    1. Linear Regression
    2. Lasso Regression
    3. Ridge Regression
      1. Data Transformation
    4. Random Forest
    5. Gradient Boosting Machine
    6. Neural Network
    7. Stacking Models

1.1 Data Exploration - Numerical Features

First, we take a quick look at the data. There are 14 continuous (numerical) features. Surprisingly, the data is fairly clean. Looking at the two summaries below and focusing on the mean and std, we see that the mean is around 0.5 and the std roughly 0.2. This suggests the data may have been transformed already.



Next, we make histogram plots of all 14 continuous features. Notice features 'cont7' and 'cont9', which are skewed to the right.

histogram of numerical features

The loss variable (target) did not plot well on the same scale, so we make a separate histogram for it. Loss is also skewed to the right.

histogram of target variable

To quantify how skewed our variables are, we calculate the skewness; 'cont7', 'cont9', and 'loss' are the three most skewed variables.

skewness of numerical features

If we further make boxplots, we can once again see that 'cont7' and 'cont9' have many outliers. If we fix the skewness, we may be able to lower the number of outliers.

boxplot of numerical features

Here, we will try three types of transformation, compare them, and see which one does the best job: log, sqrt, and Box-Cox.
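As a sketch of this comparison, we can measure the skewness before and after each transformation. The data below is synthetic right-skewed data standing in for a column like 'cont7', since the real dataset isn't reproduced here:

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed sample standing in for a column like 'cont7'.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

transforms = {
    "raw": x,
    "log": np.log1p(x),            # log1p is safe when values can be 0
    "sqrt": np.sqrt(x),
    "boxcox": stats.boxcox(x)[0],  # boxcox returns (transformed, fitted lambda)
}
for name, values in transforms.items():
    print(f"{name:>7}: skew = {stats.skew(values):+.3f}")
```

Skewness closest to zero wins; Box-Cox fits its lambda to the data, which is why it tends to do best.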

math transform cont7 / math transform cont9

math transform loss

We can clearly see that Box-Cox worked in all 3 cases. However, we will not use Box-Cox on the 'loss' variable: after prediction we need to transform 'loss' back to its original scale to calculate mean absolute error, and inverting the fitted Box-Cox transform adds complexity. We will instead use log as our transformation for the 'loss' variable, since it is trivially invertible.

1.2 Categorical Features

For the categorical features, we can make frequency plots. A few major points about the categorical features:

  • Features 'cat1' to 'cat72' have only two labels, A and B, and B has very few entries.
  • Features 'cat73' to 'cat108' have more than two labels.
  • Features 'cat109' to 'cat116' have many labels.

Here are some sampled frequency plots to confirm the above 3 points:

freq plot for cat25 to 28 freq plot for cat93 to 96

freq plot for cat105 to 108

2.1. Model Building - Linear Regression

Now that we have analyzed our continuous and categorical features, we can start building models. Note that this dataset is very clean: no missing data at all! We will first set up a working pipeline and feed in raw data. Then, as we apply different transformations to the data, we can compare each new model to our baseline model (the raw-data case). Our raw-data case is un-transformed continuous features + dummy-encoded categorical features. We have to at least dummy-encode the categorical data because sklearn models do not accept strings in the input data.

We will fit a linear regression as follows:

linear regression

linear score

As shown above, the testing error is much larger than the training error, which means we are overfitting the training set. A note on the evaluation: we are using mean absolute error, and it appears negative here because sklearn negates it, so that lowering the error (tuning the model to bring the error closer to zero) looks like increasing the score (e.g., -1 - (-2) = 1, so the new score improves on the old score by 1).
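A minimal sketch of this baseline pipeline, using a tiny synthetic frame in place of the real data (the column names 'cont1'/'cat1'/'loss' mirror the dataset's naming, but the values are made up):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Tiny stand-in frame; the real data has 14 'cont*' and 116 'cat*' columns.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cont1": rng.random(200),
    "cat1": rng.choice(list("AB"), size=200),
    "loss": rng.random(200) * 1000,
})

# sklearn models reject string columns, so dummy-encode the categoricals.
X = pd.get_dummies(df.drop(columns="loss"))
y = df["loss"]

# 'neg_mean_absolute_error': sklearn negates MAE so that higher is better.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_absolute_error", cv=5)
print(-scores.mean())  # flip the sign back to read it as a plain MAE
```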

2.2 LASSO Regression

It is obvious that we need a regularizer here, so we will use LASSO regression. Remember, LASSO is just linear regression plus a regularizing term. Below, with 5-fold cross validation, we get a cross validation score around 1300, which is close to our previous linear regression score of 1288. We are on the right track! Note that the grid search below finds the best alpha to be 0.1, although the three cases yielded very close results.

lasso regression lasso score
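The alpha grid search can be sketched roughly as follows. The grid values and synthetic data here are illustrative, not the post's actual setup:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the dummy-encoded feature matrix and loss target.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = X @ rng.random(10) * 100 + rng.normal(0, 5, 200)

# Hypothetical alpha grid; the post's search found alpha = 0.1 best.
grid = GridSearchCV(Lasso(max_iter=10_000),
                    param_grid={"alpha": [0.01, 0.1, 1.0]},
                    scoring="neg_mean_absolute_error", cv=5)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```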

2.3 Ridge Regression

Another easy-to-use regularized model is ridge regression. Since LASSO regression worked well, this dataset is likely close to a linear problem, so we will try ridge regression on it as well.

ridge regression

ridge score

The cross validation here tells us that alpha=1 is the best, giving a cross validation score of 1300.
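A rough equivalent using sklearn's RidgeCV, again on synthetic data; the alpha grid mirrors the values discussed, not the author's exact code:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Synthetic stand-in data, as in the LASSO sketch.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = X @ rng.random(10) * 100 + rng.normal(0, 5, 200)

# RidgeCV cross-validates over the alpha grid in one call.
model = RidgeCV(alphas=[0.1, 1.0, 10.0],
                scoring="neg_mean_absolute_error", cv=5).fit(X, y)
print(model.alpha_)
```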

2.3.1 Data Transformation

Now that the working pipeline is set up, we will use ridge regression to test different data transformations and see which gives the best result. Remember, we previously identified a Box-Cox transformation for features 'cont7' and 'cont9', but we haven't actually applied it yet (we have used raw continuous features + one-hot encoded categorical features until now). So we will implement the transformations now!

We will implement and compare these transformations:

  • Raw (numerical/continuous features) + Dummy Encode (categorical features)
  • Normalized (num) + Dum (cat)
  • Box-Cox Transformed & Normalized (num) + Dum (cat)
  • Box & Norm (num) + Dum (cat) + Log1p (loss variable)

Below is a table comparing the cross validation errors side by side. We found that taking the log of the loss (target) variable yielded the best result.

Transformation           | Raw(num) + Dum(cat) | Norm(num) + Dum(cat) | Box/Norm(num) + Dum(cat) | Box/Norm(num) + Dum(cat) + Log1p(loss)
Ridge Regression CV MAE  | ~1300               | —                    | —                        | 1251






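The winning transformation, log1p on the target, can be wrapped so that predictions are automatically mapped back with expm1 before scoring, keeping the MAE in the original units of 'loss'. This is a sketch with synthetic data, not the post's actual pipeline:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic positive, right-skewed target standing in for 'loss'.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = np.expm1(X @ rng.random(10) * 3)

# Fit on log1p(y); expm1 is applied automatically at predict time,
# so the MAE below is in the original units of the target.
model = TransformedTargetRegressor(regressor=Ridge(alpha=1.0),
                                   func=np.log1p, inverse_func=np.expm1)
score = -cross_val_score(model, X, y,
                         scoring="neg_mean_absolute_error", cv=5).mean()
print(score)
```

This pairing works because log1p/expm1 are exact inverses, unlike a fitted Box-Cox transform, which would need its lambda carried around.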
2.4 Random Forest

Now that we have our transformations under our belt and know this problem is largely linear, we can move on to more complicated models such as random forest.

random forest score

Immediately, we see an improvement of the cross validation score from 1251 (from ridge) to 1197.
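A minimal random forest version of the same cross-validation loop; the hyperparameters here are placeholders, not the author's tuned values:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data, as in the earlier sketches.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = np.expm1(X @ rng.random(10) * 3)

# Placeholder settings; tune n_estimators / max_depth on the real data.
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
mae = -cross_val_score(rf, X, y,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(mae)
```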

2.5 Gradient Boosting Machine

We will take another step further with a fancier model called a gradient boosting machine. The library is called extreme gradient boosting (xgboost), as it optimizes the gradient boosting algorithm. Here, I will share the optimized hyperparameters. Tuning xgboost is an art and time consuming, so we won't cover it in depth here. A step-by-step approach is described in this blog post. I will just outline the general steps taken:

  1. Guess learning_rate & n_estimators
  2. Cross validate to tune min_child_weight & max_depth
  3. Cross validate to tune colsample_bytree & subsample
  4. Cross validate to tune gamma
  5. Decrease learning_rate & re-tune n_estimators

xgb params

Using these parameters, we were able to get cross validation score of 1150. Another improvement!
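The tuned values themselves appeared only as an image, so the numbers below are hypothetical placeholders, arranged in the tuning order described above:

```python
# Hypothetical hyperparameter values -- the author's actual tuned numbers
# were shown in the original image and are not reproduced here.
xgb_params = {
    "learning_rate": 0.05,    # step 1: guessed, then decreased in step 5
    "n_estimators": 500,      # step 1, re-tuned in step 5
    "min_child_weight": 1,    # step 2
    "max_depth": 7,           # step 2
    "colsample_bytree": 0.8,  # step 3
    "subsample": 0.8,         # step 3
    "gamma": 0,               # step 4
}

# Usage, assuming xgboost is installed:
# import xgboost as xgb
# model = xgb.XGBRegressor(**xgb_params)
# model.fit(X_train, np.log1p(y_train))
print(sorted(xgb_params))
```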

2.6 Neural Network

We will also use a neural network to fit this dataset. It's almost impossible to win a competition nowadays without using neural networks.

The main problem with neural networks is that they are very hard to tune: it's hard to know how many layers and how many hidden nodes to use. My approach was to start with a single layer, using twice the number of features as the number of hidden nodes, and then slowly add more layers. I finally arrived at the structure below. I used Keras as the front end and TensorFlow as the backend. With this model, I got a cross validation score of 1115.

neural network
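The original model was built in Keras on TensorFlow; as an illustrative stand-in, sklearn's MLPRegressor can express the same starting heuristic (one hidden layer with twice as many nodes as features). The layer size here follows that rule of thumb, not the author's final architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = X @ rng.random(10) * 100

# Starting heuristic: one hidden layer, 2x the feature count.
# Add layers one at a time from here, cross-validating each change.
net = MLPRegressor(hidden_layer_sizes=(2 * X.shape[1],),
                   max_iter=2000, random_state=0).fit(X, y)
print(net.predict(X[:3]))
```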

So, comparing the different models:

Model         | Ridge | Lasso | Random Forest | Gradient Boosting Machine | Neural Network
5-Fold CV MAE | 1251  | 1263  | 1247          | 1152                      | 1130

2.7 Stacking Models

Remember how we started out with an MAE of about 1300? We've improved it by 14%, down to 1115. However, a single model will not get you a good rank on Kaggle. We need to stack models.

The idea behind stacking is to combine models so that each contributes where it performs well. An extended guide and explanation is described in this blog post.

A simplified version is as follows:

  1. Split the training set into several folds (5 folds in my case)
  2. Train each model on the folds and predict on the held-out portion of the training data
  3. Set up a simple machine learning algorithm, such as linear regression
  4. Use the out-of-fold predictions from each model as the features for the linear regression
  5. Use the original training set target as the target for the linear regression

To make the steps above easier to understand, I constructed the following table:

Linear Regression | Column 1             | Column 2             | ... | Target Variable
Row 1             | Model 1 Prediction 1 | Model 2 Prediction 1 | ... | Original Train Data Target, y1
Row 2             | Model 1 Prediction 2 | Model 2 Prediction 2 | ... | Original Train Data Target, y2
...               | ...                  | ...                  | ... | ...
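The five steps can be sketched with sklearn's cross_val_predict; the base models below are stand-ins (the post actually stacked xgboost and a neural network):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = X @ rng.random(8) * 100 + rng.normal(0, 5, 200)

base_models = [GradientBoostingRegressor(random_state=0),
               RandomForestRegressor(n_estimators=50, random_state=0)]

# Steps 1-2: out-of-fold predictions from each base model become the
# columns of the meta-model's training matrix (one column per model).
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5) for m in base_models
])

# Steps 3-5: a simple linear regression learns how to blend the models,
# with the original training target as its target.
stacker = LinearRegression().fit(meta_X, y)
print(stacker.coef_)
```

The learned coefficients show how much the blend trusts each base model.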

My code for stacking is in this GitHub repository. I stacked the 2 best models here: xgboost + neural network.

Upon submission of my stacked model, I obtained a test score of 1115.75.

leaderboard score

Looking at the leaderboard, 1115.75 would rank about 326 out of 3055 teams had I submitted before the competition ended. You can verify this using the Kaggle leaderboard link. This is roughly the top 10-11%.

The code for this blog post is here.

About Author

Werner Chao

Werner has been the lead data analyst for KaJin Health, an online mental health company in Shanghai, and a data analyst at SNC-Lavalin, a 7.8 billion dollar public company. He helped KaJin Health analyze web traffic, consumer insights,...