Using Data to Predict Housing Prices in Ames, Iowa

Posted on Apr 11, 2018

The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Kaggle competitions offer a great opportunity for those who want to practice and improve their data science skills. It's also always fun to play with different data sets that push you to explore and learn different techniques. It was for this reason that I decided to take on the 'House Prices: Advanced Regression Techniques' challenge. The goal is to predict housing prices in Ames, Iowa. This post will describe how I optimized my pricing model while following best practices that are expected outside the Kaggle environment. The general outline of the process was this:

  1. Understanding the Data
  2. Imputing Missing Values
  3. Feature Engineering/Dimension Reduction
  4. Fixing Skewness and Outliers
  5. Modeling
  6. Evaluation

Understanding Data Types

The data set consists of 79 features that will be the predictors for 'SalePrice'. My first instinct was to look at all the variables and their data types. I saw that features like 'MSSubClass', 'MoSold' and 'YrSold' were integers but should actually be categories. I then changed them to object types so that I could convert all the necessary variables to categories in one go. I left the variable 'YearBuilt' as an integer due to its linear relationship with 'SalePrice'.
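The type conversion described above can be sketched as follows. This is a minimal illustration on a made-up mini frame standing in for the Ames training data, not the author's actual code:

```python
import pandas as pd

# Hypothetical mini frame standing in for the Ames training data
df = pd.DataFrame({
    "MSSubClass": [20, 60, 20],
    "MoSold": [2, 7, 11],
    "YrSold": [2008, 2007, 2009],
    "YearBuilt": [1961, 1995, 2003],  # left numeric: linear with SalePrice
})

# Recast the nominally-numeric codes as objects first
for col in ["MSSubClass", "MoSold", "YrSold"]:
    df[col] = df[col].astype(object)

# Then convert every object column to category in one go
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].astype("category")
```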


There were many ordinal variables like 'ExterQual' and 'BsmtCond' with values such as "Excellent", "Good" and "Poor". In an effort to lower dimensionality during the dummification process, these variables were label encoded. This preserves the order of the values, under the assumption that each level is one unit better than the last.
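Label encoding an ordinal quality scale might look like the sketch below. The mapping uses the abbreviated level codes from the Ames data description (Po/Fa/TA/Gd/Ex); the toy frame is illustrative, not the real data:

```python
import pandas as pd

# Shared quality scale from the data description, mapped to ordered integers
quality_map = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

df = pd.DataFrame({"ExterQual": ["Gd", "TA", "Ex"],
                   "BsmtCond": ["TA", "Po", "Gd"]})

# One numeric column per ordinal variable instead of 5 dummy columns each
for col in ["ExterQual", "BsmtCond"]:
    df[col] = df[col].map(quality_map)
```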

Understanding Missingness

If a variable has too many missing values, we might as well drop the whole column. Imputing that many values carries too high a risk of misrepresenting the true population.


I arbitrarily decided to remove any columns with more than 20% missing values. As a result, I removed 'PoolQC', 'MiscFeature', 'Alley', 'Fence', and 'FireplaceQu'. I came close to removing 'LotFrontage' as well, but given the linear relationship between the logs of 'LotFrontage' and 'LotArea', its missing values were instead imputed from a regression of 'LotFrontage' on 'LotArea'.


For the remaining missing categorical values, the imputation depended on the variable. If it was a feature a house may simply not have, such as 'GarageType', missing values were replaced with 'None'. If it was a feature that SHOULD have a value, like 'MSZoning', missing values were replaced with the mode of that variable. Other categorical features include 'Other' among their values (this can be found in the data description file); for those, missing values were replaced with 'Other'. As for numerical features, missing values were imputed with the median of the training data.
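Those per-variable rules can be sketched as below, on a hypothetical three-row frame. Which rule applies to which column is a modeling decision, not something pandas infers:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "GarageType": ["Attchd", np.nan, "Detchd"],   # absence is meaningful
    "MSZoning":   ["RL", "RM", np.nan],           # every house has a zone
    "GarageArea": [548.0, np.nan, 460.0],         # numeric feature
})

# Feature a house may not have -> literal 'None' category
df["GarageType"] = df["GarageType"].fillna("None")

# Feature that should always have a value -> mode of the column
df["MSZoning"] = df["MSZoning"].fillna(df["MSZoning"].mode()[0])

# Numeric feature -> median of the training data
df["GarageArea"] = df["GarageArea"].fillna(df["GarageArea"].median())
```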

Feature Engineering/Dimension Reduction

Many of the existing features with less predictive power can be replaced with engineered ones that will hopefully predict better. For example, 'GarageYrBlt' and 'YearRemodAdd' were deleted and replaced with indicators of whether the house had a garage or was remodeled, along with the number of years between that event and when the house was built.

A house may also go for a higher price if it can accommodate its tenants well. Bath Capacity and Parking Capacity were engineered by dividing the total number of baths/garage spaces by the number of bedrooms.

Finally, many of the variables that could be combined were, resulting in features like Total Bathrooms, Total Floor SF, Total Porch SF, etc.
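The capacity ratios and combined totals above could be built roughly as follows. The half-bath weighting of 0.5 and the exact column combinations are my assumptions about the author's recipe:

```python
import pandas as pd

df = pd.DataFrame({
    "FullBath": [2, 1], "HalfBath": [1, 0],
    "BsmtFullBath": [1, 0], "BsmtHalfBath": [0, 1],
    "GarageCars": [2, 1], "BedroomAbvGr": [3, 2],
    "1stFlrSF": [856, 1262], "2ndFlrSF": [854, 0],
})

# Combined totals (half baths weighted 0.5 -- an assumption)
df["TotalBath"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                   + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
df["TotalFlrSF"] = df["1stFlrSF"] + df["2ndFlrSF"]

# Capacity ratios: how well the house serves its bedrooms
df["BathCapacity"] = df["TotalBath"] / df["BedroomAbvGr"]
df["ParkingCapacity"] = df["GarageCars"] / df["BedroomAbvGr"]
```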

The last effort to lower dimensionality was to drop any columns without much variation in their values. Features with zero or near-zero variance, such as 'Street', 'Utilities' and 'Condition2', add no value to our predictive model.
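One simple way to find near-zero-variance columns is to flag any column whose most frequent value dominates the rows. The 99% cutoff here is my illustrative choice, not the author's stated threshold:

```python
import pandas as pd

df = pd.DataFrame({
    "Street":    ["Pave"] * 99 + ["Grvl"],   # almost constant
    "Utilities": ["AllPub"] * 100,           # constant
    "LotArea":   range(100),                 # varies -- keep
})

# Drop columns whose top value accounts for >= 99% of rows
drop_cols = [c for c in df.columns
             if df[c].value_counts(normalize=True).iloc[0] >= 0.99]
df = df.drop(columns=drop_cols)
```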

Analyzing Outliers

Outliers can make our model overfit and decrease its ability to generalize well. By visual inspection alone, I removed observations that detracted from the linearity of each feature plotted against SalePrice. These included 'LotFrontage' (removed obs > 250), 'GrLivArea' (removed obs > 4500) and 'TotalPorchSF' (removed obs > 700).
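Applying those visually chosen cutoffs amounts to a boolean filter, sketched here on hypothetical rows:

```python
import pandas as pd

df = pd.DataFrame({
    "GrLivArea":   [1710, 5642, 2198, 4676],
    "LotFrontage": [65.0, 313.0, 70.0, 80.0],
})

# Thresholds chosen by visual inspection of each feature vs. SalePrice
df = df[(df["GrLivArea"] <= 4500) & (df["LotFrontage"] <= 250)]
```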


Understanding Skewness

The dependent variable has a right-skewed distribution. To meet the assumptions of linear regression, we take its log to help ensure constant variance of the residuals.


Taking the log of skewed independent variables can kill two birds with one stone: it may create a stronger linear relationship with the dependent variable, and it can serve as a way of dealing with outliers. Thus, I also took the log(x+1) of any numerical feature with a skew value greater than 0.6 or lower than -0.6.
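The skew check and log(x+1) transform could be sketched like this, using pandas' sample skewness and `np.log1p` on a toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "SalePrice": [120000, 150000, 200000, 800000],
    "GrLivArea": [900, 1200, 1500, 5000],    # strongly right-skewed
    "YearBuilt": [1950, 1970, 1990, 2005],   # roughly symmetric
})

# Log-transform the target outright
df["SalePrice"] = np.log(df["SalePrice"])

# log(x+1) any feature whose |skew| exceeds 0.6
for col in ["GrLivArea", "YearBuilt"]:
    if abs(df[col].skew()) > 0.6:
        df[col] = np.log1p(df[col])
```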

Modeling

After the data set was dummified (resulting in 250 dimensions), it was time to model. Three regression models and three tree-based models were run, with GridSearchCV used to tune the hyperparameters of all six. Once tuned, they were all tested on a hold-out set (1/8 of the original training set). The results, in order of lowest hold-out RMSE, are listed below:

  1. Elastic Net (.11345 RMSE)
    1. Tuned alpha, l1_ratio
  2. Lasso (.11350 RMSE)
    1. Tuned alpha
  3. Ridge (.11897 RMSE)
    1. Tuned alpha
  4. Xtreme Gradient Boost (.12411 RMSE)
    1. Tuned max depth, learning rate, n estimators, min child weight, gamma, colsample by tree, reg lambda, reg alpha
  5. Stochastic Gradient Boost (.12547 RMSE)
    1. Tuned max depth, learning rate, n estimators, min samples split, max features, min samples leaf
  6. Random Forest (.13589 RMSE)
    1. Tuned n estimators, max features, max depth, min samples split
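The tuning-plus-hold-out scheme above can be sketched for one of the models. This uses synthetic data and an illustrative Elastic Net grid, not the author's actual search space:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the dummified Ames data
X, y = make_regression(n_samples=200, n_features=20, noise=10, random_state=0)

# 1/8 of the data held out, mirroring the evaluation scheme above
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.125,
                                          random_state=0)

# Tune alpha and l1_ratio by cross-validated grid search
grid = GridSearchCV(
    ElasticNet(max_iter=10000),
    {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]},
    scoring="neg_root_mean_squared_error", cv=5)
grid.fit(X_tr, y_tr)

# Score the tuned model on the untouched hold-out set
rmse = np.sqrt(mean_squared_error(y_ho, grid.predict(X_ho)))
```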

It is no surprise that the regularized linear regression models outperformed the tree-based models, as most of the features in the data set had a linear relationship with SalePrice. Lasso's edge over Ridge suggests that many of the Beta coefficients for certain variables need to be pushed to zero.

Submission to Kaggle

Each of these models was then submitted to Kaggle. The results, in order of Kaggle's RMSE, are listed below:

  1. Lasso (.12996)
  2. Elastic Net (.12449)
  3. Ridge (.12996)
  4. Xtreme Gradient Boost (.13027)
  5. Gradient Boost (.13179)
  6. Random Forest (.14695)

Though the Kaggle scores were slightly higher than those from my hold-out set, the ranking of the models was consistent, except that Lasso overtook Elastic Net.

My last trump card to optimize my Kaggle score was to reap the advantages of each model by creating a stacked model. Since the Lasso model had the best hold-out RMSE, it was used as the meta model while the other 5 models were used as base models. Upon submission, the stacked model's final Kaggle score was .12248, an improvement over all the other models, ranking 990th out of 4634 contestants.
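A stacked model of this shape can be sketched with scikit-learn's `StackingRegressor`, which the author may or may not have used; the base models and hyperparameters below are illustrative, and synthetic data stands in for the real features:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=20, noise=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base models feed out-of-fold predictions to the Lasso meta model
stack = StackingRegressor(
    estimators=[
        ("ridge", Ridge()),
        ("enet", ElasticNet()),
        ("gbm", GradientBoostingRegressor(random_state=0)),
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ],
    final_estimator=Lasso(alpha=0.01, max_iter=10000))

stack.fit(X_tr, y_tr)
score = stack.score(X_te, y_te)  # R^2 on the test split
```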



If there is one thing I learned from this project that I can pass on to aspiring data scientists, it is that slight differences in data imputation can change EVERYTHING. How you decide to deal with missing data can either push your model to generalize well, or it can pull your model into some imaginary world. There is almost a step-by-step recipe for tuning your models, but when it comes to preparing your data, you really have to be creative. At some point in the future, I will definitely revisit this project and see how I can impute better and engineer more powerful predictors.

About Author

Kenny Moy

Kenny has years of experience providing data driven solutions in industries such as marketing, healthcare, real estate, and public service. In addition to machine learning, he loves the AHA! moments, storytelling, and the creativity aspects of data science.
