Iowa house price prediction using machine learning

Posted on Aug 14, 2019


The total size of the U.S. housing market reached $33.3 trillion in 2018, an increase of $10.9 trillion since the bottom of the housing market crash in 2012. While overall economic performance plays an important role in the housing market, a house's price is also determined by the characteristics of the house itself. In this blog post, I used different machine learning algorithms to predict house prices based on features that describe the details of the house, including (but not limited to) the number of rooms, lot size, and year built.

Exploratory Data Analysis

The Iowa House Price data from Kaggle is used in this analysis. To understand which features are correlated with the sale price, a correlation heatmap was generated.
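As a sketch of this step, the correlation matrix can be computed and ranked with pandas; a heatmap library such as seaborn would then render the matrix directly. The frame below is a small synthetic stand-in for the Kaggle training set (the real data would come from `pd.read_csv("train.csv")`), so the values are hypothetical:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Kaggle Ames training data (hypothetical values)
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "OverallQual": rng.integers(1, 11, n).astype(float),
    "GrLivArea": rng.normal(1500, 400, n),
    "YrSold": rng.integers(2006, 2011, n).astype(float),
})
df["SalePrice"] = 20000 * df["OverallQual"] + 90 * df["GrLivArea"] \
    + rng.normal(0, 10000, n)

# Correlation matrix of the numeric features; the heatmap in the post
# would be e.g. seaborn.heatmap(corr)
corr = df.corr()

# Rank features by the strength of their correlation with SalePrice
top = corr["SalePrice"].drop("SalePrice").abs().sort_values(ascending=False)
print(top)
```

Sorting by the absolute correlation is what surfaces the "top features" list discussed next.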

Many features were strongly correlated with the sale price. The top 12 were: OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmAbvGrd, YearBuilt, YearRemodAdd, GarageYrBlt, and MasVnrArea. The correlations are clearer when presented as scatter plots.

The scatter plots showed that there are some outliers in the dataset (circled in red). These outliers need to be removed to avoid skewing the prediction results.
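Dropping such points is a boolean mask in pandas. The threshold below is an assumption for illustration (the classic Ames outliers are very large houses with unusually low sale prices; the post does not state its exact rule), and the data is a toy stand-in:

```python
import pandas as pd

# Toy data standing in for the Kaggle training set (hypothetical values)
df = pd.DataFrame({
    "GrLivArea": [1500, 2000, 4800, 5600, 1800],
    "SalePrice": [180000, 250000, 160000, 184750, 210000],
})

# Flag the points circled in red: huge living area but unusually low price
mask = (df["GrLivArea"] > 4000) & (df["SalePrice"] < 300000)
df_clean = df[~mask].reset_index(drop=True)

print(len(df_clean))  # 3 rows remain after removing the two outliers
```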

Target Variable Data Transformations

SalePrice is the target variable that we need to predict. Its distribution also impacts the performance of the machine learning algorithms.

The raw SalePrice data was right-skewed. Since many machine learning models, especially linear regression, favor normally distributed data, a transformation is required to make the data closer to normal. After a log transformation, SalePrice is much closer to a normal distribution.
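The transformation itself is a one-liner with NumPy. `log1p` (log of 1 + x) is often used instead of a plain log so that a zero value cannot blow up, and `expm1` inverts it when converting predictions back to dollars. The prices below are made-up illustrative values:

```python
import numpy as np
import pandas as pd

# Toy right-skewed prices (hypothetical values, one large outlier)
prices = pd.Series([120_000, 150_000, 180_000, 200_000, 755_000], dtype=float)

log_prices = np.log1p(prices)    # forward transform: log(1 + x)
restored = np.expm1(log_prices)  # inverse transform for model predictions

# Skewness drops after the transform
print(prices.skew(), log_prices.skew())
```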

Impute missing values

Missing and null values must be handled before training the machine learning model. Visualization of the missing values showed that different variables had different percentages of missing values.

Visualization of missing values

Different strategies were implemented to impute missing values in categorical and numerical features. For categorical features like 'PoolQC', 'MiscFeature', 'Alley', 'Fence', and 'FireplaceQu', a null value means the house has no such feature, so null values were filled with 'None'. For numerical features like 'LotFrontage', the null values were filled with the median of the neighborhood. Some features, like 'SaleType', had only a few missing values, so filling them with the most frequent value is feasible.
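The three strategies can be sketched with pandas `fillna`, a grouped `transform`, and `mode`. The column names follow the Ames features mentioned above, but the rows are toy data:

```python
import pandas as pd

# Toy data with the three kinds of missingness described above
df = pd.DataFrame({
    "PoolQC":       [None, "Gd", None],          # null means "no pool"
    "LotFrontage":  [60.0, None, 80.0],          # numeric, varies by area
    "Neighborhood": ["NAmes", "NAmes", "CollgCr"],
    "SaleType":     ["WD", None, "WD"],          # only a few nulls
})

# 1) Categorical features where null means the feature is absent
df["PoolQC"] = df["PoolQC"].fillna("None")

# 2) Numeric feature: median LotFrontage within each neighborhood
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)

# 3) Rarely-missing feature: fill with the most frequent value (mode)
df["SaleType"] = df["SaleType"].fillna(df["SaleType"].mode()[0])
```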

Feature scaling and categorical feature encoding

Numerical features were standardized using scikit-learn's StandardScaler, which subtracts the mean and divides by the standard deviation, so the transformed data has a mean of 0 and a standard deviation of 1. The categorical features were transformed using the pandas get_dummies method, which generates dummy variables for each categorical feature.
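Both steps can be sketched on a toy frame (the column names follow the Ames data; the values are made up):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy frame with one numeric and one categorical column
df = pd.DataFrame({
    "GrLivArea": [1200.0, 1500.0, 1800.0],
    "MSZoning":  ["RL", "RM", "RL"],
})

# Standardize numeric columns: subtract mean, divide by std
num_cols = ["GrLivArea"]
scaler = StandardScaler()
df[num_cols] = scaler.fit_transform(df[num_cols])

# One-hot encode categorical columns into dummy variables
df = pd.get_dummies(df, columns=["MSZoning"])
print(df.columns.tolist())
```

In practice the scaler should be fit on the training set only and then applied to the test set, so that test-set statistics do not leak into training.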

Model selection and hyper-parameter tuning

I used cross-validation to assess the performance of Lasso Regression, Ridge Regression, and RandomForestRegressor in predicting the house price. Lasso (alpha=0.0005) had a root mean squared error (RMSE) of 0.1118, Ridge (alpha=1.0) an RMSE of 0.1201, and RandomForestRegressor (default settings) an RMSE of 0.1465. Grid search was then used to find the best hyper-parameters for each model: Lasso Regression worked best with alpha=0.0005, Ridge Regression with alpha=18, and, within the range of parameter settings searched, RandomForestRegressor with max_depth=80, max_features=4, and n_estimators=200.
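The cross-validated RMSE and the grid search can be sketched with scikit-learn as below. The data is synthetic (standing in for the preprocessed, dummy-encoded features), and the alpha grid is an illustrative assumption, not the grid actually searched in the post:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for the preprocessed (scaled, encoded) feature matrix
X, y = make_regression(n_samples=200, n_features=20, noise=10.0,
                       random_state=42)

# RMSE via 5-fold cross-validation (scikit-learn negates error scores,
# so flip the sign back before taking the square root)
rmse = np.sqrt(-cross_val_score(
    Lasso(alpha=0.0005, max_iter=10000), X, y,
    scoring="neg_mean_squared_error", cv=5,
)).mean()
print(f"Lasso CV RMSE: {rmse:.4f}")

# Grid search over the regularization strength for Ridge
grid = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.1, 1.0, 10.0, 18.0]},
    scoring="neg_mean_squared_error", cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```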

Test Results

The test scores (RMSE; lower is better) were 0.17133 for RandomForestRegressor, 0.11999 for Ridge Regression, and 0.12097 for Lasso Regression, so Ridge Regression performed best among the three algorithms tested. The ten Ridge Regression coefficients with the largest absolute values are listed below:
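Such a ranking can be read off a fitted model's coefficients. The sketch below uses synthetic data, so the feature names and coefficient magnitudes are hypothetical, not the post's actual results:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

# Synthetic features named after Ames columns (hypothetical data)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 4)),
                 columns=["OverallQual", "GrLivArea",
                          "GarageCars", "YearBuilt"])
y = (3 * X["OverallQual"] + 2 * X["GrLivArea"]
     - 0.5 * X["GarageCars"] + rng.normal(0, 0.1, 100))

# Fit Ridge and rank features by absolute coefficient value
ridge = Ridge(alpha=18).fit(X, y)
top = pd.Series(ridge.coef_, index=X.columns).abs() \
        .sort_values(ascending=False)
print(top.head(10))
```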

About Author


Jun Kui Chen

Jun obtained his Ph.D. in Immunology from Columbian University. He is currently working at a FinTech startup as a Data Analyst.
