Iowa House Price Prediction Using Machine Learning
The skills I demoed here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
The total size of the U.S. housing market reached $33.3 trillion in 2018, an increase of $10.9 trillion from the bottom of the housing market crash in 2012. Although overall economic performance plays an important role in the housing market, a house's price is also determined by the characteristics of the house itself. In this blog post, I used different machine learning algorithms to predict house prices based on features that describe the details of the house, including, but not limited to, the number of rooms, lot size, and year built.
Exploratory Data Analysis
The Iowa House Price data from Kaggle is used in this analysis. To understand which features are correlated with the sale price, a correlation heatmap was generated.
Many features are strongly correlated with the sale price. The top 12 were: OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmAbvGrd, YearBuilt, YearRemodAdd, GarageYrBlt, and MasVnrArea. The correlations are clearer when presented as scatter plots.
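The ranking behind the heatmap can be sketched as follows. This is a minimal example on a small synthetic stand-in for the Kaggle training set (in the real analysis, `train` would come from `pd.read_csv("train.csv")`); the column names follow the Ames data dictionary.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Kaggle training DataFrame (assumption:
# the real data has many more rows and columns).
rng = np.random.default_rng(0)
n = 200
train = pd.DataFrame({
    "OverallQual": rng.integers(1, 11, n),
    "GrLivArea": rng.normal(1500, 400, n),
    "YearBuilt": rng.integers(1900, 2010, n),
})
train["SalePrice"] = (
    20000 * train["OverallQual"] + 50 * train["GrLivArea"]
    + rng.normal(0, 10000, n)
)

# Correlation of each numeric feature with the target, strongest first;
# a heatmap is just a colored rendering of this correlation matrix.
corr = train.corr(numeric_only=True)["SalePrice"].drop("SalePrice")
top_features = corr.abs().sort_values(ascending=False)
print(top_features)
```

Sorting by absolute correlation is what produces the top-12 list above.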
The scatter plots show that there are some outliers in the dataset (circled in red). These outliers need to be removed to avoid skewing the predictions.
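Removing the circled points can be done with a boolean mask. The specific cutoffs below (GrLivArea above 4000 sq ft with a sale price under $300,000) are an assumption for illustration, and the four-row frame is a toy stand-in for the real training data.

```python
import pandas as pd

# Toy stand-in rows; in practice `train` is the full Kaggle DataFrame.
train = pd.DataFrame({
    "GrLivArea": [1500, 1800, 4800, 5200],
    "SalePrice": [180000, 210000, 160000, 185000],
})

# Drop the outliers seen in the scatter plot: very large living area
# paired with an unusually low sale price (thresholds are assumptions).
mask = (train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)
train = train[~mask].reset_index(drop=True)
print(len(train))
```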
Target Variable Data Transformations
SalePrice is the target variable that we need to predict. Its distribution also impacts the performance of the machine learning algorithms.
The raw SalePrice data was right-skewed. Since many machine learning models, especially linear regression, favor normally distributed data, a transformation is needed to bring the distribution closer to normal. After a log transformation, SalePrice is much closer to normally distributed.
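A quick sketch of the transformation, using synthetic right-skewed prices as a stand-in for the real SalePrice column:

```python
import numpy as np
import pandas as pd

# Synthetic right-skewed prices (lognormal), standing in for SalePrice.
rng = np.random.default_rng(1)
sale_price = pd.Series(np.exp(rng.normal(12, 0.4, 1000)))

# log1p compresses the long right tail; skewness moves toward 0.
log_price = np.log1p(sale_price)
print(sale_price.skew(), log_price.skew())
```

Because the model is then trained on the log scale, predictions have to be mapped back to dollars with `np.expm1` before scoring or submission.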
Imputing missing values
Missing and null values must be handled before training a machine learning model. A visualization of the missing values shows that different variables have different percentages of missing values.
Different strategies were implemented to impute missing values in categorical and numerical features. For categorical features like 'PoolQC', 'MiscFeature', 'Alley', 'Fence', and 'FireplaceQu', a null value means the house lacks that feature, so null values were filled with 'None'. For numerical features like 'LotFrontage', the null values were filled with the median of the house's neighborhood. Some features, like 'SaleType', had only a few missing values, so filling them with the most frequent value is feasible.
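The three strategies can be sketched on a toy frame (in practice these columns come from the Kaggle data, and each fill would be applied across the relevant list of columns):

```python
import numpy as np
import pandas as pd

# Toy frame with the three kinds of missingness discussed above.
df = pd.DataFrame({
    "PoolQC": ["Ex", None, None, None],
    "Neighborhood": ["A", "A", "B", "B"],
    "LotFrontage": [60.0, np.nan, 80.0, np.nan],
    "SaleType": ["WD", "WD", None, "New"],
})

# 1) NA means "no such feature" -> fill with the literal string 'None'.
df["PoolQC"] = df["PoolQC"].fillna("None")

# 2) LotFrontage: fill with the median of the house's neighborhood.
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)

# 3) Only a handful of missing values -> fill with the mode.
df["SaleType"] = df["SaleType"].fillna(df["SaleType"].mode()[0])

print(df)
```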
Feature scaling and categorical feature encoding
Numerical features were standardized using Scikit-learn's StandardScaler, which subtracts the mean and divides by the standard deviation, so the transformed data has a mean of 0 and a standard deviation of 1. The categorical features were transformed with Pandas' get_dummies method, which generates a dummy variable for each level of each categorical feature.
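Both steps in one small sketch, again on a toy frame standing in for the prepared training data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy frame: one numeric and one categorical column.
df = pd.DataFrame({
    "GrLivArea": [1000.0, 1500.0, 2000.0],
    "MSZoning": ["RL", "RM", "RL"],
})

# Standardize numeric columns: (x - mean) / std -> mean 0, std 1.
num_cols = ["GrLivArea"]
df[num_cols] = StandardScaler().fit_transform(df[num_cols])

# One 0/1 dummy column per category level (MSZoning_RL, MSZoning_RM).
df = pd.get_dummies(df, columns=["MSZoning"])
print(df.columns.tolist())
```

One practical caveat: the scaler should be fit on the training set only and then applied to the test set, and get_dummies must produce the same columns for both (e.g. by encoding train and test together or aligning columns afterward).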
Model selection and hyper-parameter tuning
I used cross-validation to assess the performance of Lasso regression, Ridge regression, and RandomForestRegressor in predicting the house price. Lasso (alpha=0.0005) had a root mean squared error (RMSE) of 0.1118, Ridge (alpha=1.0) an RMSE of 0.1201, and RandomForestRegressor (default settings) an RMSE of 0.1465. Grid search was then used to find the best hyper-parameters for each model: Lasso regression worked best with alpha=0.0005, Ridge regression with alpha=18, and, within the range of parameters searched, RandomForestRegressor with max_depth=80, max_features=4, and n_estimators=200.
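The tuning step can be sketched with Scikit-learn's GridSearchCV. The data here is a synthetic stand-in for the prepared feature matrix and log-transformed target, and the alpha grid is an assumption (it brackets the values quoted above):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the encoded feature matrix and log target.
X, y = make_regression(n_samples=200, n_features=20, noise=10,
                       random_state=0)

# Cross-validated grid search, scored by RMSE as in the results above.
grid = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.1, 1.0, 10, 18, 30]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)  # best alpha, CV RMSE
```

The same pattern applies to Lasso and RandomForestRegressor, with `param_grid` swapped for their hyper-parameters (alpha, or max_depth/max_features/n_estimators).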
The test RMSE for RandomForestRegressor is 0.17133, for Ridge regression 0.11999, and for Lasso regression 0.12097. Therefore, Ridge regression performed best among the three algorithms tested. The 10 largest Ridge regression coefficients by absolute value are listed below:
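Such a ranking can be produced by fitting the tuned Ridge model and sorting its coefficients by absolute value. The feature matrix below is a synthetic stand-in with hypothetical column names; only the ranking pattern carries over to the real encoded data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

# Synthetic stand-in for the encoded design matrix (column names are
# hypothetical placeholders, not real Ames features).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 15)),
                 columns=[f"feat_{i}" for i in range(15)])
y = 3 * X["feat_0"] - 2 * X["feat_1"] + rng.normal(size=100)

model = Ridge(alpha=18).fit(X, y)

# Rank features by |coefficient| and keep the ten largest.
coefs = pd.Series(model.coef_, index=X.columns)
top10 = coefs.abs().sort_values(ascending=False).head(10)
print(top10)
```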