How much is your house worth?
Introduction
Machine learning and deep learning are becoming increasingly important for enterprises.
For this project, I have developed models that use a broad spectrum of features to predict residential property prices. The analysis relies on a rich dataset that combines housing data with macroeconomic indicators.
Being a data scientist is a lot like being a detective. “There's the joke that 80 percent of data science is cleaning the data and 20 percent is complaining about cleaning the data,” Kaggle founder and CEO Anthony Goldbloom told The Verge over email. “In reality, it really varies. But data cleaning is a much higher proportion of data science than an outsider would expect. Actually training models is typically a relatively small proportion (less than 10 percent) of what a machine learner or data scientist does.”
I've used data from De Cock (2011), who compiled a detailed dataset of residential property sales in a North American city. It is used in one of the most popular practice Kaggle competitions. This dataset is characterised by a large number of predictor variables (including categorical, ordinal, discrete, and continuous variables). See the documentation for a description of the original variables.
The business objective is to predict the final price of each home based on explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa. There are many models available for forecasting. In general, more advanced techniques such as stacking produce small gains with a lot of added complexity, which is not worth it for most businesses. On Kaggle, however, even small gains matter, which is why stacking appears in almost every top solution. I've used Jupyter Notebook (Python) for this project.
Project Workflow
The general framework for my machine learning project is as follows:
1. Loading Data
2. Adding new features
3. Missingness and Imputation
4. Exploratory Data Analysis and Data Transformation
5. Modeling and Hyperparameter Tuning
6. Prediction
1. Loading Data
I have used the pandas library to load the data into an easily readable DataFrame object. The index column "Id" has been dropped, as it adds no value in the modeling process.
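A minimal sketch of this step, assuming the standard Kaggle file names train.csv and test.csv:

```python
import pandas as pd

# Load the competition files (file names assume the standard Kaggle download).
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Drop the "Id" column; keep the test ids aside for the submission file later.
train_ids = train.pop("Id")
test_ids = test.pop("Id")

print(train.shape, test.shape)
```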
2. Adding new features
For our model to be more robust over time, it is ideal to combine macroeconomic information into the same model. To capture macroeconomic effects, I have added the 5-year and 10-year US Treasury interest rates as features to our data. The ten-year Treasury yield is regarded as the benchmark for the industry, but it will be interesting to compare it with the five-year rate.
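A hedged sketch of how such a merge could look, assuming a hypothetical monthly file treasury_rates.csv with columns year, month, rate_5y and rate_10y (for example built from the FRED DGS5 and DGS10 series):

```python
import pandas as pd

# Hypothetical monthly Treasury-yield table: year, month, rate_5y, rate_10y.
rates = pd.read_csv("treasury_rates.csv")

# Attach the rate prevailing in the month each house was sold.
for df in (train, test):
    merged = df.merge(rates, how="left",
                      left_on=["YrSold", "MoSold"],
                      right_on=["year", "month"])
    df["rate_5y"] = merged["rate_5y"].values
    df["rate_10y"] = merged["rate_10y"].values
```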
3. Missingness and Imputation
Many statistical methods and machine learning techniques have difficulty incorporating incomplete observations in their algorithms. The process of "filling in" missing values is called imputation. Starting with the numerical values, we see that basement- and garage-related features, as well as LotFrontage and MasVnrArea, have missing values. Since the street frontage of a property is most likely similar to that of other houses in its neighborhood, we can fill in missing values with the mean LotFrontage of the neighborhood. For the basement- and garage-related features, as well as MasVnrArea, NA most likely means the house has none of that feature, so we can fill these with 0. Finally, we'll replace missing GarageYrBlt values with the year the house was built: most garages are built at the same time as the house, and houses without garages get no penalty or benefit from having Garage Year Built equal to the house's Year Built.
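A sketch of the numerical imputation described above; the exact list of zero-filled columns is illustrative:

```python
# LotFrontage: mean frontage of the house's neighborhood.
for df in (train, test):
    df["LotFrontage"] = (df.groupby("Neighborhood")["LotFrontage"]
                           .transform(lambda s: s.fillna(s.mean())))

# Basement / garage areas and MasVnrArea: NA means the house has none.
zero_fill = ["MasVnrArea", "BsmtFinSF1", "BsmtFinSF2", "BsmtUnfSF",
             "TotalBsmtSF", "BsmtFullBath", "BsmtHalfBath",
             "GarageCars", "GarageArea"]
for df in (train, test):
    df[zero_fill] = df[zero_fill].fillna(0)
    # GarageYrBlt: assume the garage was built with the house.
    df["GarageYrBlt"] = df["GarageYrBlt"].fillna(df["YearBuilt"])
```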
Before analysing the categorical data, note that, based on the documentation, the MSSubClass, OverallQual and OverallCond features are actually categorical rather than numerical, so I have converted them to strings. After counting the missing values for the categorical features, I imputed the mode for most features in this situation, namely those with fewer than 5 missing values. For MasVnrType, I've assumed that the missing values correspond to "None" and therefore imputed that value. The rest of the features with missing values can be set to "None" per the data documentation. There is one exception (Id 333) that has to be treated manually: it has a value for Type 2 finished square feet (BsmtFinSF2), so there should also be a rating of the basement finished area for this observation (BsmtFinType2). I have used the same quality as BsmtFinType1. Finally, three other exceptions arose and were filled with the neighborhood's mode.
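The categorical handling can be sketched as follows; the lists of mode-imputed and "None"-imputed columns are illustrative, and the manual fixes (Id 333 and the neighborhood-mode exceptions) are omitted:

```python
# Treat these numeric-looking features as categories.
for df in (train, test):
    for col in ["MSSubClass", "OverallQual", "OverallCond"]:
        df[col] = df[col].astype(str)

# Features with only a handful of missing values: impute the mode.
mode_cols = ["Electrical", "KitchenQual", "Exterior1st", "Exterior2nd",
             "SaleType", "Functional", "MSZoning"]
for df in (train, test):
    for col in mode_cols:
        df[col] = df[col].fillna(df[col].mode()[0])

# Remaining categorical NAs mean "no such feature" per the documentation.
none_cols = ["MasVnrType", "BsmtQual", "BsmtCond", "BsmtExposure",
             "BsmtFinType1", "BsmtFinType2", "GarageType", "GarageFinish",
             "GarageQual", "GarageCond", "FireplaceQu", "Fence",
             "Alley", "MiscFeature", "PoolQC"]
for df in (train, test):
    df[none_cols] = df[none_cols].fillna("None")
```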
4. Exploratory Data Analysis and Data Transformation
Exploratory data analysis (EDA) is the process of discovering features and patterns in the data that should inform the modelling process and in some cases prevent errors.
Due to fluctuations in supply and demand, house prices follow a seasonal pattern, and during the off-season there isn't as much competition from the average homebuyer. Summer is the busiest moving time of year: people buy more aggressively than in the winter, limiting the number of available houses and raising market prices. In the winter, since few people want to deal with the inconvenience of moving, these low-demand periods are perfect for those who are looking for a good deal. Because sellers aren't necessarily getting a lot of interest or offers, they're more willing to negotiate, which often results in a substantial discount. We can visualize this with our train data and see that the SalePrice totals are highest in June and July. Because of this seasonality, we can convert the MoSold feature into a categorical feature.
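A short sketch of the month handling (the seasonal totals are computed before the column is turned into strings):

```python
# Total SalePrice per month sold, used for the seasonality check.
monthly_totals = train.groupby("MoSold")["SalePrice"].sum()
print(monthly_totals.sort_values(ascending=False).head())

# Treat the month as a category so the model can learn a seasonal effect
# rather than a monotone trend.
for df in (train, test):
    df["MoSold"] = df["MoSold"].astype(str)
```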
I've looked at the responsiveness of house prices to the US 5-year and 10-year interest rates, as it was interesting to see the macroeconomic patterns. My graph titled Average SalesPrice vs Interest rates suggests that lags of interest-rate changes would have to be included in an empirical model of house-price determination. Using the correlation matrix, we can determine the lag period by shifting the interest rates, as well as which interest rate to use. Testing shifts of up to 12 periods (12 months), I selected the interest rate with the highest correlation to SalePrice: the 10-year interest rate with a 3-period shift.
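Conceptually, the lag search looks like the sketch below, assuming a hypothetical DataFrame named monthly, indexed by sale month, with columns avg_price, rate_5y and rate_10y:

```python
# Find the interest-rate series and lag (0-12 months) with the highest
# absolute correlation to the average monthly SalePrice.
best = None
for rate_col in ["rate_5y", "rate_10y"]:
    for lag in range(13):
        corr = monthly["avg_price"].corr(monthly[rate_col].shift(lag))
        if best is None or abs(corr) > abs(best[2]):
            best = (rate_col, lag, corr)

# Per the analysis above, this selects the 10-year rate with a 3-month shift.
print(best)
```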
The next step required the most analysis. As mentioned above, data cleaning takes most of the time in a project. Given that we have a lot of categorical features that will eventually be dummified, I've analysed the categorical data and merged levels that had a similar relationship to SalePrice. Dimensionality can be a curse when a model has many variables, so reducing it is ideal. For this step, I have used boxplots to decide which classes to merge: if classes had similar boxes (quartiles) and median prices, they were merged together.
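As an illustration of the level merging (the actual groupings were read off the boxplots; the mapping below is hypothetical):

```python
import seaborn as sns

# Boxplot used to compare levels of a categorical feature against SalePrice.
sns.boxplot(x="Condition1", y="SalePrice", data=train)

# Merge levels whose boxes and medians look alike (illustrative mapping).
condition_map = {"RRAe": "Rail", "RRAn": "Rail", "RRNe": "Rail", "RRNn": "Rail",
                 "PosA": "Pos", "PosN": "Pos"}
for df in (train, test):
    df["Condition1"] = df["Condition1"].replace(condition_map)
```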
Next, I have created a plot of SalePrice versus GrLivArea to identify any outliers. GrLivArea is a numerical predictor that is highly correlated with the response. Based on the plot, there are four obvious outliers with an above-grade (ground) living area greater than 4,000 square feet.
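The corresponding plot and filter, as a small sketch:

```python
import matplotlib.pyplot as plt

# Scatter plot used to spot the outliers.
plt.scatter(train["GrLivArea"], train["SalePrice"])
plt.xlabel("GrLivArea")
plt.ylabel("SalePrice")
plt.show()

# Drop the four observations with above-grade living area over 4,000 sq ft.
train = train[train["GrLivArea"] <= 4000].reset_index(drop=True)
```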
As the response variable, SalePrice, is continuous, we'll use regression models. One assumption of linear regression models is that the error between the observed and expected values (i.e., the residuals) should be normally distributed. Violations of this assumption often stem from a skewed response variable. Because SalePrice has a right skew, we'll apply a log(1 + x) transform to normalize its distribution; the "+ 1" prevents taking the logarithm of zero. In the same spirit, I used the scipy function boxcox1p, which computes the Box-Cox transformation of 1 + x, on the skewed features.
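A sketch of both transforms; the skewness threshold of 0.75 and the Box-Cox lambda of 0.15 are assumptions, not values from the original write-up:

```python
import numpy as np
from scipy.special import boxcox1p
from scipy.stats import skew

# log(1 + x) transform of the target.
train["SalePrice"] = np.log1p(train["SalePrice"])

# Box-Cox(1 + x) transform of skewed numeric predictors.
numeric_cols = train.select_dtypes(include=[np.number]).columns.drop("SalePrice")
skewed = [c for c in numeric_cols if abs(skew(train[c].dropna())) > 0.75]
for df in (train, test):
    for col in skewed:
        df[col] = boxcox1p(df[col], 0.15)
```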
Finally, I have used the pandas.get_dummies() function to convert the categorical variables in both the train and test datasets. Because the test dataset produces a different set of dummy variables than the train dataset, I have added the missing columns to the test dataset.
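One way to do this is to align the two dummified frames, which fills any columns missing on the test side with zeros:

```python
# One-hot encode, then give train and test identical columns.
X_train = pd.get_dummies(train.drop(columns=["SalePrice"]))
X_test = pd.get_dummies(test)

# join="left" keeps the train columns and adds any missing ones to test, filled with 0.
X_train, X_test = X_train.align(X_test, join="left", axis=1, fill_value=0)
y_train = train["SalePrice"]
```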
5. Modeling and Hyperparameter Tuning
The following models were tested: Lasso, ElasticNet, Kernel Ridge, GradientBoostingRegressor, XGBRegressor, LGBMRegressor, an averaged-base-models stacking approach, and a stacked-averaged-models stacking approach. I also used GridSearchCV, which fits the model on every combination of parameter values in a grid and retains the best combination. Because LGBMRegressor had the best test score in my modeling, I have used it for my prediction.
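A minimal sketch of the grid search for the winning model; the parameter grid below is illustrative, not the exact one used:

```python
from lightgbm import LGBMRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "num_leaves": [5, 15, 31],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [500, 1000, 2000],
}

# Exhaustive search over the grid with 5-fold cross-validation.
search = GridSearchCV(LGBMRegressor(objective="regression"),
                      param_grid,
                      scoring="neg_mean_squared_error",
                      cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```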
6. Prediction
Given all the hype around stacking and XGBoost, I was surprised to see that the LGBMRegressor performed so well, with a Kaggle RMSLE of 0.13207. That is a big improvement compared to a linear regression baseline!
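For completeness, a sketch of how the submission file can be built from the tuned model, remembering to undo the log(1 + x) transform of the target with expm1:

```python
import numpy as np

# Predict on the test set and map back to the original price scale.
preds = np.expm1(search.best_estimator_.predict(X_test))

submission = pd.DataFrame({"Id": test_ids, "SalePrice": preds})
submission.to_csv("submission.csv", index=False)
```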
Future Direction
It will be interesting to spend more time tuning hyperparameters to see whether that gives better test results. We can also try different combinations of models when using the stacking/ensembling techniques. What I have learned over the course of this project is that machine learning is an art that can be perfected over time!
From a business perspective, I would still be inclined to use linear regression even though it doesn't give the best predictions. It is a great model for its high interpretability: it allows us to easily determine which features of our home should be changed to increase its value.