Ames, Iowa - Predicting Sale Price of Houses

Introduction to Ames' Housing Prices
In this project, our primary objective was to create and evaluate multiple machine learning models to predict the sale price of homes in Ames, Iowa. The dataset can be found on Kaggle and includes 79 features that influence sale price. I tackled this project from the perspective of someone interested in purchasing a home: are there any features that stand out more than the rest?
Exploratory Data Analysis
First, we must take a look at the distribution of our target variable, Sale Price.
It is not normally distributed, but rather right-skewed. We then perform a log transformation on Sale Price to see if the distribution becomes more normal; this will help our linear models perform better.
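A minimal sketch of this step, assuming the training data lives in a file named train.csv with the Kaggle column name SalePrice (the file name is an assumption):

```python
import numpy as np
import pandas as pd

train = pd.read_csv('train.csv')  # the Kaggle Ames training file (name assumed)

# SalePrice is right-skewed; log1p pulls in the long right tail so the
# distribution is closer to normal for the linear models.
train['LogSalePrice'] = np.log1p(train['SalePrice'])

# Skewness drops sharply after the transform.
print(train[['SalePrice', 'LogSalePrice']].skew())
```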
Afterwards, I split the data into Numerical Variables and Categorical Variables. From the categorical variables, I split them into Ordinal and Nominal Variables.
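In pandas, that split might look like the following; the three ordinal names shown are just a partial, illustrative stand-in for the full list built from the data description:

```python
# Split columns by dtype, then split the categoricals by hand.
numerical = train.select_dtypes(include='number').columns.drop(
    ['SalePrice', 'LogSalePrice'], errors='ignore')
categorical = list(train.select_dtypes(include='object').columns)

# Ordinal columns have a natural order (e.g. quality ratings); the
# rest are nominal. The list here is a partial example.
ordinal = [c for c in categorical if c in ('ExterQual', 'KitchenQual', 'BsmtQual')]
nominal = [c for c in categorical if c not in ordinal]
```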
From there, I checked to see if there was multicollinearity in our dataset by using a Correlation Heatmap.
We notice these pairs of variables are highly correlated with each other:
- GarageYrBlt and YearBuilt
- 1stFlrSF and 2ndFlrSF
- TotRmsAbvGrd and GrLivArea
- GarageArea and GarageCars
- GrLivArea and 2ndFlrSF
From each pair, I then dropped the one that is less correlated with our target variable, with one exception: I kept 2ndFlrSF for feature engineering, as sketched below.
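A sketch of the heatmap and the pairwise drops, reusing the train DataFrame and column lists from above (the exact pair-resolution code is my assumption):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Heatmap of pairwise correlations among the numeric features.
sns.heatmap(train[numerical].corr(), cmap='coolwarm', center=0)
plt.show()

# For each flagged pair, drop the member less correlated with the
# (log) target. The 2ndFlrSF pairs are left alone, since 2ndFlrSF is
# kept for feature engineering below.
target_corr = train[numerical].corrwith(train['LogSalePrice']).abs()
pairs = [('GarageYrBlt', 'YearBuilt'),
         ('TotRmsAbvGrd', 'GrLivArea'),
         ('GarageArea', 'GarageCars')]
to_drop = [min(pair, key=target_corr.get) for pair in pairs]
train = train.drop(columns=to_drop)
numerical = numerical.drop(to_drop)
```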
Feature Engineering
If we take a look at the data description, "NA" means the feature doesn't exist in the home rather than being a missing value. Therefore, we replace "NA" in categorical variables with "None." For the numerical variables, I decided to replace null values with the median, which is robust to outliers. In this case, only two variables had missing values: LotFrontage and MasVnrArea (masonry veneer area in square feet). After taking a closer look, I decided to impute LotFrontage with the median of the neighborhood the house is in, rather than the overall sample median.
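The imputation described above, sketched in pandas:

```python
# "NA" in the categoricals means the feature is absent, not missing.
for col in categorical:
    train[col] = train[col].fillna('None')

# LotFrontage: use the median within the house's own neighborhood.
train['LotFrontage'] = (train.groupby('Neighborhood')['LotFrontage']
                        .transform(lambda s: s.fillna(s.median())))

# MasVnrArea: fall back to the overall sample median.
train['MasVnrArea'] = train['MasVnrArea'].fillna(train['MasVnrArea'].median())
```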
I then proceeded to check for outliers. Using GrLivArea as an example, we can see that there are two outliers in the bottom right: very large living areas with unusually low sale prices. I utilized RobustScaler to reduce the influence of outliers, and StandardScaler to standardize the variables; the latter helps make the numerical and categorical coefficients more comparable.
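A sketch of the two scalers; which columns get the robust treatment is an illustrative assumption on my part:

```python
from sklearn.preprocessing import RobustScaler, StandardScaler

# RobustScaler centers on the median and scales by the IQR, so the
# two GrLivArea outliers do not dominate the scaling.
outlier_cols = ['GrLivArea']  # illustrative; extend to other outlier-prone columns
train[outlier_cols] = RobustScaler().fit_transform(train[outlier_cols])

# StandardScaler puts the remaining numeric features on a shared
# scale, which makes the fitted coefficients comparable.
rest = [c for c in numerical if c not in outlier_cols]
train[rest] = StandardScaler().fit_transform(train[rest])
```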
Two new variables were then created: totalSqFeet (TotalBsmtSF + 1stFlrSF + 2ndFlrSF) and totalBath (FullBath + BsmtFullBath + 0.5 * (HalfBath + BsmtHalfBath)). I then dropped the variables used to build these two new ones.
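The same construction in pandas:

```python
# Combine the square-footage and bathroom components, then drop the
# originals so the new totals don't duplicate their information.
train['totalSqFeet'] = train['TotalBsmtSF'] + train['1stFlrSF'] + train['2ndFlrSF']
train['totalBath'] = (train['FullBath'] + train['BsmtFullBath']
                      + 0.5 * (train['HalfBath'] + train['BsmtHalfBath']))
train = train.drop(columns=['TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
                            'FullBath', 'BsmtFullBath', 'HalfBath',
                            'BsmtHalfBath'])
```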
Finally, I label-encoded my ordinal variables and dummified the nominal ones. At this point, I had over 200 variables, so I removed the 100 features least correlated with the target.
One thing to note: I created a separate dataset for my tree-based models. Prior to label-encoding my ordinal variables, I copied my cleaned dataset and label-encoded all the categorical variables, since dummifying variables hurts the performance of these non-linear models.
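Both encoding paths, sketched with scikit-learn (note that LabelEncoder orders categories alphabetically; a hand-written mapping would preserve the true ordinal order):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Tree-based copy: label-encode every categorical column, since
# hundreds of dummy columns tend to hurt tree-based models.
tree_df = train.copy()
for col in tree_df.select_dtypes(include='object').columns:
    tree_df[col] = LabelEncoder().fit_transform(tree_df[col])

# Linear-model dataset: label-encode ordinals, dummify nominals.
for col in ordinal:
    train[col] = LabelEncoder().fit_transform(train[col])
linear_df = pd.get_dummies(train, columns=nominal, drop_first=True, dtype=int)

# Drop the 100 features least correlated with the (log) target.
corr = (linear_df.corrwith(linear_df['LogSalePrice']).abs()
        .drop(['SalePrice', 'LogSalePrice'], errors='ignore'))
linear_df = linear_df.drop(columns=corr.nsmallest(100).index)
```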
Modeling
Eight models were explored:
- Multiple Linear Regression
- Ridge Regression
- Lasso Regression
- Elastic Net
- Random Forest
- Gradient Boosting
- LightGBM
- XGBoost
The goal was to find the model that minimizes RMSE. Since our dependent variable is the log of Sale Price, the RMSE is unitless, so I converted it into dollars for interpretability.
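One way this evaluation might look, reusing linear_df from above; the conversion to dollars here (perturbing the average log price by one RMSE) is my assumption, not necessarily the exact method used:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X = linear_df.drop(columns=['SalePrice', 'LogSalePrice'], errors='ignore')
y = linear_df['LogSalePrice']

# 5-fold cross-validated RMSE on the log scale (unitless).
log_rmse = -cross_val_score(LinearRegression(), X, y, cv=5,
                            scoring='neg_root_mean_squared_error').mean()

# Translate into dollars around the average log price.
dollars = np.expm1(y.mean() + log_rmse) - np.expm1(y.mean())
print(f'log RMSE: {log_rmse:.4f}  ~ ${dollars:,.0f}')
```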
Here are the results.

We see that our Linear Regression model was the best performer, with an average error of roughly $21,000. Out of the tree-based models, Gradient Boosting performed the best with an average error of $23,723.22.
Conclusion About Ames' Housing Prices
The Linear Regression model performed the best in terms of RMSE. The R² for the training dataset was 0.9297 and the R² for the testing dataset was 0.9235, so pretty good. However, that isn't the end-all, be-all. Each model has its pros and cons, and honestly, the results were quite close for many of them. Are there variables that show up as important across multiple models?
I had to split up how I counted these. For linear models, I just took the coefficients: because the standardized features are linearly related to the target, a larger coefficient magnitude means a "more important" feature. In this case, I set the criterion to be any coefficient above 0.05 in magnitude. For tree-based models, it's a little different. I used the feature_importances_ attribute and then divided each feature's importance by the maximum importance, so these results express importance relative to the other features, rather than a direct effect on our target variable.
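In code, the two tallies might look like this, with LassoCV and RandomForestRegressor standing in for the full sets of linear and tree models, and X, y, and tree_df reused from above:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

# Linear side: keep coefficients above 0.05 in magnitude.
lasso = LassoCV(cv=5).fit(X, y)
coefs = pd.Series(lasso.coef_, index=X.columns)
linear_top = coefs[coefs.abs() > 0.05]

# Tree side: rescale feature_importances_ so the strongest feature = 1.
X_tree = tree_df.drop(columns=['SalePrice', 'LogSalePrice'], errors='ignore')
rf = RandomForestRegressor(random_state=0).fit(X_tree, y)
relative = pd.Series(rf.feature_importances_, index=X_tree.columns)
relative /= relative.max()

print(linear_top.abs().sort_values(ascending=False).head(10))
print(relative.nlargest(10))
```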
So, for our linear models, Overall Quality appeared in all four models, with Normal Sale Condition, totalSqFeet, GrLivArea, and Central Air appearing in three. In the tree-based models, surprisingly, totalSqFeet, OverallQual, totalBath, YearRemodAdd, and Fireplaces appeared in all four models.
Overall Quality appeared in every single model. Intuitively, a higher-quality house would cost more.
At the end of the day, this gives an idea of which features to pay attention to, both for those looking to buy a house and for those who want to sell one. Perhaps a seller's current house is missing some of these features, and adding or renovating them could help increase the sale price.
Future Work on Ames' House Prices
If I had more time, I would probably spend more of it improving the EDA and feature engineering, but this project was able to provide some insight into the most impactful drivers of housing prices in Ames, Iowa. It would be cool to see how accurate these models would be for similar cities (in terms of population, crime, and education).