Predicting House Prices in Ames, Iowa

Posted on Jun 11, 2021

Background

This project utilizes a variety of machine learning techniques to predict house prices based on a Kaggle dataset of 79 explanatory variables. The data covers the sale of individual residential properties in Ames, Iowa from 2006-2010. The goal of the project is to forecast the prices of 1,459 homes as accurately as possible, as well as identify the features that have the greatest impact on sale price.

 

Part 1: Data Exploration, Cleaning, and Pre-processing

My first step was to read the documentation and contextualize the data. Real estate has a history of being cyclical, and the home sales detailed in this particular dataset take place amidst the 2008 Housing Crisis. The following boxplot shows sale prices broken down by month and year:

Interestingly, there is very little evidence of a pricing bubble or crash here, and very little seasonality overall. We can move on to further analysis.

With the next plot, we see that overall quality shows a strong linear-like correlation with sale price.

Above ground living area also shows a somewhat linear relationship to sale price, as shown in the scatterplot below. The documentation notes there are five outliers in the dataset, three of which are partial sales and two of which are unusually large sales. I removed the two points circled in orange, as those stood out significantly.
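A minimal pandas sketch of this kind of outlier filter, on toy data (the 4,000 sq ft and $300,000 thresholds here are illustrative assumptions, not the exact criteria used for the two circled points):

```python
import pandas as pd

# Toy stand-in: two normal sales plus two huge, cheap "unusual" sales.
df = pd.DataFrame({
    "GrLivArea": [1500, 2000, 4700, 5600],
    "SalePrice": [180000, 250000, 160000, 185000],
})

# Keep everything except very large homes that sold for suspiciously little.
df = df[~((df["GrLivArea"] > 4000) & (df["SalePrice"] < 300000))]
```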

Feature Selection (and De-Selection)

Now that I'm clued in on the linear relationships of some variables to sale price, I know that a linear regression may be a good candidate for model fitting. The next step is to try and reduce some of the dataset's dimensionality. I will look at a heatmap to identify variables that are highly correlated to each other, and remove as many redundant columns as possible.

The feature "GarageCars", for example, is highly correlated with "GarageArea", so it can be dropped without losing much information. I dropped "GarageYrBlt", "TotRmsAbvGrd", "1stFlrSF", and "2ndFlrSF" for similar reasons.
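A sketch of this kind of correlation pruning on toy data (the 0.8 cutoff is an assumption; in practice the drops came from inspecting the heatmap, not an automated threshold):

```python
import pandas as pd
import numpy as np

# Toy stand-in for the Ames data: GarageArea and GarageCars move together.
df = pd.DataFrame({
    "GarageArea":  [280, 520, 500, 780, 300, 510],
    "GarageCars":  [1, 2, 2, 3, 1, 2],
    "OverallQual": [5, 8, 4, 6, 7, 5],
})

# Look only at the upper triangle so each pair is considered once,
# then flag the second column of every pair correlated above 0.8.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]

df_reduced = df.drop(columns=to_drop)
```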

Missing Value Imputation

Another important issue to address was the dataset's 29 columns containing missing values. Of those, 20 relate to optional "bonus features" of a house, such as a pool, fireplace, or fence. I imputed zeros or "None" for these.

Eight of the remaining columns with missing values were categorical attributes dominated by a single value (e.g., nearly all values of "SaleType" were "WD"). I imputed the mode for these columns. I then dropped "Utilities", since all but one of its values were identical.

Finally, the last column requiring imputation was "LotFrontage", with 16.7% of its values missing. I considered dropping the variable, but since lot frontage can greatly affect a home's curb appeal, I decided instead to impute the mode within each neighborhood.
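The three imputation strategies above can be sketched in pandas on a toy frame (not the project's exact code):

```python
import pandas as pd

# Toy frame mimicking the three imputation cases described above.
df = pd.DataFrame({
    "PoolQC":       [None, "Gd", None, None],           # bonus feature -> "None"
    "SaleType":     ["WD", None, "WD", "New"],          # mode imputation
    "Neighborhood": ["OldTown", "OldTown", "Edwards", "Edwards"],
    "LotFrontage":  [60.0, None, 80.0, 80.0],           # per-neighborhood imputation
})

# 1) Missing bonus features mean the house doesn't have one.
df["PoolQC"] = df["PoolQC"].fillna("None")
# 2) Columns dominated by a single value get the overall mode.
df["SaleType"] = df["SaleType"].fillna(df["SaleType"].mode()[0])
# 3) LotFrontage gets the most common value within the same neighborhood.
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.mode()[0])
)
```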

Feature Engineering

I created the following new features:

  • HouseAge: for better interpretability of year built
  • YearsSinceRemod: for better interpretability of year remodeled
  • TotalBathrooms: consolidating the four columns for full baths, half baths, and basement baths
  • PorchSF: consolidating the square footage of the various porch styles

Encoding Categorical Variables

Fourteen categorical variables were encoded ordinally. Most of these were quality scores that were easy to map to numerical values.

For linear models, I dummified the remaining categorical variables, including Month and Year sold.
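A sketch of both encoding steps on toy data (the Ex > Gd > TA > Fa > Po quality scale is the one the Ames documentation uses; the exact numeric values assigned here are an assumption):

```python
import pandas as pd

# Ordinal scale shared by many Ames quality columns.
qual_map = {"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1, "None": 0}

df = pd.DataFrame({
    "KitchenQual": ["Gd", "TA", "Ex"],
    "Neighborhood": ["OldTown", "Edwards", "OldTown"],
    "MoSold": [6, 1, 12],
})

# Ordinal encoding for quality scores.
df["KitchenQual"] = df["KitchenQual"].map(qual_map)
# Month sold is a category, not a magnitude, so cast before dummifying.
df["MoSold"] = df["MoSold"].astype(str)
df = pd.get_dummies(df, columns=["Neighborhood", "MoSold"], drop_first=True)
```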

Variable Transformations

To reduce the high skew of some variables and bring their distributions closer to normal, I applied a log transformation, starting with the dependent variable.

I also applied log transformations to explanatory variables with a skew level of 0.5 or higher.
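A sketch of this step on toy data, using `np.log1p` (which handles zeros safely) and the 0.5 skew threshold:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "SalePrice":   [120000, 135000, 150000, 455000],  # right-skewed target
    "GrLivArea":   [900, 1100, 1300, 4000],           # right-skewed predictor
    "OverallQual": [5, 6, 6, 7],                      # roughly symmetric
})

# Always transform the target, then any predictor with skew >= 0.5.
df["SalePrice"] = np.log1p(df["SalePrice"])
skewed = [c for c in ["GrLivArea", "OverallQual"] if df[c].skew() >= 0.5]
df[skewed] = np.log1p(df[skewed])
```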

 

Part 2: Model Fitting & Evaluation

I split the cleaned dataset into train and test splits and analyzed models using 10-fold cross validation. I used root mean square error to evaluate performance.

The Lasso-penalized linear regression had the best performance, with a cross-validated RMSE of 0.1201 after tuning the model's alpha parameter.
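A sketch of this tuning setup on synthetic data (the alpha grid is an assumption; note that scikit-learn's scorer is negated, hence the sign flip to recover RMSE):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the cleaned, log-transformed design matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)

# Tune alpha with 10-fold CV, scoring by RMSE.
pipe = make_pipeline(StandardScaler(), Lasso(max_iter=10000))
grid = GridSearchCV(
    pipe,
    {"lasso__alpha": np.logspace(-4, 0, 20)},
    cv=10,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)

best_alpha = grid.best_params_["lasso__alpha"]
cv_rmse = -grid.best_score_   # flip sign: sklearn maximizes scores
```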

Next, I took a look at the prediction plot and residuals.

Visually, both of these plots demonstrate a relatively effective model. We can move on to evaluating coefficients to extract additional insights. 

From the coefficient plot, we can see that the most important factor in determining sale price is above ground living area, which makes sense. After that is Overall Condition.

Additionally, six of the coefficients relate to neighborhoods, which suggests that location matters a great deal to buyers in Ames.

One surprise here is that the top negative coefficient is the commercial zoning classification. This may warrant further investigation, especially since the coefficient is disproportionately large.
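Extracting and ranking coefficients from a fitted Lasso can be sketched as follows (synthetic data; the feature names, including the commercial-zoning dummy, are hypothetical stand-ins):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso

# Synthetic design matrix with made-up effect sizes.
rng = np.random.default_rng(1)
features = ["GrLivArea", "OverallCond", "Nbhd_OldTown", "MSZoning_C"]
X = rng.normal(size=(100, 4))
y = (0.30 * X[:, 0] + 0.15 * X[:, 1] - 0.05 * X[:, 2] - 0.20 * X[:, 3]
     + rng.normal(scale=0.05, size=100))

model = Lasso(alpha=0.001).fit(X, y)

# Rank coefficients by absolute magnitude for the coefficient plot.
coefs = pd.Series(model.coef_, index=features).sort_values(
    key=np.abs, ascending=False
)
```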

Conclusion

The top takeaways from the model analysis are as follows:

  • A bigger house is not always a better house, but in Ames it is most likely a more expensive one. Homeowners can therefore sizably increase the value of their home by building an extension, where possible.
  • Neighborhoods have a significant impact on house price. Therefore, home buyers looking to save money on a nicer house should consider neighborhoods like Edwards and Old Town.

Opportunities for Further Analysis

My final predictions scored in the top quartile on Kaggle. There is potential to improve the score by further fine-tuning the model parameters.

In addition, we may get even better results by stacking or blending some of the models. The tradeoff is that this approach adds complexity and makes the model harder to interpret.

Finally, there are infinite possibilities when it comes to feature engineering. We can try binning some categorical variables, performing different transformations on skewed variables, and adding/dropping different combinations until the end of time.

 

Additional details and code are available on GitHub
