Ames, Iowa - Predicting Sale Price of Houses

Posted on Jun 24, 2022


The skills the author demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Introduction to Ames' Housing Prices

In this project, our primary objective was to create and evaluate multiple machine learning models to predict the sale price of homes in Ames, Iowa. The dataset can be found on Kaggle and includes 79 features that influence sale price. I tackled this project from the perspective of someone interested in purchasing a home: are there any features that stand out more than the rest?

 

Exploratory Data Analysis

First, we must take a look at the distribution of our target variable, Sale Price.

[Figure: histogram of Sale Price]

It is not normally distributed but rather right-skewed. We then perform a log transformation on Sale Price to see if the distribution becomes more normal. This will help our linear models perform better.

[Figure: histogram of log-transformed Sale Price]
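The transform above can be sketched as follows; the sale prices here are a small made-up sample, not the actual Kaggle data, and serve only to show the mechanics.

```python
import numpy as np
import pandas as pd

# Toy sale prices: one expensive home pulls the distribution to the right.
prices = pd.Series([105000, 129500, 140000, 155000, 214000, 755000])
raw_skew = prices.skew()

# np.log1p compresses the long right tail, making the shape more symmetric.
log_prices = np.log1p(prices)
log_skew = log_prices.skew()

print(raw_skew > log_skew)  # the log transform reduces skewness
```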

Afterwards, I split the data into Numerical Variables and Categorical Variables. From the categorical variables, I split them into Ordinal and Nominal Variables.

From there, I checked to see if there was multicollinearity in our dataset by using a Correlation Heatmap.

[Figure: correlation heatmap of the numerical variables]

We notice these pairs of variables are highly correlated with each other:

  • GarageYrBlt and YearBuilt
  • 1stFlrSF and 2ndFlrSF
  • TotRmsAbvGrd and GrLivArea
  • GarageArea and GarageCars
  • GrLivArea and 2ndFlrSF

From each pair, I then dropped the variable that is less correlated with our target variable. I kept 2ndFlrSF for feature engineering.
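The pruning rule above can be sketched like this. The pair and toy values are illustrative; on the real data the same helper would loop over all five pairs listed.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
year_built = rng.integers(1900, 2010, n).astype(float)
df = pd.DataFrame({
    "YearBuilt": year_built,
    "GarageYrBlt": year_built + rng.normal(0, 2, n),  # nearly duplicates YearBuilt
})
df["SalePrice_log"] = 0.01 * df["YearBuilt"] + rng.normal(0, 0.5, n)

def drop_weaker(df, pairs, target):
    """For each correlated pair, drop the member less correlated with target."""
    corr = df.corr()[target].abs()
    to_drop = {a if corr[a] < corr[b] else b for a, b in pairs}
    return df.drop(columns=list(to_drop))

pruned = drop_weaker(df, [("GarageYrBlt", "YearBuilt")], "SalePrice_log")
print(pruned.columns.tolist())
```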

Feature Engineering

If we take a look at the data description, "NA" means the feature doesn't exist in the home rather than being a missing value. Therefore, we replace "NA" in categorical variables with "None." For the numerical variables, I decided to use the sample median to replace null values, since the median is robust to outliers. In this case, only two variables had missing values: LotFrontage and MasVnrArea (masonry veneer area in square feet). After taking a closer look, I decided to impute LotFrontage with the median of the neighborhood the house is in, rather than the sample median.
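A minimal sketch of these imputation rules on a toy frame (the column names match the dataset, the values do not):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "OldTown", "OldTown"],
    "LotFrontage": [60.0, 80.0, np.nan, 50.0, np.nan],
    "MasVnrArea": [0.0, np.nan, 120.0, 0.0, 200.0],
    "PoolQC": [np.nan, "Gd", np.nan, np.nan, np.nan],
})

# Categorical "NA" means the feature is absent, so fill with "None".
df["PoolQC"] = df["PoolQC"].fillna("None")

# MasVnrArea: the sample median is robust to outliers.
df["MasVnrArea"] = df["MasVnrArea"].fillna(df["MasVnrArea"].median())

# LotFrontage: impute with the median of the house's own neighborhood.
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)
```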

I then proceeded to check for outliers. Using GrLivArea as an example:

[Figure: GrLivArea vs. Sale Price scatter plot]

We can see that there are two outliers in the bottom right. I utilized RobustScaler to dampen the effect of outliers, and StandardScaler to standardize the variables. The latter helps make the numerical and categorical coefficients more comparable.
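One way this two-step scaling can look in scikit-learn, sketched on a single toy column with one outlier:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

X = np.array([[1200.0], [1500.0], [1600.0], [1750.0], [5600.0]])  # outlier at 5600

# RobustScaler centers on the median and scales by the IQR,
# so the outlier does not dominate the scaling statistics.
robust = RobustScaler().fit_transform(X)

# StandardScaler then gives the column zero mean and unit variance.
standard = StandardScaler().fit_transform(robust)
```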

Two new variables were then created: totalSqFeet (TotalBsmtSF + 1stFlrSF + 2ndFlrSF) and totalBath (FullBath + BsmtFullBath + 0.5 * (HalfBath + BsmtHalfBath)). I then dropped the variables used to build these two new ones.
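In pandas, the two engineered features described above look like this (toy values, real column names):

```python
import pandas as pd

X = pd.DataFrame({
    "TotalBsmtSF": [800, 0], "1stFlrSF": [900, 1100], "2ndFlrSF": [700, 0],
    "FullBath": [2, 1], "BsmtFullBath": [1, 0],
    "HalfBath": [1, 1], "BsmtHalfBath": [0, 0],
})

# Combine the square-footage and bathroom columns into the two new features.
X["totalSqFeet"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"]
X["totalBath"] = X["FullBath"] + X["BsmtFullBath"] + 0.5 * (X["HalfBath"] + X["BsmtHalfBath"])

# Drop the component columns once the combined features exist.
X = X.drop(columns=["TotalBsmtSF", "1stFlrSF", "2ndFlrSF",
                    "FullBath", "BsmtFullBath", "HalfBath", "BsmtHalfBath"])
```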

Finally, I label-encoded my ordinal variables and dummified the nominal ones. At this point, I had over 200 variables, so I decided to remove the 100 features least correlated with the target.

One thing to note: I did create a separate dataset for my tree-based models. Prior to label-encoding my ordinal variables, I copied my cleaned dataset and label-encoded all the categorical variables, since dummifying variables hurts the performance of these non-linear models.
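The two encoding paths can be sketched as below. The ordinal mapping shown is illustrative (the real mappings come from the data description), and `cat.codes` stands in for whatever label encoder was actually used.

```python
import pandas as pd

df = pd.DataFrame({
    "ExterQual": ["TA", "Gd", "Ex"],               # ordinal quality rating
    "Neighborhood": ["NAmes", "OldTown", "NAmes"],  # nominal category
})

# Ordinal: preserve the quality ordering with explicit integer codes.
quality_order = {"Po": 0, "Fa": 1, "TA": 2, "Gd": 3, "Ex": 4}
linear_df = df.copy()
linear_df["ExterQual"] = linear_df["ExterQual"].map(quality_order)

# Nominal: dummify for the linear models.
linear_df = pd.get_dummies(linear_df, columns=["Neighborhood"])

# Tree-based copy: label-encode every categorical instead of dummifying.
tree_df = df.copy()
tree_df["ExterQual"] = tree_df["ExterQual"].map(quality_order)
tree_df["Neighborhood"] = tree_df["Neighborhood"].astype("category").cat.codes
```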

Modeling

Eight models were explored:

  1. Multiple Linear Regression
  2. Ridge Regression
  3. Lasso Regression
  4. Elastic Net
  5. Random Forest
  6. Gradient Boosting
  7. Light GBM
  8. XGBoost

The goal was to find the model that minimizes RMSE. Since our target variable is the log of Sale Price, RMSE is unitless, so I converted it into dollars.
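One way to get a dollar-scale error from a log-price model is to invert the log before computing RMSE; a sketch with made-up predictions (the post's exact conversion may differ):

```python
import numpy as np

y_true_log = np.log1p(np.array([150000.0, 200000.0, 250000.0]))
y_pred_log = np.log1p(np.array([155000.0, 190000.0, 260000.0]))

# RMSE in log space is unitless...
rmse_log = np.sqrt(np.mean((y_true_log - y_pred_log) ** 2))

# ...so map both back to dollars with expm1 and recompute RMSE there.
rmse_dollars = np.sqrt(np.mean((np.expm1(y_true_log) - np.expm1(y_pred_log)) ** 2))
```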

Here are the results.

We see that our Linear Regression model was the best performing, with an average error of $21,08. Of the tree-based models, Gradient Boosting performed best, with an average error of $23,723.22.

Conclusion About Ames' Housing Prices

The Linear Regression model performed the best in terms of RMSE. The R² for the training dataset was 0.9297 and for the testing dataset 0.9235, which is quite good. However, that isn't the be-all and end-all. Each model has its pros and cons, and honestly, the results were quite close for many of them. Are there any variables that appear in multiple models?

I had to split up how I counted these. For the linear models, I just took the coefficients. This works because there is a linear relationship between each feature and the target variable, so the larger the coefficient, the "more important" the feature; I set the criterion to be any coefficient over 0.05. For tree-based models, it's a little different: I used the feature_importances_ attribute and then divided each feature's value by that of the feature with the maximum value. These results are therefore relative to the other features, rather than to our target variable.
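The two counting rules above can be sketched like so; the feature names and fitted values are invented for illustration.

```python
import numpy as np

features = np.array(["OverallQual", "totalSqFeet", "CentralAir", "PoolQC"])

# Linear models: keep features whose coefficient magnitude exceeds 0.05.
coefs = np.array([0.12, 0.09, 0.06, 0.01])
linear_keep = features[np.abs(coefs) > 0.05]

# Tree models: scale feature_importances_ by its max, so the top feature is 1.0
# and everything else is measured relative to it.
importances = np.array([0.40, 0.30, 0.05, 0.01])
relative = importances / importances.max()
```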

So, for our linear models, Overall Quality appeared in all four models, with Normal Sale Condition, totalSqFeet, GrLivArea, and Central Air appearing in three. In the tree-based models, surprisingly, totalSqFeet, OverallQual, totalBath, YearRemodAdd, and Fireplaces appeared in all four models.

Overall Quality appeared in every single model. Intuitively, a higher-quality house would cost more.

At the end of the day, this gives an idea of which features to pay attention to, both for those looking to buy a house and for those who want to sell one. Perhaps their current house is missing some of these features, and adding or renovating them could help increase its sale price.

Future Work on Ames' House Prices

If I had more time, I would probably spend more of it improving the EDA and feature engineering, but this project was able to provide some insight into the most impactful drivers of housing prices in Ames, Iowa. It would be interesting to see how accurate these models would be on similar cities (in terms of population, crime, and education).
