Data Driven Predictions of Ames, IA House Prices

By DongHwi Kim, Lily Kuo, and Jake Kobza
Posted on Dec 6, 2019


The skills the authors demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Introduction to Ames Housing Prices Data

Buying a house is something most of us will do at least once in our lifetime, and one of the biggest factors in that decision is predicting the sale price in order to judge whether the purchase is a good investment. In this project we use sale price data for houses in Ames, Iowa, and apply several types of machine learning models to predict house sale prices.

Data Pre-Processing

To handle the missingness in the data, we used several methods to impute the missing values:

● LotFrontage - imputed with KNN based on LotConfig and LotArea

● BsmtExposure/BsmtFinType2/Electrical - imputed by simple random sampling from the observed values

● NAs that actually described a true category (e.g., no pool, no fence) were replaced with "No"
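A minimal sketch of this imputation scheme, assuming the data sit in a pandas DataFrame `df` with the standard Ames column names (the label encoding of LotConfig is a simplification on our part):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# KNN-impute LotFrontage using LotConfig and LotArea as the neighbor features.
# LotConfig is categorical, so it is label-encoded first.
tmp = df[["LotFrontage", "LotArea", "LotConfig"]].copy()
tmp["LotConfig"] = tmp["LotConfig"].astype("category").cat.codes
df["LotFrontage"] = KNNImputer(n_neighbors=5).fit_transform(tmp)[:, 0]

# Simple random sampling: draw replacements from each column's observed values.
for col in ["BsmtExposure", "BsmtFinType2", "Electrical"]:
    mask = df[col].isna()
    df.loc[mask, col] = np.random.choice(df[col].dropna(), size=mask.sum())

# NAs that denote a true category (e.g., no pool, no fence) become "No".
for col in ["FireplaceQu", "PoolQC", "Fence"]:
    df[col] = df[col].fillna("No")
```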

Feature Engineering of Data

To begin the feature engineering, the categorical features were first split into either categorical nominal or categorical ordinal.

Categorical Ordinal

● Encoded on an integer scale from 1 (poor) through n, where higher values represent better or higher quality for each feature

○ ExterQual, ExterCond, HeatingQC, KitchenQual, BsmtQual, BsmtCond, FireplaceQu, GarageQual, GarageCond, BsmtFinType1, Functional, GarageFinish, PavedDrive, PoolQC, Fence

Categorical Nominal

● One-hot encoding was used to dummify the features below (a sketch of both encodings follows)

○ LotConfig, Exterior1st, Exterior2nd, Foundation, MasVnrType, SaleCondition, BsmtExposure, Misc
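A sketch of both encodings for the quality-scale features, assuming the usual Ames Po/Fa/TA/Gd/Ex levels (the remaining ordinal columns such as Functional and GarageFinish have their own level orderings and would get analogous maps):

```python
import pandas as pd

# Ordinal: map quality levels to integers, 1 = poor through 5 = excellent;
# 0 marks the "No" category created during imputation.
quality_map = {"No": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
ordinal_cols = ["ExterQual", "ExterCond", "HeatingQC", "KitchenQual", "BsmtQual",
                "BsmtCond", "FireplaceQu", "GarageQual", "GarageCond", "PoolQC"]
for col in ordinal_cols:
    df[col] = df[col].map(quality_map).fillna(0).astype(int)

# Nominal: one-hot encode (dummify) the unordered categoricals.
nominal_cols = ["LotConfig", "Exterior1st", "Exterior2nd", "Foundation",
                "MasVnrType", "SaleCondition", "BsmtExposure"]
df = pd.get_dummies(df, columns=nominal_cols)
```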

Additional Features

● Combined redundant features and created new ones to capture different perspectives on the dataset

● TotalBath = BsmtFullBath + BsmtHalfBath/2 + FullBath + HalfBath/2

● Bedroom/Bathroom Ratio = BedroomAbvGr/TotalBath
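A short sketch of these derived features, assuming a DataFrame `df` with the standard Ames column names (the `BedBathRatio` name and the divide-by-zero guard are ours):

```python
import numpy as np

# Half baths count as 0.5 of a full bath.
df["TotalBath"] = (df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"]
                   + df["FullBath"] + 0.5 * df["HalfBath"])

# Bedroom-to-bathroom ratio; avoid dividing by zero for homes with no baths.
df["BedBathRatio"] = df["BedroomAbvGr"] / df["TotalBath"].replace(0, np.nan)
```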

Binning for Neighborhood

● Computed the average price per square foot for each neighborhood

● Transformed Neighborhood into an ordinal feature based on average price per sqft ranking
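One way to implement the binning, assuming price per square foot is based on above-ground living area (`GrLivArea`); on a held-out set the ranks learned from the training data would be reused:

```python
# Average sale price per square foot within each neighborhood,
# then a dense rank: 1 = cheapest per sqft, n = most expensive.
ppsf = (df["SalePrice"] / df["GrLivArea"]).groupby(df["Neighborhood"]).mean()
rank = ppsf.rank(method="dense").astype(int)

# Replace the neighborhood label with its ordinal rank.
df["Neighborhood"] = df["Neighborhood"].map(rank)
```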

Feature Selection of Data

After the feature engineering, each feature's distribution was examined. Based on these distributions, we dropped columns with extreme skewness.

Categorical Nominal 

● Dropped based on distribution of classes

○ MSZoning, Street, Alley, LotShape, LandContour, Utilities, LandSlope, BldgType, HouseStyle, RoofStyle, RoofMatl, BsmtFinType2, Heating, Electrical, Condition1, Condition2, CentralAir, GarageType, SaleType, BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, 1stFlrSF, 2ndFlrSF, LowQualFinSF, MSSubClass

● Binned classes with fewer than 100 observations into "Other" (sketched below)

○ Exterior1st, Exterior2nd, Foundation
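A sketch of the rare-class binning, assuming the 100-observation cutoff is applied before dummification:

```python
# Collapse classes with fewer than 100 observations into an "Other" bucket.
for col in ["Exterior1st", "Exterior2nd", "Foundation"]:
    counts = df[col].value_counts()
    rare = counts[counts < 100].index.tolist()
    df[col] = df[col].replace(rare, "Other")
```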

Regularized Regression

● Stepwise regression was applied to the full set of features

  •  Features were added and dropped based on their significance levels

● 33 features were picked based on the selection process

  • OverallQual, GrLivArea, Neighborhood, TotalBsmtSF, BsmtExposure_Gd, KitchenQual, GarageCars, OverallCond, MasVnrArea, BsmtFinType1, SaleCondition_Partial, Fireplaces, MasVnrType_BrkFace, KitchenAbvGr, GarageYrBlt, LotConfig_CulDSac, BsmtExposure_NoBsmt, BsmtQual, TotRmsAbvGrd, WoodDeckSF, ScreenPorch, ExterQual, LotArea, BedroomAbvGr, Functional, Exterior1st_Plywood, YearRemodAdd, TotalBath, Bedroom.Bathroom, SaleCondition_Normal, LotConfig_FR2, GarageQual, YearBuilt
  •  R^2 = 0.844, Adjusted R^2 = 0.840, AIC = 34251.1597

● Compared these with the features selected by Lasso and kept the 29 features that occur in both selection processes (a sketch of the stepwise loop follows)
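A sketch of the add/drop loop with statsmodels, using conventional entry/stay p-value thresholds (the exact thresholds used in the project are not stated):

```python
import statsmodels.api as sm

def stepwise(X, y, enter=0.05, stay=0.10):
    """Greedy stepwise selection on OLS p-values."""
    included = []
    while True:
        changed = False
        # Forward step: add the most significant excluded feature.
        excluded = [c for c in X.columns if c not in included]
        pvals = {c: sm.OLS(y, sm.add_constant(X[included + [c]])).fit().pvalues[c]
                 for c in excluded}
        if pvals and min(pvals.values()) < enter:
            included.append(min(pvals, key=pvals.get))
            changed = True
        # Backward step: drop the least significant included feature.
        model = sm.OLS(y, sm.add_constant(X[included])).fit()
        worst = model.pvalues.drop("const").idxmax() if included else None
        if worst is not None and model.pvalues[worst] > stay:
            included.remove(worst)
            changed = True
        if not changed:
            return included
```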

Lasso Penalization

  1. Selected the best alpha (from 0.001 to 100) via 10-fold cross-validation.
  2. Identified features with non-zero coefficients at the best alpha, reducing the feature count from 76 to 38.
  3. Further reduced the features from 38 to 29 based on BIC.
  4. Split the train set with the reduced features into train/test sets to assess the spread of train/test R^2.
  5. Cross-validated the train set with the reduced features to further inspect the stability of R^2.

|                             | Lasso         | Lasso         | Linear Regression |
|-----------------------------|---------------|---------------|-------------------|
| Hyperparameter              | alpha = 15.70 | alpha = 0.001 | n/a               |
| # of features               | 38            | 29            | 29                |
| Train R^2*                  | 0.8425        | 0.8402        | 0.8402            |
| Test R^2                    | 0.8329        | 0.8319        | 0.8319            |
| Cross-validation mean R^2** | 0.8441        | 0.8421        | 0.8421            |

*30% split for testing
**5-fold cross-validation performed
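Steps 1 and 2 map directly onto scikit-learn's `LassoCV` (a sketch; `X_train`/`y_train` are assumed to be the standardized design matrix and target):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Search alphas on a log grid from 0.001 to 100 with 10-fold CV,
# then keep the features whose coefficients survive the penalty.
alphas = np.logspace(-3, 2, 100)
lasso = LassoCV(alphas=alphas, cv=10).fit(X_train, y_train)
selected = X_train.columns[lasso.coef_ != 0]
print(f"best alpha = {lasso.alpha_:.3f}; kept {len(selected)} of {X_train.shape[1]} features")
```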

Features for Regularized Regression: 

LotArea, Neighborhood, OverallQual, OverallCond, MasVnrArea, ExterQual, BsmtQual, BsmtFinType1, TotalBsmtSF, GrLivArea, BedroomAbvGr, KitchenAbvGr, KitchenQual, TotRmsAbvGrd, Functional, Fireplaces, GarageYrBlt, GarageFinish, GarageCars, WoodDeckSF, 3SsnPorch, ScreenPorch, TotalBath, LotConfig_CulDSac, LotConfig_FR2, Exterior1st_Plywood, SaleCondition_Partial, BsmtExposure_Gd, BsmtExposure_NoBsmt

Random Forest: Base Model / Feature Selection 

  • Used a random forest regressor to rank feature importance for inclusion in tree-based models (a sketch follows the table below)
  • Trained an initial random forest with all 75 features, and selected features for additional models based on importance:
    • Trained a model on the 25 features with >1% feature importance
    • From the 25-feature model, identified 16 features with >2% importance
    • In the 16-feature model, isolated 8 features with >5% importance
  • Even when restricted to only 8 features, a high degree of overfitting is evident from the large train/validation error delta

 

| # of Features | Train R^2 | Validation R^2 |
|---------------|-----------|----------------|
| 75            | 0.9775    | 0.8428         |
| 25            | 0.9805    | 0.8675         |
| 16            | 0.9803    | 0.8602         |
| 8             | 0.9764    | 0.8373         |
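The importance-based cuts can be reproduced along these lines (a sketch; `X_train`/`y_train` and the hyperparameters are assumptions):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Fit on all features and rank them by impurity-based importance,
# then keep those above a threshold (1%, 2%, or 5% above).
rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_train, y_train)
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
keep = importances[importances > 0.01].sort_values(ascending=False)
print(keep)
```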

Random Forest: Hyperparameter Tuning 

  • Evaluated variations in hyperparameters on the 8-feature random forest to limit overfitting (a grid search sketch follows below):
    • # of trees: 100-1,000
    • Max depth: 2-5
    • Max features: 2-5
  • A high degree of overfitting regardless of hyperparameter values indicates too many features and/or data not well suited to random forest prediction
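A sketch of that grid search over the stated ranges, assuming the 8-feature design matrix is named `X_train_8`:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 250, 500, 1000],  # # of trees: 100-1,000
    "max_depth": [2, 3, 4, 5],              # max depth: 2-5
    "max_features": [2, 3, 4, 5],           # max features: 2-5
}
search = GridSearchCV(RandomForestRegressor(random_state=42),
                      param_grid, cv=5, scoring="r2")
search.fit(X_train_8, y_train)
print(search.best_params_, search.best_score_)
```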

Random Forest: Forward Feature Selection

  • Ran a forward feature selection from the null model to isolate the most important features and minimize overfitting (results in the table below, followed by a sketch of the loop)
  • A high degree of overfitting exists with as few as two features, indicating that random forest may not be the best model for this problem
  • Of the first five features chosen by forward selection, one does not overlap with the features identified using regularized regression
  • To account for the impact of the bedroom/bathroom ratio on sale price, the 5-feature random forest was selected for inclusion in the final model

 

| n | Feature Added              | Train R^2 | Validation R^2 |
|---|----------------------------|-----------|----------------|
| 1 | Overall Quality            | 0.6820    | 0.6787         |
| 2 | Neighborhood               | 0.7979    | 0.7390         |
| 3 | Above Ground Living Area   | 0.9736    | 0.8174         |
| 4 | Bedroom / Bathroom Ratio   | 0.9794    | 0.8531         |
| 5 | Total Basement Square Feet | 0.9811    | 0.8719         |
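The greedy loop behind the table above might look like this (a sketch; at each step the feature that most improves cross-validated R^2 is added):

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

chosen, remaining = [], list(X_train.columns)
for _ in range(5):
    # Score every candidate feature when added to the current set.
    scores = {f: cross_val_score(RandomForestRegressor(random_state=42),
                                 X_train[chosen + [f]], y_train,
                                 cv=5, scoring="r2").mean()
              for f in remaining}
    best = max(scores, key=scores.get)
    chosen.append(best)
    remaining.remove(best)
    print(f"added {best}: CV R^2 = {scores[best]:.4f}")
```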

Gradient Boosting 

  • Trained a baseline gradient boosting regressor on all features to evaluate feature importance, with 30 features having >1% importance (high overlap with the chosen linear regression features); a baseline sketch follows the table below
  • Selected features using forward feature selection:
    • Gradient boosting models show lower levels of overfitting than random forest at each number of features
    • Selected the 8-feature model for inclusion in the final model

 

| n  | Feature Added                 | Train R^2 | Validation R^2 |
|----|-------------------------------|-----------|----------------|
| 1  | Overall Quality               | 0.6818    | 0.6783         |
| 2  | Above Ground Living Area      | 0.8111    | 0.7779         |
| 3  | Neighborhood                  | 0.8445    | 0.8260         |
| 4  | Total Basement Square Feet    | 0.8701    | 0.8396         |
| 5  | Total Bathrooms               | 0.8771    | 0.8503         |
| 6  | Garage Cars                   | 0.8759    | 0.8500         |
| 7  | Kitchens Above Grade          | 0.8754    | 0.8504         |
| 8  | Remodel Date                  | 0.8751    | 0.8552         |
| 9  | Sale Condition: Partial (New) | 0.8690    | 0.8486         |
| 10 | Number of Fireplaces          | 0.8709    | 0.8495         |
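The same importance ranking and forward-selection loop carry over with a boosting estimator; a sketch of the baseline model (hyperparameters are placeholders, not the project's tuned values):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                random_state=42).fit(X_train, y_train)
top = pd.Series(gbm.feature_importances_, index=X_train.columns)
print(top[top > 0.01].sort_values(ascending=False))  # the ">1% importance" cut
```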

Stacked Model

Lasso + GBM + Random Forest: R^2 = 0.985, MSE = 0.178

Lasso + GBM: R^2 = 0.903, MSE = 0.155
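The post does not specify how the base learners were combined; one common approach is scikit-learn's `StackingRegressor`, which fits a linear meta-model on out-of-fold predictions (hyperparameters here are placeholders):

```python
from sklearn.ensemble import (StackingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.linear_model import Lasso, LinearRegression

stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.001)),
                ("gbm", GradientBoostingRegressor(random_state=42)),
                ("rf", RandomForestRegressor(random_state=42))],
    final_estimator=LinearRegression(),
    cv=5)
stack.fit(X_train, y_train)
print("test R^2:", stack.score(X_test, y_test))
```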

Future Exploration on Ames Housing Price Data

  • Incorporate features from third-party sources such as mortgage interest rates or unemployment rates.
  • Attempt other imputation methodologies to assess whether they improve predictive performance.
  • Explore dimensionality reduction techniques for feature selection purposes.

About Authors

DongHwi Kim

James (Dong-Hwi) Kim is an NYC Data Science Fellow with a Bachelor's Degree in Applied Mathematics and Statistics from Stony Brook University. Before coming to NYCDSA, James was the CEO and founder of a startup, where he found a...

Lily Kuo

Liyi (Lily) is a Data Science Fellow at the New York City Data Science Academy with a Master's in Biomedical Engineering and Education. She aspires to be a data scientist who executes data-driven strategies to increase efficiency,...

Jake Kobza

I've spent 3+ years in strategy consulting in the healthcare space, helping Fortune 500 clients solve problems facing their businesses. I specialize in developing clinical and digital strategy, resulting in improved outcomes for members and patients across the...
