Navigating the MAZE of Real Estate in Ames, Iowa

Posted on Jun 9, 2020

Anthony Ali, Elina Egiazarova, Swarup Malli, and David Zask

Github

The objective of this project is to leverage advanced regression techniques to predict home prices in Ames, Iowa. The source dataset consisted of train and test CSV files; the train file was used to train the machine learning models.

The training set consisted of 1,460 rows and 79 variables, covering sales from 2006 to 2010: 38 numeric features and 41 categorical variables. First, we normalized the numeric data, as it was right-skewed. We then split the categorical variables into nominal and ordinal features.

Data analysis

In order to build linear models to predict prices, we first need to analyze and prepare the data. 

First, we notice that some features have over 95% zero values. For the vast majority of properties in our dataset, these features take the same value, so they contribute little to price variation and can be omitted when choosing the features that influence prices. We therefore remove PoolArea, LowQualFinSF, and 3SsnPorch.

The issue is not unique to features with zero values: there are categorical features where almost all values are the same. Again, these cannot help us predict prices, given that their values are nearly identical for every property. We therefore remove Street, Condition2, and Utilities (for the latter, 100% of values are the same: all houses have all public utilities).
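Both of these filtering steps can be expressed as one rule: drop any column whose most common value covers at least 95% of rows. A minimal sketch on toy data (the column names come from the Ames dataset, but the values and the 95% threshold here are illustrative):

```python
import pandas as pd

# Toy frame standing in for the Ames data (hypothetical values).
df = pd.DataFrame({
    "PoolArea": [0] * 98 + [512, 648],       # ~98% zeros
    "Street":   ["Pave"] * 99 + ["Grvl"],    # near-constant categorical
    "LotArea":  range(100),                  # informative feature, kept
})

# Drop any column whose most frequent value covers >= 95% of rows.
threshold = 0.95
to_drop = [c for c in df.columns
           if df[c].value_counts(normalize=True).iloc[0] >= threshold]
df = df.drop(columns=to_drop)
```

The same rule handles numeric zero-heavy columns and near-constant categoricals uniformly, which keeps the preprocessing code short.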

A few categorical features have very few unique values. Where we can reduce them to two (as with LandSlope, whose third value appears only 4 times), we collapse the rare category and encode the feature as binary: 0 and 1.
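For LandSlope, this might look like the following sketch, where the rare "Sev" level is folded into "Mod" before encoding (the category counts here are hypothetical):

```python
import pandas as pd

# Hypothetical LandSlope column: "Sev" is rare, so fold it into "Mod"
# and encode the result as 0/1.
slope = pd.Series(["Gtl"] * 90 + ["Mod"] * 6 + ["Sev"] * 4, name="LandSlope")

slope_binary = slope.replace({"Sev": "Mod"}).map({"Gtl": 0, "Mod": 1})
```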

To preserve the linearity of the relationship between numerical features and price, we check for outliers and remove them where possible. Some features (LotFrontage, LotArea) have clear outliers whose removal restores linearity significantly.
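Outlier removal can be as simple as a boolean filter; in this sketch the cutoff of 300 for LotFrontage is an illustrative judgment call of the kind one reads off a scatter plot, not a value from the post:

```python
import pandas as pd

# Toy data: one implausibly large LotFrontage value.
df = pd.DataFrame({
    "LotFrontage": [60, 65, 70, 75, 80, 313],
    "SalePrice":   [150, 160, 170, 180, 190, 200],
})

# Drop rows beyond an absolute cutoff chosen from the scatter plot.
df_clean = df[df["LotFrontage"] < 300]
```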

For both numerical and categorical features, we check for missing values and impute them with zeros (for numerical features) when appropriate, i.e. when the corresponding feature of the house is absent (e.g. pool, basement). LotFrontage is imputed with the mean value for each Neighborhood, and MiscFeature is removed, as it turns out to have 96% of its values missing.
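The per-neighborhood mean imputation for LotFrontage is a one-liner with a grouped transform; a minimal sketch on toy data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "OldTown", "OldTown"],
    "LotFrontage":  [60.0, 80.0, np.nan, 50.0, np.nan],
})

# Fill missing LotFrontage with the mean frontage of that Neighborhood.
df["LotFrontage"] = (df.groupby("Neighborhood")["LotFrontage"]
                       .transform(lambda s: s.fillna(s.mean())))
```

Grouping before filling means each missing value is replaced by a locally plausible frontage rather than one global mean.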

Finally, we log transform highly skewed numerical features and dummify the categorical features. Our dataset is now ready for modeling.  
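The final two steps fit in a few lines; this sketch uses `np.log1p` (which handles zeros safely) and `pd.get_dummies`, with toy values standing in for the real columns:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "GrLivArea": [856, 1710, 1262, 2198],
    "MSZoning":  ["RL", "RM", "RL", "FV"],
})

# log1p tames right skew in the numeric feature.
df["GrLivArea"] = np.log1p(df["GrLivArea"])

# One-hot encode ("dummify") the categorical column.
df = pd.get_dummies(df, columns=["MSZoning"])
```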

Some features showed a clear linear relationship to price.

Modeling

After completing the data analysis, we needed a baseline to see how our models would perform, so we tested our training data using a basic Linear Regression model. This test gave us an R² of 0.9128 and an RMSE of 0.1198.
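A baseline of this kind can be scored with cross-validated RMSE in a few lines; this sketch uses synthetic features and a synthetic log-scale target in place of the prepared Ames data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # stand-in for the prepared features
y = X @ np.array([0.3, 0.2, 0.1, 0.05, 0.02]) + rng.normal(scale=0.1, size=200)

# Cross-validated RMSE of a plain linear regression as the baseline.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_root_mean_squared_error", cv=5)
rmse = -scores.mean()
```

Because the target is on a log scale (after the log transform above), an RMSE around 0.12 corresponds to a typical prediction error of roughly 12% of the sale price.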

We then reduced features using Lasso Regression to make our models more accurate. To test this, we ran each model on the same test data before and after the feature reduction. Our first model, Lasso Regression, produced an RMSE of 0.1180 before the feature reduction and 0.1103 after. This difference showed that the feature reduction made our model more accurate.
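Lasso-based feature reduction exploits the fact that the L1 penalty drives uninformative coefficients exactly to zero. A minimal sketch with synthetic data, where only the first three of ten features carry signal (the data-generating coefficients are illustrative):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
# Only the first three features actually drive the target.
y = (1.0 * X[:, 0] + 0.5 * X[:, 1] + 0.25 * X[:, 2]
     + rng.normal(scale=0.1, size=300))

lasso = LassoCV(cv=5).fit(X, y)

# Keep only the features with non-zero Lasso coefficients.
selected = np.flatnonzero(lasso.coef_)
X_reduced = X[:, selected]
```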

We then trained a Ridge Regression model, which produced an RMSE of 0.1225 before the feature reduction and 0.1142 after. Once again the model performed better after the reduction, though not as well as the Lasso model. The final linear model we used was Elastic-Net with cross-validation, which produced an RMSE of 0.1215 before the reduction and 0.1094 after, making it the best performing of the linear models we tested.
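Comparing regularized linear models on a common holdout set follows a simple pattern; this sketch evaluates `RidgeCV` and `ElasticNetCV` on synthetic data (the alpha grid is illustrative):

```python
import numpy as np
from sklearn.linear_model import RidgeCV, ElasticNetCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
y = (X @ rng.normal(size=8)) * 0.3 + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rmse = {}
for name, model in [("ridge", RidgeCV(alphas=[0.1, 1.0, 10.0])),
                    ("enet", ElasticNetCV(cv=5))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse[name] = mean_squared_error(y_te, pred) ** 0.5
```

Elastic-Net blends the L1 and L2 penalties, which is why it can edge out both Lasso and Ridge when some features are redundant and others are irrelevant.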

To determine whether we could get more out of our model and drop further irrelevant features, we turned to Gradient Boosting. While tuning, we saw that there was going to be inherent overfitting on the training data because of how powerful the learning algorithm is, so we performed a grid search to minimize this problem.
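A grid search over the regularization-relevant knobs of gradient boosting might look like this sketch; the parameter grid (learning rate and tree depth) and the synthetic data are illustrative, not the exact grid from the project:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10.0,
                       random_state=0)

# Hypothetical grid: shrink the learning rate and limit tree depth
# to rein in overfitting on the training data.
grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    cv=3,
)
grid.fit(X, y)
best = grid.best_params_
```

Cross-validation inside the grid search scores each candidate on held-out folds, so the selected parameters reflect generalization rather than training fit.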

Overfitting on the training data, measured by R-squared.

With fully optimized parameters, the model had an R-squared of 0.92 and an RMSE of 0.11, meaning it performed very well, but only marginally better than the other models. The real value of Gradient Boosting was that it let us identify the important features and eliminate the unimportant ones without a significant impact on performance. Only a few features had much impact on the model, and feature importance dropped off quickly after them. When we retested the model using only the 15 most important features (the original model used 63), we saw a very modest drop in performance: an R-squared of 0.87 and an RMSE of 0.12. We concluded that the most informative model was a limited-feature Gradient Boosted model.
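Selecting the top-k features from a fitted booster uses its `feature_importances_` attribute; a minimal sketch on synthetic data with 20 features, keeping the top 15 as in the post:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=20, n_informative=3,
                       noise=5.0, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Rank features by importance and keep the top k for a leaner refit.
k = 15
top_k = np.argsort(model.feature_importances_)[::-1][:k]
X_top = X[:, top_k]
```

The importances sum to 1, so a steep drop-off after the first few features, as we observed, means those few carry nearly all of the model's predictive signal.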

Conclusion

So what really matters when it comes to housing prices? The most important feature by far was Overall Quality, followed by Living Area and then Neighborhood, after which feature importance drops steeply again. We recommend that home buyers and sellers focus on quality first and foremost. Quality is the first thing to depreciate over time and must be tended to in order to preserve a home's value; any work on the home should focus on maintaining functionality rather than on expensive additions or remodels. Living area square footage and neighborhood are also important determinants of price, so be aware of size and location when purchasing a home. If a home is lacking in these qualities and still carries a hefty price tag, it is probably not worth the extra money.

About Authors

David Zask

Certified data scientist with extensive project experience. Skilled in Machine Learning, Big Data, Unsupervised Learning, Deep Learning, Data Engineering, R, Python, Documentation, Data Analytics, Start-ups, and Business Development. Strong data science and media professional with a BA in...
View all posts by David Zask >

Anthony Ali

Anthony Ali has a background in cyber-security with a Bachelor's degree in Network Forensics and Intrusion Investigation. He is a CISSP-certified security analyst with industry skills in threat management, security risk identification and mitigation, and security infrastructure...
View all posts by Anthony Ali >
Swarup Malli

Swarup has a Bachelor's degree in Information Technology. He started his career as an ETL developer and eventually transitioned into the Business Intelligence space. He has been consulting as a Business Intelligence professional for 10+ years...
View all posts by Swarup Malli >
