Building a home-valuation algorithm

By Jingrou Wei, Catherine Tang Sin Kim, Devin Fagan and Doug Devens


Buying a house is one of the largest financial decisions many people make in their lifetimes.  Homeownership has been encouraged for years by society and, by extension, governments.  However, when facing a decision of such financial magnitude, people may be concerned that they are paying more for the house than it’s worth.  Previously the complexity of this task led people to simply take the price of the house and divide by the total square feet, obtaining a price per square foot which one could compare to other houses in the neighborhood.  Now with the digitization of data and increased computational power, we are able to increase the sophistication of our analysis of the relative worth of a house’s features.  Moreover, there are websites that allow one to compare other properties against the intended purchase. However, because the algorithms aren’t public and the house one is buying is necessarily not the same as the ones on the website (because of location, timing and other factors), it is still necessary to extrapolate to the house actually being purchased.  This project created a model to make the process more transparent.

One could use a model such as we have developed to predict the price for which a specific house will sell.  However, there are many unquantifiable factors in a house’s final sale price, including emotional and macroeconomic factors.  Therefore, our target audience is the homeowner considering upgrading their house for an upcoming sale; the model provides a relative comparator between houses even if they have different features.  We propose to develop a model that can quantify the potential increase in value (within prediction error) for a given upgrade to the house.  This is not to say the homeowner should or should not spend more for an upgrade if it has specific desirable features, but this can guide the amount the homeowner might, on average, expect to recoup from the investment.


The dataset under analysis was the House Prices: Advanced Regression Techniques set on Kaggle.  The houses were sold in Ames, Iowa during the period 2006 to 2010.  There were 1460 houses and their sale prices, for which 79 features (e.g. the number of bathrooms, square footage, overall quality rating) were quantified through something like the Multiple Listing Service.  The competition was to predict the prices of another set of houses for which the prices were not provided.

Data preprocessing and feature engineering

We first examined the dataset to determine the number of missing observations, meaning cases where a house lacked a value for a specific feature.  We found relatively few of those, except for features such as pools and fireplaces.  Pools are relatively rare in the Upper Midwest, so that is expected.  Similarly, fireplaces are rarer in new houses built with modern heating systems.
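That first check can be sketched in pandas as follows (the column names follow the Kaggle data dictionary, but the toy values here are purely illustrative):

```python
import pandas as pd

# Toy stand-in for the Ames training set; the column names match the
# Kaggle data dictionary, the values are illustrative only.
df = pd.DataFrame({
    "PoolQC":      [None, None, None, "Gd"],
    "FireplaceQu": [None, "TA", None, "Gd"],
    "GrLivArea":   [1710, 1262, 1786, 2198],
})

# Count missing observations per feature, most-missing first.
missing = df.isnull().sum().sort_values(ascending=False)
print(missing)
```

On the real dataset, the same two lines surface the pool- and fireplace-related columns at the top of the list.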

We next took the quality characteristics that ranged from excellent to poor, such as overall quality and exterior quality, or from livable to unfinished for basement condition, and converted them to numeric ranks ranging from one (poor) to 10 (excellent).  We then squared these numbers, as we observed a quadratic relationship between the house sale price and the ordinal rankings in most cases, as shown in the figure below.  Similarly, we modeled the log of the sale price, since as the price increased, the variation between two seemingly similar houses grew, as we might expect ($5,000 means more at a price of $160,000 than at $500,000).  Taking the logarithm reduces the relative importance of that variation at higher prices.
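Both transformations can be sketched as below; the exact numeric spacing of the rank mapping is a modeling choice of ours, not something given by the data:

```python
import numpy as np
import pandas as pd

# Map the quality codes from the Ames data dictionary to ranks on a
# 1 (poor) to 10 (excellent) scale; the spacing here is an assumption.
quality_rank = {"Po": 1, "Fa": 3, "TA": 5, "Gd": 7, "Ex": 10}

df = pd.DataFrame({
    "ExterQual": ["TA", "Gd", "Ex", "Fa"],
    "SalePrice": [130000, 180000, 320000, 105000],
})

# Square the rank to capture the roughly quadratic price response,
# and model the log of SalePrice to damp the variance at high prices.
df["ExterQualSq"] = df["ExterQual"].map(quality_rank) ** 2
df["LogSalePrice"] = np.log(df["SalePrice"])
```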

We then removed features that correlated strongly with each other (not necessarily with the target house sale price), as shown in the figure below, where darker colors correspond to more correlation between variables.

That there would be a correlation between some variables is not surprising.  For example, the square footage of the garage should generally vary directly with the number of cars it holds, as we confirmed. We also examined the relationship between categorical features of the house (e.g. neighborhood and the year the house was built) and the continuous variables, and removed variables where a categorical feature explained more than 50% of the variation in a continuous one.  We also removed the feature ‘overall quality’ in favor of keeping the quality of specific features, such as the exterior finish, kitchen or basement.  Finally, we scaled the features so that they varied uniformly between 0 and 1, so that no feature would be artificially discounted by the model simply because of its scale.
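The correlation-based pruning and 0-to-1 scaling can be sketched as follows; the 0.8 threshold and the toy values are illustrative assumptions, not the exact cutoff we used:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy data: GarageArea deliberately tracks GarageCars closely.
df = pd.DataFrame({
    "GarageCars": [1, 2, 2, 3, 3],
    "GarageArea": [280, 480, 500, 720, 760],
    "GrLivArea":  [2600, 900, 1500, 2100, 1400],
})

# Drop the second feature of any pair whose absolute correlation
# exceeds the (assumed) 0.8 threshold.
corr = df.corr().abs()
cols = corr.columns
to_drop = set()
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        if corr.iloc[i, j] > 0.8:
            to_drop.add(cols[j])
kept = df.drop(columns=sorted(to_drop))

# Rescale the surviving features to the [0, 1] range.
scaled = pd.DataFrame(MinMaxScaler().fit_transform(kept), columns=kept.columns)
```

Here the garage area is dropped in favor of the car count, mirroring the example in the text.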

Model fit and selection

We examined two types of models: multiple linear regression models and tree-based models.  The underlying data are, to some extent, fundamentally linear, as alluded to in the introduction, where a potential buyer would divide the house price by the square footage.  The linear models we used were penalized multiple regression models, using lasso and ridge penalizations, and a support vector regression with a linear kernel.  On the tree side, we examined a single decision tree, a random forest and a gradient-boosted tree model.  Tree models have an intrinsic ability to model a non-linear response because they make no assumptions about the nature of the response (sale price, in this case). One advantage of the linear models is that they are more intuitive for extracting information about a particular change to a feature, though one can obtain information about the relative importance of various features from a tree-based model.
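Comparing this lineup by cross-validated R² can be sketched as below, using synthetic data as a stand-in for the preprocessed features and log-price target (these are not our actual fits or scores):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic stand-in for the preprocessed Ames features and target.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

models = {
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=1.0, max_iter=10000),
    "svr_linear": SVR(kernel="linear"),
    "gbt": GradientBoostingRegressor(random_state=0),
}

# Mean 5-fold cross-validated R^2 for each candidate model.
scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    for name, model in models.items()
}
for name, score in scores.items():
    print(f"{name}: {score:.3f}")
```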

In all the models we examined, we found that roughly the same features were important.  The exact ordering varied from model to model, but the trends were consistent; an example from the gradient-boosted tree is shown below.

We found that the linear models were somewhat prone to ‘over-fitting,’ where the model fits itself to random noise in the training data set.  This shows up as a lower score on the test set.  Hyperparameter tuning can address this, but we were unable to find settings that eliminated the over-fitting entirely.  The support vector regression model overfit less, with more similar performance between the training and test datasets, as shown in the figure below.
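The kind of tuning and train/test comparison described here can be sketched as follows, using ridge regression as the example (the alpha grid and the synthetic data are illustrative assumptions):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the preprocessed features and target.
X, y = make_regression(n_samples=300, n_features=30, noise=15.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Tune the penalty strength; larger alpha shrinks coefficients and
# narrows the gap between training and test scores.
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# Over-fitting shows up as a train score well above the test score.
train_r2 = search.score(X_train, y_train)
test_r2 = search.score(X_test, y_test)
print(f"train R^2 = {train_r2:.3f}, test R^2 = {test_r2:.3f}")
```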

The tree-based models overfit more, with test-set performance comparable to the support vector regression. This overfitting raised some concern about a decrease in performance on another set of unseen data. Because of this, and the ability to extract specific information about changes to particular features, the team selected the support vector regression predictions for the submission.

Specific predictions from our model

Recall that our goal was to predict the sale price of the houses, but that was of secondary importance to examining the relative importance of various features, such as the number of bathrooms or an improvement in kitchen quality.  As we described previously, we modeled the logarithm of the house sale price.  To obtain the price, we exponentiate the model's prediction Y:

SalePrice = exp(Y_model)

Similarly, if we use our model to predict a change in the potential sale price due to a change in a feature, we will now predict Y + ΔY, and the sale price can be obtained by:

SalePrice = exp(Y_model + ΔY_model) = exp(Y_model) · exp(ΔY_model)

An additive change on the logarithmic scale is multiplicative on the price scale, so we must assume a base-case sale price (exp(Y_model)), since any change is multiplied from that base; we take this to be the median-value house at $163,000.  We then subtract the base-case price from that product to obtain the increase due to the upgrade.  Using that as the transformation of our predictions to specific examples, we obtain the following:

  • Adding 150 sq ft of garage space, such as a shed addition, approximately increases sale price by $4,000
  • Adding 200 sq ft of finished basement space (from unfinished) approximately increases sale price by $1,500
  • Improving Kitchen Quality from Fair to Good approximately increases sale price by $8,300
  • Adding a Full Bathroom approximately increases sale price by $11,300
  • Adding 500 sq ft of living space, such as by finishing an attic, approximately increases sale price by $7,500
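The back-transformation behind figures like these can be sketched as follows; the ΔY value here is illustrative, not an actual output of our model:

```python
import numpy as np

# Base-case (median) house price assumed in the text.
base_price = 163_000

# Hypothetical model-predicted change on the log-price scale
# for a single upgrade; chosen for illustration only.
delta_y = 0.067

# An additive change in log-price multiplies the price, so the dollar
# value of the upgrade is base_price * (exp(delta_y) - 1).
upgrade_value = base_price * (np.exp(delta_y) - 1)
print(round(upgrade_value))
```

For a different base valuation, the same ΔY scales the dollar figure proportionally, which is why the bullet-point numbers shift for houses above or below the median.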

These numbers can be expected to be different for houses with different base valuations. 

The homeowner now has a model that can provide a specific estimate of the value increase due to an upgrade, which may serve as a budget guide if the upgrade is made purely to sell the house.


About Authors


Doug Devens

Doug Devens has a background in chemical engineering, with a doctorate in rheology of polymers. He has nearly 20 years of experience in medical device product development, with a dozen product launches. It is here he learned the...

Devin Fagan

Devin graduated from City College of NY with a Bachelor's degree in Political Science. His background includes exposure to economics, public policy, and campaign strategy. He plans to use data to create data-driven public policy that helps the...
