Studying Data to Predict House Prices Using Machine Learning

The skills the authors demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Group Project: Credit to Anthony Parrillo, Akshit Arora, Alan Lin, and Antonio Salvador.

Objective:

To employ machine learning techniques to build a model that predicts sale prices based on the dataset provided on Kaggle. The training set includes 1,460 observations with 80 variables (the test set is of similar size). The data were collected in Ames, IA from 2006 to 2010 and cover most of the questions a buyer or property assessor would ask to determine the value of a property.

 

Context:

Real estate is worth almost $30 trillion in the US economy, and its dynamism is one of the indicators of a healthy economy. There are approximately 2 million active real estate agents in the US; they play an important role in linking buyers and sellers. Traditionally, sale prices in the real estate market are estimated from the most recent comparable sales in similar neighborhoods and houses; therefore, a model that accurately predicts a house's selling price can complement these empirical, limited approaches and strengthen the objectivity of the estimate.

 

Methodology:

We implemented a four-step approach:

 

1. Understanding the dataset:

We analyzed basic demographics of the city of Ames, including population, economy, household income, and geographic location. The original dataset includes sales of almost 3,000 houses over a period of four years, roughly 10% of all the houses in Ames, IA, a city of about 60,000 people with close to three persons per household. Ames could be categorized as a college town in suburban Midwestern America, home to Iowa State University. This might imply lower volatility in house prices and an economy dependent on educational services (probably more stable than in an industrial town).

We evaluated the types of variables: continuous (mostly related to the dimensions of different parts of the house), discrete (counts of items in the house), nominal/categorical (identifying different conditions), and ordinal (ratings of various items of the house).

We conducted EDA on the numeric variables to understand their nature: standard deviation, skewness, missingness, etc.
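A minimal sketch of this step; the file name train.csv is our assumption (the post does not name the file):

```python
import pandas as pd

# Summarize the numeric variables: spread, skewness, and missingness
train = pd.read_csv("train.csv")
numeric = train.select_dtypes(include="number")

summary = pd.DataFrame({
    "std": numeric.std(),
    "skewness": numeric.skew(),
    "pct_missing": numeric.isnull().mean() * 100,
})
print(summary.sort_values("pct_missing", ascending=False).head(10))
```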

 

2. Cleaning & Feature Engineering:

a. Correlation: we evaluated the degree of correlation between the numeric variables and sale price in order to get a quick view of potential feature importance and redundancies (similar variables with a high degree of mutual correlation).
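A sketch of the correlation check, under the same file-name assumption as above:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name
# Rank numeric features by their correlation with SalePrice
corr = (train.select_dtypes(include="number")
             .corr()["SalePrice"]
             .drop("SalePrice")
             .sort_values(ascending=False))
print(corr.head(10))  # strongest candidates for important features
```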


b. Missingness: we identified features with a high percentage of missing values in order to choose an imputation methodology. In most cases, missingness reflected the natural absence of the feature itself (no fence, no garage, etc.), which could be solved by imputing zeros. For Lot Frontage, which showed a significant percentage (20%) of missing values, we assessed that median imputation was the most suitable method.
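A sketch of the two imputation paths; MasVnrArea and Fence are illustrative stand-ins for the "naturally absent" features:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name

# Missing because the feature is absent: impute zero (numeric) or "None" (categorical)
train["MasVnrArea"] = train["MasVnrArea"].fillna(0)
train["Fence"] = train["Fence"].fillna("None")

# LotFrontage (~20% missing): median imputation
train["LotFrontage"] = train["LotFrontage"].fillna(train["LotFrontage"].median())
```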

 


 

c. We combined redundant features that were also correlated; for example, 1st floor footage + 2nd floor footage + basement footage became total footage for the house.
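A sketch of the combination using the Ames column names; "TotalSF" is our name for the combined feature, not necessarily the team's:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name
# Collapse the three correlated area features into one total-footage feature
train["TotalSF"] = train["1stFlrSF"] + train["2ndFlrSF"] + train["TotalBsmtSF"]
```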

[Figure: Sale Price by Sale Condition]

d. We evaluated categorical features against sale price to understand the impact of the different classes and the count within each class. In some cases one class dominated the feature, for example the "Normal" class within "Sale Condition" (this helped later in the "dummification" process).

 

e. We transformed several nominal/ordinal variables into numeric data. For ordinal variables, the transformation mirrored the string rating with a numeric rating (typically 1 to 5, with 1 for poor and 5 for excellent). For categorical data, the approach was "dummification" (one-hot encoding).
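A sketch of both transformations; the ordinal columns chosen here are illustrative, and the Po-to-Ex scale comes from the Ames data dictionary:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name

# Ordinal ratings: mirror the string scale with a numeric one (1 = poor ... 5 = excellent)
quality_scale = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
for col in ["ExterQual", "KitchenQual", "HeatingQC"]:  # illustrative ordinal columns
    train[col] = train[col].map(quality_scale)

# Nominal categories: "dummification" via one-hot encoding
train = pd.get_dummies(train, columns=["SaleCondition", "Neighborhood"], drop_first=True)
```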

It is important to mention that a high priority was placed on the Neighborhood feature, as we found it to be an important factor in sale price in any real estate market (location, location, location). We tried to transform this categorical variable into groups of neighborhoods by household income, but the information we found was not accurate or correlated with the sale prices in the dataset (e.g., low household income appeared in the neighborhoods with the most expensive sale prices).

The specific neighborhood could be something to revisit in future extensions of the original dataset. One idea would be to geographically locate each neighborhood and find its respective US census tract in order to access demographic information collected at the census level.

[Figure: Random forest feature importance]

f. Once the categorical data were transformed into numeric form, we performed a random forest test (using two methodologies in scikit-learn) to assess the features' relative importance for sale price. This was used later in the modeling phase, giving priority to the features with higher relative weight when assessing their impact on the accuracy of the model's predictions.
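The post does not name the two scikit-learn methodologies; a plausible pair is impurity-based and permutation importance, sketched here:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Assumes train.csv with features already numerically encoded
train = pd.read_csv("train.csv")
X = train.select_dtypes(include="number").drop(columns="SalePrice").fillna(0)
y = train["SalePrice"]

rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X, y)

# Methodology 1: impurity-based importances (built into the fitted forest)
impurity = pd.Series(rf.feature_importances_, index=X.columns)

# Methodology 2: permutation importance (less biased toward high-cardinality features)
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=42)
permutation = pd.Series(perm.importances_mean, index=X.columns)

print(impurity.sort_values(ascending=False).head(10))
print(permutation.sort_values(ascending=False).head(10))
```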

 

g. We performed one-dimensional cluster analysis on Total Square Footage and Neighborhood to explore potential clusters or outliers. Findings on potential outliers were tested in the model to assess their impact on accuracy.
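A sketch of one-dimensional clustering on total square footage; k=3 is illustrative, not the team's choice:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

train = pd.read_csv("train.csv")  # assumed file name
total_sf = (train["1stFlrSF"] + train["2ndFlrSF"]
            + train["TotalBsmtSF"]).to_numpy().reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(total_sf)
print(np.sort(km.cluster_centers_.ravel()))  # size tiers of houses
print(np.bincount(km.labels_))               # very small clusters may flag outliers
```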

h. We normalized some of the features that were selected for the model and showed a high degree of skewness (>0.75).
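A sketch of the skewness-based transform; log1p is a common normalization choice here, though the post does not specify which transform was used:

```python
import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name
numeric = train.select_dtypes(include="number")

# Log-transform the numeric features whose skewness exceeds the 0.75 threshold
skew = numeric.skew()
skewed_cols = skew[skew > 0.75].index
train[skewed_cols] = np.log1p(train[skewed_cols])
```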

 

3. Modeling Data

When evaluating candidate models, we determined that a linear model would be the best choice, so we compared Ridge, Lasso, and ElasticNet. We tuned each model by setting an appropriate lambda (regularization strength) value.

We used an 80-20 train-test split with cross-validation to train our models.

Ridge and Lasso gave the best accuracy, so we tested them across many different sets of input variables. For our final input of 59 variables, Lasso gave us the best overall accuracy.
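A sketch of the comparison using scikit-learn's cross-validated estimators; the exact pipeline and alpha grid are our assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import train_test_split

# Assumes an already-encoded numeric train.csv (standardizing features first would also help)
train = pd.read_csv("train.csv")
X = train.select_dtypes(include="number").drop(columns="SalePrice").fillna(0)
y = train["SalePrice"]

# 80-20 split, then cross-validated search over lambda (called alpha in scikit-learn)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
alphas = np.logspace(-4, 2, 50)
for Model in (RidgeCV, LassoCV, ElasticNetCV):
    model = Model(alphas=alphas).fit(X_tr, y_tr)
    print(Model.__name__, round(model.score(X_te, y_te), 4))  # R^2 on the held-out 20%
```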

The following plot shows our predicted versus actual sale prices:

4. Refining Model

In refining the model, we were very careful not to "overfit" it. As the graph above shows, there were some clear outliers that we needed to remove. Specifically, we removed all sale prices below $63,000 and above $355,000, as well as a few other clear outliers. Fortunately, our model did not overfit and produced Kaggle results consistent with our cross-validation accuracy.
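A sketch of the price-range filter described above:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name
# Keep only sales inside the $63,000 - $355,000 range used in the final model
train = train[train["SalePrice"].between(63_000, 355_000)]
```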

For a real estate agent in practice, it would be hard to consider 59 factors, so we tried to reduce the number of variables to a more manageable six. However, that reduction cost about 30% in accuracy, so at least with these linear models, it is best to use more factors rather than fewer.

 

Conclusions:

    • The model with greater complexity delivered more accurate predictions; however, a simpler model could be considered for practical use (at the cost of some accuracy).
    • The model could improve if neighborhoods were standardized to US census tracts, unlocking demographic information such as household income.
    • The dataset could be complemented with other variables that seem important in house hunting/buying, such as crime rates, schools, and transportation.
    • It would be important to validate the model with more recent years of data, as real estate seems cyclical, with ups and downs.

About Authors

Antonio Salvador

I believe in the power and importance of data to understand and improve our decision process. I have been "connecting the dots" in several commercial functions in the international commodity markets (mostly grains).

Anthony Parrillo

A passionate, intuitive problem solver who uses critical thinking and creative strategies with data to find meaningful insights and deliver practical, profitable results.