Predicting House Prices Using Advanced Regression Techniques

Posted on Nov 17, 2021

Introduction

This project applies data science and machine learning to the business problem of predicting house sale prices. In real estate, the sale price of a house is determined by many factors beyond the overall state of the economy. What determines the market value of a house is a challenging question for buyers, who weigh key features when deciding what they are willing to pay. To answer it, I analyzed the features that most influence housing prices, identified the influential explanatory variables, and applied regression algorithms to predict sale price. The dataset is the Kaggle housing dataset compiled from house sales in Ames, Iowa between 2006 and 2010. It comprises a training set of 1,460 observations with 81 columns and a test set of 1,459 observations. The following data science steps were applied in this project:

  • Loading the data
  • Exploratory Data Analysis
  • Data Cleaning and Feature Engineering
  • Modeling the data using regression algorithms
  • Evaluating the model using evaluation metrics
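As a starting point for the first step, here is a minimal loading sketch; it assumes the standard Kaggle competition files train.csv and test.csv are in the working directory.

```python
import pandas as pd

# Load the Ames housing data (standard Kaggle competition file names assumed).
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# The training set has 1,460 rows and 81 columns (including SalePrice);
# the test set has 1,459 rows without the target column.
print(train.shape, test.shape)
```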

Exploratory Data Analysis

The charts below show the key features that most strongly influence sale price.

The data are right-skewed, as shown in the distribution plot on the left above, with most houses selling for about $150,000. To obtain a well-fitting model, I used a log transformation to normalize the data, as shown on the top right.

The target variable (SalePrice) is also skewed to the right, with a mean sale price of around $150,000. This is expected: within any neighborhood, some houses sell for prices far from the typical value and become outliers. The log transformation compresses these outliers and brings SalePrice close to a normal distribution.
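A minimal sketch of this step is shown below, continuing from the loading code above. The post does not say which log function was used, so np.log1p (log of 1 + x) is an assumption.

```python
import numpy as np
from scipy.stats import skew

# SalePrice is right-skewed; a log transform compresses the expensive outliers
# and makes the distribution approximately normal.
print("Skewness before:", skew(train["SalePrice"]))

sale_price_log = np.log1p(train["SalePrice"])
print("Skewness after: ", skew(sale_price_log))
```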


The plot on the left above is a bar chart of missing values, showing the highest counts of missing values for PoolQC, MiscFeature, Alley, Fence, and FireplaceQu. The bar chart on the right shows a positive relationship between overall house quality and sale price: as overall quality increases, sale price increases. Houses with excellent overall quality command the highest sale prices, while those with poor overall quality have the lowest.
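The numbers behind the missing-value chart can be reproduced with a short pandas snippet like the one below (a sketch, using the train DataFrame from the loading step).

```python
# Percentage of missing values per column, highest first.
missing_pct = (train.isnull().mean() * 100).sort_values(ascending=False)
print(missing_pct[missing_pct > 0].round(1))
```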


The boxplot above shows that sale price rises with the number of full bathrooms: houses with one full bathroom cost around $150,000 on average, while those with three cost around $300,000. The violin plot on the upper right compares heating quality and sale price; houses with excellent heating quality cost more than those with poor heating quality.
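The averages quoted above can be checked with a short group-by on the training data (a sketch; FullBath and HeatingQC are the Ames columns for full bathrooms and heating quality).

```python
# Average sale price by number of full bathrooms (boxplot above)
# and by heating quality (violin plot above).
print(train.groupby("FullBath")["SalePrice"].mean().round(0))
print(train.groupby("HeatingQC")["SalePrice"].mean().round(0))
```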


Sale price increases with square footage up to a point, beyond which it starts to decrease. From the histogram above, most houses sell for around $120,000. As square footage grows past that point, buyers see the house as expensive to maintain and as space they don't need, so demand, and with it price, falls for the largest houses. The factor plot above shows the relationship between sale price and kitchen quality: houses with excellent kitchen quality have the highest sale prices, while those with fair kitchen quality have the lowest. The difference between houses with one fair kitchen and two fair kitchens is negligible. Buyers are willing to pay more for houses with excellent kitchen quality.


The table and factor plot above show sale price by fireplace quality. Houses with two excellent fireplaces cost around $475,000 on average, while houses with three fireplaces cost around $375,000, since the additional fireplaces tend to be of only typical/average (TA) quality rather than excellent. The majority of houses, about 250, have fireplaces of TA quality.

 

Buyers are willing to pay more for houses located in zones with amenities such as shopping centers, schools, hospitals, and low crime rates. The plot above shows that houses in the RL (residential low-density) zone sell for the highest prices.


There is a negative relationship between the age of a house and its sale price: as age increases, sale price decreases, because buyers factor in the loss of value and the unforeseen maintenance costs that come with an older house. On the upper right above is a heat map showing the relationships among the variables.
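A sketch of the age feature and the correlation heat map is shown below. Defining age as YrSold minus YearBuilt is an assumption; the post only says the age was derived from the sale date.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# House age at the time of sale (assumed definition: sale year minus build year).
train["HouseAge"] = train["YrSold"] - train["YearBuilt"]

# The negative relationship described above shows up as a negative correlation.
print(train[["HouseAge", "SalePrice"]].corr())

# Heat map of correlations among the numeric variables.
corr = train.select_dtypes(include="number").corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.tight_layout()
plt.show()
```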

Data Cleaning and Feature Engineering

The dataset and the target variable (SalePrice) are right-skewed, so to get an accurate model I transformed both toward normal form using a log transformation. I separated the target variable from the rest of the features, log-transformed all skewed numeric features, and dropped the variables with a high percentage of missing values: pool quality, miscellaneous feature, alley, fence, fireplace quality, garage type, garage condition, garage finish, and garage quality. I imputed the remaining variables with fewer missing values using the median. I then created dummy variables for the categorical features and derived new variables from the year and month the house was sold to compute the age of the house. With the additional created variables, there are 31 numeric and 224 categorical (dummy) variables.
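The sketch below illustrates these cleaning and feature-engineering steps under a few assumptions: the skewness cutoff of 0.75 for deciding which numeric features to log-transform, and the use of np.log1p, are not stated in the post.

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

# Separate the target and log-transform it.
y = np.log1p(train["SalePrice"])
X = train.drop(columns=["SalePrice"])

# Drop the variables with a high share of missing values.
drop_cols = ["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu",
             "GarageType", "GarageCond", "GarageFinish", "GarageQual"]
X = X.drop(columns=drop_cols)

# Log-transform the skewed numeric features (0.75 threshold is an assumption).
num_cols = X.select_dtypes(include="number").columns
skewed = [c for c in num_cols if abs(skew(X[c].dropna())) > 0.75]
X[skewed] = np.log1p(X[skewed])

# Median imputation for the remaining numeric missing values.
X[num_cols] = X[num_cols].fillna(X[num_cols].median())

# Dummy variables for the categorical features.
X = pd.get_dummies(X)
print(X.shape)
```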

Modeling the Data Using Regression Algorithms

After exploring the data, engineering the variables, and preparing the data, the final step of this project was modeling. I ran the following regressions and compared their root mean squared errors (RMSE):

[Table: regression models and their corresponding RMSE scores]

My first model was a linear regression, which gave an R² of 95.5%. Since the linear regression produced a high RMSE and showed potential overfitting, I built other, regularized models: Ridge regression, Lasso regression, ElasticNet regression, Support Vector regression, Gradient Boosting regression, LightGBM regression, and XGBoost regression. To validate the models, I used K-fold cross-validation. The final step was comparing all the models, which led to the conclusion that Support Vector regression, with an RMSE of 0.1208, is the best predictive model.
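A sketch of the model comparison is shown below, using scikit-learn K-fold cross-validation with RMSE on the log-transformed target (the same scale as the 0.1208 reported above). The number of folds, the scaler, and the hyperparameters are illustrative assumptions rather than the values used in the post, and the LightGBM and XGBoost models would be added in the same way.

```python
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

# Candidate models; the hyperparameters are illustrative, not the post's values.
models = {
    "Linear":     LinearRegression(),
    "Ridge":      Ridge(alpha=10.0),
    "Lasso":      Lasso(alpha=0.0005),
    "ElasticNet": ElasticNet(alpha=0.0005, l1_ratio=0.9),
    "SVR":        SVR(C=20, epsilon=0.01, gamma="scale"),
    "GBM":        GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05),
}

kf = KFold(n_splits=10, shuffle=True, random_state=42)
for name, model in models.items():
    pipe = make_pipeline(RobustScaler(), model)  # scale features for linear/SVR models
    scores = cross_val_score(pipe, X, y, cv=kf,
                             scoring="neg_root_mean_squared_error")
    print(f"{name:10s} RMSE: {-scores.mean():.4f} (+/- {scores.std():.4f})")
```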

Conclusion

The analysis above shows that overall quality, house age, number of full bathrooms, kitchen quality, fireplace quality, zoning, square footage, and heating quality are the main features that determined house sale prices in Ames, Iowa between 2006 and 2010. It also shows that Support Vector regression is the best predictive model, with the lowest RMSE score of 0.1208.

 

References:

  1. Géron, A., Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow
  2. Theobald, O., Machine Learning for Absolute Beginners

 

About Author

Robert Willoughby

I live in Columbus, Ohio, and work as a Data Analyst.
