Predicting House Prices with Machine Learning

Posted on Nov 21, 2019

Introduction
This project aims to predict housing prices in Ames, Iowa, using the Ames Housing dataset from the Kaggle competition House Prices: Advanced Regression Techniques. The dataset contains 79 variables describing almost every aspect of a home, from the number of rooms to the type of roof material. The project consisted of an exploratory data analysis followed by model building with Linear, Ridge, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost regression.

Exploratory Data Analysis
1. Transforming Skewed Distributions
2. Removing Outliers
3. Removing and Imputing Missing Data
4. Dropping Correlated Variables to Avoid Multicollinearity

1. First, looking at the target variable, Sale Price, the distribution is positively skewed, violating an assumption of linear regression, so I applied a log transformation to Sale Price to get a more normal distribution. I also looked at the predictor features with high skewness and applied a Box-Cox transformation to those with skewness greater than 0.65.
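A minimal sketch of these transformations, assuming the data lives in a pandas DataFrame named train (the variable names and the log1p/boxcox1p choices are illustrative, not the exact code used):

```python
import numpy as np
from scipy.stats import skew
from scipy.special import boxcox1p

# Log-transform the positively skewed target toward normality
train['SalePrice'] = np.log1p(train['SalePrice'])

# Box-Cox transform numeric features with skewness above the 0.65 threshold
numeric_cols = train.select_dtypes(include=[np.number]).columns.drop('SalePrice')
skewness = train[numeric_cols].apply(lambda col: skew(col.dropna()))
for col in skewness[skewness > 0.65].index:
    train[col] = boxcox1p(train[col], 0.15)  # lambda of 0.15 is an assumption
```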

2. I looked at both the numerical and categorical variables to better understand the data and removed the outliers in the numerical data; an illustrative filter is sketched below.
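The exact cutoffs are not shown in the post, so the filter below is only an assumption of what outlier removal on a feature like GrLivArea might look like:

```python
# Illustrative outlier filter: drop homes with very large above-ground living
# area, a pattern known to distort fits on the Ames data. The 4000 sq ft
# cutoff is an assumed threshold, not the rule actually used in this analysis.
train = train[train['GrLivArea'] < 4000].reset_index(drop=True)
```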

3. At first glance there are quite a lot of missing values, but many of them are missing simply because the house does not have the feature in question. However, I dropped PoolQC (Pool Quality), Miscellaneous Feature, Alley, Fence, and Fireplace Quality, because these columns are almost entirely a single constant value and I do not believe they are strong drivers of Sale Price. (The regression was tested with and without these features, and the model performed better without them.) The remaining missing values were handled as follows; a code sketch follows the list.

PoolQC: NA = No Pool
Miscellaneous Feature: NA = No Miscellaneous Features
Alley: NA = No Alley
Fence: NA = No Fence
Fireplace Quality: NA = No Fireplace
Columns with 'Garage': NA = No Garage
Columns with 'Bsmt': NA = No Basement

Lot Frontage: imputed using 1stFlrSF
Columns with 'Mas' (masonry veneer): imputed with the mode
Electrical: imputed with the mode (only one value was missing)
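A sketch of this missing-data handling, using column names from the Ames data dictionary (the ratio-based LotFrontage fill is an assumption; the post only says it was imputed using 1stFlrSF):

```python
# Drop sparse features judged not to be strong drivers of Sale Price
train = train.drop(columns=['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu'])

# For garage and basement columns, NA means the home lacks the feature
for col in train.columns:
    if col.startswith('Garage') or col.startswith('Bsmt'):
        if train[col].dtype == object:
            train[col] = train[col].fillna('None')
        else:
            train[col] = train[col].fillna(0)

# LotFrontage: filled from 1stFlrSF via a median ratio (assumed approach)
ratio = (train['LotFrontage'] / train['1stFlrSF']).median()
train['LotFrontage'] = train['LotFrontage'].fillna(train['1stFlrSF'] * ratio)

# Masonry veneer columns and Electrical: imputed with the mode
for col in ['MasVnrType', 'MasVnrArea', 'Electrical']:
    train[col] = train[col].fillna(train[col].mode()[0])
```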

4. GarageYrBlt and YearBuilt have an 82.6% correlation; the two columns hold very similar, if not identical, values, which introduces multicollinearity. I removed GarageYrBlt because some homes do not have a garage, and I believe YearBuilt is the better indicator.

After dummifying the categorical features and performing a train-test split, we are ready to move on to the regression models.
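Assuming a standard pandas/scikit-learn workflow, the dummification and split might look like this (the 80/20 split ratio is an assumption):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Drop GarageYrBlt to avoid multicollinearity with YearBuilt (step 4 above)
train = train.drop(columns=['GarageYrBlt'])

# One-hot encode (dummify) the categorical features
X = pd.get_dummies(train.drop(columns=['SalePrice']))
y = train['SalePrice']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```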

Regression Models
Linear, Ridge, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost regression were performed on this data, and optimal hyperparameters were tuned using grid search (GridSearchCV).
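As one example of the tuning setup, ElasticNet could be searched as below; the grid values are assumptions, since the post does not list the actual search space:

```python
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# Illustrative hyperparameter grid; not the values actually searched
param_grid = {
    'alpha': [0.0005, 0.001, 0.01, 0.1, 1.0],
    'l1_ratio': [0.1, 0.5, 0.9],
}

grid = GridSearchCV(
    ElasticNet(max_iter=10000),
    param_grid,
    scoring='neg_mean_squared_error',
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
```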

For Random Forest, Gradient Boosting, and XGBoost, the top ten feature importances were generated. Overall quality score, the square footage of the above-ground living area, and the car capacity of the garage contributed most to predicting sale price. The Linear Regression coefficients showed similar results, identifying Ground Living Area SF, Total Basement SF, Year Built, and Overall Quality as significant variables.
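A sketch of how such an importance ranking can be pulled from a fitted tree ensemble (hyperparameters here are illustrative; the same pattern applies to GradientBoostingRegressor and XGBRegressor):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Fit a Random Forest and rank the dummified features by importance
rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```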

Results
In the end, ElasticNet performed the best, followed by the Ridge and Lasso regressions. The top three models (ElasticNet, Lasso, and Ridge) had a root mean squared error of around $18,000, roughly the amount by which an investor should expect a prediction to deviate from the actual sale price. The majority of the models agree that Overall Quality (the overall material and finish of the house) and total square footage, whether in the basement, living area, or garage, are strong indicators of house sale prices.
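Because the models were trained on the log of Sale Price, a dollar-scale RMSE like the ~18,000 quoted above is computed on back-transformed predictions, roughly as follows (assuming log1p was used for the target and grid is the tuned ElasticNet from the sketch above):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Back-transform log-scale predictions to dollars before computing RMSE
pred_dollars = np.expm1(grid.predict(X_test))
true_dollars = np.expm1(y_test)
rmse = np.sqrt(mean_squared_error(true_dollars, pred_dollars))
print(f"Test RMSE: ${rmse:,.0f}")
```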
