Studying Data From Machine Learning to Predict Housing Price

Posted on Mar 19, 2018
The skills the authors demoed here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Introduction

Inspired by the accomplishments of the women in the movie "Hidden Figures," we named our team after it. We are an all-girls team of three who come from different parts of the world: Lebanon, India, and China.

This blog post is about our machine learning project, which was based on a past Kaggle competition, "House Prices: Advanced Regression Techniques." The data set contains "79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa...[and the goal is] to predict the final price of each home" (reference).

We collaborated on certain parts of the project and completed other parts individually as if it were a research project. The goal of the project, as aspiring data scientists, was to utilize our arsenal of machine learning knowledge to predict housing prices.

The following blog post is categorized into four main parts: Exploratory Data Analysis & Feature Engineering, Creating Models, Conclusion, and Relevant Links.

 

  • Exploratory Data Analysis & Feature Engineering

- Multicollinearity

 

We started the project research by analyzing and visualizing the data. Real estate is a new area for all three of us, but we managed to gain some interesting insights. For example, some variables are closely correlated with one another. Some pairs are correlated by nature, such as "Basement Finished Area" and "Basement Unfinished Area," while other pairs are correlated by deduction, such as "Overall Condition" and "Year Built."
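As an illustration of this step, a correlation matrix over the numeric columns is a quick way to surface such pairs. The sketch below is a minimal example (not our exact code) and assumes the Kaggle training file is saved locally as train.csv.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assumes the Kaggle "House Prices" training file is saved locally as train.csv
train = pd.read_csv("train.csv")

# Correlation matrix over numeric columns only
corr = train.select_dtypes(include="number").corr()

# Variables most correlated with the sale price
print(corr["SalePrice"].abs().sort_values(ascending=False).head(10))

# Heatmap to spot highly correlated feature pairs (possible multicollinearity)
plt.figure(figsize=(12, 10))
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.title("Correlation matrix of numeric features")
plt.tight_layout()
plt.show()
```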

- Seasonality

Next, we explored the data to see if there were trends in sale prices associated with seasons. We found that there are more sales during the summer. Interestingly, higher supply does not necessarily mean lower prices, since housing prices normally peak around summer. One theory is that when there are more options in the marketplace, people are more likely to find their ideal house and put down a deposit.
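One simple way to check this pattern is to group sales by the month of sale; the snippet below is a rough sketch using the MoSold and SalePrice columns from the raw data, not the exact code from our analysis.

```python
import pandas as pd

# Assumes the Kaggle training file is saved locally as train.csv
train = pd.read_csv("train.csv")

# Number of sales and median sale price for each month of sale
monthly = train.groupby("MoSold")["SalePrice"].agg(["count", "median"])
print(monthly)  # sale counts are highest in the summer months (roughly May-July)
```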

- Neighborhood

Neighborhood is an important factor when it comes to buying houses. Since the raw data does not contain school district or crime rate information, the neighborhood variable implicitly captures those factors. After plotting the graph below, we went back and checked its accuracy: neighborhoods with higher prices tend to have high-end facilities in addition to good school districts and great locations. We can also see that the variance within expensive neighborhoods is typically higher, which helps explain the skewness of the sale price distribution.
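The kind of plot described above can be reproduced with a neighborhood-level boxplot; the sketch below is only illustrative and assumes the same train.csv file as before.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assumes the Kaggle training file is saved locally as train.csv
train = pd.read_csv("train.csv")

# Order neighborhoods by median sale price so the trend is easier to read
order = train.groupby("Neighborhood")["SalePrice"].median().sort_values().index

plt.figure(figsize=(14, 6))
sns.boxplot(data=train, x="Neighborhood", y="SalePrice", order=order)
plt.xticks(rotation=90)
plt.title("Sale price by neighborhood")
plt.tight_layout()
plt.show()
```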

 

  • Creating Models
- Approach 1: Neha Chanu

"Everything should be made as simple as possible, but not simpler," said Einstein, and I took this advice when I started creating models. First, I focused on simpler models, such as lasso and elastic net, before creating more complex ones. Besides lasso and elastic net, I used gradient boosting regression, XGBoost, light gradient boosting (LightGBM), and random forest algorithms to build models.

To assess how well the models were predicting sale prices, I split the training data into two parts: one part was used to train my models, and the other was used to check how well the trained models predicted sale prices. Using cross-validation together with hyperparameter tuning, I computed a metric to evaluate model performance: the cross-validation score.
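The sketch below illustrates this workflow with scikit-learn, using Lasso as an example. Here X and y stand for the preprocessed feature matrix and (log-transformed) sale price, and the alpha value is a placeholder rather than the tuned value.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import Lasso

# X, y: preprocessed feature matrix and (log) sale price from feature engineering
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# 5-fold cross-validation RMSE on the training portion (alpha is illustrative)
lasso = Lasso(alpha=0.0005)
neg_mse = cross_val_score(lasso, X_train, y_train, cv=5,
                          scoring="neg_mean_squared_error")
rmse = np.sqrt(-neg_mse)
print("CV RMSE: %.4f (+/- %.4f)" % (rmse.mean(), rmse.std()))

# Final check on the held-out validation split
lasso.fit(X_train, y_train)
val_rmse = np.sqrt(np.mean((lasso.predict(X_val) - y_val) ** 2))
print("Validation RMSE: %.4f" % val_rmse)
```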

Although I created several models, I selected the top five based on their cross-validation scores and combined them by averaging their predictions.
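A minimal sketch of this averaging idea is shown below. The model list and parameters are placeholders rather than the exact five models I selected, and X_train, y_train, and X_test are assumed from the earlier split and preprocessing.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from xgboost import XGBRegressor

# Candidate models (parameters are placeholders, not the tuned values)
models = [
    Lasso(alpha=0.0005),
    ElasticNet(alpha=0.0005, l1_ratio=0.9),
    GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05),
    RandomForestRegressor(n_estimators=500),
    XGBRegressor(n_estimators=2000, learning_rate=0.05),
]

# Fit each model and collect its predictions on the test features
predictions = []
for model in models:
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))

# Simple unweighted average of the individual predictions
final_prediction = np.mean(predictions, axis=0)
```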

- Approach 2: Fatima Hamdan

After trying the linear-model approach, it is good to see the data from a different angle, so I decided to try tree-based models. These are the steps I followed:

1. Random Forest Feature Importance Analysis

Why start with Random Forest? The Random Forest model is considered one of the best models for feature importance analysis. The reason is that the tree-based strategies used by random forests naturally rank features by how much they improve the purity of the nodes.
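As a rough sketch of this step, the importances can be read directly from a fitted forest; here X is assumed to be the encoded feature DataFrame and y the sale price.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# X: encoded feature DataFrame, y: sale price (both assumed from preprocessing)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)

# Rank features by the impurity-based importance computed by the forest
importances = (pd.Series(rf.feature_importances_, index=X.columns)
                 .sort_values(ascending=False))
print(importances.head(30))  # the 30 most important variables
```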

The graph below shows the 30 most important variables in our dataset:

The accuracy score of the Random Forest model for house price prediction is 0.88. This raises the question: is that the best accuracy score we can get?

The next step is a comparison between several tree-based models to check which one has the best accuracy score in predicting house prices.

2. Comparison between the following regression models

After applying each of the following models to the data set, different accuracy scores were achieved, as shown in the following graph:

The Gradient Boosting model has the highest score, 0.90, with an error of 0.095. So, in the following steps, I relied on the Gradient Boosting model in my analysis.
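The comparison itself can be set up along the following lines. This is only a sketch: the parameters are placeholders, and X and y are assumed to be the preprocessed features and target.

```python
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import (RandomForestRegressor, ExtraTreesRegressor,
                              GradientBoostingRegressor)
from xgboost import XGBRegressor

# X, y: preprocessed features and target (assumed); parameters are placeholders
candidates = {
    "Random Forest": RandomForestRegressor(n_estimators=500),
    "Extra Trees": ExtraTreesRegressor(n_estimators=500),
    "Gradient Boosting": GradientBoostingRegressor(n_estimators=1000, learning_rate=0.05),
    "XGBoost": XGBRegressor(n_estimators=1000, learning_rate=0.05),
}

# Compare models by their cross-validated R^2 (the "accuracy score" above)
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.3f}")
```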

3. Hyper-Parameter Tuning of the model with the highest score

To improve the performance of the gradient boosting model on our data set, I used the GridSearch function to tune the parameters of the model. After several trials of GridSearch, the following parameter ranges were chosen (a sketch of the search follows the list):

Loss: huber or ls

Learning rate: 0.0001, 0.001, 0.01, 0.1, 1

Number of estimators: 100, 1000, 10000, 14800

Maximum depth: 1 to 15
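Expressed with scikit-learn's GridSearchCV, the search looks roughly like the sketch below, where X and y are assumed to be the preprocessed features and target. Note that recent scikit-learn versions rename the "ls" loss to "squared_error".

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor

# X, y: preprocessed features and target (assumed)
# In recent scikit-learn versions the "ls" loss is called "squared_error"
param_grid = {
    "loss": ["huber", "ls"],
    "learning_rate": [0.0001, 0.001, 0.01, 0.1, 1],
    "n_estimators": [100, 1000, 10000, 14800],
    "max_depth": list(range(1, 16)),
}

# The full grid is large, so in practice it was narrowed over several trials
search = GridSearchCV(GradientBoostingRegressor(),
                      param_grid,
                      scoring="neg_mean_squared_error",
                      cv=5,
                      n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
print("Best CV MSE:", -search.best_score_)
```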

Here are some of the parameters that gave me the lowest mean squared error:

Since stacking or averaging several models might give a better accuracy score, two gradient boosting models were used in the analysis, taking the first two rows of the above table as their parameters.

On the other hand, linear models might help improve the score as well. I used the following two models, which also gave good accuracy scores.

4. Linear Models: Lasso & Ridge

In the Ridge model analysis, after the parameter tuning step, I chose alpha to be 10; the error of this model is 0.11. In the Lasso model analysis, the same steps were applied, and the error is also 0.11.
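A sketch of this tuning is shown below; X and y are assumed as before, and the alpha grids are illustrative rather than the exact ranges searched.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge, Lasso

# X, y: preprocessed features and (log) sale price (assumed)
ridge_search = GridSearchCV(Ridge(),
                            {"alpha": [0.1, 1, 5, 10, 20, 50]},
                            scoring="neg_mean_squared_error", cv=5)
ridge_search.fit(X, y)
print("Ridge best alpha:", ridge_search.best_params_["alpha"])
print("Ridge CV RMSE: %.3f" % np.sqrt(-ridge_search.best_score_))

lasso_search = GridSearchCV(Lasso(max_iter=10000),
                            {"alpha": [0.0001, 0.0005, 0.001, 0.01]},
                            scoring="neg_mean_squared_error", cv=5)
lasso_search.fit(X, y)
print("Lasso best alpha:", lasso_search.best_params_["alpha"])
print("Lasso CV RMSE: %.3f" % np.sqrt(-lasso_search.best_score_))
```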

5. Averaging Several Models

To sum up, the final model was the average of the four best models: Gradient Boosting Model 1, Gradient Boosting Model 2, the Ridge model, and the Lasso model. The final model scored 0.11846 in the Kaggle House Prices competition.

- Approach 3: Lainey

Due to the limited number of observations, the first step should be linear models. I started by implementing Lasso and Ridge; both yield a CV score of 0.92, which is pretty strong.

Since the data contains a lot of categorical variables, I was curious how well tree-based models would fit the data. Random Forest, Extremely Randomized Trees, and XGBoost all yielded only okay results during cross-validation. The best performer was Gradient Boosting, which Fatima discussed in detail above.

I also explored SVM a little and finally combined my models using stacking.
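In current scikit-learn, one way to express that stacking step is with StackingRegressor. The sketch below is illustrative only: the base models, parameters, and Ridge meta-learner are assumptions, not the exact configuration I used.

```python
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.svm import SVR

# X_train, y_train, X_test assumed from earlier preprocessing;
# base models and parameters are placeholders, not the exact configuration used
stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.0005)),
        ("ridge", Ridge(alpha=10)),
        ("gbr", GradientBoostingRegressor(n_estimators=1000, learning_rate=0.05)),
        ("svr", SVR(C=10, epsilon=0.01)),
    ],
    final_estimator=Ridge(),  # meta-learner trained on out-of-fold predictions
    cv=5,
)
stack.fit(X_train, y_train)
stacked_prediction = stack.predict(X_test)
```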

 

  • Conclusion

Each one of us collaborated on the initial exploratory data analysis and feature engineering, and then we worked on our predictive models individually. Thanks for reading this blog post. Feel free to leave any comments or questions and reach out to us through LinkedIn.

 

  • Relevant Links

Presentation

GitHub

Fatima Hamdan’s LinkedIn

Nan Liu (Lainey)’s LinkedIn

Neha Chanu’s LinkedIn

About Authors

Neha Chanu

Ms. Chanu, 2017 Hesselbein Student Leader Fellow, was one of 50 selected from more than 800 student leader nominees from around the world. She is an honors graduate of the University of Pittsburgh and the Cornell Pre-Law Summer...
View all posts by Neha Chanu >

Fatima Hamdan

Fatima got her bachelor's degree in Computer Engineering from Lebanese American University. She was chosen as one of the 24 women in engineering change makers from all over the world to attend the Women in Engineering conference in...
View all posts by Fatima Hamdan >

Nan(Lainey) Liu

Nan (Lainey) is a master's student at New York University studying Financial Engineering. She is passionate about applications of machine learning techniques in the financial industry, e.g., high-frequency trading and option pricing. Nan developed a Shiny app to research on...
View all posts by Nan(Lainey) Liu >
