Inspired by the accomplishments of the women in the movie "Hidden Figures," we named our team after it. We are an all-girls team of three from different parts of the world: Lebanon, India, and China.
This blog post is about our machine learning project, based on a past Kaggle competition, "House Prices: Advanced Regression Techniques." The data set contains "79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa… [and the goal is] to predict the final price of each home" (reference).
We collaborated on certain parts of the project and completed other parts individually, as in a research project. As aspiring data scientists, our goal was to apply our arsenal of machine learning knowledge to predicting housing prices.
This blog post is organized into four main parts: Exploratory Data Analysis & Feature Engineering, Creating Models, Conclusion, and Relevant Links.
Exploratory Data Analysis & Feature Engineering
- Multicollinearity
We started by analyzing and visualizing the data. Real estate is a new area for all three of us, but we managed to gain some interesting insights. For example, some variables are closely correlated with one another. Some pairs are correlated by nature, such as "Basement finished area" and "Basement unfinished area," while other pairs were correlated by deduction, such as "Overall condition" and "Year built."
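A correlation check like the one described above can be sketched as follows. This is a minimal illustration on synthetic data, not our actual notebook; the column names and the relationship between them are made up to stand in for the real Ames features.

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for two correlated house features.
rng = np.random.default_rng(0)
year_built = rng.integers(1900, 2010, size=500)
overall_cond = np.clip((year_built - 1900) / 12 + rng.normal(0, 2, 500), 1, 10)
df = pd.DataFrame({"YearBuilt": year_built, "OverallCond": overall_cond})

# Pairwise Pearson correlations; pairs with |r| near 1 flag multicollinearity.
corr = df.corr()
high_corr = corr.loc["YearBuilt", "OverallCond"]
print(round(high_corr, 2))
```

In practice you would run `df.corr()` over all 79 variables and inspect the pairs with the highest absolute correlation, e.g. via a heatmap.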
- Seasonality
Next, we explored the data for seasonal trends in sale prices. We found that there are more sales during summer. Interestingly, this high supply doesn't necessarily mean lower prices: housing prices normally peak around summer as well. One theory is that when there are more options on the market, people are more likely to find their ideal house and put down a deposit.
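The seasonality check above amounts to grouping sales by the month-sold field. A toy sketch (the numbers below are illustrative, not the real Ames counts):

```python
import pandas as pd

# Toy sales log: month sold and sale price in $1000s (illustrative values).
sales = pd.DataFrame({
    "MoSold":    [1, 2, 6, 6, 7, 7, 7, 8, 11, 12],
    "SalePrice": [150, 155, 180, 175, 185, 190, 182, 178, 160, 152],
})

# Count of sales per month reveals the summer peak in volume.
monthly_counts = sales.groupby("MoSold").size()
peak_month = monthly_counts.idxmax()
print(peak_month)
```

With the real data, the same `groupby` on median `SalePrice` shows whether prices peak in the same months as volume.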
- Neighborhood
Neighborhood is an important factor when buying a house. Since the raw data contains neither school-district information nor crime rates, neighborhood serves as a proxy for those factors. After plotting the graph below, we went back and checked its accuracy: neighborhoods with higher prices were indeed equipped with high-end facilities, in addition to good school districts and great locations. We can also see that the variance within an expensive neighborhood is typically higher, which helps explain the skewness of the sale-price density.
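The price-versus-variance observation can be reproduced with a simple group-by. The neighborhood names and prices below are made up for illustration:

```python
import pandas as pd

# Toy data: sale price (in $1000s) by neighborhood (names are fictional).
df = pd.DataFrame({
    "Neighborhood": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "SalePrice":    [120, 125, 118, 300, 260, 350, 200, 205, 198],
})

# Median captures the price level; std captures the spread within each area.
stats = df.groupby("Neighborhood")["SalePrice"].agg(["median", "std"])
print(stats)
```

Here the pricier neighborhood "B" also shows the largest standard deviation, mirroring the pattern we saw in the real data.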
Creating Models
- Approach 1: Neha Chanu
"Everything should be made as simple as possible, but not simpler," said Einstein, and I took this advice when I started creating models. First, I focused on simpler models, such as lasso and elastic net, before creating more complex ones. Besides lasso and elastic net, I used gradient boosting regression, XGBoost, light gradient boosting, and random forest algorithms to build models.
To measure how well the models predicted sale price, I split the training data into two parts: one to train my models, and another to check how well the trained models predicted sale prices. Using cross-validation together with hyperparameter tuning, I computed the best possible metric for each model's performance: its cross-validation score.
Although I created several models, I selected the top five based on their cross-validation scores and combined them by averaging their predictions.
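The split-score-average workflow above can be sketched with scikit-learn. This is a simplified stand-in, not the actual project code: the data is synthetic, only four of the model families are shown, and the hyperparameters are defaults rather than tuned values.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic regression data standing in for the engineered Ames features.
X, y = make_regression(n_samples=400, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "lasso": Lasso(alpha=0.01),
    "enet": ElasticNet(alpha=0.01),
    "gbr": GradientBoostingRegressor(random_state=0),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Rank models by mean cross-validation score on the training split...
cv_scores = {name: cross_val_score(m, X_tr, y_tr, cv=5).mean()
             for name, m in models.items()}

# ...then blend the fitted models by averaging their held-out predictions.
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in models.values()]
blended = np.mean(preds, axis=0)
print(sorted(cv_scores, key=cv_scores.get, reverse=True))
```

In the real project the top five of the tuned models were blended this way.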
- Approach 2: Fatima Hamdan
Having tried the linear-models approach, it is good to see the data from a different angle, so I decided to try tree-based models. These are the steps I followed:
1. Feature importance with Random Forest
Why start with Random Forest? The Random Forest model is considered one of the best for feature-importance analysis: the tree-based splits it uses naturally rank features by how well they improve node purity.
The graph below shows the 30 most important variables in our dataset:
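A ranking like the one in that graph comes straight out of the fitted forest's impurity-based importances. A minimal sketch on synthetic data (the feature names are placeholders, not the real Ames columns):

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in; with the real data, X would be the encoded house features.
X, y = make_regression(n_samples=300, n_features=10, n_informative=3,
                       random_state=0)
X = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(10)])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances sum to 1; sorting gives the "top k" ranking.
importances = pd.Series(rf.feature_importances_, index=X.columns)
top = importances.sort_values(ascending=False).head(5)
print(top)
```

With the real data, `head(30)` and a horizontal bar plot reproduce the figure.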
The accuracy score of the Random Forest model on house-price prediction is 0.88. This raises the question: is that the best accuracy score we can get?
The next step is a comparison between several tree-based models to check which has the best accuracy score in predicting house prices.
2. Comparison between the following regression models
After applying each of the following models to the data set, different accuracy scores were achieved, as shown in the following graph:
The Gradient Boosting model has the highest score, 0.90, with an error of 0.095. So, in the following steps, I relied on the Gradient Boosting model in my analysis.
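A comparison like this is a loop of cross-validation scores over candidate estimators. The sketch below uses synthetic data and a representative (not necessarily identical) set of tree-based models:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=15, noise=15, random_state=1)

candidates = {
    "decision_tree": DecisionTreeRegressor(random_state=1),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=1),
    "extra_trees": ExtraTreesRegressor(n_estimators=100, random_state=1),
    "gradient_boosting": GradientBoostingRegressor(random_state=1),
}

# Mean 5-fold R^2 for each candidate; the winner moves on to tuning.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```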
3. Hyper-parameter tuning of the model with the highest score
To improve the performance of the gradient boosting model on our data set, I used the GridSearch function to tune the model's parameters. After several trials of GridSearch, the following parameters were chosen with specific ranges:
- Loss: huber or ls
- Learning rate: 0.0001, 0.001, 0.01, 0.1, 1
- Number of estimators: 100, 1000, 10000, 14800
- Maximum depth: 1 to 15
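The grid search described above can be sketched with scikit-learn's `GridSearchCV`. The grid below is a trimmed-down version of the ranges listed (the full grid would be very slow), on synthetic data; note that recent scikit-learn versions spell the least-squares loss `"squared_error"` where older ones used `"ls"`.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

# Trimmed grid; extend the lists to match the full ranges above.
param_grid = {
    "loss": ["huber", "squared_error"],  # "ls" in older scikit-learn
    "learning_rate": [0.01, 0.1],
    "n_estimators": [100, 300],
    "max_depth": [2, 4],
}

# Exhaustive search over the grid, scored by (negated) mean squared error.
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid,
                      scoring="neg_mean_squared_error", cv=3)
search.fit(X, y)
print(search.best_params_)
```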
Here are some of the parameters that gave me the lowest mean squared error:
Since stacking or averaging several models can give a better accuracy score, two gradient boosting models were used in the analysis, with parameters taken from the first two rows of the table above.
Linear models might help improve the score as well, so I added the following two models, which also gave good accuracy scores.
For the Ridge model, after the parameter-tuning step I chose alpha = 10; its error is 0.11. For the Lasso model, the same steps as for the previous models were applied, and its error is also 0.11.
To sum up, the final model was the average of the four best models: Gradient Boosting Model 1, Gradient Boosting Model 2, Ridge, and Lasso. The final model scored 0.11846 in the Kaggle House Prices competition.
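The four-model average can be sketched as below. The two gradient-boosting parameter sets are placeholders for the two best grid-search rows (the real tuned values aren't reproduced here), and the data is synthetic:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=20, noise=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# Two gradient boosting variants (placeholder parameters) plus the two
# tuned linear models; Ridge uses the alpha = 10 chosen above.
models = [
    GradientBoostingRegressor(learning_rate=0.1, n_estimators=300, random_state=2),
    GradientBoostingRegressor(learning_rate=0.05, n_estimators=500, random_state=2),
    Ridge(alpha=10),
    Lasso(alpha=0.01),
]

# Final prediction = unweighted mean of the four models' predictions.
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
final = preds.mean(axis=1)
print(final.shape)
```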
- Approach 3: Lainey
Given the small number of observations, linear models are a natural first step. I started by implementing Lasso and Ridge; both yield CV scores of 0.92 (pretty strong).
Since the data contains a lot of categorical variables, I was curious how well tree-based models would fit. Random forest, extremely randomized trees, and XGBoost all yielded only okay results at best during cross-validation. The best performer was gradient boosting, which Fatima described in detail.
I also explored SVMs a little and finally combined my models using stacking.
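Unlike simple averaging, stacking trains a meta-learner on the base models' out-of-fold predictions. A minimal sketch with scikit-learn's `StackingRegressor` on synthetic data; the base learners and their parameters are representative choices, not the exact models used:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=15, noise=10, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

# Base learners mirror the models tried above; a Ridge meta-learner
# combines their out-of-fold predictions.
stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.01)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=3)),
        ("gbr", GradientBoostingRegressor(random_state=3)),
        ("svr", SVR(C=10)),
    ],
    final_estimator=Ridge(alpha=1.0),
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

The meta-learner can learn to weight strong base models more heavily, which is why stacking often edges out a plain average.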
Conclusion
Each of us collaborated on the initial exploratory data analysis and feature engineering; we then worked on creating predictive models individually. Thanks for reading this blog post. Feel free to leave comments or questions, and reach out to us through LinkedIn.
Ms. Chanu, a 2017 Hesselbein Student Leader Fellow, was one of 50 selected from more than 800 student-leader nominees from around the world. She is an honors graduate of the University of Pittsburgh and the Cornell Pre-Law Summer...
Fatima earned her bachelor's degree in Computer Engineering from the Lebanese American University. She was chosen as one of 24 women-in-engineering change makers from around the world to attend the Women in Engineering conference in...
Nan (Lainey) is a master's student at New York University studying Financial Engineering. She is passionate about applications of machine learning techniques in the financial industry, e.g., high-frequency trading and option pricing. Nan developed a Shiny app to research...