Using Data to Predict - Boston House Pricing
GitHub Repository
Real estate has been one of the most popular investments for decades. People own property not only as a place to live but also as a way to earn a profit after improving its condition. With rising demand, the housing market boomed during the Covid-19 pandemic, with both prices and sales climbing. This phenomenon interests me, as I personally like to follow news on the housing market, so I decided to work on a project to answer two questions: which conditions most strongly affect house pricing, and, based on those, how can we predict the price of a house with given conditions? In this post we will use data to predict Boston house pricing.
While researching the housing market, I found a dataset package on Kaggle that contains two datasets describing house conditions in the city of Boston. One is a training dataset of 1460 samples with 80 features, including the sale price. The other is a test dataset of 1459 samples with 79 features, excluding the sale price. The test dataset is therefore the one used for prediction, and the training dataset is the one used to fit machine learning models. Python is the language I used for this project.
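As a starting point, here is a minimal sketch of how the two files can be loaded with pandas. The file names `train.csv` and `test.csv` are assumptions based on the usual layout of a Kaggle download.

```python
import pandas as pd

# Assumed file names from the Kaggle download; adjust the paths as needed.
train = pd.read_csv("train.csv")   # training data, includes the sale price column
test = pd.read_csv("test.csv")     # test data, no sale price column

print(train.shape, test.shape)
```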
Data Cleaning
Before applying machine learning models to the training dataset, I needed to clean the data. I therefore combined both datasets into one full dataset for cleaning, so that their formats stay consistent.
After checking the null values of each feature, I removed the features below, since each has more than 400 missing values out of the 2919 total samples (a sketch of this step follows the list).
- Pool Quality
- Miscellaneous feature not covered in other categories
- Alley
- Fence
- Fireplace Quality
- Linear feet of street connected to property
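Continuing from the previous snippet, this sketch combines the two datasets and drops the six features listed above. The column names (PoolQC, MiscFeature, Alley, Fence, FireplaceQu, LotFrontage) and the SalePrice column name follow the usual Kaggle naming and are assumptions here.

```python
# Combine train and test so cleaning is applied consistently to both.
full = pd.concat([train.drop(columns=["SalePrice"]), test], ignore_index=True)

# Count missing values per feature to find the worst offenders.
missing = full.isnull().sum().sort_values(ascending=False)
print(missing.head(10))

# Assumed Kaggle column names for the six features described above.
cols_to_drop = ["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu", "LotFrontage"]
full = full.drop(columns=cols_to_drop)
```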
I then selected the numerical features from the original training dataset and computed the correlation coefficient between each feature and the sale price. Below is a correlation chart of the features whose correlation with sale price has an absolute value greater than 0.5, which indicates that these features affect house pricing significantly. The most important feature is above-ground living area, with a correlation of 0.71. Therefore, if one wants to upgrade a house before selling, I would recommend upgrading the nine features in the chart. Features with an absolute correlation below 0.5 were removed from the full dataset, as they do not affect the sale price much.
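A sketch of how those correlations can be computed; the column names SalePrice and GrLivArea follow the Kaggle naming and are assumptions here.

```python
# Correlation of each numeric feature with the sale price (training data only).
numeric = train.select_dtypes(include="number")
corr = numeric.corr()["SalePrice"].drop("SalePrice")

# Keep features whose absolute correlation with sale price exceeds 0.5.
strong = corr[corr.abs() > 0.5].sort_values(ascending=False)
print(strong)  # e.g. GrLivArea (above-ground living area) around 0.71
```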
Looking into the values of the categorical features, I found that for the street feature there are only 12 samples with a gravel street, while 2907 samples have a paved street, so I removed this feature because of the large imbalance. I did the same for the utilities feature, since only one sample differs from the rest.
After removing these features, 51 remain for later use. I filled the null values of categorical features with the string "NA" and the null values of numerical features with the column mean.
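A minimal sketch of that imputation step, continuing on the combined dataset from above:

```python
# Fill remaining nulls: "NA" for categorical columns, column mean for numeric columns.
for col in full.columns:
    if full[col].dtype == "object":
        full[col] = full[col].fillna("NA")
    else:
        full[col] = full[col].fillna(full[col].mean())
```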
I then checked the numerical features for outliers. From the boxplots below, we can tell there are 4 outliers in above-ground living area, 1 outlier in total basement square feet, and 1 outlier in first floor area. These samples were deleted from the dataset.
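A sketch of how such boxplots can be drawn with matplotlib. The column names GrLivArea, TotalBsmtSF, and 1stFlrSF are the usual Kaggle names and are assumptions here, and the exact cutoffs used to drop the six outliers are not specified in this post.

```python
import matplotlib.pyplot as plt

# Assumed Kaggle column names for the three features inspected for outliers.
cols = ["GrLivArea", "TotalBsmtSF", "1stFlrSF"]

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, col in zip(axes, cols):
    ax.boxplot(train[col].dropna())
    ax.set_title(col)
plt.show()
```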
To prepare the dataset for machine learning, I then dummified all the categorical features and split the full data back into the training and test datasets, since only the training dataset contains the sale price needed to fit models.
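A sketch of the dummification and split-back step with pandas. For simplicity it splits by row position and ignores the six outlier rows removed earlier, whose indices would need to be accounted for in practice.

```python
# One-hot encode categorical features, then split back into the original two sets.
full = pd.get_dummies(full)

n_train = len(train)                       # rows that originally came from train.csv
X_train_full = full.iloc[:n_train, :]
X_test_full = full.iloc[n_train:, :]
y = train["SalePrice"]
```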
Machine Learning
I split the training data 70/30 into train and test subsets and fitted the machine learning models below (a sketch follows the list).
- LinearRegression
- Ridge
- Lasso
- ElasticNet
- LogisticRegression
- LinearDiscriminantAnalysis
- GaussianNB
- MultinomialNB
- GradientBoostingRegressor
- RandomForestRegressor
- SVR
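A minimal sketch of the split-and-fit loop with scikit-learn, covering the regression models from the list; the classification models (logistic regression, linear discriminant analysis, naive Bayes) would need the target binned into classes, which is omitted here.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR

# 70/30 split of the training data.
X_tr, X_te, y_tr, y_te = train_test_split(X_train_full, y, test_size=0.3, random_state=0)

models = {
    "LinearRegression": LinearRegression(),
    "Ridge": Ridge(),
    "Lasso": Lasso(),
    "ElasticNet": ElasticNet(),
    "GradientBoostingRegressor": GradientBoostingRegressor(),
    "RandomForestRegressor": RandomForestRegressor(),
    "SVR": SVR(),
}

# Fit each model and compare train vs. test R^2 scores.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, model.score(X_tr, y_tr), model.score(X_te, y_te))
```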
Below is a comparison of the train score and test score of each model; the blue line in the graph is the train score and the orange line is the test score. Since ridge regression, lasso regression, multiple linear regression, and gradient boosting regression gave high test scores, I decided to dig deeper into these four models by tuning their hyperparameters and comparing the square root of the mean squared error (RMSE).
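A sketch of a simple alpha sweep for ridge and lasso, scoring each fit by RMSE on the held-out split. The alpha grid is an illustrative assumption, not the exact grid used in the project.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Illustrative alpha grid; the project's exact grid is not reproduced here.
alphas = np.linspace(0.01, 50, 100)

for Model in (Ridge, Lasso):
    rmses = []
    for a in alphas:
        m = Model(alpha=a).fit(X_tr, y_tr)
        rmses.append(np.sqrt(mean_squared_error(y_te, m.predict(X_te))))
    best = int(np.argmin(rmses))
    print(Model.__name__, "best alpha:", alphas[best], "RMSE:", rmses[best])
```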
The RMSE of multiple linear regression is 26,206.06, which means that with this predictive model there is roughly an average error of $26,206.06 per predicted house price.
The lowest RMSE of ridge regression is $24,757.30, when alpha equals 5.26.
The lowest RMSE of lasso regression is $24,370.03, when alpha equals 31.58.
The lowest RMSE of gradient boosting regression is $23,201.06, when n_estimators equals 50,100 and the learning rate equals 0.01. This is the model I ultimately used to predict on the test dataset, since it has a high test score (accuracy) and the lowest RMSE.
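A sketch of fitting the final gradient boosting model and generating predictions for the test dataset. The n_estimators value written as "50,100" above is ambiguous, so it is left as a placeholder to fill in from the actual tuning results.

```python
# Final model: gradient boosting with the tuned hyperparameters reported above.
best_n_estimators = 50  # assumption; replace with the tuned value from the search
gbr = GradientBoostingRegressor(n_estimators=best_n_estimators, learning_rate=0.01)
gbr.fit(X_train_full, y)

# Predict sale prices for the Kaggle test set.
predictions = gbr.predict(X_test_full)
```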
Conclusion
Through this project, I created a machine learning model to predict house prices in the city of Boston, which could be used by real estate agencies in the Boston area. I also identified the house conditions that most affect the price. I hope this can help house owners decide what to upgrade before selling in order to increase the value of the house.
The skills I demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.