Using Data to Predict - Boston House Pricing

Posted on Aug 3, 2021

Github Repository

Real estate has been one of the most popular investments for decades. People own property not only as a place to live, but also as a way to earn a profit after upgrading its condition. With rising demand, the housing market boomed during the Covid-19 pandemic, with both prices and sales climbing. This phenomenon interests me, as I personally like to follow news on the housing market, so I decided to work on a project to answer two questions: Which conditions most affect house prices, and, based on those conditions, how can we predict the price of a given house? In this post we will use data to predict Boston house pricing.

While researching the housing market, I found a dataset package on Kaggle that contains two datasets on house conditions in the city of Boston. One is a training dataset with 80 features, including the house price, for 1,460 samples. The other is a test dataset with 79 features, excluding the house price, for 1,459 samples. The test dataset is therefore the one to predict on, and the training dataset is the one used to fit machine learning models. Python is the language I used for this project.
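As a minimal sketch of this first step (the file names `train.csv` and `test.csv` are my assumption based on the standard Kaggle download), loading the two datasets with pandas looks like this:

```python
import pandas as pd

# File names assumed from the standard Kaggle download; adjust paths as needed.
df_train = pd.read_csv("train.csv")  # 1,460 samples, 80 features including SalePrice
df_test = pd.read_csv("test.csv")    # 1,459 samples, 79 features, no SalePrice

print(df_train.shape, df_test.shape)
```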

Data Cleaning

Before applying machine learning models to the training dataset, I needed to clean the data. I combined both datasets into one full dataset for cleaning so that their formats would stay consistent.
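A sketch of the combining step, assuming the dataframes from the loading snippet above and the Kaggle column name `SalePrice` for the target:

```python
import pandas as pd

# Set the target aside (it exists only in the training data), then stack the
# two datasets so cleaning and encoding are applied consistently to both.
sale_price = df_train["SalePrice"]
df_full = pd.concat(
    [df_train.drop(columns="SalePrice"), df_test],
    ignore_index=True,
)
print(df_full.shape)  # 2,919 rows in total
```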

After checking the null values of each feature, I removed the features below, since each has more than 400 missing values out of the 2,919 total samples (a sketch of this step follows the list).

  • Pool Quality
  • Miscellaneous feature not covered in other categories
  • Alley
  • Fence
  • Fireplace Quality
  • Linear feet of street connected to property
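A hedged sketch of the missing-value check and the drop. The column codes named in the comment are my assumption based on the Kaggle data dictionary, not confirmed by the original write-up:

```python
# Count missing values per column and drop every column with more than 400 nulls.
# Assumed column codes: PoolQC, MiscFeature, Alley, Fence, FireplaceQu, LotFrontage.
null_counts = df_full.isnull().sum().sort_values(ascending=False)
print(null_counts.head(10))

sparse_cols = null_counts[null_counts > 400].index
df_full = df_full.drop(columns=sparse_cols)
```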

I then selected the numerical features from the original training dataset to find the correlation coefficient between each feature and the sale price. Below is a correlation chart of the features whose correlation with sale price has an absolute value greater than 0.5, which indicates that these features affect a house's price significantly. As we can see, the most important feature is above-ground living area, with an absolute correlation of 0.71. Therefore, if one wants to upgrade a house for sale, I would recommend upgrading the nine features in the chart. The features with an absolute correlation below 0.5 were removed from the full dataset, since they do not affect the sale price much.
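A sketch of the correlation filter, assuming the dataframe names used above and the Kaggle target column `SalePrice`:

```python
# Correlation of each numeric feature with SalePrice on the training data;
# keep features whose absolute correlation exceeds 0.5 and drop the rest
# from the combined dataset.
numeric_train = df_train.select_dtypes(include="number")
corr_with_price = numeric_train.corr()["SalePrice"].drop("SalePrice")
strong = corr_with_price[corr_with_price.abs() > 0.5].sort_values(ascending=False)
print(strong)

weak_cols = corr_with_price[corr_with_price.abs() <= 0.5].index
df_full = df_full.drop(columns=[c for c in weak_cols if c in df_full.columns])
```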

When looking into the values of the categorical features, I found that for the street feature only 12 samples have a gravel street, while 2,907 samples have a paved street. I removed this feature because of the large imbalance. The same goes for the utilities feature, since only 1 sample differs from all the others.
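A short sketch of that check; `Street` and `Utilities` are the column codes from the Kaggle data dictionary and are assumed here:

```python
# Check how balanced the two categorical features are before dropping them.
print(df_full["Street"].value_counts())     # Pave vs. Grvl
print(df_full["Utilities"].value_counts())  # one sample differs from the rest

df_full = df_full.drop(columns=["Street", "Utilities"])
```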

After removing some of the features, 51 remained for later use. I filled the null values of categorical features with "NA" and the null values of numerical features with the column mean.
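A sketch of the fill step under the same assumptions as above:

```python
# Fill remaining nulls: the string "NA" for categorical columns and the
# column mean for numerical columns.
cat_cols = df_full.select_dtypes(include="object").columns
num_cols = df_full.select_dtypes(include="number").columns

df_full[cat_cols] = df_full[cat_cols].fillna("NA")
df_full[num_cols] = df_full[num_cols].fillna(df_full[num_cols].mean())
```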

I then checked the numerical features for outliers. From the boxplots below, we can tell that there are 4 outliers in above-ground living area, 1 outlier in total basement square feet, and 1 outlier in first-floor area. These samples were deleted from the dataset accordingly.
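The following is only an illustrative sketch of flagging and dropping outlier rows from the training portion: the column codes (`GrLivArea`, `TotalBsmtSF`, `1stFlrSF`) and the cutoffs are my assumptions, not the exact values used in the original analysis.

```python
import matplotlib.pyplot as plt

# Boxplots for the three features mentioned above.
cols = ["GrLivArea", "TotalBsmtSF", "1stFlrSF"]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, col in zip(axes, cols):
    ax.boxplot(df_train[col].dropna())
    ax.set_title(col)
plt.show()

# Drop rows flagged as outliers (cutoffs are assumptions for illustration only).
outlier_idx = df_train[
    (df_train["GrLivArea"] > 4000) | (df_train["TotalBsmtSF"] > 5000)
].index
df_train = df_train.drop(index=outlier_idx)
```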

To prepare the dataset for machine learning, I then dummified all the categorical features and split the full data back into the training and test datasets, since only the training dataset contains the sale price needed to fit models.
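A sketch of the encoding and the split back into the two original datasets, assuming row order was preserved during cleaning and reusing the variable names from the earlier snippets:

```python
import pandas as pd

# One-hot encode the categorical features, then split the combined data back:
# the first block of rows came from the training file, the rest from the test file.
df_full = pd.get_dummies(df_full)

n_train = len(sale_price)
X = df_full.iloc[:n_train]
y = sale_price
X_submit = df_full.iloc[n_train:]  # the Kaggle test set to predict on
```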

Machine Learning

I split the training data 70/30 into train and test sets and fitted the machine learning models below (see the sketch after this list).

  • LinearRegression
  • Ridge
  • Lasso
  • ElasticNet
  • LogisticRegression
  • LinearDiscriminantAnalysis
  • GaussianNB
  • MultinomialNB
  • GradientBoostingRegressor
  • RandomForestRegressor
  • SVR
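A hedged sketch of the split-and-compare loop. Only the regression models from the list are shown here; the classification models were also fitted in the original analysis but are omitted from this sketch, and the variable names and `random_state` are assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR

# 70/30 split, then fit each model and compare train/test R^2 scores.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "LinearRegression": LinearRegression(),
    "Ridge": Ridge(),
    "Lasso": Lasso(),
    "ElasticNet": ElasticNet(),
    "GradientBoostingRegressor": GradientBoostingRegressor(),
    "RandomForestRegressor": RandomForestRegressor(),
    "SVR": SVR(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_train, y_train), model.score(X_test, y_test))
```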

Below is a comparison of the train score and test score of each model. The blue line in the graph is the train score and the orange line is the test score. Since I got high test scores on ridge regression, lasso regression, multiple linear regression, and gradient boosting regression, I decided to dig deeper into these 4 models by changing their hyperparameters and comparing the square root of the mean squared error (RMSE).
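A sketch of the kind of sweep used to tune the penalized models; the alpha grid here is an illustrative assumption, not the exact grid from the original analysis.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error

# Sweep alpha for ridge and lasso and track the RMSE on the held-out 30% split.
for Model in (Ridge, Lasso):
    best_alpha, best_rmse = None, np.inf
    for alpha in np.linspace(0.01, 50, 20):
        pred = Model(alpha=alpha).fit(X_train, y_train).predict(X_test)
        rmse = np.sqrt(mean_squared_error(y_test, pred))
        if rmse < best_rmse:
            best_alpha, best_rmse = alpha, rmse
    print(Model.__name__, round(best_alpha, 2), round(best_rmse, 2))
```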

The RMSE of multiple linear regression is $26,206.06, which means that with this predictive model there is an average error of $26,206.06 on each house price.

The lowest RMSE of ridge regression is $24,757.30, at alpha = 5.26.

The lowest RMSE of lasso regression is $24,370.03, at alpha = 31.58.

The lowest RMSE of gradient boosting regression is $23,201.06, with n_estimators of 50,100 and a learning rate of 0.01. This is the model I ultimately used to predict the test dataset, since it has a high test score and the lowest RMSE.
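A sketch of the final fit and prediction on the Kaggle test set. The `n_estimators` value below is a placeholder only; the original write-up reports "50,100" for that parameter and this sketch does not resolve that figure.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Refit gradient boosting with the tuned settings and predict the Kaggle test set.
gbr = GradientBoostingRegressor(
    n_estimators=100,   # placeholder; the post reports "50,100" for this parameter
    learning_rate=0.01,
)
gbr.fit(X_train, y_train)
test_predictions = gbr.predict(X_submit)
```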

Conclusion

Through this project, I created a machine learning model to predict house prices in the city of Boston, which real estate agencies in the Boston area can use. I also identified the house conditions that most affect the price. I hope this helps homeowners decide what to upgrade before selling in order to increase the value of their house.

The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

 

About Author

Cassandra Jones

Cassandra Jones is a certified data scientist with a focus on data science technologies and banking. She worked at an investment bank for 4 years in client services and is passionate about data-driven business insights.
