House Prices Prediction

Posted on Jul 19, 2021

Your new home is waiting in Ames, Iowa. Let us explore the real estate market in Ames.


This project covers the process of predicting house sale prices in Ames, Iowa. The Kaggle data set provides 80 features that contribute to the predictions, and I trained various models and compared their accuracy.

Interesting Facts:

Before diving into the technical aspects of the project, here are some interesting facts about Ames.

As of July 2021, at the time of this writing, the average price per sqft in Ames is $230.

The graph below shows that the home ownership rate is below 35%.

Now that we have explored some general facts, it's time to delve into what the data set tells us about the real estate market.

Table of Contents:


Data Cleaning

Removal of outliers

Feature selection

Model selection

Model performance

Exploratory Data Analysis:

There are 80 features in the dataset, some numerical and some categorical. First, we look at the correlation between the features and the sale price, and highlight the features with a correlation above 0.5.
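The correlation filter described above can be sketched as follows. This is a minimal illustration on a tiny hand-made sample, not the project's actual data: the column names follow the Kaggle Ames dataset, but the values and the resulting feature list are only for demonstration.

```python
import pandas as pd

# Tiny illustrative sample with Ames-style column names.
df = pd.DataFrame({
    "SalePrice":   [208500, 181500, 223500, 140000, 250000],
    "OverallQual": [7, 6, 7, 7, 8],
    "GrLivArea":   [1710, 1262, 1786, 1717, 2198],
    "YrSold":      [2008, 2006, 2007, 2008, 2008],
})

# Correlation of every feature with the target, then keep the
# features whose absolute correlation exceeds 0.5.
corr = df.corr()["SalePrice"].drop("SalePrice")
important = corr[corr.abs() > 0.5].index.tolist()
print(important)
```

On this sample, `OverallQual` and `GrLivArea` pass the threshold while `YrSold` does not; on the full dataset the list is of course larger.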

Here are the important features:

Data Pre-processing: Treating null values.

There are a lot of features with null values. Instead of dropping them, the nulls are filled with 'None' for categorical features and with the mode for numerical features. The graph below shows all the features with null values.
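The fill strategy above can be sketched like this. The three columns are just illustrative samples from the Ames feature set; the real dataset has many more features with nulls.

```python
import pandas as pd

# Illustrative slice of the data with nulls in both feature types.
df = pd.DataFrame({
    "PoolQC":      [None, "Gd", None],          # categorical
    "GarageType":  ["Attchd", None, "Detchd"],  # categorical
    "LotFrontage": [65.0, None, 65.0],          # numerical
})

categorical = df.select_dtypes(include="object").columns
numerical = df.select_dtypes(exclude="object").columns

# A null in a categorical column usually means the feature is absent
# (e.g. no pool), so mark it with the string 'None'.
df[categorical] = df[categorical].fillna("None")

# Numerical nulls are imputed with the column mode.
for col in numerical:
    df[col] = df[col].fillna(df[col].mode()[0])

print(df.isnull().sum().sum())  # → 0, no nulls remain
```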

Analyzing outliers in important features:

In the scatter plot of GrLivArea vs SalePrice, we see some outliers: a couple of large houses that sold for a cheap price. Our objective is to predict regular prices, so we will consider only the houses below 1000 sqft in our feature selection.

Overall Quality:

The sale price is highly dependent on the overall quality of the house: the higher the overall quality, the higher the sale price.

Removal of outliers:

Here is how I removed the outliers for the features below.
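A minimal sketch of this kind of outlier filter, dropping the large houses that sold unusually cheap. The data and the thresholds here are hypothetical placeholders, not the exact cut-offs used in the project.

```python
import pandas as pd

# Illustrative sample: two rows are large houses with low sale prices.
df = pd.DataFrame({
    "GrLivArea": [1500, 1800, 4700, 5600, 2100],
    "SalePrice": [180000, 210000, 160000, 184750, 240000],
})

# Flag houses that are very large yet sold cheaply, then drop them.
outliers = (df["GrLivArea"] > 4000) & (df["SalePrice"] < 300000)
df = df[~outliers].reset_index(drop=True)

print(len(df))  # → 3 rows remain
```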

Dummification of categorical features:

All the categorical variables are label encoded, and then some of the features are combined for use in model selection.

TotalBsmtSF + 1stFlrSF + 2ndFlrSF are combined into TotalSF for easier analysis.
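Both steps can be sketched in a few lines of pandas. The sample values are illustrative; the column names are the Ames columns named above.

```python
import pandas as pd

# Illustrative slice with one categorical and the three SF columns.
df = pd.DataFrame({
    "ExterQual":   ["Gd", "TA", "Ex"],
    "TotalBsmtSF": [856, 1262, 920],
    "1stFlrSF":    [856, 1262, 920],
    "2ndFlrSF":    [854, 0, 866],
})

# Label-encode a categorical column into integer codes.
df["ExterQual"] = df["ExterQual"].astype("category").cat.codes

# Combine the three square-footage features into a single TotalSF.
df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]

print(df["TotalSF"].tolist())  # → [2566, 2524, 2706]
```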

Model selection:

For model selection, I started with simpler, more conservative models and moved to more powerful ones, evaluating accuracy and performance at each step. I trained Lasso and Ridge regressors with alpha values ranging from 0.1 to 0.8. A random forest regressor trained with a depth of 5 predicted with 78% accuracy. I also used KNN, gradient boosting, and SVM. An XGBoost regressor with learning rates from 0.1 to 0.9 predicted with 84% accuracy.
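The Lasso/Ridge alpha sweep can be sketched with scikit-learn's grid search. This runs on synthetic data standing in for the prepared Ames features; the alpha grid spans 0.1 to 0.8 as stated above, but everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic regression data standing in for the prepared features.
X, y = make_regression(n_samples=200, n_features=10, noise=10,
                       random_state=0)

# Alpha grid from 0.1 to 0.8, as in the post.
param_grid = {"alpha": np.arange(0.1, 0.9, 0.1)}

for model in (Lasso(), Ridge()):
    # 5-fold cross-validated search over alpha, scored by RMSE.
    search = GridSearchCV(model, param_grid, cv=5,
                          scoring="neg_root_mean_squared_error")
    search.fit(X, y)
    print(type(model).__name__, "best alpha:",
          round(search.best_params_["alpha"], 1))
```

The same pattern extends to the tree-based models, e.g. a depth grid for the random forest or a learning-rate grid for XGBoost.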

Model performance:

After fitting each model, I compared the model performance. The metric used is the root mean squared error (RMSE).

Overall, gradient boosting predicted with over 87% accuracy.
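The RMSE comparison on held-out data can be sketched as below. This uses synthetic data and a single gradient boosting model for brevity; the project applied the same measurement to each fitted model.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the prepared Ames features.
X, y = make_regression(n_samples=300, n_features=10, noise=15,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on the training split, then score RMSE on the held-out split.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"test RMSE: {rmse:.2f}")
```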

Future work:

Continue to explore the best methods for reducing overfitting and for parameter tuning. Apply other machine learning algorithms such as AdaBoost, or train XGBoost with better-tuned estimators. Gather more recent data to improve accuracy.

About Author



Data Science enthusiast with 9 years of experience in database development and analytics. Proficient with data visualization and machine learning techniques in Python, R, and SQL. Experienced in executing data-driven solutions to increase efficiency and utility of internal...
View all posts by SWATHI BATTULA >
