Using Data to Predict Ames, Iowa Home Sale Prices

The skills the authors demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Investing in a home is one of the biggest decisions a person can make, as real estate purchases make up a large proportion of one’s net worth. This makes it important to make good real estate investments and to know whether a home is priced fairly or even underpriced. In this post, we use data to predict home sale prices in Ames, Iowa.

However, when it comes to predicting housing prices, many factors can come into play. The adage about location, location, location doesn’t tell the whole story. Neighborhood is an important factor, but it’s not the only one. Factors that affect a home’s price include considerations of size like square footage, number of bedrooms, basement area, garage size, and overall property area.

Overall condition and quality of the home is another key factor contributing to home value, which could be affected by the date and type of renovations applied. Our goal is to build a model using a Kaggle dataset collected from Ames, Iowa that can help both realtors and prospective homeowners accurately price a house based on its available features, so that a homeowner can know whether or not a house is worth the listed price.

Exploratory Data Analysis

[Figure: distribution of home sale prices]

The original dataset had 2580 data points, each representing the sale of a home, and 81 columns, each describing a feature of that home. The sale price data was fairly right-skewed, with a few high-priced outliers.

[Figure: distribution of log-transformed sale prices]

After a log transformation of the sale price, the data became more normally distributed, which improves the regression models, as can be seen later in the improvement of the model scores. The sale price outliers seen in the original data are less drastic after the log transformation and did not need to be removed for the analysis.
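The transformation step can be sketched as follows. This is a minimal illustration using a handful of made-up prices, not the actual dataset; the point is how the natural log compresses the high-end tail:

```python
import numpy as np
import pandas as pd

# Hypothetical sale prices with one right-tail outlier
prices = pd.Series([120_000, 135_000, 150_000, 160_000, 755_000])

log_prices = np.log(prices)  # natural log compresses the high end

# The spread between the largest and smallest value shrinks dramatically
print(prices.max() / prices.min())          # ~6.29
print(log_prices.max() / log_prices.min())  # ~1.16
```

After fitting a model on the log scale, predictions can be mapped back to dollars with `np.exp`.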

[Figure: boxplot of sale prices by neighborhood]

Next, we wanted to look at the distribution of sale prices by neighborhood to see if there was anything we could glean from it. Based on the plot we generated, we found that the median prices in most neighborhoods fell between $100,000 and just over $200,000. The most expensive neighborhood was Northridge, with a median of $302,000, and the least expensive was Meadow Village, with a median of $89,375.
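The per-neighborhood medians come from a simple groupby. The frame below is an illustrative stand-in (a few toy rows, not the real data), with prices chosen so the medians echo the figures quoted above:

```python
import pandas as pd

# Toy sales records; Neighborhood / SalePrice mirror the dataset's column names
df = pd.DataFrame({
    "Neighborhood": ["NoRidge", "NoRidge", "MeadowV", "MeadowV", "NAmes"],
    "SalePrice":    [310_000,  294_000,   85_000,    93_750,    140_000],
})

# Median sale price per neighborhood, cheapest first
medians = df.groupby("Neighborhood")["SalePrice"].median().sort_values()
print(medians)  # MeadowV 89375, NAmes 140000, NoRidge 302000
```

Sorting the result gives the same ordering used for the boxplot above.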

[Figure: number of sales by neighborhood]

The plot above shows the count of sales in each neighborhood in the same order as the median sale price boxplot. It shows the number of sales per neighborhood varied greatly. North Ames had over 400 sales in the dataset, while several neighborhoods had fewer than 50 sales. The median sale price and number of sales per neighborhood did not appear to correlate.

[Figure: average sale price by year built]

This plot shows the average sale price of homes by the year the home was built. The shaded region represents the 95% confidence interval, and the line represents the mean for each year. In general, as the year built increases, the sale price also increases. There are a few exceptions, with a high spike around the 1890s. We think this might be due to fewer homes being built or remaining from that era, some of which might be considered historic mansions. Despite some of the spikes, the year-built plot shows that people tend to pay more for newer homes.

[Figure: quality and condition scores by year built]

The dataset has a feature for home condition and a feature for home quality. In the data description, the home condition was described as “the overall condition of the house” with high scores of 10 representing “very excellent” and low scores of 1 representing “very poor.” The home quality was described as “the overall material and finish of the house,” with scores also ranging from 1 to 10, representing “very poor” to “very excellent.”

We were interested to see how they would differ by the year the home was built. To our surprise, they were quite different. The plot of quality and year built was more in line with what we were expecting: newer homes tended to have higher quality scores, though overall there was a wide variety of scores across the years. The plot of condition and year built was surprising, with homes built after 2000 commonly having a condition score of 5. This suggests that a condition score of 5 may have been entered as a standard default for these newer homes.

[Figure: quality and condition scores vs. sale price]

Since condition and quality differed with the year built, we were interested to see how they might differ with the sale price of the homes. Again, quality shows a stronger linear trend: as quality scores increase, the sale prices of the homes also tend to increase. The condition of the home does not exhibit as clear a linear trend with sale price, with homes of condition 5 and 6 having the highest prices. Since we were planning to use regression models, we thought that quality would offer more predictive information than condition, but we decided to keep both since condition also had a strong impact on the sale price of the homes.

Data Cleaning

The data cleaning process included checking for duplicate rows, examining rows and columns with missing values, and imputing some of those missing values.

Each sale in the dataset has a unique property ID, or PID. However, one PID is repeated. Upon further examination, we found that this sale had two identical rows, so we deleted one entry.
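In pandas this deduplication is a one-liner; the PID values below are made up for illustration:

```python
import pandas as pd

# Toy frame with one PID duplicated as two identical rows
df = pd.DataFrame({
    "PID":       [526301100, 526350040, 526350040],
    "SalePrice": [215_000,   105_000,   105_000],
})

# drop_duplicates keeps the first of each set of identical rows
deduped = df.drop_duplicates()
print(len(deduped))  # 2
```

After dropping the duplicate, every PID in the frame is unique again.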

Next we checked for columns with missing values to determine which features might not add enough value to the analysis, and whether any missing values would need to be imputed. The chart below shows the percent of missing values by column. Several columns were missing values more than half of the time, including FireplaceQu (Fireplace Quality) and Fence. Several were missing values more than 90% of the time: PoolQC (Pool Quality), MiscFeature (which describes whether houses had features such as sheds), and Alley. We decided to drop PoolQC, MiscFeature, and Alley, since few of the home sales included this information. Fireplace quality was dropped as well, and we engineered a fireplace attribute and a fence attribute, as explained later on.

[Figure: percent of missing values by column]

Lot Frontage was missing a relatively small number of values, but since the column did not share similarities with other columns, we wanted to retain the information. We imputed values for the home sales that did not include a Lot Frontage value. To do this, we looked at the average lot frontage of houses that were the same type of housing in the MSSubclass column (i.e. single family home, townhouse) and in the same neighborhood to estimate missing lot frontage values. We removed a few rows from the dataset which could not be imputed due to a lack of similar houses.
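A groupby-transform is one way to express this imputation. The sketch below assumes the dataset’s LotFrontage, MSSubClass, and Neighborhood column names and uses a few toy rows; gaps are filled with the group mean, and rows with no comparable houses are dropped:

```python
import numpy as np
import pandas as pd

# Toy rows: housing type, neighborhood, and lot frontage (some missing)
df = pd.DataFrame({
    "MSSubClass":   [20, 20, 20, 60],
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "Edwards"],
    "LotFrontage":  [60.0, 80.0, np.nan, np.nan],
})

# Mean frontage within each (housing type, neighborhood) group
group_mean = df.groupby(["MSSubClass", "Neighborhood"])["LotFrontage"].transform("mean")
df["LotFrontage"] = df["LotFrontage"].fillna(group_mean)

# Rows whose group has no observed frontage remain NaN and are removed
df = df.dropna(subset=["LotFrontage"])
print(df["LotFrontage"].tolist())  # [60.0, 80.0, 70.0]
```

Here the missing NAmes row is filled with the group mean of 70.0, while the lone Edwards row has no similar houses to borrow from and is dropped.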

Finally, we removed many columns that seemed to provide overlapping information with other columns. We dropped several columns relating to the basement and garage. We also dropped non-descriptive columns such as Street and MiscVal.

Feature Engineering

We used feature engineering to create interesting new features from the data, as well as to simplify some columns. For example, we created a new feature named “Remodeled,” a binary feature denoting whether the house had ever been remodeled. We subtracted the year the house was built from the remodel year, and if the value was greater than 0, the house was considered remodeled.

In order to consolidate information about the homes’ basements, we created a more descriptive column for finished basement square feet. We subtracted basement unfinished square feet from total basement square feet. This allowed us to comfortably drop several other columns related to the basement.

The fireplaces feature describes how many fireplaces a home has. Most homes had 0 or 1 fireplaces, but a few had more than one. We created a new binary feature describing whether or not a home has fireplaces at all, as shown in the charts below. This revealed that the houses were approximately evenly split between homes that had at least one fireplace and homes that did not.
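The three engineered features above can be sketched in a few lines of pandas. The column names follow the dataset’s conventions, and the two toy rows are made up for illustration:

```python
import pandas as pd

# Toy rows: build/remodel years, basement areas, and fireplace counts
df = pd.DataFrame({
    "YearBuilt":    [1960, 1995],
    "YearRemodAdd": [1998, 1995],
    "TotalBsmtSF":  [1000, 800],
    "BsmtUnfSF":    [400,  800],
    "Fireplaces":   [2,    0],
})

# Binary flag: a remodel year later than the build year means remodeled
df["Remodeled"] = (df["YearRemodAdd"] - df["YearBuilt"] > 0).astype(int)

# Finished basement area = total basement minus unfinished basement
df["BsmtFinSF"] = df["TotalBsmtSF"] - df["BsmtUnfSF"]

# Binary flag: whether the home has any fireplaces at all
df["HasFireplace"] = (df["Fireplaces"] > 0).astype(int)

print(df[["Remodeled", "BsmtFinSF", "HasFireplace"]].values.tolist())
# [[1, 600, 1], [0, 0, 0]]
```

The first toy home was remodeled, has 600 finished basement square feet, and has a fireplace; the second has none of the three.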

[Figures: fireplace counts and the binary fireplace feature]

In order to perform machine learning on a dataset, nominal categorical variables such as neighborhood and type of housing can be dummified. This entails turning these variables into columns of zeros and ones. First, categorical variables had to be identified as either nominal or ordinal variables. Ordinal variables were left as numbers. For example, the number of bathrooms in a home is an ordinal variable, and two bathrooms is more than one bathroom, so it is logical to leave that column as number values. We had 32 nominal variables left after data cleaning, and we dummified them all, resulting in 230 columns in our final cleaned dataset. The chart below shows the value counts of categorical variables in the dataset.
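Dummification is handled by `pd.get_dummies`, which expands each nominal column into one column of zeros and ones per category while leaving ordinal, numeric columns untouched. A tiny illustrative example:

```python
import pandas as pd

# Toy frame: one nominal column and one ordinal column
df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NoRidge", "NAmes"],
    "FullBath":     [1, 2, 1],   # ordinal: left as numbers
})

# One-hot encode the nominal column; ordinal columns pass through unchanged
dummied = pd.get_dummies(df, columns=["Neighborhood"])
print(list(dummied.columns))
# ['FullBath', 'Neighborhood_NAmes', 'Neighborhood_NoRidge']
```

Applying this to all 32 nominal variables is what expands the cleaned dataset to 230 columns.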

[Figure: value counts of categorical variables]

Using Data for Modeling

After completing the exploratory data analysis and feature engineering, the next step was to build and run machine learning models. Models were selected based on which were most likely to achieve the highest accuracy: two categories, seven models in total. The first category is linear regression models, including Multiple Linear Regression, Lasso, Ridge, and Elastic Net; the second is tree-based models, including Random Forest, XGBoost, and Gradient Boosting.

Once the model selection was complete, we split the data into training and test sets, holding out 20% of the dataset for testing. Each model was trained on the training split, and grid search with cross-validation was used to tune the alpha parameter for Lasso, Ridge, and Elastic Net, and to find optimal parameters for the tree-based models. The alpha coefficient acts as a regularization control: as alpha increases, the coefficients of less contributive variables shrink toward zero.
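The split-then-search workflow looks roughly like this in scikit-learn. This sketch uses Ridge on synthetic data with a small integer alpha grid; our actual grids and data differed:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic regression data standing in for the cleaned housing features
X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

# Hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Grid-search alpha with 5-fold cross-validation on the training split only
grid = GridSearchCV(Ridge(), {"alpha": np.arange(1, 21)}, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)                     # best alpha found by CV
print(round(grid.score(X_test, y_test), 3))  # R² on the held-out test set
```

Keeping the test split out of the grid search ensures the final score reflects performance on truly unseen data.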

The main difference between Ridge and Lasso is that Lasso allows coefficients to reach zero, whereas Ridge never sets a coefficient to exactly zero, though coefficients may become very small. For the linear models, grid search tested evenly spaced values from 0 to 20.1 for each model's alpha. For the tree-based models, grid search tried different values for parameters such as n_estimators, max_depth, and max_features.

Different numbers of cross-validation folds were used, such as 2, 3, and 5. Cross-validation is a resampling method that uses different portions of the data to train and test on different iterations. For example, in two-fold cross-validation, the dataset is randomly shuffled into two sets, and each set is used in turn for training and testing. Once the results were output, R² scores were compared and checked for underfitting and overfitting. This check was done by comparing the train and test scores for each model and observing whether there was a large disparity between them: a large disparity indicates overfitting, while low train and test scores would indicate underfitting.

During the model evaluation phase, the regularized linear regression models (Lasso, Ridge, and Elastic Net) demonstrated high R² scores. Taking the log of the target variable, sale price, helped further improve the models' performance. The improvement is also due to Lasso's ability to shrink feature coefficients, even dropping them to 0, and Ridge's ability to shrink them toward 0. This shrinkage helped remove redundancies and reduce the effects of multicollinearity.
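The log-target approach amounts to fitting on log(price) and exponentiating predictions back to dollars. A minimal sketch with Lasso on synthetic right-skewed prices (the coefficients, noise level, and alpha are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# Hypothetical prices: positive and right-skewed, driven by two features
y = np.exp(12 + 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 300))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on log(price); Lasso's alpha penalty shrinks unhelpful coefficients
lasso = Lasso(alpha=0.01).fit(X_train, np.log(y_train))

# Map predictions back to dollars, and score in log space
dollar_preds = np.exp(lasso.predict(X_test))
print(round(r2_score(np.log(y_test), lasso.predict(X_test)), 3))
```

Because the model never sees raw prices, the skewed tail carries far less weight in the fit.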

To visualize the scores output by the seven models, a table was built with each model's training and test scores. Despite the tree-based models having higher training scores, their test scores were lower than those of their linear regression counterparts.

[Table: training and test R² scores by model]

Based on the table above, the Log Elastic Net model produced the highest test R² at 93.6%. The R² score represents the proportion of the variance in the dependent variable, sale price, that is explained by the independent variables. This model has the best fit, most likely due to the log transform of our target variable reducing the effect of outliers, as well as the reduction of the coefficients of less important variables through the alpha penalty.

[Figures: predicted vs. actual sale prices, with and without the log transformation]

The graphs above demonstrate the effect of the log transformation on the fit of the model. The top graph shows the fit of the model with sale prices log-transformed, and the bottom one shows the fit without the log transformation. Points above the line indicate that our model predicted a price that was too high, and points below the line indicate that it predicted too low. As seen in the plot with the log transformation, the outliers have moved closer to the best-fit line, and the log-transformed model is a better predictor.


We were able to predict home sale prices in Ames with an R² of 93.6% using our best machine learning model. This can help potential buyers and sellers determine whether they are getting a fair price for a home. Buying a home is a big decision, and buyers want to be sure they are making a good investment, while sellers want to price their home fairly in order to actually sell it.

Future work

Analyzing the Ames housing price data doesn’t have to stop with regression and tree-based machine learning models. Future work could include using unsupervised learning, for example Principal Component Analysis (PCA), to group houses together and describe the dataset in different ways. Additionally, multiple models could be combined in an ensemble for greater predictive power. Finally, an interface could be built with R Shiny to allow users to see predicted sale prices for other homes in Ames based on our machine learning price predictions.

About Authors

Caroline Keough

Data Scientist with a background in energy and sustainability. I'm hoping to apply my new skills in Python, R, and Machine Learning to make a positive impact on the planet.

Tam Trinh

Hi, I had my start in analytics with psychology and am interested in data science, particularly within social fields.

Joaquin Gomez

Data Scientist with Masters in Business Administration. Highly motivated to combine interdisciplinary skillset with a passion for Data Analytics to solve modern-day business issues. Capable of creating, manipulating, and deploying Machine Learning models using Python, R, and SQL...

Michael O'Brien

Graduate of Colgate University '16 with a degree in biochemistry. After college, began working as a laboratory technician in the food chemistry sector, then moved into web development as a full-stack developer. Currently starting a new chapter and moving...
