
Using Data to Analyze Ames Machine Learning Applications

Anthony Fargnoli
Posted on Jul 6, 2020

The skills the author demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

AMES, IOWA HOUSING PRICE PREDICTION WITH MACHINE LEARNING ALGORITHMS

The Purpose:

Real estate purchases comprise the single largest investment the average American makes in their lifetime. Real estate websites, agencies, and other information sources have sought machine learning approaches to model and predict housing price trends. Often controlling for location and other factors, buyers and sellers wish to capture real-time or historical market data to best optimize their transaction strategies. Here, the Ames, IA pricing data set features 1,460 selected single-family housing transactions between 2006 and 2010. The purpose of this project was to:
  1. Identify key features that would influence pricing
  2. Apply machine learning algorithms of several classes to identify a viable model
  3. Optimize select and reduced models to arrive at a final model
  4. Enter the models into Kaggle to determine their rank
 

Approach:

The general approach was an iterative progression toward the ideal model, following the steps shown in the block diagram below:

[Figure: block diagram of the iterative modeling steps]

Data sets

Python was the sole language used for the analysis, with Pandas used to create dataframes for the starting test and train sets. The initial train set contained 1,460 rows with approximately 89 features and a single target, SalePrice. Feature selection was performed with visual exploratory data analysis tools, including histograms, frequency plots, and other measures of variance. Data fields with exaggerated distributions (i.e., >98% of observations in one categorical level), extreme missingness (i.e., greater than 80% of the set missing), and other incomplete factors judged not worth any imputation approach were eliminated. Missingness in the selected remaining features was handled with SimpleImputer, imputing the median for float features and the most common value for categorical features to complete the data sets.
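As a rough illustration of that preprocessing, the sketch below drops features by the stated thresholds and then imputes the rest. It assumes the standard Kaggle train.csv file; the exact thresholds and column handling are simplifications of the author's notebook, not a copy of it.

    import pandas as pd
    from sklearn.impute import SimpleImputer

    train = pd.read_csv("train.csv")

    # Drop features with extreme missingness (> 80% of values absent)
    missing_frac = train.isna().mean()
    train = train.drop(columns=missing_frac[missing_frac > 0.80].index)

    # Drop near-constant categoricals (> 98% of rows in a single level)
    dominant = train.select_dtypes("object").apply(
        lambda col: col.value_counts(normalize=True).iloc[0])
    train = train.drop(columns=dominant[dominant > 0.98].index)

    # Impute what remains: median for numerics, most frequent for categoricals
    num_cols = train.select_dtypes("number").columns
    cat_cols = train.select_dtypes("object").columns
    train[num_cols] = SimpleImputer(strategy="median").fit_transform(train[num_cols])
    train[cat_cols] = SimpleImputer(strategy="most_frequent").fit_transform(train[cat_cols])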

Model Fitting

Ridge regression, Random Forest, the Gradient Boosting machine, and the well-characterized XGBoost machine were implemented on a train/test split of the original train set, with 25% of the data held out for testing. The overall approach was to start with an easy-to-apply, parsimonious model to obtain a baseline readout of model performance with the existing features. Given that ensembling methods are more powerful and accurate for relevant scoring, the analysis then shifted to three robust ML models: Random Forest, the Gradient Boosting Regressor, and the XGBoost Regressor. For each baseline model, test vs. train error rates were noted, along with a baseline cross-validation score to assess basic variance.

Optimization and Final Model Predictions: After all four models were fit to the training set, a combined GridSearch cross-validation process was applied to each model to identify optimal hyperparameters; these varied per protocol for each specific machine learning method. The best parameters were used to re-train each baseline model, and a new model was then re-trained on the entire data set with the optimal parameters prior to final predictions. Each ensembling model's final predictions were entered into the official Kaggle.com scoring for assessment.
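A minimal sketch of that baseline loop, assuming X and y are the numerically encoded feature matrix and SalePrice target from the preprocessing above (variable names here are illustrative):

    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.linear_model import Ridge
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from xgboost import XGBRegressor

    # 25% of the original train set held out for testing, as described above
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    models = {
        "ridge": Ridge(),
        "random_forest": RandomForestRegressor(random_state=0),
        "gbm": GradientBoostingRegressor(random_state=0),
        "xgb": XGBRegressor(objective="reg:squarederror", random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        cv = cross_val_score(model, X_train, y_train, cv=5)
        print(name, model.score(X_train, y_train),
              model.score(X_test, y_test), cv.mean())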

Data Results:

Per correlation analysis and basic feature plotting, many variables suffered very high co-linearity, hyper-exaggerated distributions (>98% in favor of one level), or seemingly little bearing on housing prices. Below is a sample of the count analysis demonstrating significant missingness or insignificance, with overlapping variables in similar categories:

  • FireplaceQu: 730 missing (four other features already describe fireplaces)
  • GarageType: 76 missing (attached vs. detached; bias either way)
  • GarageYrBlt: 78 missing (largely a function of home age; quality vs. age)
  • GarageFinish: 78 missing (arbitrary)
  • PoolQC: 1456 missing (extremely high rate of missingness)
  • Fence: 1169 missing
  • MiscFeature: 1408 missing
  • Alley: 1352 missing

These variables were selectively removed from further analysis. Many were either highly co-linear with other features that had more complete sets, had high degrees of missingness, or showed non-informative bias patterns with one feature level representing over 90% of the data. The full set removed was: Utilities, BsmtFinType1, BsmtCond, MasVnrArea, PoolQC, Fence, MiscFeature, FireplaceQu, Alley, LotFrontage, MSSubClass, GarageYrBlt, GarageQual, GarageCond, Street, Condition2, RoofStyle, RoofMatl, Exterior2nd, MasVnrType, BsmtFinSF2, Heating, CentralAir, Functional, SaleType, SaleCondition, GarageArea, GarageFinish, YearRemodAdd, BsmtUnfSF, GarageType, BsmtFinType2, and BsmtExposure.

Figure 1. Exploratory data analysis for selected features that were advanced to the model analysis stage.

A total of 44 features with relevant distributions and/or expected impact on SalePrice were advanced. Missingness in these sets was minimal, and a SimpleImputer in Python was used to impute the median for float features and the most frequent value for categorical features. Please refer to the remaining pre-processing steps in the main notebook posted on my GitHub.
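The count analysis itself is a couple of lines of Pandas. A sketch, assuming the raw train DataFrame before any columns were dropped:

    # Missing-value counts per feature, highest first (the sample above)
    na_counts = train.isna().sum().sort_values(ascending=False)
    print(na_counts.head(10))

    # Pairwise correlations among numeric features, to flag co-linearity
    corr = train.select_dtypes("number").corr()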

Ridge Regression

Starting with the simplest model, a Ridge regressor was fit to the training portion of the train/test split with the following results. The intercept is 1764731.9110, and the slopes are:

    MSZoning        -761.241261
    LotArea            0.452596
    LotShape       -1249.027722
    LandContour     3884.979427
    LotConfig        -62.624721
    LandSlope       6679.699746
    Neighborhood     310.843423
    Condition1      -668.159269
    BldgType       -4953.965713
    HouseStyle      -806.616131
    OverallQual    13146.529312
    OverallCond     5354.995245
    YearBuilt        334.544071
    Exterior1st     -537.337348
    ExterQual     -13931.468240
    ExterCond       1755.804705
    Foundation      1225.461027
    BsmtFinSF1         9.475112
    TotalBsmtSF        6.426640
    HeatingQC       -594.731061
    ...
    YrSold         -1202.532386

The training error is 0.17666; the test error is 0.17563. GridSearch was applied to identify the regularization hyperparameter lambda, which was then applied to a re-trained Ridge, obtaining the following scores: a training error of 0.17719 and a test error of 0.17450.
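A hedged sketch of that baseline-plus-search sequence. The alpha (lambda) grid is an assumption, since the post does not list the searched values, and the categorical columns are assumed to be numerically encoded:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV

    ridge = Ridge().fit(X_train, y_train)
    print(ridge.intercept_)      # baseline intercept
    print(ridge.coef_[:5])       # first few slopes

    # Search the regularization strength (sklearn calls lambda "alpha")
    grid = GridSearchCV(Ridge(), {"alpha": np.logspace(-2, 3, 30)}, cv=5)
    grid.fit(X_train, y_train)
    best_ridge = grid.best_estimator_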

Random Forest

The baseline Random Forest Regressor yielded an excellent initial score, indicating a much better result than Ridge as expected, with a test set error of less than 9% as a starting point:

The training score is 0.97613; the test score is 0.91970.

An initial model cross-validation yielded a decrease in mean score to about 0.85, suggesting the training data was overfit. The default parameters were identified prior to GridSearchCV:

    {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None,
     'max_features': 'auto', 'max_leaf_nodes': None, 'max_samples': None,
     'min_impurity_decrease': 0.0, 'min_impurity_split': None,
     'min_samples_leaf': 1, 'min_samples_split': 2,
     'min_weight_fraction_leaf': 0.0, 'n_estimators': 100, 'n_jobs': None,
     'oob_score': False, 'random_state': 0, 'verbose': 0, 'warm_start': False}

After a few attempts to optimize each of these, it was clear that tuning all of them was computationally too expensive. Thus, a refined list of major, select hyperparameters was applied for the next iteration of Random Forest:

    parameters = {'max_depth': [10, 50, None],
                  'max_features': ['auto', 'sqrt'],
                  'n_estimators': [100, 200, 500],
                  'min_samples_split': [2, 5, 10],
                  'min_samples_leaf': [1, 2, 4]}

Alternative approaches for Random Forest optimization, as reported on Kaggle and other data science sites, include RandomizedSearchCV. That analysis was more expensive and difficult to execute, so the simple paradigm above was used to optimize a revised fitted tree model. The revised model yielded an improved score:

The score on the training data is 0.982; the score on the test data is 0.987.
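Wiring that grid into GridSearchCV looks roughly like this; the grid is quoted from the post, while the fold count and parallelism settings are assumptions:

    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    parameters = {'max_depth': [10, 50, None],
                  'max_features': ['auto', 'sqrt'],   # 'auto' as in the 2020 post;
                  'n_estimators': [100, 200, 500],    # newer sklearn uses 1.0/'sqrt'
                  'min_samples_split': [2, 5, 10],
                  'min_samples_leaf': [1, 2, 4]}
    search = GridSearchCV(RandomForestRegressor(random_state=0),
                          parameters, cv=5, n_jobs=-1)
    search.fit(X_train, y_train)
    best_rf = search.best_estimator_   # re-trained with the best parameters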

Gradient Boosting

The gradient boosting ensembling method was an excellent candidate for this housing project, with its 44 features and 1,460-row dataset. The gradient boosting machine builds its estimate from an assembly of weak learners that are added at a controlled learning rate. The default learning rate was used for the first iteration, producing the following performance:

The training score is 0.962; the testing score is 0.930.

Five-fold cross-validation on the training set demonstrated relative consistency, but suggested that the first run of Gradient Boosting overfit the training set:

    scores: [0.84114232, 0.85495313, 0.7733961, 0.85166659, 0.91937407]

A GridSearchCV in the following format was used to optimize the major drivers of gradient boosting:

    parameters = {'learning_rate': [0.01, 0.02, 0.03],
                  'subsample': [0.9, 0.5, 0.2],
                  'n_estimators': [100, 500, 1000],
                  'max_depth': [4, 6, 8]}

The best parameters across ALL searched params:

    {'learning_rate': 0.03, 'max_depth': 6, 'n_estimators': 500, 'subsample': 0.5}

A re-fit model with these optimized parameters yielded the following performance. Notable gains were made in reducing the training data error, along with a higher average CV score and a more consistent variance profile:

The Friedman MSE score on the training data is 0.998; on the test data it is 0.924.

    scores: [0.88476961, 0.88683934, 0.80215266, 0.83565457, 0.9134895]

Some reduction was noted in the test score; however, this was small relative to the gains in variance reduction in the CV scores compared with the original model.
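The corresponding search, as a sketch; the grid matches the one quoted above, while the fold count and other settings are assumptions:

    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    parameters = {'learning_rate': [0.01, 0.02, 0.03],
                  'subsample': [0.9, 0.5, 0.2],
                  'n_estimators': [100, 500, 1000],
                  'max_depth': [4, 6, 8]}
    search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                          parameters, cv=5, n_jobs=-1)
    search.fit(X_train, y_train)
    best_gbm = search.best_estimator_
    # Post reports best params: learning_rate=0.03, max_depth=6,
    # n_estimators=500, subsample=0.5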

XGBoost

XGBoost has been reported to be the king of Kaggle competitions for a variety of challenges and has thus gained wide popularity. Given the positive results with gradient boosting, it was logical to attempt it as a final model to complete the project. The XGBoost package and plugins were installed and the baseline model was fitted. Results were solid as expected; however, cross-validation returned some disparity which, combined with a high train score, is indicative of overfitting:

The training score is 0.99958; the test score is 0.91529.

    CV scores: [0.87249273, 0.82699026, 0.79097786, 0.85860529, 0.89877748]

GridSearchCV with key parameters for XGBoost was then applied. The best estimator across ALL searched params:

    XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                 colsample_bynode=1, colsample_bytree=0.9, gamma=0, gpu_id=-1,
                 importance_type='gain', interaction_constraints='',
                 learning_rate=0.300000012, max_delta_step=0, max_depth=20,
                 min_child_weight=1, missing=nan, monotone_constraints='()',
                 n_estimators=200, n_jobs=0, num_parallel_tree=1,
                 objective='reg:squarederror', random_state=0, reg_alpha=0,
                 reg_lambda=1, scale_pos_weight=1, subsample=1, tree_method='exact',
                 validate_parameters=1, verbosity=None)
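The post does not list the XGBoost grid itself, so the parameter ranges below are purely illustrative; only the structure of the search mirrors the text:

    from xgboost import XGBRegressor
    from sklearn.model_selection import GridSearchCV

    # Hypothetical grid, chosen only to illustrate the search structure
    parameters = {'max_depth': [6, 10, 20],
                  'n_estimators': [100, 200, 500],
                  'colsample_bytree': [0.7, 0.9, 1.0]}
    search = GridSearchCV(XGBRegressor(objective='reg:squarederror',
                                       random_state=0),
                          parameters, cv=5, n_jobs=-1)
    search.fit(X_train, y_train)
    best_xgb = search.best_estimator_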

A new iteration with these parameters yielded:

The score on the training data is 1.000; the score on the test data is 0.907.

    CV scores: [0.87160822, 0.84076335, 0.77042395, 0.8128964, 0.90150892]

While it is exciting to achieve a perfect score on the training data, that result, combined with the marginal loss in CV scoring, makes it very likely that without further modification the XGBoost model is overfit.

Final Model Evaluation: Kaggle Results

Each of the optimized models (Random Forest, Gradient Boosting, and XGBoost) was loaded into the official Ames, Iowa Kaggle competition for scoring. The Gradient Boosting model was the best overall, achieving a percentile rank above the 35% mark, well above the other two models.
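Producing a Kaggle submission from the chosen model takes only a few lines. A sketch, assuming best_gbm from the search above and a test matrix preprocessed identically to the training data (X_test_kaggle is an illustrative name):

    import pandas as pd

    test = pd.read_csv("test.csv")
    preds = best_gbm.predict(X_test_kaggle)  # features built as for training
    pd.DataFrame({"Id": test["Id"], "SalePrice": preds}).to_csv(
        "submission.csv", index=False)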

Summary of Feature Importance

Generally, the feature importances did not change much between the models; below is an example from the best-performing model. The top 15 results were:

    1.  OverallQual    0.14471587922967497
    2.  GrLivArea      0.11851858124286388
    3.  ExterQual      0.08545406813704338
    4.  TotalBsmtSF    0.08308420400688572
    5.  GarageCars     0.08129439906946702
    6.  YearBuilt      0.05259058831073286
    7.  FullBath       0.05245474540526278
    8.  1stFlrSF       0.048267085912680516
    9.  BsmtFinSF1     0.04544276425388854
    10. LotArea        0.03742499662899613
    11. KitchenQual    0.03536025676082719
    12. 2ndFlrSF       0.028929776122335108
    13. Fireplaces     0.024372398428322197
    14. Foundation     0.02071006879400414
    15. OpenPorchSF    0.017767820031488368

Overall quality, above-grade living area, and curb appeal via exterior quality are all expected drivers of pricing, since nearly all buyers seek these. Year built and additional garage space are also major features, as expected. Kitchen quality, upper-level square footage, and the remaining features contribute to a lesser degree. The most unique feature of interest is the fourth-ranked total basement square footage. The best explanation may be that Iowa lies in a tornado corridor: unbeknownst to much of the US, it ranks sixth on the list of US states by tornado count, and Ames, Iowa saw many tornadoes within 10 miles of its city center during the dataset's timeline. Basements in this case could be considered lifelines that a buyer would be willing to pay for; finished basement area is also a function of more expensive homes.

Figure 2. Feature importance rank results from the optimized Gradient Boosting model.
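A ranking like that can be read straight off the fitted model. A sketch, assuming best_gbm and a feature_names list carried over from preprocessing:

    # Pair each feature with its importance and sort descending
    importances = sorted(zip(feature_names, best_gbm.feature_importances_),
                         key=lambda pair: pair[1], reverse=True)
    for rank, (name, score) in enumerate(importances[:15], start=1):
        print(rank, name, score)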

Conclusions:

  • Ensembling methods offer much higher performance as the standard for basic machine learning.
  • Overfitting on a relatively elementary data set appeared to be the issue for an advanced algorithm such as XGBoost.
  • Gradient Boosting, with some adjustment to the learning rate from 0.1 to 0.09, offered the best available evaluated model with the highest Kaggle rank.
  • Improvements: feature engineering could reduce the number of variables further, and transformations could be applied to imbalanced sets; the GridSearch for Random Forest would be better approached with an alternative method.

About Author

Anthony Fargnoli
