Regression Model Efficacy on the Ames Housing Data Set

Alexander Bruening and Ryan Willett
Posted on Jun 4, 2019

The Ames Housing Data Set is popular largely due to its use as an introductory competition on the data science competition website Kaggle. In the competition, users are challenged to minimize the Mean Squared Log Error (MSLE) on a test set whose target values are withheld from the publicly available data set. This data set is particularly valuable because it has features that make both linear regression models and tree-based regression models appealing. Here we study the efficacy of different models applied to this problem and discuss their tradeoffs.

Introduction

The Ames Housing data set concerns housing sales in Ames, Iowa, with 80 features used to predict sale price. These features can be divided into nominal, ordinal, and numerical types. We provide a summary of these variables below; further information about what the variable names correspond to can be found here.

For the purposes of the competition, Kaggle provides a training data set of 1460 house sales described by the aforementioned 80 features along with their corresponding sale prices. A test data set of 1459 observations across the same 80 features has the sale price withheld; contestants predict these values and submit them to Kaggle for scoring.

This division defines the classic train/test split used by data scientists to assess model performance. A model with high variance will perform well on the training data set and under-perform on the provided test set. Theoretically, it is possible to fit a model that perfectly describes the training data with zero error (evaluated by MSLE) as long as no two observations share identical feature values but different values of the target variable.

However, this model will perform poorly on the testing data because the underlying relationship between the sales price and the independent features is not captured by the model. Instead, an over-fit model is an encoding that maps the feature space of the training data directly into the observed sale prices in the training data with no regard for the testing data.

Consequently, contestants must properly bias their models to reduce over-fitting to the training data set, so that the information in the features describing trends also present in the testing data can be harnessed to yield good predictions, defined by low error in Kaggle's evaluation.
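To make the evaluation concrete, here is a minimal sketch of computing MSLE with scikit-learn on a few hypothetical predictions (the prices shown are placeholders, not competition data):

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error

# Hypothetical observed and predicted sale prices, for illustration only.
y_true = np.array([208500.0, 181500.0, 223500.0])
y_pred = np.array([200000.0, 185000.0, 230000.0])

# Mean Squared Log Error: the mean squared difference of log(1 + price).
msle = mean_squared_log_error(y_true, y_pred)

# Kaggle's leaderboard for this competition reports an error on the log
# scale, so the square root of this quantity is the commonly quoted number.
print(msle, np.sqrt(msle))
```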

Here we will discuss this data set, modifications applied to it, and the impact of both previous items on the outcomes of our fit models for a variety of different regression techniques.

Data Exploration

The classic approach to regression problems is linear regression, a centuries-old technique that uses simple lines to predict values from input variables. In order for linear models to work well, the underlying relationship between the dependent and independent variables must be linear. Interestingly, the relationship between the features and the sale price target variable is remarkably linear, as can be inferred from this plot:

Sale Price of houses in the training data set against their above ground square footage colored by neighborhood. Notice the two outliers in the bottom right.

The conic pattern is due to the underlying dependence on the neighborhood the house is in. The facet plot below demonstrates that, once neighborhood is accounted for, this relationship is strongly linear. Unfortunately, basic linear models do not account well for this behavior, which is handled much more naturally by tree-based regression models.

Facet plot by neighborhood of Sale Price against above ground living area. Titles are abbreviations for neighborhoods in Ames, Iowa.

Knowing that there is a linear relationship, the next question is which variables most strongly correlate with the response. These variables are our strongest predictors for a linear model. Graphically, this can be captured by a correlation heat map.

Heat map of linear correlation between variables. Green indicates positive correlation while pink indicates negative correlation.

Deep colors on the bottom row show strong linear predictors for the target variable. We immediately see that "OverallQual" (overall quality, an integer categorical variable) and "GrLivArea" (above ground living area square footage) have the strongest correlations with "SalePrice". Additionally, we see that several independent variables are highly correlated with one another, violating the assumptions of linear regression.

Of particular note are the year the garage was built, the year the house was remodeled, and the year the house was built. Their strong mutual correlation will make it harder to harness the information contained in these variables, and regularization may simply eliminate strongly correlated variables through feature selection. Finally, we can locate variables that are likely completely degenerate, particularly the number of cars that fit in the garage and the square footage of the garage.
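A correlation heat map like the one above can be produced in a few lines; a minimal sketch, assuming the Kaggle training file has been downloaded as train.csv (not the authors' exact plotting code):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # Kaggle training file (path assumed)

# Pearson correlations between the numeric columns, including SalePrice.
corr = train.select_dtypes("number").corr()

plt.figure(figsize=(12, 10))
# A pink-to-green diverging palette distinguishes negative from positive.
sns.heatmap(corr, cmap="PiYG", center=0, square=True)
plt.tight_layout()
plt.show()
```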

Missingness and Imputation

Large quantities of missing data are accounted for in the documentation associated with the data set. Standard missing values indicate that the feature is not present in the house. In the plot below, it is apparent that the missingness in the basement variables is correlated and corresponds in reality to houses lacking a basement. The same can be said for variables denoting the presence of an alley bordering the house, a pool, and fireplaces. These values can simply be marked as missing, or the missingness can be represented by an additional boolean feature.

A plot of the missing value locations over rows against columns of the training data set. White marks indicate missing values.

Numerical variables that are not associated with the previously mentioned categorical variables were scarcely missing. The greatest exception to that trend is the "LotFrontage" variable, which measures the length of the property's border with the street. It is unclear whether this missingness is intentional or indicative of a deeper pattern.

These values were median-imputed per neighborhood, along with an additional boolean feature indicating that they were imputed. This missingness flag is handled well by tree models, which can capture any nonlinear behavior associated with the imputed quantity.
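A minimal sketch of this imputation strategy for "LotFrontage", assuming the training data is in a DataFrame named train (illustrative, not the authors' exact code):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # path assumed

# Record the missingness as a boolean feature before imputing,
# so tree models can still react to the fact that the value was absent.
train["LotFrontage_missing"] = train["LotFrontage"].isna().astype(int)

# Median-impute LotFrontage within each neighborhood.
train["LotFrontage"] = (train.groupby("Neighborhood")["LotFrontage"]
                             .transform(lambda s: s.fillna(s.median())))
```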

Feature Engineering

The target variable fit is not the sale price but the logarithm of the sale price. This operation serves to normalize the response variable distribution as well as reduce the effect of large sale prices on the fit value. The scoring metric is likewise computed on the logarithm of the sale price rather than on the raw value. In the plot below of the distribution before transformation, the sale price is clearly strongly right-skewed, which is effectively corrected by the log transform. The log sale price was finally standardized for model training.

Probability density plots of the observed sale price (left) and the sale price after applying a logarithmic transform (right).
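A minimal sketch of the target transformation described above (column names from the Kaggle data; the standardization constants would be reused to invert predictions):

```python
import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")  # path assumed

# log1p reduces the right skew while keeping the transform invertible.
log_price = np.log1p(train["SalePrice"])

# Standardize the log target for model training.
mu, sigma = log_price.mean(), log_price.std()
y = (log_price - mu) / sigma

# Predictions on this scale are mapped back to dollars with:
# price = np.expm1(y_pred * sigma + mu)
```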

Some features encoded as strings reflect ordinal rankings. These features were mapped to numerical rankings, which could be fed directly into all of the models used without one-hot encoding. This process could negatively impact the linear models if the relationship were not linear, while speeding up the training of tree models due to the reduced dimensionality from avoiding one-hot encoding. This mapping is described in the table below.
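As an illustration of this kind of encoding, here is a sketch using the shared Ex/Gd/TA/Fa/Po quality scale from the data documentation (the columns and integer values shown are examples, not the authors' full mapping table):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # path assumed

# Several quality/condition ratings share the same ordinal scale.
quality_map = {"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1}

ordinal_cols = ["ExterQual", "ExterCond", "KitchenQual", "HeatingQC"]
for col in ordinal_cols:
    train[col] = train[col].map(quality_map)
```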

Additional boolean variables were created for the presence of a basement, the presence of a second floor, and whether the house was being sold new. Additional features built as linear combinations of other variables were inspired by statistics often used to describe houses for sale, including total square footage and total number of bathrooms (in half-integer units). Features describing the sale date were transformed so that temporal variation could be modeled sinusoidally to reflect periodicity over months.
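A minimal sketch of a few such engineered features, using the Kaggle column names (the exact combinations the authors used may differ):

```python
import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")  # path assumed

# Combination features inspired by common real-estate statistics.
train["TotalSF"] = train["TotalBsmtSF"] + train["1stFlrSF"] + train["2ndFlrSF"]
train["TotalBath"] = (train["FullBath"] + 0.5 * train["HalfBath"]
                      + train["BsmtFullBath"] + 0.5 * train["BsmtHalfBath"])

# Encode the sale month on a circle so December and January are adjacent.
train["MoSold_sin"] = np.sin(2 * np.pi * train["MoSold"] / 12)
train["MoSold_cos"] = np.cos(2 * np.pi * train["MoSold"] / 12)
```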

As previously mentioned, the year the house was built is highly correlated with both the year the garage was built and the year the house was remodeled. In fact, the year built effectively places a lower bound on the latter two variables. In order to capture the information in these variables, two new variables were derived as the difference between the year of garage construction (or remodel) and the year of house construction.

The derived variables capture a linear correlation with the sale price of the house, at the expense of the behavior around zero, the region corresponding to construction in the same year as the original house. While proper decision boundaries allow tree models to capture this behavior, standard linear models are incapable of accommodating such deviations from linearity.

Engineered feature describing years a garage was constructed after the original house against sale price, colored by neighborhood. The behavior at 0 years is highly nonlinear.
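A minimal sketch of these derived age features (column names from the Kaggle data; "GarageYrBlt" is assumed to have been imputed for houses without a garage):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # path assumed

# Express garage construction and remodeling relative to the build year,
# which removes most of their collinearity with YearBuilt.
train["GarageAge"] = train["GarageYrBlt"] - train["YearBuilt"]
train["RemodAge"] = train["YearRemodAdd"] - train["YearBuilt"]
```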

Finally, categorical features were one-hot encoded for compatibility with the models used. Numerical variables were power-transformed and robust-scaled for normalization and resistance to the effects of outliers. The code used for the treatment described in this section can be found in this GitHub repository.
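A minimal sketch of this preprocessing with scikit-learn transformers (the column lists are placeholders standing in for the full engineered feature set):

```python
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, PowerTransformer, RobustScaler

# Placeholder column lists; the real lists come from the engineered frame.
numeric_cols = ["GrLivArea", "TotalBsmtSF", "LotFrontage"]
nominal_cols = ["Neighborhood", "MSZoning", "SaleCondition"]

preprocess = ColumnTransformer([
    # Power-transform then robust-scale the numeric features.
    ("num", Pipeline([("power", PowerTransformer()),
                      ("scale", RobustScaler())]), numeric_cols),
    # One-hot encode the nominal categorical features.
    ("cat", OneHotEncoder(handle_unknown="ignore"), nominal_cols),
])
```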

Modelling

Different models assign feature importance based on how well they are able to capture the relationship between a feature and the target variable. Below is a feature importance ranking computed from a random forest model on the engineered data set.

Feature importance found by random forest regression. Other tree models will capture the same features.
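A minimal sketch of how such a ranking can be computed with scikit-learn (the quick dummy encoding here stands in for the full feature engineering described above):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

train = pd.read_csv("train.csv")          # path assumed
y = np.log1p(train["SalePrice"])          # log target, as described earlier

# Rough encoding purely for ranking features; the real pipeline is richer.
X = pd.get_dummies(train.drop(columns=["Id", "SalePrice"])).fillna(0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)

importances = (pd.Series(rf.feature_importances_, index=X.columns)
                 .sort_values(ascending=False))
print(importances.head(15))
```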

Notably, the "OverallQual" variable, square footage variables, and garage size variables that are strong predictors for linear models are absent from this table. This works both ways: tree models struggle to capture simple linear relationships that explain large amounts of response variance, though more complex ensemble models can eventually do so by perturbatively approaching linear boundaries, at the expense of potential over-fitting.

Linear models, in turn, do not describe categorical features well and fail to include hierarchical information, because such features do not linearly correlate with the response and their effect depends on other feature variables.

The models presented here, with the exception of KNN, were given the full set of features and allowed to perform feature selection on their own. Anecdotally, the regularization provided by these models, following effective hyperparameter searches with cross-validation, determines feature inclusion and importance more rapidly than manual selection, despite the increase in model training time.

Regularized Linear Regression (ElasticNet)

ElasticNet is one of the simplest forms of regularized linear regression. Two hyperparameters define the model: alpha, the regularization magnitude, and the L1 ratio, which controls the tendency to eliminate variables rather than just reduce the magnitudes of fitted coefficients. In fitting an ElasticNet model to our data, we found experimentally that both the regularization magnitude and the L1 ratio are small in an optimal model. This reflects the wealth of observations available to fit a relatively simple model, compared to the dimensionality of the feature space.

Additionally, performance may be gained by engineering more features, even to the extent that the model must then be regularized more heavily. This is inherently limited by the breadth of the search space explored, but the strengths of this model are its rapid fitting process and the low dimensionality of its hyperparameter space. Consequently, it is easy to have confidence in the hyperparameter choice after a relatively short search.
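A minimal sketch of such a search with scikit-learn, assuming X and y are the preprocessed feature matrix and standardized log target (the grid values are illustrative, not our tuned settings):

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# X, y: preprocessed features and standardized log sale price (assumed).
param_grid = {
    "alpha": np.logspace(-4, 0, 20),
    "l1_ratio": [0.01, 0.05, 0.1, 0.3, 0.5],
}
search = GridSearchCV(ElasticNet(max_iter=10000), param_grid,
                      scoring="neg_mean_squared_error", cv=5)
search.fit(X, y)
print(search.best_params_, np.sqrt(-search.best_score_))
```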

For this particular problem, as previously indicated, linear models are particularly powerful because of the strong linear correlation between the response and the predictive variables. We were able to obtain a Kaggle score of 0.12846, making it, by this metric, the most performant of the models that we present in this posting. The greatest weakness of this model is the high dependence of its strong predictors on other features that are not well utilized by a linear model.

Chiefly, this is the effect the neighborhood has on the house price, which ideally would manifest itself in the relationship that strong predictors like house square footage have with the sale price, as was presented previously.

However, regularized simple linear regression such as ElasticNet is not capable of capturing this relationship due to the lack of linear correlation between these features; more advanced models such as hierarchical linear models are necessary to fully capture this behavior. Additional feature engineering to produce interaction features may further reduce the assessment score for this model.

Ridge Regression

The ElasticNet model was found not to rely strongly on L1 ("lasso") regularization. Consequently, a purely L2-regularized ("ridge") model was studied as a point of comparison. At the cost of a larger value for the single regularization hyperparameter used by this model, ridge regression performed comparably on the assessment metric with a Kaggle score of 0.12486, only a slight deviation from the score achieved by the more complex ElasticNet model. This reinforces the interpretation of L1 regularization as a weak effect when producing a well-regularized linear regression model on this data set.
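A minimal sketch of tuning the single ridge penalty with cross-validation (again assuming X and y as above; the alpha range is illustrative):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# X, y: preprocessed features and standardized log sale price (assumed).
ridge = RidgeCV(alphas=np.logspace(-2, 3, 50), cv=5)
ridge.fit(X, y)
print("chosen alpha:", ridge.alpha_)
```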

Gradient Boosted Regression Trees

Gradient boosted trees are an ensemble model that can take advantage of the hierarchical structure present in this data set by splitting on these features early in the construction of each tree. Over a large number of trees, this model is able to perturbatively improve the decision boundary used to derive predictions and capture linear effects, nonlinear effects, and feature interactions for prediction of the response variable.

The consequence is that a large number of trees must be used to learn these behaviors, making the model susceptible to over-fitting. The hyperparameter space for gradient boosted tree models is also significantly larger than that of linear regression models. Combined with the additional time complexity of training individual models, searching for and fitting an optimal model takes much longer than for the previous linear models.

We found experimentally that a large number of estimators is needed to train on this data set; however, the trees used were shallow and highly regularized to compensate for over-fitting on a simple linear relationship. Ultimately we achieved a score of 0.12597 with this model.
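A minimal sketch of a model in this spirit using XGBoost's scikit-learn wrapper, with shallow, regularized trees and many estimators (the values shown are illustrative, not our tuned hyperparameters; X and y as above):

```python
from xgboost import XGBRegressor

# X, y: preprocessed features and standardized log sale price (assumed).
gbm = XGBRegressor(
    n_estimators=3000,     # many boosting rounds...
    learning_rate=0.01,    # ...each contributing a small correction
    max_depth=3,           # shallow trees
    subsample=0.7,         # row subsampling for regularization
    colsample_bytree=0.7,  # column subsampling for regularization
    reg_lambda=1.0,        # L2 regularization on leaf weights
)
gbm.fit(X, y)
```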

Another package for training gradient boosted trees is Microsoft's LightGBM, which improves upon the basic gradient boosted tree algorithm. Those improvements come with an even higher-dimensional hyperparameter space to search and multiple different ensembling methods. The algorithmic improvements provide faster tree training in comparison to XGBoost, along with good out-of-the-box performance. Ultimately we achieved a slightly better score of 0.12577 with this model.
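An analogous sketch with LightGBM's scikit-learn interface (hyperparameter values are again illustrative):

```python
from lightgbm import LGBMRegressor

# X, y: preprocessed features and standardized log sale price (assumed).
lgbm = LGBMRegressor(
    n_estimators=3000,
    learning_rate=0.01,
    num_leaves=8,           # LightGBM grows leaf-wise; this caps complexity
    min_child_samples=20,   # minimum observations per leaf
    subsample=0.7,
    colsample_bytree=0.7,
)
lgbm.fit(X, y)
```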

For both of these algorithms improvements can likely be had at the price of significantly more time spent on the hyperparameter search.

K-Nearest Neighbors (KNN)

Beyond the tree and linear regression models, we implemented a KNN model to compare how a model that is highly dependent on feature space dimensionality would perform on this data set. Models based on KNN have the advantage of resiliency to noisy data and nonlinear features. Due to their reliance on geometric distance, however, KNN models are particularly fragile in high-dimensional feature spaces.

For this reason, we predetermined feature importance to constrain the number of variables put into the model: numerical features (continuous and discrete) were assessed by lasso regression and the top 15 influential categorical variables (nominal and ordinal) were selected by random forest. A primary challenge with this model was selecting the optimal method for dealing with the distance metric for categorical variables.

Ultimately, the best performing model was the one in which the nominal categorical variables were one-hot encoded and combined with normalized and scaled numerical variables. Importantly, this model underestimated the importance of the neighborhood feature in the data set. We attempted to weight the distance metric applied to the categorical features by the standardized percent increase in MSE from the random forest (the feature importance metric), but this degraded the model.
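A minimal sketch of fitting and tuning the KNN regressor on the reduced feature set (here called X_knn; the grid is illustrative):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

# X_knn: lasso-selected numerics plus one-hot encoded top categoricals,
# scaled so that Euclidean distance is meaningful; y as above (assumed).
param_grid = {"n_neighbors": list(range(3, 31, 2)),
              "weights": ["uniform", "distance"]}
knn = GridSearchCV(KNeighborsRegressor(), param_grid,
                   scoring="neg_mean_squared_error", cv=5)
knn.fit(X_knn, y)
print(knn.best_params_)
```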

The final version of this KNN model had middling performance on prediction of the Kaggle set with a score of 0.1623, likely due to the sacrifice of features for the sake of dimensional reduction.

Support Vector Regression (SVR)

Support vector regressors provide another method of studying linear relationships while also being adaptable to nonlinear relationships using other kernels. Using SVR we were able to confirm that our data was properly linear (or linearized) by observing a performance loss when adopting polynomial kernels in comparison to linear. However, radial basis functions were observed to provide the best performance of the three kernels commonly used with this technique.

Intuitively, the strong linear relationships we observed would motivate the use of linear kernels. However, the one-hot encoded variables introduce clusterings around their binary values that are well handled by support vectors using RBF kernels. SVR with an RBF kernel evaluated well against the assessment metric, with a Kaggle score of 0.12686.
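A minimal sketch of an RBF-kernel SVR with a small grid over the main hyperparameters (values illustrative; X and y as above):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# X, y: preprocessed features and standardized log sale price (assumed).
param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": ["scale", 0.001, 0.01, 0.1]}
svr = GridSearchCV(SVR(kernel="rbf", epsilon=0.01), param_grid,
                   scoring="neg_mean_squared_error", cv=5)
svr.fit(X, y)
print(svr.best_params_)
```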

We conclude that SVR with RBF kernels provides a third method of dealing with the relationship between linear numerical variables and one-hot encoded categorical features. Further gains may be possible through feature elimination as the regularization used to prevent over-fitting in SVR is done by biasing the model rather than selecting features as in linear regression or decision trees.

Neural Network

The nonlinear interactions between variables that are not captured by linear models potentially favor a neural network, which may learn both the nonlinear interactions and the linear dependence even with a small network. We constructed a neural network with two hidden layers of 10 nodes each, using a logistic activation function. Although the predictions for the majority of points were relatively accurate, the model made some very poor outlier predictions, which diminished its prediction performance on the test data set, for a final Kaggle score of 0.19072.
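A minimal sketch of such a network with scikit-learn's MLPRegressor; the architecture matches the description above, while the solver and iteration settings are illustrative choices:

```python
from sklearn.neural_network import MLPRegressor

# X, y: preprocessed features and standardized log sale price (assumed).
nn = MLPRegressor(hidden_layer_sizes=(10, 10),  # two hidden layers of 10 nodes
                  activation="logistic",        # logistic (sigmoid) activation
                  solver="lbfgs", max_iter=5000, random_state=0)
nn.fit(X, y)
```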

We attribute the low performance of the neural network to the relatively small number of observations in comparison to the numerous variables and internal weights: enough to learn the training set, but not enough to prevent over-fitting and preserve performance on the test set. Future work would involve reducing the node count of the network and applying regularization techniques, though this is unlikely to yield much benefit given the relatively small number of observations for a model of this complexity.

Conclusion

Various models were studied for their predictive power on the Ames Housing data set. Linear models were found to be highly effective and easily tunable but unable to account for some elements of the data set. Tree models account for these variables at the expense of others and at the cost of a much larger hyperparameter search space, limiting their final performance. Further work is necessary to ameliorate over-fitting as well as to provide a feature space for optimal performance in evaluation.

Summary of Results

The Kaggle scores achieved by the models discussed above were:

  • Regularized Linear Regression (ElasticNet): 0.12846
  • Ridge Regression: 0.12486
  • Gradient Boosted Regression Trees: 0.12597
  • LightGBM: 0.12577
  • Support Vector Regression (RBF kernel): 0.12686
  • K-Nearest Neighbors (KNN): 0.1623
  • Neural Network: 0.19072

Example code, plotting commands, and the full data work-up can be found at this GitHub repository.

Future Direction

Further study of this data set could improve model performance; avenues of interest include:

  • Addition of more features
  • Feature selection for some models
  • Hierarchical linear models
  • Model stacking

About Authors

Alexander Bruening

Alex graduated from the University of Toronto in 2018 with a Master's of Science in Physics following a multidisciplinary degree from Rensselaer Polytechnic Institute. His focus was in interpretations of physical models and hopes to extend that knowledge...
View all posts by Alexander Bruening >

Ryan Willett

Ryan completed the NYCDSA program in June 2019. He holds a PhD in Pharmacology and Molecular Signaling from Columbia University, and BS and BA in Biology and Biochemistry, respectively, from Brandeis University. After a postdoctoral research fellowship in...
View all posts by Ryan Willett >
