Analyzing House Prices Data: New Housing Option for Students
Introduction
Ames, Iowa is a small college town that is home to Iowa State University (ISU). Close to half the town's population comprises students and staff of the university. University housing is a challenge in terms of both availability and cost: at most universities, data show that more than a third of yearly college cost goes to room and board (dorms). There is therefore a need to quickly expand the available student housing options.
Wealthy universities situated in cities, like NYU, have found a solution that works for them: buying and transforming entire neighborhoods to accommodate their students. Colleges situated in small towns, however, have not yet found a comparable solution. We propose a new approach in which a college uses machine learning to predict house prices in its surrounding area and thereby identify the cheaper homes that can be converted into student housing.
Problem Definition and Algorithm
1. Task Definition
We use the Ames, Iowa dataset to train a model that can accurately predict the sale prices of a given list of houses from their features. We then feed the priced list into a pipeline that identifies houses the university could purchase and convert into student housing.
2. Algorithm
Since we wanted to keep the model interpretable, we decided that the final model would be a multiple linear regression, whose coefficient values give clear insight into how each feature affects the sale price of a house. However, we used more complex models in the preprocessing part of the project to solve some of its more challenging aspects.
Methodology
1. Exploratory Data Analysis
The Ames housing dataset has 2,580 observations and 80 features. Of the 80 features, 38 are numerical and 42 are categorical. The following are some of the key characteristics of the data and their implications:
(a) Number of Features
With 80 features, of which 42 had to be "dummified" (one-hot encoded), we were looking at over 153 features. With such a large number of features, we had to consider the "curse of dimensionality": when the ratio of observations to features is small, the standard errors of the coefficients grow and the model becomes less trustworthy in general. Paring down features was therefore the primary task before any prediction could be done.
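To make the dimensionality concrete, dummification with pandas looks roughly like the sketch below (the file path is hypothetical; the column split mirrors the 38/42 breakdown above):

```python
import pandas as pd

# Load the Ames housing data (path is hypothetical)
df = pd.read_csv("ames_housing.csv")

# Split columns by type: 38 numerical and 42 categorical in our data
categorical_cols = df.select_dtypes(include="object").columns
numerical_cols = df.select_dtypes(exclude="object").columns

# One-hot encode ("dummify") the categorical features, dropping one
# level per feature to avoid introducing perfect collinearity
dummies = pd.get_dummies(df[categorical_cols], drop_first=True)
X = pd.concat([df[numerical_cols], dummies], axis=1)

print(X.shape[1])  # on the order of 153+ columns once every level is expanded
```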
(b) Highly Collinear Features
As is usually expected with a large number of features, there is a lot of collinearity among them. A high level of collinearity inflates the standard errors of the regression coefficients and makes their estimates unreliable. To avoid this, features must either be dropped or combined.
(c) Skewed Distribution and Non-Linearity
The target variable SalePrice has a distribution that is skewed to the right. We also noticed that many of the variables did not meet key assumptions of linear regression, i.e., a linear relationship with the target variable and constant variance of the error terms. The plots below show some examples.
We therefore decided to use the log of sale price as the target variable to address the issue. The results are shown below:
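In code, the transformation is a one-liner; this sketch assumes the dataframe from above with its SalePrice column:

```python
import numpy as np

# Log-transform the right-skewed target so the residuals are closer to
# normal and the constant-variance assumption becomes more plausible
df["LogSalePrice"] = np.log(df["SalePrice"])

# Predictions made on the log scale are mapped back with np.exp
```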
(d) Data Missingness
The feature data has many missing values. A methodical approach to imputing them is essential to ensure that biases are not introduced into the dataset. The team decided on the following strategy for imputation (sketched in code after the list):
- Determine whether the missing feature had a related variable. If a related variable had a meaningful value that explained the missing one, the missing value was imputed with the mean (for numerical variables) or the mode (for categorical variables) of the houses in the same neighborhood.
- If the missing feature did not have a related variable, it was imputed with 0 for a numerical variable or a "not available" category for a categorical variable.
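A minimal sketch of this two-branch strategy is below; LotFrontage and Neighborhood are real Ames columns, but the grouping of columns into the three lists is our own illustration:

```python
def impute(df, related_numeric, standalone_numeric, standalone_categorical):
    """Apply the two-branch imputation strategy described above."""
    df = df.copy()

    # Branch 1: a related variable (here, the neighborhood) explains the
    # missing value, so impute with the neighborhood mean; categorical
    # features would use the neighborhood mode instead
    for col in related_numeric:
        df[col] = df.groupby("Neighborhood")[col].transform(
            lambda s: s.fillna(s.mean())
        )

    # Branch 2: no related variable, so impute 0 for numerical features
    # and a "not available" level for categorical ones
    df[standalone_numeric] = df[standalone_numeric].fillna(0)
    df[standalone_categorical] = df[standalone_categorical].fillna("not_available")
    return df

clean = impute(
    df,
    related_numeric=["LotFrontage"],            # explained by Neighborhood
    standalone_numeric=["GarageArea"],          # hypothetical assignment
    standalone_categorical=["PoolQC", "Alley"], # NA means no pool / no alley
)
```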
Workflow
Before feature selection, it is essential that the training and test data be built independently. This ensures that no information leaks between the two sets, which would otherwise make performance on the test data look deceptively good. Keeping independent pipelines for building the training and test data was an essential foundation for the project. The following workflow was adopted:
Once the team had a strong pipeline to build clean data, they were able to build training and test data on the fly. The resulting clean data was then fed to the models being studied.
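The discipline that matters most is fitting every cleaning statistic (means, modes, scalers) on the training split only and then applying it unchanged to the test split. A minimal scikit-learn sketch, assuming the feature matrix X and target y from the steps above:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Split first, before any statistic is computed from the data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit the transformer on the training data only...
scaler = StandardScaler().fit(X_train)

# ...then apply the learned parameters to both splits, so nothing
# about the test set ever influences the training pipeline
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```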
Feature Selection
The team selected two models for feature selection: (1) Lasso and (2) random forest. Lasso is a linear model that selects features via shrinkage: it drives the coefficients of insignificant features down to zero, allowing us to eliminate them from the model. Random forest, on the other hand, is an ensemble method based on decision trees: many trees are built, each on a random sample of observations with a random subset of features considered at each split, and the predictions are averaged across the trees; features can then be ranked by importance. Random forest can be very accurate, but it is prone to overfitting and can be CPU intensive.
Next, we compare both methodologies.
Lasso (Shrinkage Method)
We ran Lasso using the Python scikit-learn package. We first performed a grid search to find the optimal alpha value. The following graph shows that the optimal value was around 0.002.
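Our search was equivalent to something like the following sketch (the grid, cross-validation settings, and variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Cross-validated grid search over the regularization strength alpha
grid = GridSearchCV(
    Lasso(max_iter=10_000),
    param_grid={"alpha": np.logspace(-4, 0, 50)},
    scoring="neg_mean_squared_error",
    cv=5,
)
grid.fit(X_train_scaled, np.log(y_train))
print(grid.best_params_)  # in our run, roughly {'alpha': 0.002}

# Features whose coefficients were shrunk to exactly zero drop out
kept = X_train.columns[grid.best_estimator_.coef_ != 0]
```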
Using this alpha value, we were able to drive several feature coefficients down to zero. The top 25 features were as follows:
Random Forest (Ensemble Method)
Next, we ran the RandomForestRegressor from scikit-learn. We again performed a grid search to find the optimal hyperparameters for the model. The values were as follows:
Unlike Lasso, which drives coefficients down to 0, random forest simply returns a list of features ranked by importance. The top 25 features selected by random forest were as follows:
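The random forest run followed the same pattern; the hyperparameter grid here is illustrative rather than our exact one:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rf_grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={
        "n_estimators": [200, 500],
        "max_depth": [None, 10, 20],
        "max_features": ["sqrt", 0.3],
    },
    cv=5,
)
rf_grid.fit(X_train, np.log(y_train))

# Unlike Lasso, the forest scores every feature; we keep the top 25
importances = pd.Series(
    rf_grid.best_estimator_.feature_importances_, index=X_train.columns
)
print(importances.sort_values(ascending=False).head(25))
```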
Upon comparing both methodologies, we found the following:
Both models had eleven features in common among their top 25. Running a linear regression on these eleven features yielded an adjusted R2 of about 0.69, which is not good enough for accurate predictions. We also realized that the random forest had overfit, and that we would be better off doing a backward selection with Lasso. We still had a very large number of features (72) with Lasso, and we had not yet studied the multicollinearity among them to determine whether the adjusted R2 was picking up noise. Our next step was therefore to calculate the Variance Inflation Factor (VIF) for the Lasso features. See the graph below:
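The VIFs themselves can be computed with statsmodels along these lines (X_lasso stands for the matrix of 72 Lasso-selected features):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# VIF for feature j is 1 / (1 - R^2) from regressing feature j on all
# the other features; values above ~5 flag problematic collinearity
X_const = sm.add_constant(X_lasso)
vif = pd.Series(
    [
        variance_inflation_factor(X_const.values, i)
        for i in range(1, X_const.shape[1])  # skip the constant column
    ],
    index=X_lasso.columns,
)
print(vif.sort_values(ascending=False))
```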
We found seven variables that violated the VIF threshold of 5. We were able to eliminate some features and bring the VIF down for all variables except GrLivArea. To determine which further variables to eliminate, we decided to run Principal Component Analysis (PCA).
PCA helped us identify variable importance; we manually removed the low-importance variables and brought the VIF for GrLivArea down to 5.4 (as shown in the graph below).
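A sketch of the PCA step with scikit-learn; reading uniformly low loadings as low importance is the judgment call described above:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# PCA is scale-sensitive, so standardize the features first
X_std = StandardScaler().fit_transform(X_lasso)
pca = PCA().fit(X_std)

# Loadings of each original feature on the first five components;
# features with uniformly small loadings are candidates for removal
loadings = pd.DataFrame(
    pca.components_[:5].T,
    index=X_lasso.columns,
    columns=[f"PC{i + 1}" for i in range(5)],
)
print(loadings.abs().max(axis=1).sort_values())
```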
Data Modelling
The final model has 53 features. We fit a multiple linear regression and analyzed the results to make sure we were not violating any assumptions of the model.
The first graph shows the residuals vs. fitted values and indicates no violation of the homoscedasticity assumption. The second graph shows that the errors are normally distributed. We also checked for outliers or high-leverage points affecting the model; the results are shown here:
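The first two diagnostics can be reproduced roughly as follows (model and X_test_final are hypothetical names for the fitted regression and its test design matrix):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

fitted = model.predict(X_test_final)
residuals = np.log(y_test) - fitted  # the model predicts log(SalePrice)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Residuals vs. fitted: a shapeless cloud around zero supports
# the homoscedasticity assumption
ax1.scatter(fitted, residuals, alpha=0.5)
ax1.axhline(0, color="red")
ax1.set(xlabel="Fitted values", ylabel="Residuals")

# Q-Q plot: points hugging the diagonal support normally
# distributed errors
stats.probplot(residuals, plot=ax2)
plt.show()
```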
From these visual tests, we can safely say that there are no such points. The results of the regression are summed up in the following graph:
The plot of predicted values against actual values follows the expected y = x form. The test adjusted R2 was 0.901 and the RMSE was 0.108.
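For reference, adjusted R2 corrects R2 for the number of predictors p via adj R2 = 1 - (1 - R2)(n - 1)/(n - p - 1); both test metrics can be computed as in this sketch:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_pred = model.predict(X_test_final)
y_true = np.log(y_test)  # the target is log(SalePrice)

r2 = r2_score(y_true, y_pred)
n, p = X_test_final.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))

print(adj_r2, rmse)  # in our run: about 0.901 and 0.108
```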
Model Application
Now that we had a good model for predicting house prices, we could identify candidate homes for the university. We built a "predict pipeline" that took in housing data and filtered it to meet the university's criteria for expanding its housing options. The criteria, sketched in code after the list, were as follows:
- Retrieve the 10% of homes with the lowest sale price
- Filter further to meet the university's requirements:
  - Within a 1-mile distance of campus
  - Bedroom-to-bathroom ratio of at most 2
  - Overall condition and quality of fair or above
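A sketch of the filter step is below; PredictedPrice and DistanceToCampus are hypothetical columns added by the pipeline, and we read "fair" as 3 on the 1-10 Ames quality scale:

```python
def candidate_homes(df):
    """Filter priced homes down to the university's criteria above."""
    # Criterion 1: the 10% of homes with the lowest predicted price
    cheap = df[df["PredictedPrice"] <= df["PredictedPrice"].quantile(0.10)]

    # Criterion 2: within a 1-mile radius of campus
    near = cheap[cheap["DistanceToCampus"] <= 1.0]

    # Criterion 3: bedroom-to-bathroom ratio of at most 2
    near = near[near["BedroomAbvGr"] / near["FullBath"] <= 2]

    # Criterion 4: overall condition and quality of fair (3) or above
    return near[(near["OverallCond"] >= 3) & (near["OverallQual"] >= 3)]
```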
We ran the pipeline on the entire housing dataset; the first criterion yielded the following properties (the circle indicates a 1-mile radius around the university):
Additional filtering on the second set of criteria yielded four homes, as follows:
Looking Forward and Next Steps
With an adjusted R2 of 0.901, additional work remains to increase the accuracy of the final model. We would also like to develop a more user-friendly interface that lets the university enter its criteria for finding homes that meet its requirements.