Data Analysis on Profitable NYC Property

Posted on Jun 12, 2021

Background

In this project, I set out to find profitable or undervalued properties to invest in by:

- looking into a dataset of past NYC property sales

- finding out what makes a property profitable

- finding out what makes a profitable property different from other properties

This project aims to help anyone who wants to invest in real estate and is looking for insights to make a more informed decision: a real estate company looking to expand, property flippers looking for their next home to refurbish, or a regular person on the street trying to build their assets.

The dataset I used came from Kaggle but originates from the website of the NYC Department of Finance. It covers property sales in NYC from 2016 to 2017 and consists of 22 variables and nearly 85,000 observations.

Define Profitable Property

I defined a 'profitable property' in two ways:

- a property that sold for less than the average sale price of properties in the same neighborhood and of the same type (one-bedroom apartment, studio, house, etc.)

- a property whose value is increasing and whose mortgage payment is less than the average rent for that neighborhood and type

Unfortunately, this dataset does not contain the information needed to compute average rent by neighborhood and property type.

For this project, I therefore focused on the first definition: properties with a below-average sale price for their neighborhood and type.

Data Cleaning

Two pieces of documentation were provided for this dataset, the Glossary of Terms and the Building Classifications Glossary. They showed that a lot of data cleaning would be required.

To clean the data, I had to:
- change the variable names for ease of use
- remove nonsensical and unrelated variables
- change the sale_price, gross_sqft and land_sqft variables to numeric and remove observations with null values for any of them
- remove observations with sale prices below $100,000 (this threshold can be adjusted if needed), since many properties had been passed down or inherited for $0 or sold to a family member for a nominal price; this also removed garage parking sales
- change the borough codes to their proper names
- keep only the top 100 neighborhood/building-class groups by number of properties sold (groups with too few properties cannot reveal reliable trends)
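
The post does not say which language the analysis was done in. Below is a minimal pandas sketch of this cleaning pipeline, assuming Python; the file name and original column names are taken from the Kaggle version of the dataset and should be treated as assumptions:

    import pandas as pd

    # Hypothetical file name; the Kaggle download may be named differently.
    df = pd.read_csv("nyc-rolling-sales.csv")

    # Rename variables for ease of use (illustrative subset).
    df = df.rename(columns={
        "SALE PRICE": "sale_price",
        "GROSS SQUARE FEET": "gross_sqft",
        "LAND SQUARE FEET": "land_sqft",
        "NEIGHBORHOOD": "neighborhood",
        "BUILDING CLASS CATEGORY": "building_class",
        "BOROUGH": "borough",
    })

    # Coerce key variables to numeric and drop rows with nulls in any of them.
    for col in ["sale_price", "gross_sqft", "land_sqft"]:
        df[col] = pd.to_numeric(df[col], errors="coerce")
    df = df.dropna(subset=["sale_price", "gross_sqft", "land_sqft"])

    # Drop nominal transfers ($0 inheritances, family sales) and garage sales.
    df = df[df["sale_price"] >= 100_000]

    # Replace borough codes with their proper names.
    df["borough"] = df["borough"].map(
        {1: "Manhattan", 2: "Bronx", 3: "Brooklyn", 4: "Queens", 5: "Staten Island"}
    )

    # Keep only the 100 neighborhood/building-class groups with the most sales.
    group_cols = ["neighborhood", "building_class"]
    top_groups = df.groupby(group_cols).size().nlargest(100).index
    df = df[df.set_index(group_cols).index.isin(top_groups)]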

I ended up with only a little over 8,200 observations.

Feature Engineering

To do a proper analysis, I needed to account for the vast price differences caused by neighborhood and building class. Without accounting for this, I would mostly be comparing high-price homes to low-price homes, and the expected, obvious result would be that larger homes and homes in certain neighborhoods have higher prices. For example, a home in Chelsea and a home in Flushing sell for very different prices, as do a one-family home and a three-family home.

For this reason, I added the following variables, each computed as the average within an observation's neighborhood and building class group:

  • avg_sale_price
  • sd_sale_price
  • avg_residential_units
  • avg_commercial_units
  • avg_total_units
  • avg_land_sqft
  • avg_gross_sqft
  • avg_year_built

I then divided each observation's value by the corresponding group average for its neighborhood and building class, yielding another set of variables:

  • ratio_sale_price
  • ratio_residential_units
  • ratio_commercial_units
  • ratio_total_units
  • ratio_land_sqft
  • ratio_gross_sqft
  • ratio_year_built

These normalized ratio variables are what I later used in my exploratory data analysis.
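
Continuing the sketch above, these group averages and ratios could be computed with grouped transforms, assuming the unit, square-footage, and year columns were renamed analogously:

    # Averages within each neighborhood/building-class group, plus the
    # per-observation ratios to those averages.
    group_cols = ["neighborhood", "building_class"]
    grouped = df.groupby(group_cols)

    df["sd_sale_price"] = grouped["sale_price"].transform("std")
    for col in ["sale_price", "residential_units", "commercial_units",
                "total_units", "land_sqft", "gross_sqft", "year_built"]:
        df[f"avg_{col}"] = grouped[col].transform("mean")
        df[f"ratio_{col}"] = df[col] / df[f"avg_{col}"]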

With the variables avg_sale_price and sd_sale_price in place, I assigned each observation a flag indicating whether it had a good price. If a property sold for more than one standard deviation below its group's average sale price, i.e. sale_price < (avg_sale_price - sd_sale_price), good_price was set to 1; otherwise, 0.

This conditional can be changed to sale_price < (avg_sale_price - chosen_price_difference), where chosen_price_difference is the amount below the group's average sale price a property would need to be to constitute a profit. Lacking domain knowledge, I used the standard deviation in place of chosen_price_difference.
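
In the same sketch, the flag might look like this, with the group standard deviation standing in for a domain-informed chosen_price_difference:

    # Flag properties priced more than chosen_price_difference below their
    # group's average sale price; here the standard deviation is the default.
    chosen_price_difference = df["sd_sale_price"]
    df["good_price"] = (
        df["sale_price"] < (df["avg_sale_price"] - chosen_price_difference)
    ).astype(int)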

Exploratory Data Analysis

Numerical Data

The numerical variables show little to no influence on the profitability of a property. The box plots and density plots of residential units, commercial units, and total units are all concentrated at a single value, whether or not the property has a good price:

  • ratio_residential_units = 1
  • ratio_commercial_units = 0
  • ratio_total_units = 1

After further investigation, I realized that the top 100 neighborhood/building-class groups contained only one-, two-, and three-family homes. That puts these values in the proper context. All one-family homes have 1 residential unit, 0 commercial units, and 1 total unit; all two-family homes have 2 residential units, 0 commercial units, and 2 total units. Dividing by each group average therefore gives 1 for ratio_residential_units and ratio_total_units, and a null value or 0 for ratio_commercial_units.

For the other three variables, there is only a small difference between the good-price and not-good-price groups. Deeper statistical analysis would be required to determine whether this difference is negligible. The variables are:

  • ratio_land_sqft
  • ratio_gross_sqft
  • ratio_year_built
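
One way to reproduce such plots, assuming seaborn (the post's plotting tool is not stated):

    import matplotlib.pyplot as plt
    import seaborn as sns

    # Side-by-side box plot and density plot for each remaining ratio
    # variable, split by the good_price flag.
    for col in ["ratio_land_sqft", "ratio_gross_sqft", "ratio_year_built"]:
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
        sns.boxplot(data=df, x="good_price", y=col, ax=ax1)
        sns.kdeplot(data=df, x=col, hue="good_price", ax=ax2)
        fig.suptitle(col)
        plt.show()
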
Categorical Data

The categorical variables show some difference between the good-price and not-good-price groups.

To graph this, I computed, for each group, the share of good-priced and not-good-priced properties out of the total properties in that group. Graphing these shares showed that certain groups had higher proportions of profitable properties than others. The tables attached to these graphs list the groups in order of their ratio of good-priced to not-good-priced properties, called ratio_gp. With this, you can see which neighborhoods, building classes, or zip codes to look into and which to avoid.
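
A sketch of how these shares and ratio_gp might be tabulated for any categorical variable; the exact definitions are assumptions inferred from the description above:

    # Count good- and not-good-priced properties per category, then compute
    # the share of good prices and the good-to-bad ratio (ratio_gp).
    def ratio_gp_table(df, col):
        counts = df.groupby(col)["good_price"].agg(good="sum", total="count")
        counts["bad"] = counts["total"] - counts["good"]
        counts["good_share"] = counts["good"] / counts["total"]
        counts["ratio_gp"] = counts["good"] / counts["bad"]
        return counts.sort_values("ratio_gp", ascending=False)

    # For example, rank neighborhoods from best to worst odds:
    ratio_gp_table(df, "neighborhood")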

Conclusion

For all the numeric variables, I made box plots and density plots. Each variable showed the same or a very similar distribution between the good-price and bad-price groups, which suggests these variables have little to no effect on whether a property has a 'good price'. To be certain, these differences could be examined further with statistical analysis.

I also made bar graphs for each of the categorical variables. Most categories had a similar ratio of good-price to bad-price properties, but comparing the top and bottom five shows a more noticeable difference. A few categories in each variable stood out above the others:

- borough: Bronx
- neighborhood: Port Richmond, Williamsburg
- building class: A9, B3
- zip code: 10303, 11377

However, the accompanying tables did a better job of showing where not to buy property: the groups with the lowest ratio_gp had very poor odds of containing a profitable property.

These categories are where the difference between the good-price and bad-price groups is largest. More analysis is required to know how reliable this information is.

Future Work

If I had more time, I would gather data from the last few years and see whether I get similar results; I suspect they would differ, since there were not enough observations in the good-price group to draw strong conclusions. I would also scrape housing websites to gather more variables to compare across the good- and bad-price groups.

I would also like to try a different approach: calculate the sale price per gross square foot and analyze all the variables against that.

About Author

Nixon Lim

I am a data science fellow at NYC Data Science Academy with a Bachelor's in Mathematics and Psychology. I am looking for opportunities to improve efficiency and maximize resource utilization using data visualization and statistical analysis.
