NYC Real Estate Inventory

Posted on Oct 20, 2018

As a native New Yorker who has moved several times in the past eight years, I know that moving in the city is a difficult and tiring process. Whether you are renting or buying, the task of finding a listing online, seeing the apartment, and securing it is arduous. With more than 8.5 million inhabitants and a limited inventory of living spaces, NYC is a market where a newly listed apartment can be snapped up within days or even hours. That's why, for my project, I wanted to scrape the inventory of a real estate agency. I chose Corcoran Group, which has thousands of listings across the five boroughs of New York City. For this project, though, I was only concerned with two of them: Manhattan and Brooklyn. My objective was to find what's available in the two boroughs and analyze it to provide useful insight for those in the market for an apartment. From the neighborhood and bedroom analysis of these two boroughs, it's possible to extrapolate generalizations about the larger NYC real estate market.

Data Collection/Cleaning

After reading the HTML code for the Corcoran search results, it was evident that the site used AJAX, which meant that the Scrapy and Beautiful Soup packages were not an option. To scrape the website I used the Selenium package, along with pandas, ggplot, and seaborn for data manipulation, analysis, and visualization. The search results are displayed 36 to a page, with each listing linking to its details page. I wrote my code to visit each page of results, scrape the URL of each listing, and compile those URLs into a list.
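The URL-collection step above can be sketched roughly as follows. This is a minimal illustration, not the post's actual code: the CSS selector `a.listing-link` and the `?page=N` pagination scheme are assumptions, not Corcoran's real markup.

```python
# Hedged sketch of collecting listing URLs across paginated search results.
# The selector and pagination scheme are hypothetical placeholders.

def page_url(base_url, page):
    """Build the URL for one page of search results (hypothetical scheme)."""
    return f"{base_url}?page={page}"

def collect_listing_urls(base_url, n_pages):
    """Visit each results page with Selenium and gather every listing's href."""
    from selenium import webdriver               # imported here so the pure
    from selenium.webdriver.common.by import By  # helper above works without Selenium

    driver = webdriver.Chrome()
    urls = []
    try:
        for page in range(1, n_pages + 1):
            driver.get(page_url(base_url, page))
            # each search result is assumed to be an <a> wrapping the listing card
            cards = driver.find_elements(By.CSS_SELECTOR, "a.listing-link")
            urls.extend(card.get_attribute("href") for card in cards)
    finally:
        driver.quit()
    return urls
```

With 36 results per page, the 1,317 Manhattan listings mentioned below would span roughly 37 pages.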

From that list, my code iterated through each listing URL and scraped the entire listing, keeping only the data for address, neighborhood, price, apartment type (condo/co-op/townhouse), number of bedrooms, bathrooms, total rooms, square footage, and description. I ran into an issue at this step, though: not all listings used the same template as the one below. Some listings included all of the keys I was pulling, but others were missing certain pieces of information.
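The filtering described above (keep only the fields of interest, tolerate missing ones) can be sketched like this. The key names are taken from the list in the post; how the label/value pairs are scraped from the detail page is left out, since that depends on the site's markup.

```python
# Hedged sketch of the per-listing extraction: keep only the wanted keys,
# and let missing fields stay absent rather than guessing values.
WANTED_KEYS = {"address", "neighborhood", "price", "type",
               "bedrooms", "bathrooms", "rooms", "sqft", "description"}

def build_record(scraped_pairs):
    """Filter a dict of scraped label/value pairs down to the fields of interest."""
    return {k: v for k, v in scraped_pairs.items() if k in WANTED_KEYS}
```

A listing missing, say, `sqft` simply produces a record without that key, which is what makes the later "complete listings only" filter necessary.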

I worked through this issue by creating a dictionary of whatever information each listing did have and, where it matched the keys I was most interested in, writing it to a CSV file. The end result was 1,317 Manhattan listings and 553 Brooklyn listings. To clean the data, I had to remove dollar signs and commas from the price and convert it to an integer in order to perform analysis on it. Unfortunately, not all the listings were complete, and using averages or interpolation would not be appropriate. Consequently, I used only listings with data for all keys in my analysis. However, as every listing did include a description, I was able to draw on the complete catalog of listings for my word cloud analysis. The cleaned data set comprised 690 listings in Manhattan and 302 in Brooklyn.
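The two cleaning steps above (stripping `$` and `,` from the price and dropping incomplete listings) can be sketched with pandas. The column name `price` is an assumption for illustration.

```python
# Hedged sketch of the cleaning step: normalize prices and keep only
# complete listings. Column names are assumed, not taken from the actual CSV.
import pandas as pd

def clean_listings(df):
    df = df.copy()
    # "$1,250,000" -> 1250000: strip dollar signs and commas, then cast to int
    df["price"] = (df["price"]
                   .str.replace("$", "", regex=False)
                   .str.replace(",", "", regex=False)
                   .astype(int))
    # keep only complete listings; averages/interpolation would be inappropriate here
    return df.dropna()
```

Dropping rows with any missing field is what shrinks the set from 1,317 to 690 listings in Manhattan and from 553 to 302 in Brooklyn.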

Going forward

This could work as a living document, scraping inventory nightly to maintain an ongoing list of available apartments; trend analysis could then be done on that basis.

Adding a Shiny app would also make it possible to view the graphs for individual neighborhoods under different criteria.


About Author


Kent Burgess

A data scientist with a focus on machine learning, big data, advanced statistics and analytics, R and Python development, data visualization and packages, SQL, and Git/Github. Ambitious professional with a track record of executing strategic business initiatives in...
