Data Study on NYC Real Estate Inventory
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
As a native New Yorker who has moved several times in the past eight years, I know that moving within the city is a difficult and tiring process. Whether you are renting or buying, the task of finding a listing online, seeing the apartment, and securing it is arduous. NYC is home to more than 8.5 million inhabitants but does not have a correspondingly extensive inventory of living spaces, so a newly listed apartment can be snapped up within days or even hours.
For my project, I wanted to scrape the inventory of a real estate agency. I chose the Corcoran Group, which has thousands of listings across the five boroughs of New York City; for this project, I focused on two of them: Manhattan and Brooklyn. My objective was to find what is available in those two boroughs and analyze it to provide useful insight for those in the market for an apartment. From the neighborhood and bedroom analysis of these two boroughs, it is possible to extrapolate generalizations about the larger NYC real estate market.
After reading the HTML code for the Corcoran search results, it was evident that the site used AJAX, which meant the Scrapy and BeautifulSoup packages were not an option. To scrape the website I used the Selenium package, along with pandas, ggplot, and seaborn for data manipulation, analysis, and visualization. The search results are displayed 36 to a page, with each result linking to a detailed listing page. I wrote my code to go to each page of results, scrape the URL of each listing, and compile those URLs into a list.
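A minimal sketch of that URL-collection step is shown below. The base search URL, the `page` query parameter, and the `a.listing-link` CSS selector are assumptions for illustration only; the site's actual markup would need to be inspected first.

```python
# Sketch of the URL-collection step. The base URL, "page" parameter,
# and "a.listing-link" selector are hypothetical placeholders.

def page_url(base: str, page: int) -> str:
    """Build the URL for one page of search results (36 listings per page)."""
    return f"{base}?page={page}"

def collect_listing_urls(base: str, n_pages: int) -> list[str]:
    """Visit each results page with Selenium and gather the listing URLs."""
    from selenium import webdriver            # requires the selenium package
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    urls = []
    try:
        for page in range(1, n_pages + 1):
            driver.get(page_url(base, page))
            # Each search result is assumed to be an <a> pointing at a listing.
            for link in driver.find_elements(By.CSS_SELECTOR, "a.listing-link"):
                urls.append(link.get_attribute("href"))
    finally:
        driver.quit()
    return urls
```

Running this against the real site would require swapping in the actual search URL and selector.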
From that list, my code iterated through each listing URL and scraped the entire listing, keeping only the data for address, neighborhood, price, apartment type (condo/co-op/townhouse), number of bedrooms, bathrooms, total rooms, square footage, and description. I ran into an issue at this step, though: the listings did not all use the same template. Some listings included all of the fields I was pulling, but others were missing certain pieces of information.
I worked around this issue by creating a dictionary of the information each listing did have and, where it matched the keys I was most interested in, writing it to a CSV file. The end result was a data set of 1317 Manhattan listings and 553 Brooklyn listings. To clean the data, I had to remove the dollar signs and commas from the price and convert it to an integer so I could perform analysis on it.
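The dictionary-and-CSV step, together with the price cleaning, can be sketched as follows. The field names and the sample listing are illustrative, not taken from an actual scrape.

```python
import csv

# Fields kept from each listing; any key a listing lacks is left blank.
FIELDS = ["address", "neighborhood", "price", "apt_type",
          "bedrooms", "bathrooms", "total_rooms", "sqft", "description"]

def clean_price(raw: str) -> int:
    """Strip the dollar sign and commas so the price can be analyzed as an int."""
    return int(raw.replace("$", "").replace(",", ""))

def to_row(scraped: dict) -> dict:
    """Keep only the keys of interest, tolerating missing fields."""
    row = {key: scraped.get(key, "") for key in FIELDS}
    if row["price"]:
        row["price"] = clean_price(row["price"])
    return row

def write_listings(path: str, listings: list[dict]) -> None:
    """Write the scraped dictionaries out to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(to_row(listing) for listing in listings)
```

Because `to_row` fills missing keys with empty strings, listings built from different templates all fit the same CSV schema.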
Unfortunately, not all the listings were complete, and using averages or interpolation would not have been appropriate. Consequently, I used only listings with data for every key in my analysis. However, since every listing did include a description, I was able to draw on the complete catalog of listings for my word cloud analysis. The cleaned data set included 690 listings in Manhattan and 302 in Brooklyn.
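With the listings loaded into pandas, filtering down to complete rows is straightforward. This sketch treats empty strings as missing values; the column names mirror the fields described above.

```python
import pandas as pd

def complete_listings(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows missing any field, treating empty strings as missing."""
    return df.replace("", pd.NA).dropna()

# Example with one complete and one incomplete listing:
df = pd.DataFrame({
    "address": ["123 Main St", "456 Elm St"],
    "price":   [950000, 1200000],
    "sqft":    [800, ""],          # the second listing lacks square footage
})
# complete_listings(df) keeps only the first row.
```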
This could work as a living document, scraping the inventory nightly to maintain an ongoing list of what is available; a trend analysis could then be done on that basis.
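One way to support that trend analysis would be to stamp each nightly scrape with its date and append it to a running history. This is a sketch of the idea, not part of the original pipeline.

```python
import pandas as pd

def add_snapshot(history: pd.DataFrame, snapshot: pd.DataFrame,
                 date: str) -> pd.DataFrame:
    """Tag one night's scrape with its date and append it to the history."""
    snap = snapshot.copy()
    snap["scrape_date"] = date
    return pd.concat([history, snap], ignore_index=True)

def median_price_trend(history: pd.DataFrame) -> pd.Series:
    """Median listing price per scrape date -- a simple inventory trend."""
    return history.groupby("scrape_date")["price"].median()
```

Plotting the resulting series over time would show how listing prices and inventory shift night to night.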
Adding a Shiny app could also make it possible to view the graphs for individual neighborhoods under different criteria.