Rental Listing Scraping Using Python
Contributed by Aravind Kolumum Raja. He attended the NYC Data Science Academy 12-week full-time Data Science Bootcamp program that took place from January 11th to April 1st, 2016. This post is based on his third class project - Web Scraping (due in the 6th week of the program).
The third academic project involved scraping a well-known real estate listing site for New York City using Python. The website contained details on listings for more than 18,000 units across all boroughs and neighbourhoods in NYC. The features one can capture from a listing include:
- Address
- Rent
- Fee/No Fee
- Square footage
- Studio/1 Bed/2 Bed/3 Bed
- Neighbourhood
- Days on market
- Price change
- Amenities - Elevator, Pets, Doorman, etc.
- Nearest subway
- Documentation details
- Location latitude/longitude (hidden behind maps)
- Description of the property
Typically a listing would look like this.
The initial lookup of rental listings gives more than 1500 pages containing 14 listings each.
The web scraping process was divided into two parts:
- Scrape the 1,500+ pages to extract the names, neighbourhoods and URLs of the 18,000+ rental listings, store the results in a dictionary, and combine them into a pandas data frame. This took about a couple of hours to run.
- Run a second scraper that would visit each URL, extract the features of the listing, and store them in a new data frame. This was an overnight process that took about 6-7 hours to complete.
The loop would read the HTML, parse it each time, and append the details to a pandas data frame. Finally, the two data sets were merged and cleaned for data analysis.
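A minimal sketch of how the second scraper's loop might be structured, assuming the first scraper saved a data frame with a 'url' column; the file name, CSS selector and feature columns below are placeholders rather than the original code:

    # Sketch of the second scraper: visit each listing URL, parse the page,
    # append one row of features per listing, then merge with part 1's frame.
    import pandas as pd
    from bs4 import BeautifulSoup
    from selenium import webdriver

    listings = pd.read_csv('listing_urls.csv')    # hypothetical output of the first scraper
    driver = webdriver.Firefox()

    rows = []
    for url in listings['url']:
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        price_tag = soup.select_one('div.price')  # placeholder selector for one feature
        rows.append({'url': url,
                     'rent': price_tag.get_text(strip=True) if price_tag else None})

    driver.quit()
    features = pd.DataFrame(rows)
    merged = listings.merge(features, on='url')   # combine the two data sets for cleaning/analysis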
Bottlenecks, solutions/tips and comments:
- The rental search listing pages, when refreshed, did not show the same page again. That is, a URL such as http://xyz.com/for-rent/nyc?page=3 resulted in a different page on each refresh. To get around this, the Selenium Python bindings (package) were used for automated web browsing:
http://selenium-python.readthedocs.org/getting-started.html
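A rough sketch of that workaround, on the assumption that driving one persistent browser session through the paginated URLs (rather than issuing a fresh request per refresh) yields consistent pages; the loop bound follows the 1,500+ figure above:

    # Walk through the search result pages inside a single browser session.
    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Firefox()
    page_soups = []
    for page in range(1, 1501):
        driver.get('http://xyz.com/for-rent/nyc?page=%d' % page)
        page_soups.append(BeautifulSoup(driver.page_source, 'html.parser'))
    driver.quit()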
- Timeout exceptions and visibility issues - It is preferable to use WebDriverWait until all the elements you need to extract are visible, to avoid timeout exceptions.
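A hedged example of such an explicit wait; the element id 'listing-details' is a placeholder, not the site's real markup:

    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Firefox()
    driver.get('http://xyz.com/for-rent/nyc?page=1')
    try:
        # Block for up to 30 seconds until the element is actually visible
        # before touching the page source.
        WebDriverWait(driver, 30).until(
            EC.visibility_of_element_located((By.ID, 'listing-details')))
        html = driver.page_source
    except TimeoutException:
        html = None          # skip this page rather than crash the whole loop
    driver.quit()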
- Hidden JavaScript - The Beautiful Soup package was used to parse the HTML. The general idea is to use "Inspect element" in your web browser, check the different tags associated with the elements, and extract them using the package. However, some of the elements on the page are not parsed because they are hidden behind JavaScript. The way to extract them is to use an element locator in Selenium directly.
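A sketch of that two-pronged approach; the URL, tag names and class names here are illustrative placeholders, not the site's actual markup:

    from bs4 import BeautifulSoup
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get('http://xyz.com/example-listing')

    # Most fields can be pulled from the static HTML with Beautiful Soup ...
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    rent_tag = soup.find('div', class_='price')            # found via "Inspect element"
    rent = rent_tag.get_text(strip=True) if rent_tag else None

    # ... but JavaScript-rendered pieces may only be reachable through Selenium's
    # element locators, which query the live DOM in the browser.
    nearest_subway = driver.find_element(By.CSS_SELECTOR, 'div.nearby-subway').text
    driver.quit()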
- Unstructured sequences/lists - The difficult aspect here was the differences in the positions of various elements, and missing items, across different pages. To get around that, the sequence of names in the cells was stored in a list in the form of a single string for further manipulation later. In the example below, you see a specific listing; however, the order and availability of details in the cells differ for each listing.
By storing them in a string, one can use string manipulation to extract specific keywords to get to an element (like beds or baths, for example).
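A toy illustration of the idea with made-up cell values:

    # Join the cell texts into one string and pull out fields by keyword,
    # tolerating items that are missing for a given listing.
    cells = ['2 beds', '1 bath', 'Elevator', 'Pets allowed', '14 days on market']
    blob = ' | '.join(cells)

    def grab(blob, keyword):
        """Return the first cell containing `keyword`, or None if the listing lacks it."""
        for part in blob.split(' | '):
            if keyword in part.lower():
                return part
        return None

    print(grab(blob, 'bed'))    # '2 beds'
    print(grab(blob, 'bath'))   # '1 bath'
    print(grab(blob, 'ft'))     # None -- square footage simply missing for this listing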
- Hidden elements on the page - Skimming through the Beautiful Soup parsed text is useful because we can sometimes extract elements that are not shown on the page, such as the latitude/longitude.
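One way this can play out, sketched with a made-up script fragment and key names; the real page's structure may differ:

    import re

    # Coordinates often sit in an embedded map script rather than in visible markup.
    page_text = 'var map = {"latitude": 40.7484, "longitude": -73.9857};'

    match = re.search(r'"latitude"\s*:\s*(-?\d+\.\d+)\s*,\s*"longitude"\s*:\s*(-?\d+\.\d+)',
                      page_text)
    if match:
        lat, lon = float(match.group(1)), float(match.group(2))
        print(lat, lon)          # 40.7484 -73.9857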
- Dealing with unicode issues
In this image, the direction of the price change of the rental listing is shown using a unicode character, which has to be extracted from the parsed data and then converted into a meaningful sign (+ or -). The encoding used matters in this regard, and the associated element has to be gathered carefully.
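A sketch of the conversion; the exact arrow glyphs the site uses are an assumption for illustration:

    def price_change_sign(glyph):
        """Map a direction glyph to a '+' or '-' sign."""
        if glyph in (u'\u25b2', u'\u2191'):     # assumed up-arrow glyphs -> price went up
            return '+'
        if glyph in (u'\u25bc', u'\u2193'):     # assumed down-arrow glyphs -> price went down
            return '-'
        return ''

    print(price_change_sign(u'\u25bc'))          # '-'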
- Saving the csv every n iterations
Especially when a long scrape is involved, there is an increased probability of errors, exceptions and internet connectivity issues that might stop the loop unexpectedly. Hence it is preferable to save the appended data frame into a csv every 100 iterations or so.
For example, if the loop runs from i = 1 to 18000,
    if i % 100 == 0:
        frame.to_csv('nycrental' + str(i) + '.csv')
ensures the data frame is stored as a new csv whose name records how many listings have been processed so far.
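For context, a sketch of how that checkpoint might sit inside the long loop; scrape_listing and the URL list are hypothetical stand-ins for the real extraction step:

    import pandas as pd

    def scrape_listing(url):
        # Placeholder for the real per-listing parsing; returns one row of features.
        return {'url': url}

    listing_urls = ['http://xyz.com/listing/%d' % k for k in range(1, 18001)]   # placeholder URLs

    rows = []
    for i, url in enumerate(listing_urls, start=1):
        try:
            rows.append(scrape_listing(url))
        except Exception:
            continue                              # one bad page should not end an overnight run
        if i % 100 == 0:                          # checkpoint every 100 listings
            pd.DataFrame(rows).to_csv('nycrental' + str(i) + '.csv', index=False)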
- Cleaning
String manipulation is faster and easier in spreadsheet applications like Excel when the number of observations is less than half a million or so, so the merged dataset was converted into a more analyzable csv file there.
- A simple interactive visualisation built with Leaflet is available here: https://rpubs.com/aravindkr/155808