Rental Listing Scraping Using Python

Aravind Kolumum Raja
Posted on Mar 31, 2016

Contributed by Aravind Kolumum Raja. He attended the NYC Data Science Academy 12-week full-time Data Science Bootcamp program that ran from January 11th to April 1st, 2016. This post is based on his third class project - Web Scraping (due in the 6th week of the program).

The third academic project involved scraping a well-known real estate listing site for New York City using Python. The website contained details on listings for more than 18,000 units across all boroughs and neighbourhoods in NYC. The features one can capture from a listing include:

  1. Address
  2. Rent
  3. Fee/No Fee
  4. Square footage
  5. Studio / 1 Bed / 2 Bed / 3 Bed
  6. Neighbourhood
  7. Days on market
  8. Price change
  9. Amenities - elevator, pets, doorman, etc.
  10. Nearest subway
  11. Documentation details
  12. Location latitude/longitude (hidden behind the map)
  13. Description of the property

Typically a listing would look like this.

[Screenshots of a typical listing page]

The initial rental search returns more than 1,500 pages, each containing 14 listings.

 

[Screenshot of the rental search results]

The web scraping process was divided into two parts:

  1. Scrape the 1,500+ pages to extract the names, neighbourhoods and URLs of the 18,000+ rental listings, store the results in a dictionary, and combine them into a pandas data frame. This took about a couple of hours to run.
  2. Run a second scraper that visits each URL, extracts the features of the listing, and stores them in a new data frame. This was an overnight process that took about 6-7 hours to complete.

The loop would read and parse the HTML each time and append the details to a pandas data frame. Finally, the two data sets were merged and cleaned for analysis.
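As a rough sketch of this two-pass structure (the site URL, CSS selectors and column names below are placeholders, not the ones actually used):

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

# Pass 1: collect the name, neighbourhood and URL of every listing from the
# search result pages (selectors are hypothetical).
driver = webdriver.Firefox()
records = []
for page in range(1, 1501):
    driver.get('http://xyz.com/for-rent/nyc?page=%d' % page)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    for card in soup.select('div.listing-card'):
        link = card.select_one('a.title')
        records.append({'name': link.get_text(strip=True),
                        'neighbourhood': card.select_one('span.area').get_text(strip=True),
                        'url': link['href']})
index_frame = pd.DataFrame(records)

# Pass 2: visit each listing URL and pull its features into a second frame.
details = []
for url in index_frame['url']:
    driver.get(url)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    details.append({'url': url,
                    'rent': soup.select_one('div.price').get_text(strip=True)})
detail_frame = pd.DataFrame(details)

# Merge the two passes on the listing URL before cleaning and analysis.
merged = index_frame.merge(detail_frame, on='url')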

Bottlenecks, solutions/tips and comments:

  • The rental search listing pages, when refreshed, did not show the same page again. That is, a URL such as http://xyz.com/for-rent/nyc?page=3 returned different pages on each refresh. To get around this, the Selenium Python bindings package was used for automated web browsing:

http://selenium-python.readthedocs.org/getting-started.html.

 

 

  • Timeout exceptions and visibility issues - It is preferable to use WebDriverWait until all the elements you need to extract are visible, to avoid timeout exceptions (a sketch of the pattern follows the gist below).

 

https://gist.github.com/kraravind/43feb4963136b0d95770fb9628dcf9a3
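Independent of the gist above, the waiting pattern looks roughly like this (the selector and the 30-second timeout are placeholders):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Block (up to 30 seconds) until the listing cards are visible, then read the
# rendered page source; a TimeoutException is raised only if nothing appears.
wait = WebDriverWait(driver, 30)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.listing-card')))
html = driver.page_source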

 

  • Hidden JavaScript - The Beautiful Soup package was used to parse the HTML. The general idea is to use "Inspect element" in your web browser, check the different tags associated with the elements, and extract them with the package. However, some of the elements on a page are not parsed because they are hidden behind JavaScript. The way to extract them is to use an element locator in Selenium directly, as sketched below.
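For example (the selector here is a placeholder obtained from "Inspect element"; the real one depends on the page):

from selenium.webdriver.common.by import By

# Beautiful Soup only sees the static HTML, but Selenium queries the live DOM
# after the JavaScript has run, so rendered elements become reachable.
amenities = driver.find_element(By.CSS_SELECTOR, 'div.amenities')
print(amenities.text)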

 

  • Unstructured sequences/lists - The difficult aspect here was the difference in the positions of various elements, and missing items, across different pages. To get around that, I stored the sequence of names in the cells as a single string in a list for further manipulation later. The example below shows a specific listing; however, the order and availability of the details in the cells differ for each listing.

[Screenshot of a listing's detail cells]

By storing them in a string, one can use string manipulation to search for specific keywords and get to an element (beds or baths, for example); a sketch of this follows the gist below.

 

https://gist.github.com/kraravind/0fa38da651104a73eb233b79843b7814
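A minimal sketch of this keyword-based extraction, assuming the cells were joined into a single '|'-separated string (the example string and helper below are hypothetical):

import re

# Order and availability of items differ from listing to listing.
cells = "$3,200 | 2 Beds | 1 Bath | Elevator | Dogs Allowed"

def extract_count(text, keyword):
    # Pull the number immediately preceding a keyword such as 'Bed' or 'Bath'.
    match = re.search(r'(\d+)\s*' + keyword, text, flags=re.IGNORECASE)
    return int(match.group(1)) if match else None

beds = extract_count(cells, 'Bed')      # 2
baths = extract_count(cells, 'Bath')    # 1
has_elevator = 'Elevator' in cells      # True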

 

  • Hidden elements on the page - Skimming through the Beautiful Soup parsed text is useful because sometimes we can extract elements that are not shown on the page at all, such as the latitude/longitude (a sketch of the idea follows the gist below).

https://gist.github.com/kraravind/5f59c86b1f1626d0593eabf35419a683
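A sketch of the idea, assuming the coordinates appear somewhere in the page source as "latitude": 40.75, "longitude": -73.98 (a common pattern; the real key names must be confirmed by skimming the soup):

import re
from bs4 import BeautifulSoup

def extract_lat_lng(html):
    # Search the full markup, scripts included, for latitude/longitude pairs.
    text = BeautifulSoup(html, 'html.parser').decode()
    lat = re.search(r'"latitude"\s*:\s*(-?\d+\.\d+)', text)
    lng = re.search(r'"longitude"\s*:\s*(-?\d+\.\d+)', text)
    if lat and lng:
        return float(lat.group(1)), float(lng.group(1))
    return None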

  • Dealing with unicode issues

[Screenshot of a listing's price-change indicator]

In this image, the direction of the price change of the rental listing is shown using a unicode character, which has to be extracted and then converted into a meaningful sign (+ or -). The encoding used matters here, and the associated element has to be gathered carefully.
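A minimal sketch of the conversion, assuming the site marks the direction with up/down triangle glyphs (the actual characters must be checked in the parsed element):

# Hypothetical glyph-to-sign mapping.
PRICE_DIRECTION = {u'\u25b2': '+',   # black up-pointing triangle
                   u'\u25bc': '-'}   # black down-pointing triangle

def price_change_sign(cell_text):
    # In Python 2, decode the raw bytes first, e.g. cell_text.decode('utf-8').
    for glyph, sign in PRICE_DIRECTION.items():
        if glyph in cell_text:
            return sign
    return None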

 

  • Saving the CSV every n iterations

Especially when a long scrape is involved, there is an increased probability of errors, exceptions and internet connectivity issues that might stop the loop unexpectedly. Hence it is preferable to save the appended data frame to a CSV every 100 iterations or so.

For example, if the loop runs from i = 1 to 18000,

if (i % 100 == 0):
    frame.to_csv('nycrental' + str(i) + '.csv')

ensures the data frame is stored as a new CSV file whose name carries the count of listings scraped so far.

 

  • Cleaning


 

String manipulation is faster and easier in spreadsheet applications like Excel when the number of observations is less than half a million or so, so the merged dataset was converted into a more analyzable CSV file there.

 

  • A simple interactive visualisation built with Leaflet is provided here: https://rpubs.com/aravindkr/155808

