Data Web Scraping Craigslist

Posted on Jun 11, 2018
The skills the author demoed here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.


Why I Chose This Project

As the old adage goes, ‘One man’s trash is another man’s treasure.’ This project was my first introduction to Scrapy. My goal was to scrape used-item listings from Craigslist and perform some basic EDA on the data.

I expanded on this project in my final project; you can check it out if you’re interested.

Questions to Answer

I set out to answer three questions: what types of items were available, where the items were located, and how many items were listed in each popular location. I also wanted to find out what the price distribution looked like for items in those popular locations.


Where and How I Extracted the Data

Using Scrapy

This spider scraped only the first page of the used-items-for-sale section, so this dataset is a small sample of the items actually available on Craigslist in the area. I went this route because Craigslist blocks IP addresses when it detects unusual request activity. I did scrape the entire used-items-for-sale section in my final project, but that required using the paid Crawlera proxy service. Because this was my first exploration into web scraping, I focused here on the fundamentals of creating a Scrapy spider and performing EDA on the data.

How I Visualized The Data

Using Pandas, Matplotlib, and Seaborn

I answered the questions I set out to answer using Pandas to clean and analyze the data, and Matplotlib and Seaborn to visualize it. I created subsets that grouped high-priced motor vehicles together, as well as subsets for the popular locations, so I could gain insight into the price distribution of each group.
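The cleaning and subsetting steps described above might look like the following sketch. The column names (`price`, `location`) and helper functions are hypothetical, and the Seaborn plot call is shown as a comment; this is not the author's actual analysis code:

```python
import pandas as pd


def clean_prices(df):
    """Strip '$' and thousands separators from scraped price strings
    and convert to numeric, dropping rows with no usable price."""
    df = df.copy()
    df["price"] = pd.to_numeric(
        df["price"].str.replace(r"[$,]", "", regex=True), errors="coerce"
    )
    return df.dropna(subset=["price"])


def popular_locations(df, n=5):
    """Return only the rows whose location is among the n most frequent."""
    top = df["location"].value_counts().head(n).index
    return df[df["location"].isin(top)]


# Usage (assumed): visualize price distributions per popular location.
# import seaborn as sns
# df = clean_prices(raw_df)
# sns.boxplot(data=popular_locations(df), x="location", y="price")
```

Coercing unparseable prices to `NaN` and dropping them keeps listings like "free" or "call for price" from skewing the distributions.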

[Charts: item counts by type, item counts by popular location, and price distributions by location]

Using wordcloud

I also added a simple word cloud as another helpful way to visualize the items available in the area.
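A word cloud of item titles boils down to counting word frequencies and handing them to the `wordcloud` package. The sketch below is a hypothetical version of that step (the frequency helper and thresholds are my own, and the `wordcloud` calls are shown as comments since the package is a separate install):

```python
import re
from collections import Counter


def title_word_frequencies(titles, min_len=3):
    """Count word occurrences across item titles,
    ignoring tokens shorter than min_len characters."""
    words = []
    for title in titles:
        words.extend(
            w for w in re.findall(r"[a-z']+", title.lower())
            if len(w) >= min_len
        )
    return Counter(words)


# Usage (assumed), feeding the counts to the wordcloud package:
# from wordcloud import WordCloud
# freqs = title_word_frequencies(titles)
# WordCloud(width=800, height=400).generate_from_frequencies(freqs) \
#     .to_file("items_wordcloud.png")
```

Counting frequencies yourself (rather than passing raw text to `WordCloud.generate`) makes it easy to filter out short filler words before drawing the cloud.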

Data Results

Data Insights Gleaned

This small sample was too limited to support a confident understanding of the price distributions for all used items in the area. But I was able to determine that $1,000 would be enough to buy the majority of available items that weren’t motor vehicles.

This was my first introduction to data wrangling and EDA with Python and I enjoyed the learning process. You can view my GitHub repo here:

About Author

Keenan Burke-Pitts

Keenan has over 3 years of experience communicating and delivering software and internet solutions to clients. Moving forward, Keenan plans to leverage his technical abilities, communication skills, and business understanding in the digital marketing world. Keenan graduated...
