Scraping Used Items on Craigslist.org with Scrapy

Keenan Burke-Pitts
Posted on Jun 11, 2018

Purpose

Why I Chose This Project

As the old adage goes, ‘One man’s trash is another man’s treasure.’ This project was my first introduction to Scrapy: my goal was to scrape used-item listings from Craigslist.org and perform some basic EDA on the data.

I expanded on this work in my final project. You can check it out if you’re interested: https://nycdatascience.com/blog/student-works/capstone/local-used-items-analysis-with-python-and-tableau.

Questions to Answer

I set out to answer what types of items were available, where they were located, and how many items were listed in each popular location. I also wanted to find out what the price distribution was for items in those popular locations.

Process

Where and How I Extracted the Data

Using Scrapy

This spider scraped the first page of the used items for sale section. This dataset was a small sample of the items actually available on Craigslist in the area. I went this route because Craigslist blocks IP addresses when it detects unusual request activity. I scraped the entire used items for sale section in my final project, but that required using the paid Crawlera service from Scrapinghub.com. Because this was my first exploration into web scraping, I focused on the fundamentals of creating a Scrapy spider and performing EDA on the data.
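
A minimal sketch of such a spider is below; the start URL, CSS selectors, and field names are assumptions based on Craigslist's results-page markup at the time, not the exact code from the project:

import scrapy

class UsedItemsSpider(scrapy.Spider):
    """Scrapes the first results page of a Craigslist for-sale section."""
    name = "used_items"
    # Assumed example URL; substitute your own region's subdomain.
    start_urls = ["https://newyork.craigslist.org/search/sss"]

    def parse(self, response):
        # One result row per listing; these selectors reflect
        # Craigslist's markup at the time and may have changed.
        for listing in response.css("li.result-row"):
            yield {
                "title": listing.css("a.result-title::text").get(),
                "price": listing.css("span.result-price::text").get(),
                "location": listing.css("span.result-hood::text").get(),
            }

Running it with scrapy crawl used_items -o used_items.csv writes the scraped rows to a CSV file for the analysis below.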

How I Visualized The Data

Using Pandas, Matplotlib, and Seaborn

I was able to answer the questions I set out with by using Pandas to clean and analyze the data, and Matplotlib and Seaborn to visualize it. I created subsets of items, grouping high-priced motor vehicles together as well as items in popular locations, so I could gain insight into the price distributions of those subsets.
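For illustration, here is a sketch of that subsetting and plotting; the column names and the keyword filter for vehicles are assumptions about the scraped fields, not the project's exact code:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Assumed output file from the spider above.
df = pd.read_csv("used_items.csv")

# Clean the price field: strip "$" and commas, coerce to numeric.
df["price"] = pd.to_numeric(
    df["price"].str.replace(r"[$,]", "", regex=True), errors="coerce"
)
df = df.dropna(subset=["price"])

# Separate high-priced motor vehicles from everything else.
vehicle_terms = "car|truck|motorcycle|suv|van"  # assumed keyword filter
vehicles = df[df["title"].str.contains(vehicle_terms, case=False, na=False)]
other_items = df.drop(vehicles.index)

# Price distribution across the most popular locations.
top_locations = df["location"].value_counts().head(10).index
popular = df[df["location"].isin(top_locations)]
sns.boxplot(data=popular, x="location", y="price")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()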

Using wordcloud

I also added a simple word cloud for another helpful view of the items available in the area.
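
A sketch of generating that word cloud from the listing titles, assuming the same CSV output as above:

import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Assumed output file from the spider above.
df = pd.read_csv("used_items.csv")
text = " ".join(df["title"].dropna())

cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()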

Results

Insights Gleaned

This sample was too small to support confident conclusions about the price distributions of all used items in the area. I was, however, able to determine that $1000 would be enough to buy the majority of available items that weren’t motor vehicles.

This was my first introduction to data wrangling and EDA with Python, and I enjoyed the learning process. You can view my GitHub repo here: https://github.com/Kiwibp/NYC-DSA-Bootcamp--Web-Scraping.

About Author

Keenan Burke-Pitts

Keenan has over 3 years of experience communicating and delivering software and internet solutions to clients. Moving forward, he plans to leverage his technical abilities, communication skills, and business understanding in the digital marketing world. Keenan graduated...