Can you predict what your car is worth?

Posted on Sep 30, 2018

When you purchase a brand-new car, the manufacturer's suggested retail price (MSRP) is predetermined. But when you sell your car and ownership of the vehicle changes hands, how do you determine what it is worth? You most likely go to Kelley Blue Book or CarFax to get an appraisal. Many factors contribute to a car's value, from mileage to engine type. In this project I will show you how to use machine learning to model a car's resale value.

The Dataset

The data I am using was gathered from Kelley Blue Book's website (www.kbb.com) using a Python script built with Scrapy.

How Scrapy Works

Scrapy allows you to program a bot (i.e., a Python script) that collects whatever data you want from a webpage. Although I won’t go into depth about all the code a Scrapy bot requires, I will outline the key points of Scrapy bot design.

  1. Initialize a Scrapy project directory. Starting a Scrapy project is easy: after installing Scrapy, go to your Python terminal (such as Anaconda Prompt) and type: scrapy startproject <Name of Project>. This creates a project folder with all the files you need to start web scraping.
  2. Set up the “items.py” script. This script tells the bot which elements of a website you want to scrape. For this project, I want to scrape the details of each used car being listed (see the sketch below this list).
  3. Set up the spider script. This is the brain of the bot: the spider “crawls” the webpage and collects the data you defined in items.py. It is a hefty script, so I will save the full explanation for a future blog post; a bare-bones sketch follows below.
  4. Set up the “pipelines.py” script. This script tells the bot how to save the data it collects. CSV format? Text file? It’s up to you! (A sketch follows below.)
  5. Set up the “settings.py” script. This script controls the bot’s behavior, such as how fast it scrapes; you can adjust those settings here (see the example below).
That’s it! Now navigate to your Python command line and enter: scrapy crawl <name of spider>. The spider will begin to crawl the site, collect the items you specified, and save them in the format you chose in the root of your project directory!
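To make these steps concrete, here is a minimal sketch of what the items.py script could look like for this project. The class and field names are my own illustrative assumptions, not the exact names from the original code:

```python
# items.py -- declares the fields the bot should collect for each listing.
# A minimal sketch; the class and field names are illustrative assumptions.
import scrapy

class UsedCarItem(scrapy.Item):
    year = scrapy.Field()             # model year
    mileage = scrapy.Field()          # odometer reading
    mpg = scrapy.Field()              # fuel economy
    consumer_review = scrapy.Field()  # consumer rating on the listing
    expert_review = scrapy.Field()    # expert rating on the listing
    price = scrapy.Field()            # asking price
```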
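The spider itself could look something like the bare-bones sketch below. The project name, start URL, and CSS selectors are placeholders, not the selectors kbb.com actually uses:

```python
# A bare-bones spider sketch; URL and selectors are placeholders.
import scrapy
from carproject.items import UsedCarItem  # "carproject" is a hypothetical project name

class UsedCarSpider(scrapy.Spider):
    name = "usedcars"
    start_urls = ["https://www.kbb.com/"]  # placeholder; a real spider would start at a listings page

    def parse(self, response):
        # Yield one item per car listing on the page (hypothetical selectors).
        for listing in response.css("div.listing"):
            item = UsedCarItem()
            item["year"] = listing.css("span.year::text").get()
            item["mileage"] = listing.css("span.mileage::text").get()
            item["price"] = listing.css("span.price::text").get()
            yield item
        # Follow the pagination link, if one exists (hypothetical selector).
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```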
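For the pipeline, one simple approach, sketched below, writes each item to a CSV file as it arrives. (Scrapy's built-in feed exports, e.g. adding -o cars.csv to the crawl command, accomplish the same thing with no custom code.)

```python
# pipelines.py -- a sketch of a pipeline that appends each scraped item
# to a CSV file. It must be registered in settings.py via ITEM_PIPELINES.
import csv

class CsvWriterPipeline:
    def open_spider(self, spider):
        self.file = open("cars.csv", "w", newline="", encoding="utf-8")
        self.writer = None

    def process_item(self, item, spider):
        row = dict(item)
        if self.writer is None:  # write the header row once, from the first item
            self.writer = csv.DictWriter(self.file, fieldnames=row.keys())
            self.writer.writeheader()
        self.writer.writerow(row)
        return item

    def close_spider(self, spider):
        self.file.close()
```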
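Finally, settings.py is where you would slow the bot down so it scrapes politely. These particular values are assumptions for illustration:

```python
# settings.py -- illustrative crawl-speed settings (the values are assumptions).
DOWNLOAD_DELAY = 2           # wait 2 seconds between requests
AUTOTHROTTLE_ENABLED = True  # let Scrapy adapt its speed to the server
ITEM_PIPELINES = {"carproject.pipelines.CsvWriterPipeline": 300}  # enable the CSV pipeline
```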

Model Creation

For simplicity's sake, I focus only on quantitative features in my model: year, consumer review, expert review, mileage, MPG, and price. In a complete data science project, I would also account for the categorical variables.

I remove outliers, such as the luxury and rare vehicles that skew the price distribution.

[Scatter plots: Mileage (x-axis) vs. Price (y-axis)]

I apply a Box-Cox transformation to help normalize the distributions of skewed features.
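As a quick illustration of the transform, here is a minimal sketch using scipy; the sample mileage values are made up:

```python
# A minimal Box-Cox sketch with scipy; the sample values are made up.
import numpy as np
from scipy import stats

mileage = np.array([12000, 30000, 45000, 80000, 150000], dtype=float)
# boxcox requires strictly positive inputs; it returns the transformed
# values along with the fitted lambda parameter.
transformed, lmbda = stats.boxcox(mileage)
print(transformed, lmbda)
```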

Finally, I train the model on 80% of the data and test on the remaining 20%. I use 10-fold cross-validation, so a different subset of the training data is held out for validation on each fold. As is best practice, I use regularization (in this case, ridge regression) to better tune the model.
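A sketch of that training setup in scikit-learn might look like the following. The file name and column names carry over from the hypothetical scraper sketches above:

```python
# Sketch: 80/20 split, 10-fold cross-validation, and ridge regression.
# The CSV file and column names are assumptions from the earlier sketches,
# and price is assumed to have been cleaned to a numeric column.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("cars.csv")
X = df[["year", "consumer_review", "expert_review", "mileage", "mpg"]]
y = np.log1p(df["price"])  # model log-price, so RMSE on y behaves like RMSLE

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = Ridge(alpha=1.0)  # regularization strength; would be tuned in practice
scores = cross_val_score(model, X_train, y_train, cv=10,
                         scoring="neg_root_mean_squared_error")
print("Mean CV RMSE (log-price):", -scores.mean())
model.fit(X_train, y_train)
```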

I apply the model to the test set and get a root mean squared log error (RMSLE) of 0.263.
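Continuing the sketch above, the test-set RMSLE can be computed like this (since y is already log-transformed, RMSE on y is the RMSLE on price):

```python
# Evaluate on the held-out 20%; RMSE on log-price equals RMSLE on price.
from sklearn.metrics import mean_squared_error

y_pred = model.predict(X_test)
rmsle = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"Test RMSLE: {rmsle:.3f}")
```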

I plot the actual prices vs. the predicted prices in a density histogram:
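A plot along those lines could be reproduced with matplotlib, continuing the sketch above:

```python
# Sketch: overlaid density histograms of actual vs. predicted prices,
# converting log-prices back to dollars with expm1.
import matplotlib.pyplot as plt

plt.hist(np.expm1(y_test), bins=50, density=True, alpha=0.5, label="Actual")
plt.hist(np.expm1(y_pred), bins=50, density=True, alpha=0.5, label="Predicted")
plt.xlabel("Price")
plt.ylabel("Density")
plt.legend()
plt.show()
```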

We see here that our predicted prices closely mirror our actual prices.

Future Steps

Below are a few steps we could take to improve our model:

  1. Encode our categorical variables and add them to our model
  2. Test this model on data the model has never seen before
  3. Test different machine learning algorithms and ensemble the resulting models

About Author

Taino Pacheco

Taino Pacheco holds a master's degree in Biomolecular Science from Central Connecticut State University and is an alumnus of the NYC Data Science Academy.

