Data Analysis and Kaggle AXA Telematics: Trip Matching

Posted on Apr 1, 2015
The skills we demoed here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Contributed by Alex Adler as part of NYC Data Science Academy Bootcamp 1, February–April 2015. The following is a repost from my personal blog, approved for use by the NYCDSA. The full code can be found on my Github.

I’m proud to say I was part of the NYC Data Science Academy Bootcamp Team, Vivi’s Angels, for the AXA Telematics Kaggle competition. Now that the competition is over and the scores have been tallied, we are all learning so much from those who have started to share their approaches to solving the problem of identifying the primary owner of a car merely from the x-y data of the trip he or she took.

While no one here at the bootcamp is walking away with the $30,000 prize money, we do want to share our approach and code to such an intriguing and fun challenge. I was part of the feature engineering team and spent most of my time developing the trip-matching feature for use in our GBM modeling. This post will discuss how (and why) we wanted to get this feature up and running.

Early in the competition, we came across posts like this one that spoke of the success of trip matching in improving model results. Some domain experts even posted in the forums, saying that in the real world, GPS data was the best indicator of whether a driver was the owner of the car or not. The problem was that AXA rotated and flipped some trips so that they couldn’t be matched in this way! The first order of business was to re-align the trips, then compare them to one another to test for matches.

Once the team decided that trip matching should be incorporated into the features of our model, there were three challenges:

  • Translating the approach from Python to R
  • Optimizing computational time (the original code took days to run on multiple cores)
  • Finding the best way to implement the data as a feature

Coding for Trip Matching

This code was written in R to do two things:

  • Rotate/flip trips so that most of each trip lay in the first quadrant.
  • Compare trips of roughly similar shape using Euclidean distance.

With 200 trips to compare for each of over 2,700 drivers, parallelization was a must. library(doParallel) enabled the use of foreach loops that greatly sped up computation, even on just three cores on my Macbook Air. The .combine=rbind argument takes the output of each loop iteration and rbinds it onto the growing data frame.

 library(doParallel)
 cl <- makeCluster(3)   # three cores on my Macbook Air
 registerDoParallel(cl)

 dataDir <- "data/"
 drivers <- list.files(dataDir)

 similarTrips <- foreach(driver = drivers, .combine = rbind,
                         .packages = "dplyr") %dopar% {
   # Each driver's 200 trips were loaded as a binary file
   # before being compared with one another

Rotating/flipping trips

Before the comparison loops, transformations were applied via mutate() from the dplyr package. These transformations change the coordinate system to rotate each trip so that its last point lies on the positive x-axis. The trip was then flipped if more than half of its x or y coordinates were negative. This was useful later when trips were roughly compared before calculating their Euclidean distances; when the Euclidean distances were calculated, the rotated y-values were used.

 group_by(tripID) %>%
 mutate(theta = atan2(last(y), last(x)),           # angle of the trip's last point
        rot.x =  x * cos(theta) + y * sin(theta),  # rotate last point onto the +x axis
        rot.y = -x * sin(theta) + y * cos(theta),
        rot.x.flip = if (sum(rot.x < 0) > floor(n()/2)) -rot.x else rot.x,
        rot.y.flip = if (sum(rot.y < 0) > floor(n()/2)) -rot.y else rot.y)
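As a standalone sketch of the rotation step (base R, with a hypothetical helper name of my choosing), the same math can be checked on a toy trip:

```r
# Rotate a trip so that its final point lands on the positive x-axis.
rotateTrip <- function(x, y) {
  theta <- atan2(y[length(y)], x[length(x)])  # angle of the trip's last point
  list(rot.x =  x * cos(theta) + y * sin(theta),
       rot.y = -x * sin(theta) + y * cos(theta))
}

# A toy trip ending at (0, 5): after rotation its last point is (5, 0).
trip <- rotateTrip(x = c(0, 0, 0), y = c(0, 2, 5))
```

Because the rotation is a rigid transformation, distances along the trip are unchanged; only the orientation is standardized.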

To give a visual idea of this rotation, Figure 1 and Figure 2 show the raw and rotated trips, respectively.


Fig. 1: Raw x-y data for driver 1, all 200 trips.


Fig. 2: Trips rotated such that the last point is on the positive x-axis, then flipped such that most points are in the first quadrant.

Even at this point, the naked eye can see some similarity among the rotated, spaghetti-esque mess. The next steps will automate the trip comparison to ultimately reveal similar trips.

Comparing individual trips

Only unique pairs of trips were considered (200 choose 2 = 19,900 pairs rather than all 40,000 ordered comparisons), greatly reducing computational time.

 # begin a nested loop to check all UNIQUE combinations of trips
 for (i in 1:199) {
   focus <- select(test[test$tripID == i, ], foc.x = rot.x, foc.y = rot.y)
   for (k in (i + 1):200) {
     compare <- select(test[test$tripID == k, ], cmp.x = rot.x, cmp.y = rot.y)

Once two trips were selected for potential comparison, their “footprint” was compared. Trips that differed by more than 20% in x- or y-range were not compared, since that large a difference would probably indicate a poor match. This selection by if statements reduced computational time by over 30% compared with checking every pair, generally leaving approximately 1,000 comparisons per driver.

 if (!(diff(range(focus$foc.x)) < 0.8 * diff(range(compare$cmp.x))) &
     !(diff(range(compare$cmp.x)) < 0.8 * diff(range(focus$foc.x))) &
     !(diff(range(focus$foc.y)) < 0.8 * diff(range(compare$cmp.y))) &
     !(diff(range(compare$cmp.y)) < 0.8 * diff(range(focus$foc.y)))) {
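The footprint filter can also be written as a standalone predicate (the helper name footprintsMatch is mine, not from the original code):

```r
# TRUE only when the two trips' x- and y-ranges are within 20% of each other,
# i.e. each range is at least 80% of the other in both dimensions.
footprintsMatch <- function(foc.x, foc.y, cmp.x, cmp.y) {
  xf <- diff(range(foc.x)); xc <- diff(range(cmp.x))
  yf <- diff(range(foc.y)); yc <- diff(range(cmp.y))
  !(xf < 0.8 * xc) && !(xc < 0.8 * xf) &&
  !(yf < 0.8 * yc) && !(yc < 0.8 * yf)
}

footprintsMatch(c(0, 10), c(0, 5), c(0, 9), c(0, 5))  # ranges within 20%: TRUE
footprintsMatch(c(0, 10), c(0, 5), c(0, 5), c(0, 5))  # x-range off by 50%: FALSE
```

This cheap bounding-box check is what lets the expensive point-by-point comparison be skipped for most pairs.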

In order to compare trips with Euclidean distance, their vectors must be the same length. The forum post that inspired this approach imputed extra values by repeating values from the shorter of the two trips. Initially, that was my approach as well; however, it was faster (by about 10%) and more conservative (no imputed values) to truncate the longer of the two vectors.

Finally, the Euclidean distance was calculated between the focusTrim and compareTrim y-values. I had to apply a normalization in order for the Euclidean distance to be an accurate similarity metric for trips of all sizes. Without such a normalization, trips with larger y-values would have greater Euclidean distances, even if they were just as good a match. I chose to normalize by the mean of the largest y-values from the two trips.

 # Trim trips to the length of the shorter one
 trimLength  <- min(nrow(compare), nrow(focus))
 focusTrim   <- focus[1:trimLength, ]
 compareTrim <- compare[1:trimLength, ]

 # Calculate the normalized Euclidean distance and accumulate the result
 normFactor <- mean(c(max(focusTrim$foc.y), max(compareTrim$cmp.y)))
 out <- rbind(out, data.frame(driver = driverID, tripA = i, tripB = k,
              eucDist = sqrt(sum((focusTrim$foc.y - compareTrim$cmp.y)^2)) / normFactor))
 } # end if block
 } # end comparison loop
 } # end focus loop
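Pulling the trim and normalization steps together, the distance calculation can be sketched as a single function (tripDistance is a name I'm introducing here for illustration):

```r
# Normalized Euclidean distance between two trips' rotated y-values,
# truncating the longer trip to the length of the shorter one.
tripDistance <- function(foc.y, cmp.y) {
  trimLength <- min(length(foc.y), length(cmp.y))  # truncate the longer trip
  a <- foc.y[1:trimLength]
  b <- cmp.y[1:trimLength]
  normFactor <- mean(c(max(a), max(b)))            # mean of the two largest y-values
  sqrt(sum((a - b)^2)) / normFactor
}

tripDistance(c(0, 1, 2, 4), c(0, 1, 2, 4, 7))  # identical after truncation: 0
```

Dividing by the mean of the two trips' largest y-values keeps long trips and short trips on a comparable scale.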

Visualizing the result

Arranging the output (similarTrips) by eucDist, we get a “rank” of trip similarity, with the most similar trips towards the top. Keep in mind that the absolute value of eucDist doesn't mean much on its own; it serves to indicate which trips are relatively similar.

> head(arrange(similarTrips,eucDist),n=10)
 driver tripA tripB eucDist
1     1    76   182 0.6139725
2     1    46    86 0.7146952
3     1    33    68 0.8082965
4     1   139   160 0.8460641
5     1    60    68 0.9580528
6     1    31   160 1.0072537
7     1    17   170 1.0351923
8     1    68   154 1.0593948
9     1    33    91 1.0752706
10    1    43   129 1.1583642

After looking at the spaghetti plot (Fig. 1) for so long, plotting the unique trips from the tripA and tripB columns produces a gratifyingly organized verification of some success in matching trips:


Fig. 3: Unique trips from the top 10 matched pairs for Driver 1. Note this is the plot of rotated trips.

Our team now had a trip matching algorithm of our own (written in R instead of Python), and it was relatively fast at about 50 seconds per driver per CPU (about 38 CPU-hours for the whole set).

Which eucDist should we choose?

This was not an easy question to answer. We chose several drivers at random and plotted the above facets alongside the ranked list of eucDist values. Our thinking was to favor precision: it was better to miss a true positive than to introduce false positives by choosing a threshold that was too high. Eventually, we chose a threshold at which the top ~30 pairs per driver were nearly all true positives. Later we would see that it wasn't about finding pure matches.

If all of this manual verification sounds like it was mind-numbing, don't worry: the monotony of verifying a few of these top matches for several drivers was outweighed by the sheer joy of finding order amidst the chaos.

Trip matches as features

Now that we could get a rough idea of which trips were similar, we needed to somehow encode these trips for our gradient boosting machine (GBM) model. We kicked around a lot of ideas:

Maybe any trip that matches another trip is the primary owner.

This turned out to produce bad results. Intuitively, we should have realized that even non-owners of the vehicles might take the same routes from time to time.

Maybe trips that have a match should “round” our initial prediction to 0 or 1.

Again, this reduced our performance on the public leaderboard. Although it would have been so nice to apply such a manual post-processing boost to our predictions!

Maybe trips should receive a score based on their similarity at different eucDist thresholds.

This was what ultimately produced an improvement in our public leaderboard score. I chose 5 thresholds, ranging from very conservative to very lax, and assigned scores to them such that trips that matched at conservative thresholds (and, as a result, at the laxer ones too) received a higher score than those that matched only at the laxest thresholds.
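The scoring idea can be sketched as follows; the five threshold values and the helper name matchScore are illustrative assumptions, not the values we actually used:

```r
# Five eucDist thresholds, ordered from very conservative to very lax.
thresholds <- c(1.0, 1.5, 2.0, 2.5, 3.0)

# A trip's score is the number of thresholds its best match falls under;
# a match at a conservative threshold also passes every laxer one,
# so tighter matches automatically earn higher scores.
matchScore <- function(bestEucDist, thresholds) {
  sum(bestEucDist < thresholds)
}

matchScore(0.8, thresholds)  # matches all 5 thresholds: score 5
matchScore(2.2, thresholds)  # matches only the two laxest: score 2
```

A graded score like this gives the GBM a soft signal about match quality instead of a brittle yes/no flag.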

A Happy Ending

We had a blast in our first-ever Kaggle competition, finally able to edge our way into the top 10%. More importantly, we learned a TON about team organization and workflow, and this was my first project working in an Agile/Sprint development cycle. Admittedly, the trip matching algorithm was successful weeks before its results could be turned into useful predictors. This is where having a great team helped. Conversations over coffee or lunch often produced new insights. I was lucky to be part of such a dedicated and hard-working team.

Further Reading

If you liked this blog post, please check out work by my teammates (links upcoming):

Julian Asano - Team Organization
Tim Schmeier - Visualization and Dynamic Time Warping
Sylvie Lardeux - Trip Attributes
Jason Liu - GBM Mechanic
