Joining Data Without A Key

Posted on Oct 31, 2017

More detail on this project (including the analysis and findings) can be found in my team's capstone project write-up.  The purpose of this post is to dig into one specific aspect of that story:  the research we did to join two tables without a proper key field between them.  Some content from that write-up is repeated here in order to tell this story.

Problem Statement

For this project we were presented with two data tables:

  1. Image Capture data for people who passed in front of a camera
  2. Beverage Dispense data reflecting volumes of different beverage choices

The Image Capture data started and ended about one day earlier than the Beverage Dispense data.  Syncing the timestamps meant matching up two completely different timelines, with events occurring on completely different scales. In an ideal world, the before and after of the sync might look like this:

Before:

[Figure: Time Sync, Before]

After:

[Figure: Time Sync, After]

In reality, the more you line up some of the date/times, the more you throw off others, and the risk of error in what you do match is high.  Further complicating the problem, a single beverage dispense event might need to be mapped to 5, 10, or even 20 or more image capture frame records in the other table.  I believe there was even one case where the number of potential matches was as high as 75 records.

[Figure: Event Mapping]
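
To make the one-to-many problem concrete, here is a minimal sketch in pandas. The table layout, column names (frame_time, dispense_time, face_size), and the 30-second window are illustrative assumptions, not the project's actual schema; the point is simply that, with no key, a dispense event can only be linked to frames by time proximity, and several frames qualify.

import pandas as pd

frames = pd.DataFrame({
    "frame_time": pd.to_datetime([
        "2017-06-01 10:00:01", "2017-06-01 10:00:03",
        "2017-06-01 10:00:05", "2017-06-01 10:09:45",
    ]),
    "face_size": [120, 135, 150, 90],
})

dispenses = pd.DataFrame({
    "dispense_time": pd.to_datetime(["2017-06-01 10:00:04"]),
    "volume_ml": [350],
})

window = pd.Timedelta(seconds=30)

# Every frame that falls inside the window around a dispense event is a
# candidate match; there is no key, only time proximity.
for _, event in dispenses.iterrows():
    mask = (frames["frame_time"] - event["dispense_time"]).abs() <= window
    candidates = frames[mask]
    print(event["dispense_time"], "->", len(candidates), "candidate frames")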

Solving The Problem

Our team explored many different approaches to both understanding and solving these issues.  These included:

  1. Brute force: common-sense guessing based on a spot-check analysis of the time fields, then attempting to join different combinations of time fields with different time shifts.
  2. Code that shifted the date/time stamps in a loop, searching for the best sync (sketched below).
  3. Code that performed the join based on whether a date/time in the beverage table fell within the min/max date/time interval of records in the Image Capture table.  This assumed a “best date/time” for each beverage event could be found, so the join could be performed without an exact one-to-one date/time match.
  4. Gradient descent, tested with both a probability-maximization formula and an error-minimization formula.
  5. The final approach, described below, that was ultimately used.
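
For illustration, here is a rough sketch of what approach 2 might look like: slide the beverage timestamps through a range of candidate offsets and keep the shift with the lowest total distance from each dispense event to its nearest image frame. The function names, the one-minute step, and the 36-hour search range are assumptions made for this sketch, not the code we actually ran.

import numpy as np

def total_nearest_frame_error(dispense_times, frame_times, offset_seconds):
    """Sum of seconds from each shifted dispense event to its nearest frame."""
    shifted = dispense_times + offset_seconds
    return float(sum(np.min(np.abs(frame_times - t)) for t in shifted))

def best_offset(dispense_times, frame_times, search_hours=36, step_seconds=60):
    """Brute-force search over candidate offsets in coarse one-minute steps."""
    candidates = np.arange(-search_hours * 3600, search_hours * 3600 + 1, step_seconds)
    errors = [total_nearest_frame_error(dispense_times, frame_times, off)
              for off in candidates]
    return int(candidates[int(np.argmin(errors))])

# Toy usage with epoch-second arrays; the real inputs would be the timestamp
# columns of the two tables converted to seconds.
frame_times = np.array([10.0, 40.0, 95.0, 400.0])
dispense_times = np.array([130.0, 215.0, 520.0])
print(best_offset(dispense_times, frame_times, search_hours=1, step_seconds=10))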

The gradient descent approach initially seemed the most promising, but analysis of the results showed that some records in the joined result set did not make sense: the time needed to dispense drinks of a given volume was not consistent with what we were seeing after joining the Image Capture data to the Beverage Dispense data.
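
For comparison, a gradient-style search over the same offset can be sketched as below. Because the nearest-frame error is not smooth, this version uses a central-difference numerical gradient; the learning rate, step size, and iteration count are assumptions, and the probability-maximization variant we also tested is not reproduced here.

import numpy as np

def nearest_frame_error(offset, dispense_times, frame_times):
    """Same error as above: total seconds from shifted events to nearest frames."""
    shifted = dispense_times + offset
    return float(sum(np.min(np.abs(frame_times - t)) for t in shifted))

def gradient_descent_offset(dispense_times, frame_times,
                            start=0.0, lr=0.5, eps=1.0, iters=500):
    """Minimize the error over a single offset parameter by gradient descent."""
    offset = start
    for _ in range(iters):
        # Central-difference estimate of d(error)/d(offset).
        grad = (nearest_frame_error(offset + eps, dispense_times, frame_times)
                - nearest_frame_error(offset - eps, dispense_times, frame_times)) / (2 * eps)
        offset -= lr * grad
    return offset

A search like this can settle into a local minimum of a bumpy error surface, which is one plausible reason a numerically “best” sync can still produce joins that fail a sanity check such as dispense time versus volume.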

This led to the team brainstorming together, and a final approach emerged:

  1. The error between potential matchup records was defined as the sum, over all beverage events, of the seconds from each event to its nearest image event. When evaluating potential records, the code sought the ones with the lowest error.
  2. To link the data, we needed a time span in which it was reasonable to assume that the people in the image captures were responsible for the dispense events. The data collection and sampling process described in the “stake out” section of our team blog post was invaluable to this step.
  3. We kept only the records within two sigma before and two sigma after the mean of the distribution of Image Capture frames near the beverage event records.  This four-sigma-wide window left only 59% of the frame capture records targeted for inclusion in the join.
  4. We then defined a “best guess” ranking formula to find the top 5 records to include in the join (a sketch follows this list).  The formula ranked candidates on these factors:
    1. Time distance from the beverage observation to the first Image Capture observation
    2. User duration in front of the camera (how long were they there?)
    3. Number of user frames in the time window (how often were they there?)
    4. Face size bonus: a number recording closeness to the camera at the time of observation; larger was assumed to be better for data capture in this context.
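
The sketch below shows roughly how steps 1, 3, and 4 fit together for a single beverage event: compute the time gap to each nearby frame, keep only frames within two sigma of the mean gap, and rank the survivors with a score built from the four factors. The column names (frame_time, duration_seconds, frame_count, face_size), the equal weighting, and the normalization are assumptions for illustration, not the team's exact formula.

import pandas as pd

def candidate_frames(frames, event_time, window=pd.Timedelta(seconds=60)):
    """Image Capture frames close enough in time to one dispense event."""
    gaps = (frames["frame_time"] - event_time).abs().dt.total_seconds()
    cands = frames.assign(gap_seconds=gaps)
    cands = cands[cands["gap_seconds"] <= window.total_seconds()]
    # Two-sigma filter: keep frames within two standard deviations of the mean gap.
    mu, sigma = cands["gap_seconds"].mean(), cands["gap_seconds"].std()
    if pd.notna(sigma) and sigma > 0:
        cands = cands[(cands["gap_seconds"] - mu).abs() <= 2 * sigma]
    return cands

def best_guess(cands, top_n=5):
    """Rank candidate frames on the four factors and return the top matches."""
    scored = cands.copy()
    # 1. Closer in time to the beverage observation is better.
    scored["time_score"] = 1.0 / (1.0 + scored["gap_seconds"])
    # 2. Longer duration in front of the camera is better.
    scored["duration_score"] = scored["duration_seconds"] / scored["duration_seconds"].max()
    # 3. More frames of that user inside the window is better.
    scored["frame_score"] = scored["frame_count"] / scored["frame_count"].max()
    # 4. Face-size bonus: a larger face implies the user was closer to the camera.
    scored["face_score"] = scored["face_size"] / scored["face_size"].max()
    score_cols = ["time_score", "duration_score", "frame_score", "face_score"]
    scored["rank_score"] = scored[score_cols].mean(axis=1)
    return scored.nlargest(top_n, "rank_score")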

Conclusion

This approach produced the most useful results for this specific data set.  The previous approaches, though not used here, could apply to other problems and other data sets.  Ultimately, analysis of the data should always drive decisions of this nature.
