Predicting a Customer's Drink of Choice From Real-Time Image Capture


Our team had the privilege of collaborating with the data science team of one of the largest beverage companies in the world. For this project, we were given data from two completely separate machines: a soda dispensing machine and an image capture machine. The idea was that the soda dispensing machine tracks which drinks users choose, while the image capture machine gathers demographic information about each user. Our task was to merge the information from both machines to make a connection between WHO (user demographic information) was ordering WHAT (patterns in dispensed drinks). The end result is a set of insights into customer preferences and behavior that can scale seamlessly.

Description of Data

As mentioned above, our data came from two unrelated machines with widely differing functions and output.

  1. Soda Dispenser Machine - Similar to the soda machines found in fast food restaurants, this machine dispenses various fountain drinks. It is special in that it keeps track of user activity, i.e. beverage chosen, time and duration of pour, quantity of pour, etc. In addition, it has an added feature that lets customers add flavor shots to their drink of choice, providing a way to experiment with brand new drink combinations that are not currently on the market.
  2. Image Capture Machine - Strategically placed above the soda dispenser, this machine captures images of users as they make their drink choices. Motion detection (proximity to the machine's camera) triggered image capture, and the machine would keep capturing images for the entire duration of the activity at a speed of one image per second. From these images the machine extracts demographic information such as age, gender, etc. It is important to mention that the demographic data the machine output was a best guess based on the images, not factual data.

Key Challenges

Before any analysis or statistical learning could be performed, two glaring challenges needed to be resolved in order to accurately merge data from both machines.

  1. The soda dispensing machine does not capture individual events. It records only dispensing data (beverage name, quantity dispensed, etc.). Therefore, from the data alone we do not know when one user is dispensing his/her drink and when a new user enters the picture.
  2. We needed to use time stamps from each machine to correctly identify the same event within each machine's dataset, but we discovered that the two machines were not operating on synced clocks; that is, if an event occurred at 12:10pm on one machine, we would not see a matching 12:10pm in the other machine's data because of the shift in time.

Approaching the Time Sync

Since the time stamps of the two machines were not synced, our team needed to figure out how to correctly recover the time of each user event. The method we chose was cross-correlation, which identifies activity patterns in two time series and essentially applies a shift to find the best fit between them. Using this method, we discovered that the maximum number of linked events occurred at a shift of 1 hour and 5 seconds; that is, if we shifted the soda dispenser time stamps by 1 hour and 5 seconds to match the image capture data, we would see the maximum overlap between the data from each machine. By finding this time shift we were one step closer to connecting user demographic information with trends in drink preferences.
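
The cross-correlation step can be sketched in a few lines of NumPy, assuming each machine's log has first been binned into per-second activity counts (the function and variable names here are illustrative, not the team's actual code):

```python
import numpy as np

def find_time_shift(dispenser_counts, camera_counts):
    """Estimate the lag (in seconds) that best aligns two per-second
    activity series, via a full cross-correlation sweep."""
    a = dispenser_counts - dispenser_counts.mean()
    b = camera_counts - camera_counts.mean()
    corr = np.correlate(a, b, mode="full")
    # In 'full' mode the lags run from -(len(b) - 1) to +(len(a) - 1);
    # the lag at the peak is the shift with the largest activity overlap.
    lags = np.arange(-(len(b) - 1), len(a))
    return lags[np.argmax(corr)]

# Toy example: camera activity equals dispenser activity delayed by 5 s
rng = np.random.default_rng(0)
base = rng.poisson(0.2, size=600)       # sparse per-second event counts
shifted = np.roll(base, 5)
print(find_time_shift(shifted, base))   # peak at a lag of 5 seconds
```

In the real analysis the recovered peak sat at 3,605 seconds (1 hour and 5 seconds), the shift then applied to the dispenser time stamps.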

Figure 1: Example of two different time series, in our case the logged activity from each machine.

Figure 2: Result of the cross-correlation algorithm. The peak in the middle of the graph shows the shift in time (seconds) with the largest overlap in activity between the two datasets.

Identifying Individual User Events

As mentioned above, the soda dispenser machine only recorded beverage data, and there was no explicit way of distinguishing individual users. For example, if 100 beverage dispense events occurred in an hour, we did not know how many users were responsible for them. One user could have spent a relatively long time trying out multiple drinks (5-10 dispense events), while another user could know exactly what they want, accounting for a single dispense event. In order to identify unique user events, we had to study the data and devise logical rules to distinguish when one user has finished dispensing a drink and another user has started using the machine. We came up with the following three rules as a guideline for separating events:

  1. Time Between Pours - Intuitively, we know it takes some time for one user to pour a drink, wrap up, and walk away from the machine. Conversely, if the same user is pouring multiple drinks, the time between pours would be much shorter. Using this logic, we concluded that a gap of more than 15 seconds between pours makes it highly likely that a new user was using the machine.
  2. Same Beverage Poured - A dead giveaway in the data was the same beverage being poured multiple times with short intervals between pours. This may have been an issue with how the Spire machine records pour data (i.e. segmenting one long pour into 4 rows of data, all with the same beverage name).
  3. Same Dispensed Quantity - Similar to repeated beverage names, many rows of data had consecutive identical dispensed quantities, often with very short intervals between recorded rows.

Using a combination of all three rules, we created a function that determines individual user events across the entire dataset. Now that we knew the time shift between the machines and had identified individual events in both the Spire and EyeQ data, we could merge the two datasets and create a connection between user and beverage choice.
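
As a rough illustration, the three rules can be combined into a single pandas function. This is a hedged sketch, not the team's actual implementation: the column names (`timestamp`, `beverage`, `quantity`) and the 30-second repeat window are assumptions; only the 15-second gap threshold comes from the analysis above.

```python
import numpy as np
import pandas as pd

def assign_user_events(df, gap_threshold=15, repeat_window=30):
    """Label each dispense row with a user-event id.

    Rule 1: a gap of more than `gap_threshold` seconds suggests a new user.
    Rules 2 & 3: an identical beverage or dispensed quantity within
    `repeat_window` seconds suggests one pour split across several rows,
    so those rows stay in the current event even past the gap threshold.
    (Column names and the repeat window are hypothetical.)
    """
    df = df.sort_values("timestamp").reset_index(drop=True)
    gap = df["timestamp"].diff().dt.total_seconds().fillna(np.inf)
    same_beverage = df["beverage"].eq(df["beverage"].shift())
    same_quantity = df["quantity"].eq(df["quantity"].shift())
    continuation = (gap <= gap_threshold) | (
        (same_beverage | same_quantity) & (gap <= repeat_window)
    )
    # Each row that is NOT a continuation starts a new event
    df["event_id"] = (~continuation).cumsum()
    return df
```

Each `event_id` then groups the dispense rows belonging to one user visit, ready to be joined against the time-shifted image capture events.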

Key Findings and Results

Out of respect for the NDA agreement with our partner team, we will not be publicly sharing the key findings from our analysis. As expected, though, piecing together demographic information with user beverage preferences produced powerful insights for the company's strategy moving forward. The team was able to find drink preferences by both gender and age group. Not only did we see which beverages were most popular within certain gender and age groups, we were able to dive deeper into more detailed criteria such as preferences for carbonation and sugar levels. Finally, one of the most valuable findings came from tracking users' use of the flavor addition feature in the soda dispenser. Users were able to choose up to 3 flavors to add to widely known fountain beverages. Essentially, this feature served as a test market for brand new beverage products, and our team provided key insights into particular flavor/beverage combinations that gained a lot of traction.

Implications For Machine Learning

The team also applied multiple machine learning algorithms to predict and/or assess user behavior. Below is a summary of each algorithm applied to our data.

  1. Clustering/PCA - Naturally, we started with a couple of unsupervised learning methods to see if any previously unseen patterns occurred in our data. Clustering initially did not work particularly well, as there was a lot of noise in the data. We therefore moved on to Principal Component Analysis (PCA) to try to boil the data down to its most important factors. We found that we could narrow the data down to 6 principal components that explain 88% of the variance. The major issue with PCA is the complexity of its results: each component is a mixture of all the original variables, and the more variables there are, the more difficult each principal component becomes to interpret.
  2. Random Forest/Gradient Boost/Logistic Regression - The team ran multiple variations of these three classification methods. The universal finding was that our data was too sparse. There were dozens of possible classes (beverage choices) with too few observations in each class. Therefore, although our models were cross-validated to 97% accuracy, the majority of the time they were simply predicting "False" for most if not all classes (i.e. that the user did not choose the drink). Even with a very simple binary classifier of diet vs. non-diet, we saw inconsistent results.
  3. Association Rules - This algorithm was intriguing for its flexibility with fewer observations. Most often used for shopping cart analysis, this model finds the strength of relationships between different user choices. We found some pronounced signals between certain drinks and flavors, but overall the strong relationships were more neatly explained through structured data analysis.
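
The component-selection step can be reproduced in outline with scikit-learn. The feature matrix below is a random placeholder standing in for the NDA-protected merged dataset, and the 0.88 target mirrors the 88% figure above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # placeholder feature matrix

# Standardize, then fit PCA on all components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA().fit(X_scaled)

# Smallest number of components whose cumulative explained
# variance reaches the target (0.88, matching the analysis above)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumulative, 0.88) + 1)
print(n_components, round(cumulative[n_components - 1], 3))
```

On the real (correlated) data this procedure landed on 6 components; random uncorrelated data like the placeholder needs far more, which is exactly why the count is informative.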

Figure 3: Principal Component Analysis - We see that each principal component is a mixture of many independent variables, which makes it rather challenging to interpret.


Figure 4: Confusion Matrices for Random Forest and Gradient Boost Models - We see that both models reached rather low prediction errors. But upon deeper inspection, we see that the majority of predictions were simply that the user would not choose a given drink.
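
The pattern in Figure 4 is the classic accuracy trap with imbalanced classes, and it is easy to reproduce with synthetic labels (the 3% positive rate here is made up for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
# Imbalanced labels: only ~3% of users chose this particular drink
y_true = (rng.random(1000) < 0.03).astype(int)
# A degenerate "model" that always predicts "did not choose"
y_pred = np.zeros_like(y_true)

acc = accuracy_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)
print(acc)   # high accuracy...
print(cm)    # ...yet zero true positives in the bottom-right cell
```

High cross-validated accuracy therefore said little about whether the models could identify which drink a user would actually choose.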

Overall, the conclusion was that the data had too few observations (too few instances of each unique drink, thus losing predictive power) and too few features to run models effectively (variable importance plots varied widely from model to model, and in some cases between iterations of the same model). A larger number of observations would likely help these machine learning models run more effectively.

Future Work

Our team strongly emphasized making our work with this particular dataset scalable. As mentioned earlier in the blog post, not only did we figure out a method for syncing two unrelated machines, we also created an algorithm to identify unique user events. This means that our partner beverage company can deploy numerous image capture/soda dispenser machine pairs across multiple locations to collect data and perform analysis. This opens the door to understanding trends in beverage choices by location in addition to gender and age.

Additionally, all of our machine learning models are ready to be deployed. Once more data is gathered, we are confident our models will have much more success in predicting customer behavior.

About Authors

Josh Yoon

Josh graduated from the University of California San Diego with a bachelor's degree in Psychology. Upon graduating, he attended Columbia University's Post Baccalaureate Program for Medicine. After realizing medicine was not his passion, he worked as a Product...

Ben Brunson

Ben Brunson is a man whose curiosity has led him to work in many industries. He handled day to day operations and special projects for AuST Development, a medical devices development company. He Managed paid search campaigns for...

Hsiang-Yuan(Joshua) Lee

Hsiang-Yuan Lee graduated from New York University with a M.S. degree in Industrial Engineering. He loves finding insights from different types of data and is open to learn new skills. Hsiang-Yuan decided to become a professional data scientist...

Tianyi Gu

Tianyi Gu is a creative thinker with strong quantitative and analytical skills. Tianyi received his MS in Urban Informatics from New York University and BS in Actuarial Science from SUNY Buffalo. With great passion in infinite possibilities in...
