Predicting Customer's Drink of Choice From Real-Time Image Capture
Our team had the privilege of collaborating with the data science team of one of the largest beverage companies in the world. For this project, we were given data from two completely separate machines: a soda dispensing machine and an image capture machine. The idea was that the soda dispensing machine tracks which drinks users choose, while the image capture machine gathers demographic information about each user. Our task was to merge the information from both machines to make a connection between WHO (user demographic information) was ordering WHAT (patterns in dispensed drinks). The end result is valuable insight into customer preferences and behavior that can be scaled seamlessly.
Description of Data
As mentioned above, our data was retrieved from two unrelated machines with widely differing functions and output.
- Soda Dispenser Machine - Similar to soda machines found in fast food restaurants, this machine dispenses various fountain drinks. It is special in that it keeps track of user activity, i.e. beverage chosen, time and duration of pour, quantity of pour, etc. In addition, these machines had an added feature allowing customers to add flavor shots to their drink of choice, providing a way to experiment with brand new drink combinations that are not currently on the market.
- Image Capture Machine - Strategically placed above the soda dispenser, this machine captures images of the user as they make their drink choice. Motion detection (proximity to the machine's camera) triggered image capture, and the machine would capture images for the entire duration of the activity at a rate of one image per second. The machine extracts useful demographic information from the images, such as age and gender. It is important to mention that the demographic data the machine outputs is a best guess based on the images, not factual data.
Before any analysis and statistical learning could be performed, there were two glaring challenges that needed to be resolved to accurately merge data from both machines.
- The soda dispensing machine does not capture individual events. It keeps track solely of dispensing data (beverage name, quantity dispensed, etc.). Therefore, from the data alone we do not know when one user is dispensing his/her drink and when a new user enters the picture.
- We needed to use timestamps from each machine to correctly identify the same event within each machine's dataset, but we discovered that the two machines were not operating under synced clocks. That is, if an event occurred at 12:10pm on one machine, we would not see a matching 12:10pm in the other machine's data due to a shift in time.
Approaching the Time Sync
Since the timestamps of the two machines were not synced, our team needed to figure out a way to correctly piece together the time of each user event. The method we chose was cross-correlation, which identifies activity patterns in two time series and applies a shift to find the best fit between the two datasets. Using this method, we discovered that the maximum number of linked events occurred at a shift of 1 hour and 5 seconds; that is, if we shifted the soda dispenser timestamps by 1 hour and 5 seconds to match the image capture data, we would see the maximum overlap between the data from the two machines. By finding the time shift, we were one step closer to making a connection between user demographic information and trends in drink preferences.
Figure 2: Result of cross-correlation algorithm. The peak in the middle of the graph shows the shift in time (seconds) that has the largest overlap in activity between the two datasets.
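The cross-correlation step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: event timestamps from each machine are binned into one-second activity vectors, and the lag that maximizes their cross-correlation is taken as the clock offset. The function name and toy timestamps are hypothetical.

```python
import numpy as np

def find_time_shift(dispenser_times, camera_times, resolution=1):
    """Return the shift (seconds) to ADD to dispenser timestamps
    so their activity best overlaps the camera's activity."""
    lo = min(min(dispenser_times), min(camera_times))
    hi = max(max(dispenser_times), max(camera_times))
    bins = np.arange(lo, hi + 2 * resolution, resolution)
    disp_activity, _ = np.histogram(dispenser_times, bins=bins)
    cam_activity, _ = np.histogram(camera_times, bins=bins)
    # Full cross-correlation; lag 0 sits at index len(disp_activity) - 1.
    corr = np.correlate(cam_activity, disp_activity, mode="full")
    lag = np.argmax(corr) - (len(disp_activity) - 1)
    return lag * resolution

# Toy example: camera events lag dispenser events by 3605 s (1 h 5 s).
disp = np.array([100, 160, 250, 400, 900])
cam = disp + 3605
print(find_time_shift(disp, cam))  # 3605
```

In practice the activity vectors would be far noisier and only partially overlapping, but the same peak-finding logic applies.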
Identifying Individual User Events
As mentioned above, the soda dispenser machine only recorded beverage data, and there was no explicit way of distinguishing individual users. For example, if 100 beverage dispense events occurred in an hour, we did not know how many users were responsible for them. One user could have spent a relatively long time trying out multiple drinks (5-10 dispense events), while another user could know exactly what they want, accounting for a single dispense event. In order to identify unique user events, we had to study the data and craft logical rules to distinguish when one user has finished dispensing a drink and another has begun using the machine. We came up with the three rules below as a guideline for separating events:
- Time Between Pours - Intuitively, we know it takes some time for one user to pour a drink, wrap up, and walk away from the machine. Conversely, if the same user is pouring multiple drinks, the time between pours would be much shorter. Using this logic, we concluded that a gap of more than 15 seconds between pours makes it highly likely that a new user was using the machine.
- Same Beverage Poured - A dead giveaway in the data was the same beverage being poured multiple times with short intervals between pours. This may have been an issue with how the Spire machine (the soda dispenser) records pour data (i.e. it segments one long pour into 4 rows of data, all with the same beverage name).
- Same Dispensed Quantity - Similar to repeated beverage names, we would see many rows of data with consecutive identical dispensed quantities, often with very short intervals between recorded rows.
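One possible way to combine the three rules is sketched below. This is an illustrative reconstruction, not the project's actual function: the 15-second threshold comes from the rule above, but the field names and the exact way the rules interact (here, a repeated beverage or quantity suppresses the new-user decision) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pour:
    ts: float       # seconds since midnight
    beverage: str
    qty: float      # ounces dispensed

def assign_user_events(pours, gap_threshold=15):
    """Assign a user-event id to each pour (pours sorted by ts)."""
    event_ids = []
    current = 0
    for i, p in enumerate(pours):
        if i > 0:
            prev = pours[i - 1]
            gap = p.ts - prev.ts
            # Rules 2-3: an identical beverage or identical quantity in
            # back-to-back rows usually means one pour was split across
            # several records, so such a row never starts a new event.
            repeated_row = (p.beverage == prev.beverage or p.qty == prev.qty)
            # Rule 1: a gap above the threshold signals a new user,
            # unless the row looks like a continuation of the last pour.
            if gap > gap_threshold and not repeated_row:
                current += 1
        event_ids.append(current)
    return event_ids

pours = [Pour(0, "Cola", 12), Pour(4, "Cola", 12), Pour(40, "Sprite", 8),
         Pour(43, "RootBeer", 8), Pour(120, "Fanta", 16)]
print(assign_user_events(pours))  # [0, 0, 1, 1, 2]
```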
Using a combination of all three rules, we created a function that determines individual user events across the entire dataset. Now that we knew the time shift between the machines and had identified individual events in both the Spire and EyeQ data, we were able to merge the two datasets and create a connection between user and beverage choice.
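The final merge step could look something like the sketch below, assuming the data lives in pandas DataFrames: apply the discovered 1 hour 5 second shift to the dispenser (Spire) timestamps, then match each dispense event to the nearest image-capture (EyeQ) event within a small tolerance window. All column names and timestamps here are hypothetical.

```python
import pandas as pd

SHIFT = pd.Timedelta(hours=1, seconds=5)  # from the cross-correlation step

spire = pd.DataFrame({
    "ts": pd.to_datetime(["2019-05-01 12:10:00", "2019-05-01 12:14:30"]),
    "beverage": ["Cola", "Lemonade"],
})
eyeq = pd.DataFrame({
    "ts": pd.to_datetime(["2019-05-01 13:10:02", "2019-05-01 13:14:33"]),
    "est_age": [34, 22],        # machine's best-guess demographics
    "est_gender": ["F", "M"],
})

# Align dispenser clocks, then join each pour to the closest image event.
spire["ts_aligned"] = spire["ts"] + SHIFT
merged = pd.merge_asof(
    spire.sort_values("ts_aligned"),
    eyeq.sort_values("ts"),
    left_on="ts_aligned",
    right_on="ts",
    direction="nearest",
    tolerance=pd.Timedelta(seconds=10),
)
print(merged[["beverage", "est_age", "est_gender"]])
```

`merge_asof` with a tolerance leaves unmatched pours with missing demographics rather than forcing a bad join, which matters when the camera missed an event.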
Key Findings and Results
Out of respect for the NDA agreement with our partner team, we will not be publicly sharing the key findings from our analysis. As expected, though, piecing together demographic information with user beverage preferences yielded powerful insights for the company's strategy moving forward. The team was able to find drink preferences by both gender and age group. Not only did we see which beverages were most popular in certain gender and age groups, we were able to dive deeper into more detailed criteria such as preferences for carbonation and sugar levels. Finally, one of the most valuable findings was tracking users' use of the flavor addition feature in the soda dispenser. Users were able to choose up to 3 flavors to add to widely known fountain beverages. Essentially, this feature served as a test market for brand new beverage products, and our team provided key insights into particular flavor/beverage combinations that gained a lot of traction.
Implications For Machine Learning
The team also attempted multiple machine learning algorithms in order to predict and/or assess user behavior. Below is a summary of the different algorithms applied to our data.
- Clustering/PCA - Naturally, we started with a couple of unsupervised learning methods to see if any previously unseen patterns emerged in our data. Clustering initially did not work particularly well, as there was a lot of noise in the data, so we moved on to Principal Component Analysis (PCA) to try to boil the data down to its most important factors. We found that we could narrow the data down to 6 principal components that explain 88% of the variance. The major issue with PCA is the complexity of its results: each component is a mixture of all the original variables, and the more variables there are, the more difficult each principal component becomes to interpret.
- Random Forest/Gradient Boost/Logistic Regression - The team ran multiple variations of these three classification methods. The universal finding was that our data was too sparse: there were dozens of possible classes (beverage choices) with too few observations within each class. Therefore, although our models were cross-validated to achieve 97% accuracy, the majority of the time they were simply predicting "False" for most if not all classes (i.e. the user did not choose the drink). Even with a very simple binary classifier of diet vs. non-diet, we saw inconsistent results.
- Association Rules - This particular algorithm was intriguing for its flexibility when working with fewer observations. Most often used for shopping cart analysis, this model finds the strength of relationships between different user choices. We found some pronounced signals between certain drinks and flavors, but overall, strong relationships were more neatly explained through structured data analysis.
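The core quantities behind association rules (support, confidence, lift) are simple enough to compute directly, as the toy sketch below shows for hypothetical (beverage, flavor) baskets. The basket contents and the `rule_stats` helper are illustrative assumptions, not the project's data or code.

```python
from collections import Counter
from itertools import combinations

# Hypothetical baskets: each is one user event's drink plus flavor shot.
baskets = [
    {"Cola", "Vanilla"}, {"Cola", "Vanilla"}, {"Cola", "Cherry"},
    {"Lemonade", "Raspberry"}, {"Cola", "Vanilla"}, {"Lemonade", "Cherry"},
]
n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets
                      for p in combinations(sorted(b), 2))

def rule_stats(antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    support = pair_counts[frozenset((antecedent, consequent))] / n
    confidence = support / (item_counts[antecedent] / n)
    lift = confidence / (item_counts[consequent] / n)
    return support, confidence, lift

s, c, l = rule_stats("Cola", "Vanilla")
print(f"support={s:.2f} confidence={c:.2f} lift={l:.2f}")
```

A lift above 1 indicates the flavor is chosen with that beverage more often than chance, which is the kind of signal the flavor/beverage analysis was looking for.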
Figure 3: Principal Component Analysis - We see that each principal component is a mixture of many independent variables, which makes it rather challenging to interpret.
Figure 4: Confusion Matrices for Random Forest and Gradient Boost Models - We see that both models reached rather low prediction errors, but upon deeper inspection, the majority of predictions were simply that the user would not choose certain drinks.
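The accuracy trap behind Figure 4 can be demonstrated with synthetic numbers (the counts below are made up, not the project's data): with 40 one-hot beverage classes, a degenerate model that always predicts "not chosen" still scores about 97% accuracy while predicting nothing useful.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_beverages = 1000, 40

# Each user event selects exactly one of 40 beverages (one-hot labels).
choices = rng.integers(0, n_beverages, size=n_events)
labels = np.zeros((n_events, n_beverages), dtype=bool)
labels[np.arange(n_events), choices] = True

# Degenerate classifier: predict "not chosen" for every drink.
always_false = np.zeros_like(labels)
accuracy = (always_false == labels).mean()
print(f"accuracy = {accuracy:.3f}")  # 0.975, since 39/40 labels are False
```

This is why the cross-validated 97% figure was misleading and per-class metrics (as in the confusion matrices) told the real story.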
Overall, the conclusion was that the data had too few observations (too few instances of each unique drink, thus losing predictive power) and too few features to run models effectively (variable importance plots varied widely from model to model, and in some cases between iterations of the same model). A larger number of observations would likely help the machine learning models run more effectively.
Our team strongly emphasized making our work with this particular dataset scalable. As mentioned earlier in the blog post, not only did we figure out a method of syncing two unrelated machines, we also created an algorithm to identify unique user events. This means that our partner beverage company can deploy numerous image capture/soda dispenser machine pairs across multiple locations to collect data and perform the same analysis. This opens the door to understanding trends in beverage choice by location in addition to gender and age.
Additionally, all of our machine learning models are ready to be deployed. Once more data is gathered, we are confident our models will have much more success in predicting customer behavior.