Data Analysis on Attention Monitoring
The skills I demoed here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
LinkedIn | GitHub | Email | Notebook | Webapp
The Idea
Hate to break it to you, but you cannot trust people. Initially, there were no cabin-facing cameras in Tesla vehicles. The thinking was that an agreement accepted before using Autopilot, plus detecting whether adequate pressure was applied to the steering wheel, was sufficient to ensure that the driver would pay attention. Even when cameras were added, starting with the Model 3, Tesla stated that they were not intended for driver monitoring. Elon Musk, CEO and Product Architect of Tesla, said they were implemented to prevent people from vandalizing the cars once they were eventually used as robotaxis.
However, George Hotz, President of Comma.ai, was convinced that Musk would ultimately have to add driver-monitoring cameras to Tesla's vehicles. Based on his own data, he predicted that people would misuse Autopilot as Tesla's driver-assistance features advanced and grew in popularity, and he turned out to be correct.
"Do I still need to pay attention while using Autopilot?
Yes. Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous.
Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your car.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
You can override any of Autopilot’s features at any time by steering, applying the brakes, or using the cruise control stalk to deactivate."
https://www.tesla.com/support/autopilot
The many reports of people sleeping, reading, and engaging in other activities that showcased their utter disregard for the Autopilot agreement were proof enough that more obtrusive mechanisms had to be implemented to get drivers to comply. Beginning in early 2020, Volvo (Pilot Assist) also started adding driver-monitoring cameras to all of its vehicles to combat distracted driving, joining the likes of Comma.ai's Openpilot, GM's Super Cruise, Ford's Co-Pilot360, and a myriad of other driver-assistance technologies that incorporate driver monitoring.
The Implementation
Building a platform around a neural network that uses a human-facing camera to determine whether a person is attentive seemed straightforward and scalable. Swap a camera mounted on a rearview mirror for a standard laptop camera, and you have my project, essentially.
The Steps
Take a look at my notebook to see how the steps below played out.
- Understand Neural Networks ✓
- Outline Project ✓
- Collect and Prepare Data ✓
- Build, Test, Iterate ✓
- Final Results ✓
1. Understand Neural Networks
I chose a convolutional neural network (CNN) because it is the standard architecture for image classification. CNNs are preferred over traditional fully-connected networks because weight sharing greatly reduces the number of parameters, they tolerate small shifts of objects within the image, and they take advantage of the spatial correlation in images, where nearby pixels tend to be related. I went with a pre-trained CNN, a process called transfer learning, because it would yield much better results than training a CNN from scratch given my limited dataset.
ResNet-152 was chosen due to its better performance compared to other pre-trained architectures offered by the Keras module in TensorFlow. Its depth is a big contributor to its effectiveness; the “152” stands for 152 layers. ResNet-152 is trained on a subset of the ImageNet dataset consisting of 1.2 million images with 1000 categories. The main features of deep residual networks are the shortcuts between layers which prevent vanishing gradients and mitigate accuracy saturation.
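To make that concrete, here is a minimal transfer-learning sketch (my illustration, not the exact project code) of loading the ImageNet-pretrained ResNet-152 base in Keras and freezing it; the 224x224 input size is an assumption.

```python
# A minimal transfer-learning sketch, assuming a 224x224 input size (not the exact project code):
# load ResNet-152 pre-trained on ImageNet and freeze it so it acts as a fixed feature extractor.
from tensorflow.keras.applications import ResNet152

base_model = ResNet152(weights="imagenet",      # weights learned on the 1.2M-image ImageNet subset
                       include_top=False,       # drop the 1000-class ImageNet classifier
                       input_shape=(224, 224, 3))
base_model.trainable = False                    # keep the pre-trained weights fixed during training
```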
2. Outline
- Train a pre-trained CNN to classify images.
- Use CNN to classify video clips.
- Use CNN to classify live video.
- Learn how to train the model on AWS (TBD).
- Deploy a web application (TBD).
3. The Data
Ideally, I should have had a bunch of interns helping me with this, and I am only half joking. It was the most tedious, hands-on portion of my project (the longest portion overall was training the model). I was unable to find a single dataset that met my requirements with regard to the subject's body position and diversity, accessibility, and data size, so I spent a great deal of time compiling and labeling the data.
By the end, I had 10427 photos. I divided them manually into a 70-30 train-evaluation split, then further divided the remaining 30% into a 15-15 validation-test split. This had to be done manually so that the photos from the various data sources were more-or-less evenly distributed across the splits.
Here is how I partitioned the data:
Train: 7302 images belonging to 2 classes.
Validate: 1561 images belonging to 2 classes.
Test: 1564 images belonging to 2 classes.
The training data was augmented to create a more diverse data set and make the model more generalizable. The first set of parameters I used achieved that goal but resulted in unrealistic training data:
So I tweaked the parameters when I retrained the model:
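As a rough sketch of what that augmentation pipeline might look like with Keras' ImageDataGenerator (the parameter values and directory path below are illustrative assumptions, not my final settings):

```python
# A hedged sketch of the augmentation setup with Keras' ImageDataGenerator.
# The parameter values and directory names are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.resnet import preprocess_input

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=10,        # small rotations keep the faces realistic
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,     # a mirrored face is still a plausible driver
)

train_generator = train_datagen.flow_from_directory(
    "data/train",             # hypothetical path; subfolders = classes (attentive / not_attentive)
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
)
```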
4. Build, Test, Iterate
Initially, I trained a ResNet-152 model with frozen base layers and the unfrozen fully-connected layers I constructed. The base layers were frozen to prevent their weights from being updated during training.
As mentioned, I tweaked the augmentation parameters to get more realistic images, and I unfroze the last convolution block so that it could be trained along with the head I constructed.
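The sketch below shows roughly how both configurations could be expressed in Keras; the head size and learning rates are assumptions, not the exact values I used.

```python
# A sketch of both configurations described above (illustrative values, not the exact project code).
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import ResNet152

base_model = ResNet152(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # first model: the pre-trained base stays frozen

# Fully-connected head for the binary attentive / not-attentive task (head size is an assumption)
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dense(256, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs=base_model.input, outputs=output)
model.compile(optimizer=optimizers.Adam(1e-3), loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_generator, validation_data=val_generator, epochs=...)  # first training run

# Second model: unfreeze the last convolution block (conv5 in Keras' ResNet layer naming),
# recompile with a smaller learning rate, and train again.
for layer in base_model.layers:
    if layer.name.startswith("conv5"):
        layer.trainable = True
model.compile(optimizer=optimizers.Adam(1e-5), loss="binary_crossentropy", metrics=["accuracy"])
```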
1st model training results:
loss: 0.4209 - accuracy: 0.7615
2nd model training results:
loss: 0.3175 - accuracy: 0.8561
Although the second model reported a better loss and accuracy, the graphs show that it overfits to a greater degree. Tweaking the augmentation parameters and applying techniques such as regularization could mitigate that overfitting.
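For example, one hedged way to rein in the overfitting would be to add dropout and L2 weight decay to the head; the layers and values below are suggestions, not what was actually trained.

```python
# A possible, more regularized head: dropout plus L2 weight decay (values are suggestions only).
from tensorflow.keras import layers, regularizers
from tensorflow.keras.applications import ResNet152

base_model = ResNet152(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dropout(0.5)(x)                                      # randomly drop activations while training
x = layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(x)   # penalize large weights
output = layers.Dense(1, activation="sigmoid")(x)
```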
5. Data Results
I utilized the OpenCV computer vision library for the video classification portion of my project. To my surprise and delight, there was no substantial difference in code between classifying video clips and classifying live video from my laptop’s webcam.
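Here is a simplified sketch of that OpenCV loop; the model path, overlay text, and variable names are my own illustration. Switching between a recorded clip and live video is just a matter of what is passed to VideoCapture.

```python
# A simplified sketch of the OpenCV classification loop (paths and overlay text are illustrative).
import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.resnet import preprocess_input

model = load_model("attention_model.h5")   # hypothetical path to the trained model
cap = cv2.VideoCapture(0)                  # 0 = webcam; pass "clip.mp4" instead to classify a video file

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Prepare the frame the same way the training images were prepared
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    batch = preprocess_input(np.expand_dims(rgb.astype("float32"), axis=0))
    pred = float(model.predict(batch, verbose=0)[0][0])

    label = "attentive" if pred < 0.50 else "not attentive"   # initial 0.5 cutoff; tweaked later on
    cv2.putText(frame, f"{label} ({pred * 100:.1f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Attention Monitoring", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```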
As I tested the model I noticed a few things:
- It was better at predicting when I was not paying attention versus when I was paying attention.
- The further away I was from the camera, the less accurate it was at detecting if I was paying attention.
- It was better at detecting attentiveness based on head movement and the presence of obstacles versus eye movement.
I mitigated the first two issues by adjusting the threshold at which the model classifies an image as attentive or not attentive. Instead of labeling an image as attentive when the prediction was below .50 and not attentive when it was .50 or greater, I labeled images as attentive whenever the prediction was below .99.
Keep in mind that the subdirectories in which the data is stored are automatically assigned class labels in alphabetical order; the directory containing the attentive photos was labeled zero, while the directory containing the not attentive photos was labeled one. That means a prediction value between zero and one corresponds to how confident the model is that the image deserves a zero label or a one label, which is why a prediction of .945 would "normally" be classified as closer to one.
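In code, the threshold change amounts to something like the following sketch (variable names are mine):

```python
# The threshold change in code (a sketch; variable names are mine).
# flow_from_directory assigns class indices alphabetically, so attentive -> 0 and not attentive -> 1:
# a low prediction means "attentive", a high prediction means "not attentive".
pred = 0.945  # example prediction value, as discussed above

# Before: symmetric cutoff at .50, so .945 would be labeled "not attentive"
label_before = "attentive" if pred < 0.50 else "not attentive"

# After: only call the image "not attentive" when the model is very confident (prediction >= .99)
label_after = "attentive" if pred < 0.99 else "not attentive"
print(label_before, label_after)  # not attentive, attentive
```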
Note: In the images below I multiplied the prediction values by 100.
Before Change:
After Change:
To test the generalizability of the model, I let my hair down since there were no images in the dataset with my hair type, and put my glasses on since there were few images with people wearing glasses. Neither action seemed to impair the results.
The last issue can be mitigated by better labeling of the data set and better quality data. However, I will NOT be going through those images again, so perhaps next time I will utilize those interns...
Here is an example of the "finished" product:
Conclusion
The awesome thing about my project is that it has several potential real-world applications. Perhaps drivers could install a similar feature themselves using aftermarket cabin-facing cameras. Employers could use attention-monitoring software to increase workplace productivity; I doubt I would work for such an employer, but that is always an option. Video chat platforms could integrate it into their software. Students could use it to reduce their distractions. There are many possibilities.
Attention monitoring aside, autonomous driving is one of the problems I most look forward to seeing solved this decade--better yet, I would love to contribute to the solution myself. And despite the different approaches regarding the types and combinations of inputs (camera, radar, lidar, GPS), they all utilize neural networks, so machine learning models like the one I developed will increasingly have immense real-world impact.
All in all, this was fun (aside from the data collection and preparation). Time constraints kept me from completing the last two steps of my project outline (training on AWS and deploying a web application). Perhaps I will return to them in the future--if so, I will include them below. If not, next stop: solve autonomous driving.
Updates
(07/23/21) I created a webapp demonstrating my model using Streamlit: https://share.streamlit.io/tyronewilkinson/attentionmonitoring/webapp/webapp.py
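For reference, a minimal Streamlit app along these lines might look like the sketch below; the model path and layout are assumptions, and the actual webapp.py in the repo differs.

```python
# A minimal Streamlit demo sketch (illustrative; not the repo's webapp.py).
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.resnet import preprocess_input

st.title("Attention Monitoring Demo")
model = load_model("attention_model.h5")  # hypothetical saved-model path

uploaded = st.file_uploader("Upload a photo", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Input")
    batch = preprocess_input(np.expand_dims(np.array(image, dtype="float32"), axis=0))
    pred = float(model.predict(batch)[0][0])
    st.write("Not attentive" if pred >= 0.99 else "Attentive", f"(prediction: {pred:.3f})")
```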
(08/23/21) Successfully trained the model on AWS. Check out my notebook on GitHub explaining the process: Attention Monitoring on AWS
(08/26/21) Started to convert code to PyTorch. Check out what I have completed so far: Attention Monitoring with PyTorch
(10/18/21) Completed code conversion to PyTorch: Attention Monitoring with PyTorch
References
Data
https://fei.edu.br/~cet/facedatabase.html
A Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil.
http://www.anefian.com/research/face_reco.htm
Georgia Tech face database contains images taken between 06/01/99 and 11/15/99 at the Center for Signal and Image Processing at Georgia Institute of Technology.
https://generated.photos/
An AI-based face generator built on a proprietary dataset of tens of thousands of images of people taken in studio.
https://www.google.com/imghp
Google Images
Helpful Resources
NYC Data Science Academy
StatQuest!!!
https://www.pyimagesearch.com/
https://machinelearningmastery.com/
https://towardsdatascience.com/
Applications Used
Image Downloader: Batch Image Download Browser Extension
IrfanView: Batch Image Convert and Rename
Git LFS: Store Full Dataset on GitHub