Attention Monitoring

Tyrone Wilkinson
Posted on Jul 17, 2021

LinkedIn | GitHub | Email | Notebook | Webapp 

The Idea

Hate to break it to you, but you cannot trust people. Initially, there were no cabin-facing cameras in Tesla vehicles. Tesla assumed that an agreement shown before using Autopilot, combined with detecting whether adequate torque was applied to the steering wheel, would be enough to ensure that the driver paid attention. Even when cameras were added, starting with the Model 3, they were not intended to monitor the driver. Elon Musk, CEO and Product Architect of Tesla, stated they were implemented to prevent people from vandalizing cars when they were used as robotaxis. However, George Hotz, President of Comma.ai, was convinced that Musk would ultimately have to add driver-monitoring cameras to Tesla's vehicles. He predicted that people would misuse Autopilot as Tesla's driver-assistance features advanced and grew in popularity, and he turned out to be correct.

"Do I still need to pay attention while using Autopilot?

Yes. Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous.

Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your car.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.

You can override any of Autopilot’s features at any time by steering, applying the brakes, or using the cruise control stalk to deactivate."

https://www.tesla.com/support/autopilot

The many reports of people sleeping, reading, and engaging in other activities that showcased their utter disregard for the Autopilot agreement were proof enough that more obtrusive mechanisms had to be implemented to garner greater compliance among drivers. Beginning in early 2020, Volvo (Pilot Assist) added driver-monitoring cameras to all of its vehicles to combat distracted driving. It joined the likes of Comma.ai's Openpilot, GM's Super Cruise, Ford's Co-Pilot360, and a myriad of other companies whose driver-assistance technologies incorporate driver monitoring.

 

The Implementation

Building a platform on a neural network that utilized a human-facing camera to determine whether or not the person was attentive seemed straightforward and scalable. Swap a camera mounted on a rearview mirror for a standard laptop camera, and you have my project, essentially.

 

The Steps

Take a look at my notebook to see how the steps below played out.

  1. Understand Neural Networks ✓
  2. Outline Project
  3. Collect and Prepare Data
  4. Build, Test, Iterate ✓ 
  5. Final Results

 

1. Understand

I chose a convolutional neural network (CNN) because CNNs are commonly used in image classification. They are preferred over traditional neural networks because they reduce the number of input nodes, tolerate pixel shifts in the image, and take advantage of the correlation observed in complex images, where similarly colored pixels tend to be close together. I went with a pre-trained CNN as it would yield much better results than training a CNN from scratch, given my limited dataset. This process is called transfer learning.

ResNet-152 was chosen due to its better performance compared to other pre-trained architectures offered by the Keras module in TensorFlow. Its depth is a big contributor to its effectiveness; the “152” stands for 152 layers. ResNet-152 is trained on a subset of the ImageNet dataset consisting of 1.2 million images with 1000 categories.
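In Keras, this transfer-learning setup looks roughly like the sketch below. The classification head (the Dense/Dropout sizes) and the 224x224 input size are illustrative assumptions, not my exact configuration:

```python
import tensorflow as tf

def build_model(weights="imagenet", input_shape=(224, 224, 3)):
    # Pre-trained ResNet-152 base with its ImageNet classification head removed
    base = tf.keras.applications.ResNet152(
        include_top=False, weights=weights,
        input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze the base so its weights are not updated

    # Fully-connected head for the binary attentive / not-attentive task
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The sigmoid output gives a single value between zero and one, which is all a two-class problem needs.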

 

2. Outline

  1. Train a pre-trained CNN (top layer) to classify images.

  2. Use CNN to classify video clips.
  3. Use CNN to classify live video.
  4. Learn how to train the model on AWS (TBD).
  5. Deploy a web application (TBD).

 

3. The Data

Ideally, I should have had a bunch of interns helping me with this, and I am only half joking. It was the most tedious, hands-on portion of my project (the longest portion was training the model). I was unable to find a single dataset that met my requirements with regard to the subject's body position and diversity, accessibility, and data size. So I spent a great deal of time compiling and labeling the data. By the end, I had 10,427 photos. I divided them manually into a 70-30 train-evaluation split, then further divided the 30 into a 15-15 validation-test split. This had to be done manually so that the photos from the various data sources were more-or-less evenly distributed.

Here is how I partitioned the data:
Train: 7,302 images belonging to 2 classes.
Validate: 1,561 images belonging to 2 classes.
Test: 1,564 images belonging to 2 classes.
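The split can be sketched as a small helper applied to each data source separately, so every source contributes proportionally to all three sets (the 70/15/15 fractions come from the counts above; the function name and seed are my own):

```python
import random

def split_files(files, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle one source's files and cut them into train/validation/test."""
    rng = random.Random(seed)
    shuffled = files[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Running this per source directory (rather than on the pooled dataset)
# keeps the sources more-or-less evenly distributed across all three sets.
```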

The training data was augmented to create a more diverse data set and make the model more generalizable. The first set of parameters I used achieved that goal but resulted in unrealistic training data:

So I tweaked the parameters when I retrained the model:
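In Keras, this kind of augmentation is typically configured through `ImageDataGenerator`. The parameter values below are illustrative placeholders, not the settings I actually converged on:

```python
from tensorflow.keras.applications.resnet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Modest ranges keep the augmented faces realistic; overly aggressive
# shifts and zooms were what produced the unrealistic training images.
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # match ResNet's expected input
    rotation_range=10,        # small head/camera tilts
    width_shift_range=0.1,    # slight horizontal framing changes
    height_shift_range=0.1,   # slight vertical framing changes
    zoom_range=0.1,           # subject closer to or further from the camera
    horizontal_flip=True)     # mirrored faces are still plausible faces
```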

 

4. Build, Test, Iterate

Initially, I trained a ResNet-152 model with frozen base layers and an unfrozen fully-connected head that I constructed. The base layers were frozen to prevent their weights from being updated during training.

As mentioned, I tweaked the augmentation parameters to get more realistic images and unfroze the last convolution block so that it could be trained along with the head I constructed.
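Unfreezing only the last convolution block relies on Keras's layer naming, where the ResNet stages are prefixed conv1 through conv5. A minimal sketch (the helper name is mine):

```python
import tensorflow as tf

def unfreeze_last_block(base):
    # Keras names ResNet-152's stages conv1..conv5; make only the final
    # stage trainable and keep every earlier layer frozen.  (In practice
    # you would also recompile with a lower learning rate afterwards.)
    base.trainable = True
    for layer in base.layers:
        layer.trainable = layer.name.startswith("conv5")
    return base
```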

1st model training results:
loss: 0.4209 - accuracy: 0.7615

2nd model training results:
loss: 0.3175 - accuracy: 0.8561

Although the second model reported a better loss and accuracy, the graphs show that it has a greater degree of overfitting. Tweaking the augmentation parameters and using techniques such as regularization could mitigate that overfitting.

 

5. Results

I utilized the OpenCV computer vision library for the video classification portion of my project. To my surprise and delight, there was no substantial difference in code between classifying video clips and classifying live video from my laptop’s webcam.

As I tested the model, I noticed a couple of things:

  • It was better at predicting when I was not paying attention versus when I was paying attention.
  • The further away I was from the camera, the less accurate it was at detecting if I was paying attention.
  • It was better at detecting attentiveness based on head movement and the presence of obstacles versus eye movement.

I mitigated the first two issues by adjusting the threshold at which the model classifies an image as attentive or not attentive. Instead of labeling an image as attentive when the prediction is below .50 and not attentive when it is .50 or greater, I labeled images with predictions below .99 as attentive. Keep in mind that the subdirectories in which the data is stored are automatically labeled in alphabetical order: the directory containing the attentive photos was labeled zero, while the directory containing the not-attentive photos was labeled one. That means a prediction value between zero and one expresses how confident the model is that the image deserves the zero label or the one label, which is why a prediction of .945 would "normally" be classified as closer to one.
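The adjusted decision rule is just a change of threshold on the model's sigmoid output (the helper name is mine):

```python
def label_prediction(pred, threshold=0.99):
    # Class 0 = "attentive", class 1 = "not attentive" (alphabetical
    # subdirectory order).  Raising the threshold from .50 to .99 makes
    # the model far more reluctant to call someone inattentive.
    return "attentive" if pred < threshold else "not attentive"
```

With the default .50 threshold a prediction of .945 would be labeled not attentive; with the .99 threshold it is labeled attentive.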
Note: In the images below I multiplied the prediction values by 100.

Before Change:

After Change:

To test the generalizability of the model, I let my hair down since there were no images in the dataset with my hair type, and put my glasses on since there were few images with people wearing glasses. Neither action seemed to impair the results.

The last issue can be mitigated by better labeling of the data set and better quality data. However, I will NOT be going through those images again, so perhaps next time I will utilize those interns.

 

Here is an example of the "finished" product:

 

Conclusion

The awesome thing about my project is that it has several potential real-world applications. Drivers could add a similar feature themselves using aftermarket cabin-facing cameras. Employers could use attention monitoring software to increase workplace productivity (I doubt I would work for such an employer, but that is always an option). Video chat platforms could build it into their software. Students could use it to reduce their distractions. There are many possibilities. Attention monitoring aside, autonomous driving is one of the problems I most look forward to seeing solved within this decade--better yet, I would love to contribute to the solution. And despite the different approaches regarding the types and combinations of inputs (camera, radar, lidar, GPS), they all utilize neural networks. So machine learning models like the one I developed will have an increasingly immense real-world impact.

All in all, this was fun (aside from the data collection and preparation). Time constraints impacted my ability to complete steps 4 and 5 of my project outline. Perhaps I will return to them in the future--if so, I will include them below. If not, next stop: solve autonomous driving.

 

Updates

(07/23/21) I created a webapp demonstrating my model using Streamlit: https://share.streamlit.io/tyronewilkinson/attentionmonitoring/webapp/webapp.py

 

References

Data

https://fei.edu.br/~cet/facedatabase.html
A Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil.

http://www.anefian.com/research/face_reco.htm
Georgia Tech face database contains images taken between 06/01/99 and 11/15/99 at the Center for Signal and Image Processing at Georgia Institute of Technology.

https://generated.photos/
An AI-based face generator built on a proprietary dataset of tens of thousands of images of people taken in studio.

https://www.google.com/imghp
Google Images

Helpful Sources

NYC Data Science Academy
StatQuest!!!
https://www.pyimagesearch.com/
https://machinelearningmastery.com/
https://towardsdatascience.com/

Applications Used

Image Downloader: Batch Image Download Browser Extension
IrfanView: Batch Image Convert and Rename
Git LFS: Store Full Dataset on GitHub

 

About Author

Tyrone Wilkinson


| Data Scientist | I love tackling interesting problems. With a degree in Computer Science and background in IT, I now leap into AI. Contact me if you want to change the world.
