Data Analysis on Attention Monitoring

Posted on Jul 17, 2021
The skills I demoed here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

LinkedIn | GitHub | Email | Notebook | Webapp 

The Idea

Hate to break it to you, but you cannot trust people. Initially, there were no cabin-facing cameras in Tesla vehicles. It was thought that an agreement shown before using Autopilot, combined with detecting whether adequate pressure was applied to the steering wheel, would be sufficient to ensure that the driver paid attention. Even when cameras were added, starting with the Model 3, Tesla stated that they were not intended for driver monitoring. Elon Musk, CEO and Product Architect of Tesla, stated they were implemented to prevent people from vandalizing cars once they were eventually used as robotaxis.

However, George Hotz, President of Comma.ai, was convinced that Musk would ultimately have to add driver-monitoring cameras to Tesla's vehicles. His prediction that people would misuse Autopilot as Tesla's driver-assistance features advanced and gained popularity turned out to be correct.

"Do I still need to pay attention while using Autopilot?

Yes. Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous.

Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your car.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.

You can override any of Autopilot’s features at any time by steering, applying the brakes, or using the cruise control stalk to deactivate."

The many reports of people sleeping, reading, and engaging in other activities that showcased their utter disregard for the Autopilot agreement were proof enough that more obtrusive mechanisms had to be implemented to garner greater compliance among drivers. Beginning in early 2020, Volvo (Pilot Assist) also began adding driver-monitoring cameras to all of their vehicles to combat distracted driving. They joined the likes of Comma.ai's Openpilot, GM's Super Cruise, Ford's Co-Pilot360, and a myriad of other companies and their driver-assistance technologies that incorporate driver monitoring.


The Implementation

Building a platform on a neural network that uses a human-facing camera to determine whether or not a person is attentive seemed straightforward and scalable. Swap a camera mounted on a rearview mirror for a standard laptop camera, and you essentially have my project.


The Steps

Take a look at my notebook to see how the steps below played out.

  1. Understand Neural Networks ✓
  2. Outline Project
  3. Collect and Prepare Data
  4. Build, Test, Iterate ✓ 
  5. Final Results


1. Understand Neural Networks

I chose a convolutional neural network (CNN) because it is commonly used in image classification. CNNs are preferred over traditional neural networks because they reduce the number of input nodes, tolerate pixel shifts in the image, and take advantage of the correlation observed in complex images, where similarly colored pixels tend to be close together. I went with a pre-trained CNN, as it would yield much better results than training a CNN from scratch given my limited dataset. This process is called transfer learning.

ResNet-152 was chosen due to its better performance compared to other pre-trained architectures offered by the Keras module in TensorFlow. Its depth is a big contributor to its effectiveness; the “152” stands for 152 layers. ResNet-152 is trained on a subset of the ImageNet dataset consisting of 1.2 million images with 1000 categories. The main features of deep residual networks are the shortcuts between layers which prevent vanishing gradients and mitigate accuracy saturation.
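A minimal Keras sketch of this transfer-learning setup (the dense-head sizes below are illustrative, not necessarily the exact architecture I used):

```python
# Minimal transfer-learning sketch in Keras; the head sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), weights="imagenet"):
    # Load ResNet-152 without its ImageNet classification head
    base = tf.keras.applications.ResNet152(
        weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained layers

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 0 = attentive, 1 = not attentive
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Calling `build_model()` downloads the ImageNet weights on first use; passing `weights=None` builds the same architecture with random weights.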


2. Outline

  1. Train a pre-trained CNN to classify images.
  2. Use CNN to classify video clips.
  3. Use CNN to classify live video.
  4. Learn how to train the model on AWS (TBD).
  5. Deploy a web application (TBD).


3. The Data

Ideally, I should have had a bunch of interns helping me with this, and I am only half joking. It was the most tedious, hands-on portion of my project (the longest portion was training the model). I was unable to find a single dataset that met my requirements with regard to the subject's body position, diversity, accessibility, and data size, so I spent a great deal of time compiling and labeling the data.

By the end, I had 10,427 photos. I manually divided them into a 70-30 train-evaluation split, then further divided that 30% into a 15-15 validation-test split. This had to be done manually so that photos from the various data sources were more-or-less evenly distributed across the partitions.

Here is how I partitioned the data:
Train: 7302 images belonging to 2 classes.
Validate: 1561 images belonging to 2 classes.
Test: 1564 images belonging to 2 classes.
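The per-source splitting can be sketched in plain Python (the file names and source labels here are hypothetical):

```python
# Sketch of a per-source 70/15/15 split so every data source is represented
# proportionally in train/validation/test. File names are hypothetical.
import random

def split_source(files, seed=42):
    files = files[:]          # don't mutate the caller's list
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(n * 0.70)
    n_val = int(n * 0.15)
    return (files[:n_train],                    # train
            files[n_train:n_train + n_val],     # validation
            files[n_train + n_val:])            # test

# Combine splits across sources so each partition mixes all sources
sources = {
    "fei": [f"fei_{i}.jpg" for i in range(100)],
    "gatech": [f"gt_{i}.jpg" for i in range(100)],
}
train, val, test = [], [], []
for files in sources.values():
    tr, va, te = split_source(files)
    train += tr; val += va; test += te
```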

The training data was augmented to create a more diverse data set and make the model more generalizable. The first set of parameters I used achieved that goal but resulted in unrealistic training data:

So I tweaked the parameters when I retrained the model:
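Here is a sketch of what milder Keras augmentation parameters can look like; the specific values below are illustrative, not my exact settings:

```python
# Hedged sketch of Keras image augmentation; these parameter values are
# illustrative, not the project's actual configuration.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    rotation_range=10,        # small rotations keep frames realistic
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,     # mirroring a face is plausible...
)
```

Vertical flips are left out deliberately; an upside-down driver is not a realistic input.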


4. Build, Test, Iterate

Initially, I trained a ResNet-152 model with frozen base layers and the unfrozen fully-connected layers I constructed. The base layers were frozen to prevent their weights from being updated during training.

As mentioned, I tweaked the augmentation parameters to get more realistic images and unfroze the last convolutional block so that it could be trained along with the head I constructed.
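Unfreezing just the final convolutional block can be sketched like this, assuming Keras's layer-naming convention for ResNet (the last block's layers are named `conv5_*`):

```python
# Sketch: freeze everything in ResNet-152 except the final convolutional
# block, relying on Keras's "conv5_*" naming convention for that block.
import tensorflow as tf

base = tf.keras.applications.ResNet152(weights=None, include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = True
for layer in base.layers:
    # Only layers in the last block remain trainable
    layer.trainable = layer.name.startswith("conv5")
```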

1st model training results:
loss: 0.4209 - accuracy: 0.7615

2nd model training results:
loss: 0.3175 - accuracy: 0.8561

Although the second model reported a better loss and accuracy, one can see by the graphs that it has a greater degree of overfitting. Tweaking the augmentation parameters and using techniques such as regularization could mitigate that overfitting.
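As a sketch of that kind of regularization (these layers and values are assumptions, not a fix I actually applied), dropout and L2 weight decay could be added to the custom head:

```python
# Hypothetical regularized head: dropout plus L2 weight decay on the
# dense layer. Rates and sizes are assumptions for illustration.
from tensorflow.keras import layers, regularizers

head = [
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),  # randomly zero half the activations during training
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1, activation="sigmoid"),
]
```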


5. Data Results

I utilized the OpenCV computer vision library for the video classification portion of my project. To my surprise and delight, there was no substantial difference in code between classifying video clips and classifying live video from my laptop’s webcam.

As I tested the model I noticed a few things:

  • It was better at predicting when I was not paying attention versus when I was paying attention.
  • The further away I was from the camera, the less accurate it was at detecting if I was paying attention.
  • It was better at detecting attentiveness based on head movement and the presence of obstacles versus eye movement.

I mitigated the first two issues by shifting the decision threshold. Instead of labeling an image attentive when the prediction was below 0.50 and not attentive when it was 0.50 or above, I labeled images attentive when the prediction was below 0.99.

Keep in mind that the subdirectories in which the data is stored are automatically labeled in alphabetical order: the directory containing the attentive photos was labeled zero, while the directory containing the not-attentive photos was labeled one. The prediction value between zero and one therefore reflects how confident the model is that the image belongs to class one, which is why a prediction of 0.945 would "normally" be classified as closer to one.
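The adjusted decision rule boils down to a one-line threshold check:

```python
# Decision rule described above: class 0 is "attentive", class 1 is
# "not attentive", and the threshold is raised from 0.50 to 0.99.
def label(pred, threshold=0.99):
    # pred is the model's confidence that the image belongs to class 1
    return "attentive" if pred < threshold else "not attentive"
```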

Note: In the images below I multiplied the prediction values by 100.

Before Change:

After Change:

To test the generalizability of the model, I let my hair down since there were no images in the dataset with my hair type, and put my glasses on since there were few images with people wearing glasses. Neither action seemed to impair the results.

The last issue can be mitigated by better labeling of the data set and better quality data. However, I will NOT be going through those images again, so perhaps next time I will utilize those interns...


Here is an example of the "finished" product:



The awesome thing about my project is that it has several potential real-world applications. Perhaps drivers could install a similar feature themselves using aftermarket cabin-facing cameras. Employers could use attention-monitoring software to increase workplace productivity (I doubt I would work for such an employer, but that is always an option). Video chat platforms could implement it into their software. Students could use it to reduce their distractions. There are many possibilities.

Attention monitoring aside, autonomous driving is one of the problems I most look forward to being solved within this decade--actually, I would love to contribute to the solution even more. And despite the different approaches regarding the types and combinations of inputs (camera, radar, lidar, GPS), they all utilize neural networks. So machine learning models like the one I developed will increasingly have immense real-world impact.

All in all, this was fun (aside from the data collection and preparation). Time constraints impacted my ability to complete steps 4 and 5 of my project outline. Perhaps I will return to them in the future--if so, I will include them below. If not, next stop: solve autonomous driving.



(07/23/21) I created a webapp demonstrating my model using Streamlit:

(08/23/21) Successfully trained the model on AWS. Check out my notebook on GitHub explaining the process: Attention Monitoring on AWS

(08/26/21) Started to convert code to PyTorch. Check out what I have completed so far: Attention Monitoring with PyTorch

(10/18/21) Completed code conversion to PyTorch: Attention Monitoring with PyTorch



Data Sources

A Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil.
Georgia Tech face database contains images taken between 06/01/99 and 11/15/99 at the Center for Signal and Image Processing at Georgia Institute of Technology.
An AI-based face generator built on a proprietary dataset of tens of thousands of images of people taken in studio.
Google Images

Helpful Data Sources

NYC Data Science Academy

Applications Used

Image Downloader: Batch Image Download Browser Extension
IrfanView: Batch Image Convert and Rename
Git LFS: Store Full Dataset on GitHub


About Author

Tyrone Wilkinson

| Data Scientist | I love tackling interesting problems. With a degree in Computer Science from Columbia University and IT experience spanning over 5 years, I now leap into AI. Contact me if you want to...