TV from a Data Perspective - An Analysis of NBC's The Office

Paul Ton
Posted on Oct 16, 2017


The Office is an American comedy series that aired on NBC from 2005 to 2013. It follows the day-to-day life of a paper company in Scranton, Pennsylvania. Even though the show has an ensemble cast, four characters appear more than anyone else: Michael Scott, the regional manager; Dwight Schrute, paper salesman and self-appointed second-in-command; Jim Halpert, rival paper salesman; and Pam Beesly, receptionist.

The dataset for this project is the text of all spoken lines across the entire run of the show, scraped from the web by reddit user u/misunderstoodpoetry. It consists of ~58,000 lines, spoken over ~9,000 scenes, ~200 episodes, and 9 seasons. Each row of the data contains: season #, episode #, scene #, speaker, and the line text.


The Office was without question an overwhelming success. It was one of NBC's highest-rated series during its run, and it was nominated for 42 Emmys and 9 Golden Globes. So what made it so good? Why do people like it?

These are the IMDb viewer ratings for each episode of The Office (rated on a scale of 1 to 10). The highest spike is the series finale. Another occurs in season 7, when Michael leaves the show. There's also a well-received episode in season 5 that aired immediately after the Super Bowl. The least popular episode came in season 6, which can be explained by the fact that it was a "clip" episode, consisting primarily of scenes cut from previous episodes.

These are external explanations, and other important factors include the stellar cast and directing. But what can we do with what we have? Can we find any clues to the show's success within the script itself?

For me, this is done as a fan of the show, entirely out of curiosity, but I can imagine a writer or TV executive might be interested in the answers to that question (especially if they're contemplating a spin-off, reboot, or revival). People in the digital humanities might also find these answers useful.


The first tool we'll be using to investigate the show is Lineshare:

- the # of lines a character speaks / the # of lines spoken by all characters

Intuitively, this is a good proxy for screen-time or how often we see a character.

From the lineshare breakdown for each of the nine seasons, we can see that Michael clearly speaks the most by far (until he leaves the show in season 7). The next three most common speakers are Dwight, Jim, and Pam. Fifth place rotates for a while but eventually settles on Andy.
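The original analysis was done in R, but lineshare is simple enough to sketch in a few lines of Python. The toy rows below are illustrative stand-ins for the dataset's (season, episode, scene, speaker, text) records, not actual data:

```python
from collections import Counter

# Toy rows shaped like the dataset: (season, episode, scene, speaker, text).
lines = [
    (1, 1, 1, "Michael", "All right Jim, your quarterlies look very good."),
    (1, 1, 1, "Jim", "Oh, I told you, I couldn't close it."),
    (1, 1, 2, "Michael", "So you've come to the master for guidance?"),
    (1, 1, 2, "Dwight", "Question."),
]

def lineshare(rows):
    """Fraction of all spoken lines attributed to each speaker."""
    counts = Counter(speaker for _, _, _, speaker, _ in rows)
    total = sum(counts.values())
    return {speaker: n / total for speaker, n in counts.items()}

print(lineshare(lines))
# Michael speaks 2 of the 4 lines, so his lineshare is 0.5.
```

Computing this per season (e.g. by first grouping rows on the season field) gives the breakdown described above.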

The second tool we'll be using is Co-occurrence:

- the # of times two characters are in scenes together

This is an attempt at measuring the relationships in the show. People are social creatures, and they care about how they relate to each other. The strongest recurring relationships in The Office include Jim and Pam (romantic), Jim and Dwight (rivals), and Michael and Dwight (boss/employee but also friends). These relationships can be visualized a few different ways. If you take the characters by pairs, you can build a heatmap where the heat is their co-occurrence frequency. Alternatively, if you think of characters as nodes, you can use co-occurrence to draw edges between them and create a social graph.
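Counting co-occurrences amounts to tallying unordered character pairs per scene. A Python sketch, again on hypothetical toy data rather than the real scene table:

```python
from collections import Counter
from itertools import combinations

# Toy data: scene id -> set of characters who speak in that scene.
scenes = {
    1: {"Michael", "Jim"},
    2: {"Michael", "Dwight"},
    3: {"Jim", "Pam"},
    4: {"Jim", "Pam", "Dwight"},
}

def cooccurrence(scenes):
    """Count, for each unordered character pair, the scenes they share."""
    pairs = Counter()
    for chars in scenes.values():
        for a, b in combinations(sorted(chars), 2):
            pairs[(a, b)] += 1
    return pairs

counts = cooccurrence(scenes)
print(counts[("Jim", "Pam")])  # Jim and Pam share scenes 3 and 4 -> 2
```

These pair counts are exactly what feeds the heatmap (one cell per pair) or the social graph (one weighted edge per pair).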

The third tool is Centrality:

- The influence or importance of a character within their social graph

Below you can see the centrality for the characters during season 4. Instead of Michael dominating, as in lineshare, the top spots for centrality are more evenly divided among the central characters.
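As noted later in the post, the specific measure used is eigenvector centrality, which can be computed by power iteration on the weighted co-occurrence graph. A minimal pure-Python sketch, on a hypothetical toy graph whose edge weights stand in for shared-scene counts:

```python
# Toy weighted edges: (character, character) -> number of shared scenes.
edges = {
    ("Michael", "Dwight"): 3,
    ("Michael", "Jim"): 2,
    ("Jim", "Pam"): 4,
    ("Jim", "Dwight"): 2,
}

def eigenvector_centrality(edges, iters=100):
    """Power iteration: a node is important if its neighbors are important."""
    nodes = sorted({n for pair in edges for n in pair})
    # Symmetric weighted adjacency lookup.
    w = {}
    for (a, b), c in edges.items():
        w[(a, b)] = w[(b, a)] = c
    score = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: sum(w.get((n, m), 0) * score[m] for m in nodes) for n in nodes}
        norm = max(new.values())  # rescale so the top node scores 1.0
        score = {n: v / norm for n, v in new.items()}
    return score

c = eigenvector_centrality(edges)
print(max(c, key=c.get))  # Jim: best-connected node in this toy graph
```

In practice a graph library (igraph in the original R analysis) does this for you; the sketch just shows the mechanics.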

Episode Explorer

Using these tools, we can inspect popular or less popular episodes to see if anything stands out.

A link to an episode explorer app can be found at:


Essentially, this has been an exercise in feature engineering: we've taken the dataset and devised ways to quantify three different aspects of it. Even though these metrics are reasonable attempts grown from intuition, how accurately they describe reality, and how useful they actually are, remains to be seen and depends on the tasks they are ultimately used for.

The next step would be to fulfill the initial promise and try to correlate these measures with episode ratings. Another would be to apply these measures across different TV shows and see whether they are useful in differentiating them. Other avenues for new features include natural language processing (e.g. sentiment analysis or LDA) -- we have the text, so we ought to use it.


This analysis was done using the R statistical language. The plots and diagrams were made using the ggplot2, plotly, chordDiagram, and igraph libraries. The app was written using Shiny and Shinydashboard.

The specific measure of centrality we use is eigenvector centrality, which weights a node's importance based on its surrounding nodes. You can find out more at:


About Author

Paul Ton

Paul is a software engineer by training. He has great respect for the theoretical underpinnings of data science, while still appreciating the practical considerations required to handle real world problems. He's always happy to discuss all things data!