Data Analysis on Rotten Tomatoes
The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Introduction
The success of review-aggregation sites like Rotten Tomatoes and IMDb implies something uncontroversial: film reviews matter.
While we may like to think of ourselves as unbothered by the opinions of others, sites that collect the film criticism of strangers for the consumption of other strangers suggest the opposite. For most of us, the opinions of others do matter - a lot!
Perhaps that's why many reviews are read by visitors who have already seen the reviewed movie. Few things are as pleasant as seeing a privately formed opinion shared by others; conversely, it can be fun to disagree with the consensus view. However, the chief purpose of film criticism remains to inform the decisions of movie-goers who have not yet purchased a ticket, and it is with that in mind that I noticed something interesting going on in the small world of online film criticism.
For the most part, the critic and audience "scores" - the percentage of positive reviews a film receives from professional critics and from non-critic movie-goers, respectively - rise and fall together with the quality of the film being reviewed. Naturally, the two scores nearly always differ somewhat for any given film.
Background
However, there's a category of film for which the critics' and audiences' scores deviate significantly, and in both directions. 2019's Joker, for instance, received excellent reviews from audiences (89% positive) but poor reviews from critics (69% positive). Even more dramatically divisive was Dave Chappelle's 2019 special, which received a 99% positive score from audiences versus a 25% positive score from critics.
Going the other direction are films like Midsommar (83% critic; 63% audience), and The Last Jedi (91% critic; 44% audience).
With a little cultural context, it's easy to discern what these films have in common: with the exception of Midsommar, each has excited political or cultural controversy. The deviations in their scores seem at least partly to reflect the caution with which professional critics tend to treat culturally volatile material.
The problem is that this is a nearly unprovable assertion. Still, the insight contains something of potential value: what if, because of a complex social phenomenon, the group of people whose job it is to think about and criticize film is actually the last group from whom a typical person should want to take film advice? That seems to leave the audience score to rely on - but relying on it alone consigns us to a diet of Disney films and Friends. While we don't want to miss out on movies like Joker because of a low critic score, neither do we want to subject ourselves to something like The Last Jedi because of a high one. What to do?
Taking the percent-change difference between critic and audience scores might be one way of cutting through the noise inherent in reviewing a controversial film, and of distilling a more accurate indication of how likely a typical person is to enjoy a given movie. Using the scores quoted above, Joker's percent-change from audience score to critic score is 100 * (69 - 89) / 89, roughly -22%, while The Last Jedi's is 100 * (91 - 44) / 44, roughly +107%; a large negative value flags a film that critics rate well below audiences, and a large positive value the reverse.
Data
To do this, I needed to scrape Rotten Tomatoes for some data. Specifically, I looked for the top 100 (critic-reviewed) movies by genre, across 16 genres. With all films present, that's 1,600 movies on which to do a little Python analysis and graphing.
Technical note for those interested: for this project, I used Scrapy to collect
the fields of "title", "rating", "critic's score", "audience score", "runtime", and "box office". The percent-change was calculated with this bit of code:
def pcntchange(audience_scores, critic_scores):
    # Percent change from each film's audience score to its critic score
    changes = []
    for a, c in zip(audience_scores, critic_scores):
        changes.append(100 * (c - a) / a)
    return changes
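Called on the full lists of scraped scores, this returns one percent-change value per film. As for the scraping itself, the sketch below shows roughly what a Scrapy spider for this task could look like; the start URL, CSS selectors, and field names are placeholders rather than the exact ones used for the project.

import scrapy


class RTTopMoviesSpider(scrapy.Spider):
    # A hypothetical spider: the name, URL, and selectors below are illustrative only
    name = "rt_top_movies"
    start_urls = [
        # One top-100 listing page per genre would go here (placeholder URL)
        "https://www.rottentomatoes.com/top/bestofrt/top_100_action__adventure_movies/",
    ]

    def parse(self, response):
        # Follow each film link on the top-100 listing page (selector is an assumption)
        for href in response.css("table a::attr(href)").getall():
            yield response.follow(href, callback=self.parse_movie)

    def parse_movie(self, response):
        # Field selectors are placeholders; the real page structure may differ
        yield {
            "title": response.css("h1::text").get(),
            "rating": response.css("#rating::text").get(),
            "critic_score": response.css("#critic-score::text").get(),
            "audience_score": response.css("#audience-score::text").get(),
            "runtime": response.css("#runtime::text").get(),
            "box_office": response.css("#box-office::text").get(),
        }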
The box office data, pictured below, is given in two forms. Because scientific notation is a poor way of communicating very large numbers graphically, I performed a log-transformation to give a more understandable look to the distribution of revenue figures for these movies:
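As a rough sketch of that two-form view, assuming the spider's output has been exported to a CSV and loaded into a pandas DataFrame (the file and column names below are assumptions):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names for the scraped data
df = pd.read_csv("rt_top_movies.csv")
df["box_office"] = pd.to_numeric(df["box_office"], errors="coerce")

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Raw dollars: a long right tail makes the histogram hard to read
axes[0].hist(df["box_office"].dropna(), bins=30)
axes[0].set_title("Box office (raw dollars)")

# Log10 dollars: the same distribution on a far more readable scale
axes[1].hist(np.log10(df["box_office"].dropna() + 1), bins=30)
axes[1].set_title("Box office (log10 dollars)")

plt.tight_layout()
plt.show()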
A jointplot of audience and critic scores:
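A plot of that kind can be produced with seaborn, assuming the same DataFrame and (hypothetical) column names as above:

import seaborn as sns
import matplotlib.pyplot as plt

# "audience_score" and "critic_score" are assumed column names from the scrape
sns.jointplot(data=df, x="audience_score", y="critic_score", kind="scatter")
plt.show()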
And various self-explanatory graphs:
Finally, a look at how the percent-change from audience score to critics' varies in relation to other attributes of the films, most notably MPAA rating:
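One way to produce that comparison, reusing the pcntchange function and the DataFrame assumed above (the "rating" column name is also an assumption):

# Percent-change from audience score to critic score, per film
df["pcnt_change"] = pcntchange(df["audience_score"], df["critic_score"])

# Average percent-change grouped by MPAA rating
df.groupby("rating")["pcnt_change"].mean().sort_values().plot(kind="bar")
plt.show()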
Data Findings
Some of the findings of my analysis are in line with what we might expect; naturally, the distribution of critics' scores is much narrower and more concentrated near the top than that of the audience scores. After all, the "top 100" lists from which I scraped the data are top 100 by critic score.
Others are more surprising: the percent-change between audience and critic scores is markedly positively correlated with a film having a PG-13 rating, while the opposite relationship holds for movies with an R rating.
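That relationship can be checked directly with dummy-coded ratings, continuing the sketch above (column names remain assumptions):

# One indicator column per MPAA rating, correlated with the percent-change
rating_dummies = pd.get_dummies(df["rating"], dtype=int)
print(rating_dummies.corrwith(df["pcnt_change"]))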
There's more work to come on this idea. Eventually, I'd like to build a continually updating list of films ordered by percent-change, weighted by the number of reviews, and filterable by all the usual knobs (genre, release date, etc.). I believe this will prove to be an effective and interesting way of cutting through the problem of critic bias in film criticism.
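As a rough illustration of what that future list might look like, here is one hypothetical weighting scheme; it is a sketch, not the planned implementation, and the "review_count" and "genre" fields are assumptions since neither was part of this scrape:

import numpy as np

# Weight each film's percent-change by (log of) its review volume, then rank and filter
df["weighted_change"] = df["pcnt_change"] * np.log1p(df["review_count"])
horror = df[df["genre"] == "Horror"]  # filter by genre, one of "the usual knobs"
print(horror.sort_values("weighted_change", ascending=False)[["title", "weighted_change"]].head(10))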