Analyzing Data To Detect Bias in Music Reviews
The skills I demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Introduction
With over a quarter of a million unique readers per day, the music review website Pitchfork is one of the most influential online publications in independent music. Several well-known artists, including Sufjan Stevens and Arcade Fire, have experienced surges in popularity and sales following positive album reviews on Pitchfork. The structure of the reviews is straightforward: a numerical score between 0.0 and 10.0 is given by the author, along with a detailed written review of the album's contents. There is currently a library of over 20,000 reviews dating back to 1999 available at pitchfork.com.
With popularity and influence often comes controversy, and Pitchfork is no exception, particularly with regard to perceived gender- and genre-based biases in its content. In 2007, a review of hip-hop artist M.I.A.'s album Kala erroneously stated that noted producer Diplo had produced all of the album's tracks, when, in reality, M.I.A. had produced most of the tracks herself.
In an interview with Pitchfork, M.I.A. later said, "There is an issue especially with what male journalists write about me and say 'this must have come from a guy.'" Additionally, Pitchfork's early reviews focused disproportionately on rock music, although they now appear to feature a more balanced mixture of genres.
The goal of this project is to examine whether there is measurable evidence of bias in the proportion of reviews given to albums of different genres and to artists of different genders, as well as in review scores broken down by genre, artist gender, and reviewer gender. Diversity and inclusion are consistently at the forefront of the news, and perceived biases can damage not only a company's reputation but also its profits. As such, I hope to offer insights that can be of use both to reviewers and to members of the editorial staff at Pitchfork (or at other publications) with regard to bias in the music criticism industry.
Preliminary Data Analysis
For this project, I scraped data from over 20,000 album reviews on pitchfork.com, specifically album title, artist, genre, release year, record label, review date, reviewer name, review text, review score, and whether the album had been given one of two distinctions, "Best New Music" or "Best New Reissue." A snapshot of the data is shown below.
Pitchfork Reviews
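As a rough illustration of the collection step, the sketch below shows how such a scrape could be set up with requests and BeautifulSoup. The listing URL pattern and CSS selectors here are assumptions for illustration only, not the exact code used for this project; Pitchfork's real markup and pagination differ and change over time.

```python
# Hypothetical sketch of the scraping step; the listing URL and the CSS
# selectors are assumptions and would need to match Pitchfork's actual markup.
import requests
from bs4 import BeautifulSoup

def scrape_listing_page(page_num):
    """Parse one page of the album review listing into a list of dicts."""
    url = f"https://pitchfork.com/reviews/albums/?page={page_num}"  # assumed pagination scheme
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for card in soup.select("div.review"):  # hypothetical selector
        rows.append({
            "album": card.select_one(".review__title-album").get_text(strip=True),
            "artist": card.select_one(".review__title-artist").get_text(strip=True),
            "link": card.select_one("a.review__link")["href"],  # follow for score, text, genre, etc.
        })
    return rows
```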
Exploratory data analysis revealed Pitchfork's growth over the years, with the number of reviews published annually growing from just over 200 in 1999 to approximately 1,200 each year since 2005. The average and median review scores are 7.06 and 7.3, respectively, and the histogram of scores follows a left-skewed distribution.
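Assuming the scraped reviews are loaded into a pandas DataFrame named reviews (the name and column labels below are my assumption), the summary figures above come from a few lines of exploratory code:

```python
import pandas as pd
import matplotlib.pyplot as plt

# `reviews` is assumed to hold the scraped data with 'review_date' and 'score' columns.
reviews["year"] = pd.to_datetime(reviews["review_date"]).dt.year

print(reviews.groupby("year").size())   # number of reviews published per year
print(reviews["score"].mean())          # ~7.06
print(reviews["score"].median())        # ~7.3

reviews["score"].hist(bins=40)          # left-skewed: a long tail of low scores
plt.xlabel("Review score")
plt.ylabel("Number of reviews")
plt.show()
```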
Data Analysis of Genre Bias
I first examined the number of reviews published per year belonging to each genre. Albums are classified as one or more of nine genres: Electronic, Pop/R&B, Folk/Country, Rock, Rap, Experimental, Jazz, Metal, and Global.
In each year from 1999 until 2011, Rock was overwhelmingly the most-reviewed genre, comprising over 50% of all reviews. Since 2011, Rock's share of all reviews has shrunk to about one-third, and genres that had previously received little attention from Pitchfork, most notably Rap and Pop/R&B, have gained greater prominence on the site. This progression is shown below for the years 1999, 2009, and 2019. While Rock is clearly still the most-reviewed genre by far, Pitchfork has made an apparent effort to feature a more equitable mixture.
Average Review Score
I next conducted one-way ANOVA tests with α = .05 to determine whether there was a significant difference in how genres were scored in each year from 2001 to 2020. An important caveat is that a single album can be classified as several genres, and the occurrence of one genre may not be independent of another.
To check for independence, I performed a chi-squared test, which showed that the occurrences of different genres are decidedly not independent. This makes intuitive sense: if an album is classified as Rock, it is more likely to carry a secondary classification of Metal than Rap, for instance. To circumvent this issue, I removed all albums with multiple genre classifications before conducting the ANOVA tests.
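A minimal sketch of both steps, assuming the reviews DataFrame carries one-hot indicator columns for each genre (the column names are my assumption):

```python
import pandas as pd
from scipy.stats import chi2_contingency

genres = ["Electronic", "Pop/R&B", "Folk/Country", "Rock", "Rap",
          "Experimental", "Jazz", "Metal", "Global"]

# Chi-squared test of independence for one pair of genre indicator columns.
contingency = pd.crosstab(reviews["Rock"], reviews["Metal"])
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"Rock vs. Metal: p = {p:.2e}")  # a tiny p-value means the genres co-occur non-independently

# Keep only albums tagged with exactly one genre so the ANOVA groups do not overlap.
single_genre = reviews[reviews[genres].sum(axis=1) == 1].copy()
single_genre["genre"] = single_genre[genres].idxmax(axis=1)
```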
In early years, there was a wide variation in average scores by genre, while in later years, the average scores were more uniform, as displayed below.
This led me to hypothesize that average genre scores have grown more similar over time. However, the one-way ANOVA tests revealed quite the opposite: significant differences in average score by genre have become much more pronounced since 2015, while there was often no significant difference at all prior to 2015. I created the graph below to illustrate this phenomenon. The y-axis shows the base-20 logarithm of the p-value of the ANOVA test for each year, so that the significance threshold of 0.05 maps exactly to -1. Points below the dotted line at y = -1 therefore indicate a statistically significant difference in average review scores by genre for that year.
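In outline, the per-year test and the log-transformed p-values look like the sketch below, assuming the single-genre DataFrame from the previous step with 'year', 'genre', and 'score' columns:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import f_oneway

years, log20_p = [], []
for year, grp in single_genre.groupby("year"):
    samples = [g["score"].values for _, g in grp.groupby("genre") if len(g) > 1]
    if len(samples) < 2:
        continue
    _, p = f_oneway(*samples)               # one-way ANOVA across genres for this year
    years.append(year)
    log20_p.append(np.log(p) / np.log(20))  # base-20 log so that p = 0.05 maps to -1

plt.plot(years, log20_p, marker="o")
plt.axhline(-1, linestyle="--")             # points below the line: significant at alpha = 0.05
plt.xlabel("Year")
plt.ylabel("log20(p-value), one-way ANOVA by genre")
plt.show()
```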
Data Analysis of Gender Bias
I first searched for evidence of gender-based bias by comparing the total quantities of masculine pronouns (he/him/his) and feminine pronouns (she/her/hers) contained in the reviews published each year.
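The count itself is a simple tokenization pass over the review text; a minimal sketch is below, where the 'year' and 'review_text' column names are my assumption.

```python
import re
import pandas as pd

MASCULINE = {"he", "him", "his"}
FEMININE = {"she", "her", "hers"}

def pronoun_counts(text):
    """Count masculine and feminine pronouns in one review's text."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in MASCULINE for w in words), sum(w in FEMININE for w in words)

counts = reviews["review_text"].apply(
    lambda t: pd.Series(pronoun_counts(t), index=["masc", "fem"]))
by_year = counts.join(reviews["year"]).groupby("year").sum()
by_year["pct_feminine"] = by_year["fem"] / (by_year["masc"] + by_year["fem"])
```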
Data on Feminine and Masculine Reviews
The analysis revealed that, from 1999 to 2014, there was a fairly constant balance of approximately 80% masculine to 20% feminine pronouns. Starting in 2015, the proportions have gradually grown closer together, and in the last two years, the proportion of feminine pronouns has exceeded 30%. This does not coincide with an industry shift towards greater representation for female artists; rather, it appears that Pitchfork itself has made a concerted effort to review more albums by female artists.
The trend towards a more equal distribution of gender pronouns is even more pronounced in reviews that bestowed a "Best New Music" or "Best New Reissue" distinction. In 2018 and 2020, over 40% of gendered pronouns were feminine, which exceeds any proportion in the chart above.
Data on Detecting Gender
Finally, I wanted to determine whether there was evidence of bias in scoring based on the artist's or the reviewer's gender. To achieve this, I needed a way of detecting gender from text data. I found the gender_guesser package, whose gender_guesser.detector module provides a Detector class with a get_gender method. The method reads a name as a string and classifies its gender with surprising accuracy, even for uncommon names. Occasionally, it does not recognize a name or fails to classify it correctly. The table below shows several examples.
| Name | gender_guesser.detector.get_gender(Name) | Success? |
|------|------------------------------------------|----------|
| 'Lisa' | female | Yes |
| 'Carol' | mostly_female | Yes |
| 'Casey' | andy (androgynous) | Yes |
| 'Chris' | mostly_male | Yes |
| 'Michael' | male | Yes |
| 'Björk' | female | Yes |
| 'Kanye' | unknown | No |
| 'Kyle' | female | No |
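For reference, the calls behind the table above look like the following, using the Detector class from the gender_guesser package:

```python
import gender_guesser.detector as gender

detector = gender.Detector()
for name in ["Lisa", "Carol", "Casey", "Chris", "Michael", "Kanye"]:
    print(name, detector.get_gender(name))
# Expected output (per the table above):
# Lisa female / Carol mostly_female / Casey andy /
# Chris mostly_male / Michael male / Kanye unknown
```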
To further examine gender bias, I restricted the data to reviews for which both the artist's and the reviewer's first names were classified as male (including "mostly male") or female (including "mostly female"). This left over 6,000 of the original 20,000 reviews, so I still had sufficient data to work with. Among these reviews, I found potential evidence of gender-based bias in scoring: both male and female reviewers give higher scores, on average, to artists of their own gender.
| Reviewer Gender | Artist Gender | Average Score |
|-----------------|---------------|---------------|
| Male | Male | 7.1302 |
| Male | Female | 7.0719 |
| Female | Male | 7.0870 |
| Female | Female | 7.1795 |
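This table can be reproduced with a single grouped aggregation, assuming reviewer_gender and artist_gender columns derived from the classifier above (with the "mostly_" labels collapsed into "male" and "female"); the column names are my assumption.

```python
# Average review score by reviewer gender and artist gender.
avg_scores = (reviews
              .groupby(["reviewer_gender", "artist_gender"])["score"]
              .mean()
              .unstack("artist_gender"))
print(avg_scores.round(4))
```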
Average Score by Gender
I wanted to determine whether these differences were significant and whether this tendency has changed over time. Deeper analysis revealed very little evidence of a consistent preference for artists of one gender over the other, for either male or female reviewers.
For more rigorous confirmation, I performed t-tests to determine whether there was a significant difference between the average review scores given per year to male and female artists. In the graph below, the y-axis represents the base-20 logarithm of the p-value of the t-test, and points below the dotted line indicate a statistically significant difference between average scores in that year. For reviewers of either gender, significant differences were rare, and I cannot attribute them to anything more serious than occasional anomalies.
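A sketch of the per-year test for one reviewer gender is below. I use Welch's t-test (unequal variances) here, which is an assumption about the exact test variant, and the gender column names carry over from the earlier sketches.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind

male_reviewers = reviews[reviews["reviewer_gender"] == "male"]

years, log20_p = [], []
for year, grp in male_reviewers.groupby("year"):
    male_artists = grp.loc[grp["artist_gender"] == "male", "score"]
    female_artists = grp.loc[grp["artist_gender"] == "female", "score"]
    if len(male_artists) < 2 or len(female_artists) < 2:
        continue
    _, p = ttest_ind(male_artists, female_artists, equal_var=False)  # Welch's t-test
    years.append(year)
    log20_p.append(np.log(p) / np.log(20))  # base-20 log: p = 0.05 maps to -1

plt.plot(years, log20_p, marker="o")
plt.axhline(-1, linestyle="--")  # below the line: significant at alpha = 0.05
plt.xlabel("Year")
plt.ylabel("log20(p-value), male vs. female artists")
plt.show()
```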
Conclusions
My examination of genre-based bias at pitchfork.com showed that, while Rock albums still comprise the largest share of reviews, the distribution of genres has become more uniform in recent years. There has also been a noteworthy increase in the number of albums classified as Pop/R&B and Rap, both genres dominated by artists of color. At the same time, the disparity between average review scores by genre has widened, which I verified by conducting one-way ANOVA tests.
Whether this is important for Pitchfork to address is unclear; the more even balance of genres in recent years seems to address, to some extent, the modern concerns of "visibility" and "representation."
Additionally, I found evidence of a shift towards a larger platform for female artists, as well as evidence that both male and female artists are typically reviewed equitably by reviewers of either gender. The findings of this project indicate a trend towards greater visibility for, and a reduction in bias against, female artists and non-Rock genres. While there is still an overall imbalance in these areas, I suspect that this is a symptom of the music industry as a whole and may well be out of Pitchfork's hands.
The GitHub repository for this project is available here.