What's in a Musical Genre?
Music. A single experience that speaks to so many people, and yet there’s no single way to experience it.
My repertoire of favorite songs from over the years, some persisting from nostalgia-inducing childhood summers, and some discovered as recently as last month, is an interesting mix that I’ve always scrutinized. What is it that lures me to the songs I love? Is it their complex balances of major and minor tonalities that intrigue me? Or is it the sound of certain instruments like the electric guitar that sends chills up my spine?
Exploring music for my first data analysis endeavor in R was consequently a no-brainer. While analyzing my personal music taste is something I’ve always wanted to do formally, I looked for a more tractable problem to focus on first. Examining how music has changed over the years is interesting, but upon doing research, I found that topic to be somewhat “done” already. I decided that the less-explored topic of genre analysis would be fun and relevant to my research interests; it relates to differentiating between musical preferences and styles, but on a more societal scale.
For additional inspiration, I referenced This Is Your Brain On Music, a book by one of my favorite neuroscientists, Daniel Levitin. On genre categorization, Levitin cites philosopher Ludwig Wittgenstein’s argument that categories often aren’t defined by strict, statable definitions, but rather are constructed by family resemblance. In other words, the songs belonging to a given genre need not all share any single feature. Levitin concludes, “Definitions of musical genres aren’t very useful; we say that something is heavy metal if it resembles heavy metal”. Intrigued, I sought to depict these resemblances within genres, and to study the blurred lines between these families of music.
To begin the investigation, I chose 20 playlists created by Spotify to represent 20 different genres. For example, a playlist called “Essential Alternative” was used to represent Alternative Rock. I then extracted the data for the resulting 1,961 songs using spotifyr, a wrapper created by data scientist Charlie Thompson that allows for extracting track information from Spotify’s Web API using R. Spotify provides several audio metrics for each of its tracks, from musical features like mode (whether a track is played in a major or minor scale) to more complex features like valence (the positiveness a track conveys). A complete list of its metrics and their descriptions can be found here. I normalized the features that weren’t already measured on a [0, 1] scale so that all features fit on a uniform plot.
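For readers who want to reproduce this step, a minimal sketch of the extraction and normalization might look like the following. The playlist ID, the `rescale01` helper, and the choice of columns to rescale are illustrative assumptions, not the exact code behind Genretics:

```r
library(spotifyr)
library(dplyr)

# Assumes SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET are set as
# environment variables, as spotifyr's documentation describes.
access_token <- get_spotify_access_token()

# Hypothetical playlist ID standing in for "Essential Alternative"
alt_rock <- get_playlist_audio_features("spotify", "PLAYLIST_ID_HERE")

# Min-max rescale a numeric vector onto [0, 1] (illustrative helper)
rescale01 <- function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

# Loudness (in dB) and tempo (in BPM) are examples of features
# that aren't natively on a [0, 1] scale.
alt_rock <- alt_rock %>%
  mutate(across(c(loudness, tempo), rescale01))
```

Rescaling this way preserves each feature’s ordering while letting all 13 metrics share one set of axes.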
Using Genretics, the interactive web application I built using R’s Shiny package, I sought to explore the following questions, and I invite others to do the same:
What kinds of trends can we find within the musical genres and sub-genres we know of today?
What similarities and differences can we find across these genres?
Do Spotify’s metrics show what we’d expect for a given song?
Analysis Using Genretics
Upon opening Genretics, the “Explore” pane is expanded, and users can select any number of genres from the 20 choices listed in the left pane, along with which two audio features they’d like to use for the X and Y axes. With 13 metrics to choose from for each axis, 156 distinct plot configurations (13 × 12 ordered feature pairs) can be generated. Each song of a genre is plotted upon genre selection, with hoverable song details.
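The core of such an interface can be sketched in a few lines of Shiny. This is a simplified stand-in, not Genretics’ actual source: it assumes a hypothetical `songs` data frame with a `genre` column plus one column per audio feature, and shows only a subset of the 13 metrics:

```r
library(shiny)

# Illustrative subset of the 13 Spotify audio metrics
metrics <- c("valence", "energy", "danceability", "loudness")

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      checkboxGroupInput("genres", "Genres", choices = unique(songs$genre)),
      selectInput("xvar", "X axis", metrics, selected = "valence"),
      selectInput("yvar", "Y axis", metrics, selected = "energy")
    ),
    mainPanel(plotOutput("scatter"))
  )
)

server <- function(input, output) {
  output$scatter <- renderPlot({
    # Plot only the songs belonging to the selected genres
    d <- songs[songs$genre %in% input$genres, ]
    plot(d[[input$xvar]], d[[input$yvar]],
         xlab = input$xvar, ylab = input$yvar)
  })
}

shinyApp(ui, server)
```

Because the axis variables are reactive inputs, every feature pairing is just a different selection in the same plot, rather than a separate hard-coded chart.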
The default configuration shown is valence vs. energy, which is interesting for sentiment analysis, since the resulting plot can be viewed in quadrants, as such:
As somewhat expected, we find the majority of the data set’s metal songs congregating in the high-energy, low-valence (or “Angry”) quadrant, and reggae dancing between the cheerful and peaceful quadrants, due to its generally high valence and tendency toward a middle energy range. Classical music settles mostly in the low-energy, low-valence corner that we’ve deemed “Depressed”, which might seem a bit surprising initially, considering classical music can also elicit peacefulness. Upon further investigation, though, many of the classical tracks in the data set turn out to be nocturnes, requiems, and symphonies in minor keys.
Another useful feature of the application is the “Average” tab, which plots single data points to represent each genre’s average values across all of its tracks. Being particularly interested in evaluating the distinctions between rock subgenres from a music theory perspective, I find the following plot compelling:
Time signature (i.e. beats per measure) was normalized so that 4/4 became 1, and 3/4 became 0. The rhythms of songs in 4/4 time (or “common time”) are generally more standard than those in other time signatures like 3/4 (used in waltzes). Mode, according to Spotify’s metric definitions (which can be conveniently viewed in Genretics’ “Glossary” tab), is 1 for major modalities and 0 for minor modalities. Interestingly, punk, alternative, and classic rock stick largely to common time, while progressive rock, known for its experimental nature and nonconformity, uses the less common time signature more frequently. One may also note that the majority of the metal tracks in the data set are played in minor keys, while classic rock tends to be played in major.
Digging deeper into time signature utilization, one can see that the more popular a genre is today, the more likely it is to consist of songs using common time:
Genretics’ third plotting feature, “Summary of Y Axis”, allows users to view all genres sorted in ascending order according to the measure assigned to the Y axis. A particularly amusing plot shows which genres are the most (and least) danceable:
It’s surprising to find that disco ranks fourth and EDM (electronic dance music) eighth, while understandable that classical music and video game soundtracks are found in the bottom tier.
Understanding the relationships between the metrics themselves is also a possible venture. The following plot shows a strong positive correlation between energy and loudness; one might suspect that Spotify plugs loudness directly into its calculation of energy scores.
Valence, perhaps one of the most intriguing and mysterious audio features Spotify offers, can be shown to have a pretty strong positive correlation with danceability:
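Quantifying these relationships rather than eyeballing them is straightforward in R. A minimal sketch, again assuming a hypothetical `songs` data frame containing the extracted audio features:

```r
# Pearson correlation between pairs of audio features,
# ignoring rows with missing values
cor(songs$energy, songs$loudness, use = "complete.obs")
cor(songs$valence, songs$danceability, use = "complete.obs")

# Or the full correlation matrix for a set of features at once
feature_cols <- c("valence", "energy", "danceability", "loudness")
round(cor(songs[, feature_cols], use = "complete.obs"), 2)
```

A correlation matrix like this would also be a natural starting point for the clustering features mentioned below.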
Genretics has proven to be a fun, addictive, and insightful application for those who have played around with it thus far. The inter- and intra-genre relationships illustrated above and in additional plots indeed show identifiable and distinguishable “family traits” within and across genres, to echo Wittgenstein. I’m still eager to learn of additional insights living in its plethora of plots. I also look forward to developing additional features, such as clustering similar genres together based on specified features, computing reliable correlation estimates between features, and perhaps the ability to import personal playlists into the data set. I hope others will enjoy using Genretics to better understand the songs, genres, and music they love listening to.