What's in a Musical Genre?
Music. A single experience that speaks to so many people, and yet there’s no single way to experience it.
My repertoire of favorite songs from over the years, some persisting from nostalgia-inducing childhood summers, and some discovered as recently as last month, is an interesting mix that I’ve always scrutinized. What is it that lures me to the songs I love? Is it their complex balances of major and minor tonalities that intrigue me? Or is it the sound of certain instruments like the electric guitar that sends chills up my spine?
Exploring music for my first data analysis endeavor in R was consequently a no-brainer. While analyzing my personal music taste is something I’ve always wanted to do formally, I looked for a slightly more tractable problem to focus on. Examining how music has changed over the years is very interesting, but upon doing research, I found it to be somewhat “done” already. I decided that the less explored topic of genre analysis would be fun and relevant to my research interests; it relates to differentiating between musical preferences and styles, but on a more societal scale.
For additional inspiration, I referenced This Is Your Brain On Music, a book by one of my favorite neuroscientists, Daniel Levitin. On genre categorization, Levitin cites philosopher Ludwig Wittgenstein’s argument that categories often aren’t defined by strict, stateable definitions, but rather are constructed by family resemblance. That is, the songs belonging to a certain genre may not all share any one feature in common. Levitin concludes, “Definitions of musical genres aren’t very useful; we say that something is heavy metal if it resembles heavy metal.” Intrigued, I sought to depict these resemblances within genres, and to study the blurred lines between these families of music.
To begin the investigation, I chose 20 playlists created by Spotify to represent 20 different genres. For example, a playlist called “Essential Alternative” was used to represent Alternative Rock. I then extracted the data for the 1,961 songs in these playlists using spotifyr, an R wrapper created by data scientist Charlie Thompson for pulling track information from Spotify’s Web API. Spotify provides several audio metrics for each of its tracks, from musical features like mode (whether a track is in a major or minor key) to more complex features like valence (the positiveness a track conveys). A complete list of its metrics and their descriptions can be found here. I normalized the features that weren’t already measured on a [0, 1] scale so that all features fit a uniform plot.
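The extraction and rescaling steps can be sketched in R roughly as follows. This is a minimal sketch, not the actual pipeline: it assumes Spotify API credentials are set as environment variables, and the playlist URIs and `genre` labels below are illustrative placeholders.

```r
library(spotifyr)
library(dplyr)
library(purrr)

# spotifyr reads SPOTIFY_CLIENT_ID / SPOTIFY_CLIENT_SECRET
# from the environment to build an access token.
access_token <- get_spotify_access_token()

# Hypothetical mapping of genres to Spotify playlist URIs.
playlists <- tibble(
  genre = c("Alternative Rock", "Jazz"),
  uri   = c("PLAYLIST_URI_1", "PLAYLIST_URI_2")
)

# Pull the audio features for every track in each playlist,
# tagging each row with the genre its playlist represents.
songs <- map2_dfr(playlists$uri, playlists$genre, function(uri, g) {
  get_playlist_audio_features("spotify", uri) %>%
    mutate(genre = g)
})

# Min-max rescale the audio features that Spotify does not
# already report on a [0, 1] scale.
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))
songs <- songs %>%
  mutate(across(c(loudness, tempo, key, duration_ms, time_signature),
                rescale01))
```

The min-max rescaling keeps every feature comparable on shared axes without changing the ordering of songs within a feature.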
Using Genretics, the interactive web application I built using R’s Shiny package, I sought to explore the following questions, and I invite others to do the same:
What kinds of trends can we find within the musical genres and sub-genres we know of today?
What similarities and differences can we find across these genres?
Do Spotify’s metrics show what we’d expect for a given song?
Analysis Using Genretics
Upon opening Genretics, the “Explore” pane is expanded, and users can select any number of genres from the 20 choices listed in the left pane, along with the two audio features they’d like to use for the X and Y axes. With 13 metrics to choose from for each axis, 78 distinct feature pairings (156 counting axis order) can be generated. Each song of a selected genre is plotted, with song details shown on hover.
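The app’s selection logic can be approximated with a minimal Shiny skeleton. This is a sketch under assumptions, not the Genretics source: it presumes a data frame `songs` with a `genre` column plus Spotify’s audio-feature columns, and it omits the styling and hover-detail rendering of the real app.

```r
library(shiny)
library(ggplot2)

# Assumed: `songs` has one row per track, a `genre` column,
# and the 13 audio-feature columns named as Spotify reports them.
features <- c("danceability", "energy", "loudness", "speechiness",
              "acousticness", "instrumentalness", "liveness",
              "valence", "tempo", "key", "mode", "duration_ms",
              "time_signature")

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      checkboxGroupInput("genres", "Genres",
                         choices = unique(songs$genre)),
      selectInput("xvar", "X axis", choices = features,
                  selected = "valence"),
      selectInput("yvar", "Y axis", choices = features,
                  selected = "energy")
    ),
    mainPanel(plotOutput("scatter", hover = hoverOpts("plot_hover")))
  )
)

server <- function(input, output) {
  output$scatter <- renderPlot({
    # Plot only the songs belonging to the selected genres,
    # on whichever feature pair the user chose.
    df <- subset(songs, genre %in% input$genres)
    ggplot(df, aes(.data[[input$xvar]], .data[[input$yvar]],
                   colour = genre)) +
      geom_point(alpha = 0.6) +
      labs(x = input$xvar, y = input$yvar)
  })
}

shinyApp(ui, server)
```

Passing the selected column names through the `.data` pronoun lets one `renderPlot` serve every axis configuration without hard-coding feature pairs.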
The default configuration shown is valence vs. energy, which is interesting for sentiment analysis, since the resulting plot can be viewed in quadrants, as such:
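A static version of that default view can be sketched with ggplot2. The quadrant labels below reflect one common reading of the valence-energy plane and are my own shorthand, not Spotify’s terminology; the `songs` data frame is the same hypothetical one as above.

```r
library(ggplot2)

# Valence (positiveness) vs. energy, split into sentiment quadrants
# at the midpoint of each [0, 1] feature.
ggplot(songs, aes(valence, energy, colour = genre)) +
  geom_point(alpha = 0.6) +
  geom_hline(yintercept = 0.5, linetype = "dashed") +
  geom_vline(xintercept = 0.5, linetype = "dashed") +
  annotate("text", x = 0.25, y = 0.97, label = "Turbulent / angry") +
  annotate("text", x = 0.75, y = 0.97, label = "Happy / energetic") +
  annotate("text", x = 0.25, y = 0.03, label = "Sad / subdued") +
  annotate("text", x = 0.75, y = 0.03, label = "Calm / peaceful") +
  labs(x = "Valence", y = "Energy")
```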
The skills I demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.