Orpheus: A Multi-User Music Recommendation System

Joshua Litven, James Lee, and Oamar Gianan
Posted on December 23, 2016


What’s the recipe for the ultimate road trip? Companions who can make you laugh, snacks to last the whole trip, and of course, a good music selection.

It has long been an unwritten rule that whoever's at the wheel controls the music. This can get boring fast, especially when the driver has bland taste. Passengers may tune out, put on their headphones, and disengage from the group. What's needed is a playlist that caters to every passenger's taste.

In this project we address the challenge of creating a playlist for multiple users with different tastes and preferences, one that provides a uniformly fantastic listening experience.

Imagine an app that, once you and your companions log in on your devices, aggregates everyone's listening history and automatically generates a playlist everyone will enjoy!

We created just such an app: Orpheus. With Orpheus, multiple users can log in to their Spotify accounts and find songs they can all rock out to. The order of the tracks can be based on mood, tempo and more. Orpheus was developed in Flask, using the Spotify Web API to get user data. Check it out here!

Orpheus can also be used for parties or as background music for group workouts in the gym with your bros!

Algorithm Overview

In order to develop a model for recommending music, we needed data. We collected user taste profiles from the Echo Nest website. At a high level, the data was used to train a recommendation system for a single user. We employed collaborative filtering using Apache Spark's machine learning library to build a latent factor model. This model is then fed to an aggregation strategy that determines the preferences of a group of users and recommends a final playlist. Finally, this playlist is sent to the Flask app, where users can get groovy to it. The entire pipeline can be seen below:


The following describes each step in more detail.

The Dataset

The recommendation model was trained on the Echo Nest Taste Profile Subset. The dataset consists of 5 GB of data: 1,019,318 unique users, 384,546 unique songs, and 48,373,586 unique (user, song, play count) triplets.

On average, 125 users listened to each song, and fewer than 100 users accounted for 80% of the listening events. Most likely, a few songs are highly popular while most songs are listened to by only a few users.

Recommender Systems

With user listening history in hand, the next step was creating a recommender system. In general, recommender systems aim to predict the preference a user has for a given item. The items with the highest predicted preference can then be recommended to the user.

There are two commonly used approaches to building a recommender system: content-based filtering and collaborative filtering.

In content-based filtering, the system looks at the characteristics of the users or items to make predictions. In music, for example, the system could treat songs in the same genre or by the same artist as similar. Using these characteristics, the recommender can look at a user's items, determine which other items are most similar, and recommend those.

In collaborative filtering, the idea is that users similar to you will like similar items. The content of the items is abstracted away; only the interactions between users and items are taken into account. A downside of collaborative filtering is that it needs historical interaction data for a user before it can make recommendations: this is known as the cold start problem.

Within collaborative filtering, there are two types of feedback from users: explicit and implicit. Explicit feedback occurs when users actively rate an item (e.g. the Netflix star rating). Implicit feedback is inferred from the consumption of an item, for example when a user listens to a song. Our dataset consists of these implicit ratings.

We chose to use the collaborative filtering approach known as the latent factor model due to its ability to handle implicit feedback, its scalability and Spark’s Alternating Least Squares (ALS) implementation. A good overview of the implementation and its practical use can be found here.

The Latent Factor Model

The latent factor model attempts to reveal latent features of users and products in order to make recommendations. Specifically, given a user-rating matrix, the model finds an approximate low-rank matrix factorization, as seen below. The dot product of a user's latent feature vector with an item's latent feature vector gives the user's predicted rating for that item. Predicted ratings are computed for all items and sorted, and the items with the highest predicted ratings form the final recommendation.
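To make the scoring step concrete, here is a minimal sketch with made-up two-dimensional factors and hypothetical user and song names (the real model learns much higher-rank factors from the data):

```python
# Toy sketch of latent-factor scoring: each user and song is a small
# vector of latent features; the dot product is the predicted rating.
# All factor values below are invented for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

user_factors = {"user_1": [0.9, 0.1]}   # e.g. loves rock, lukewarm on pop
song_factors = {
    "song_a": [0.8, 0.2],               # mostly rock
    "song_b": [0.1, 0.9],               # mostly pop
}

# Score every song for the user, then recommend highest-scoring first.
scores = {s: dot(user_factors["user_1"], f) for s, f in song_factors.items()}
playlist = sorted(scores, key=scores.get, reverse=True)
print(playlist)  # song_a ranks above song_b for this user
```

In the real system the factors come from ALS rather than being hand-picked, but the ranking step is the same.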

The low-rank factors are determined by solving the optimization problem below.


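The equation image is not reproduced here. As a reconstruction consistent with the surrounding text, with observed ratings $r_{ui}$ and latent factors $x_u$ and $y_i$, the explicit-ratings objective is typically:

```latex
\min_{x_*, y_*} \sum_{(u,i)\ \text{observed}} \left( r_{ui} - x_u^{\top} y_i \right)^2
+ \lambda \left( \sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2 \right)
```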
Overview of the low-rank matrix factorization method and optimization problem for explicit ratings.

Lambda is a regularization parameter used to avoid overfitting to the data.

The implicit model works slightly differently. Rather than attempting to predict explicit ratings, it assigns a confidence that user u likes item i, given by the following equation:

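The equation image is missing; reconstructed from the Hu, Koren, and Volinsky paper referenced at the end of the post, the confidence is typically defined as:

```latex
c_{ui} = 1 + \alpha\, r_{ui}
```

where $r_{ui}$ is the user's play count for the song.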

where alpha is a tuning parameter of the model. The implicit optimization problem then becomes:


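The optimization image is missing; reconstructed from the same paper, the implicit objective is typically:

```latex
\min_{x_*, y_*} \sum_{u,i} c_{ui} \left( p_{ui} - x_u^{\top} y_i \right)^2
+ \lambda \left( \sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2 \right),
\qquad
p_{ui} = \begin{cases} 1 & r_{ui} > 0 \\ 0 & r_{ui} = 0 \end{cases}
```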

Here p_ui represents whether the user liked the item, while c_ui represents our confidence that they liked it. The regularization term is the same as in the explicit case.

For more details, see the paper that introduced implicit-feedback collaborative filtering here.

Now that we have a model, how do we choose the parameters? For that matter, how do we evaluate our model?

Model Evaluation

Ranking metrics are a common way to evaluate recommender systems: briefly, they assess the quality of a recommendation based on how it ranks the predicted items. To evaluate our model, we used the ranking metric mean average precision (MAP). This was the metric used for the Million Song Dataset Challenge on Kaggle.

Given a user’s item history and a ranked recommendation list, the precision-at-k (P) measures the proportion of correct recommendations within the top k. The average precision (AP) averages the precision-at-k over each rank k at which a relevant item appears. Finally, the mean average precision (MAP) averages the AP over all users.
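As a minimal sketch (the exact Kaggle metric truncates the list at k = 500, so details may differ), MAP can be computed like this:

```python
# Sketch of mean average precision (MAP) for a recommender, assuming
# `recommended` is ranked best-first and `relevant` is the user's hidden set.

def average_precision(recommended, relevant):
    """Average of precision-at-k over the ranks where relevant items appear."""
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank   # precision at this rank
    denom = min(len(relevant), len(recommended))
    return precision_sum / denom if denom else 0.0

def mean_average_precision(all_recs, all_relevant):
    """Average the per-user AP over all users."""
    aps = [average_precision(r, rel) for r, rel in zip(all_recs, all_relevant)]
    return sum(aps) / len(aps)

# A hit at the top of the list scores 1.0; a total miss scores 0.0.
print(average_precision(["a", "b", "c"], {"a"}))  # 1.0
```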

To evaluate the model’s MAP, we perform cross-validation. Each iteration splits the users into training and test users. Each test user’s listening history is further split into a hidden and a visible set. The model is trained on the training users’ ratings together with the test users’ visible implicit ratings. After training, the MAP score on the test users’ hidden sets measures the quality of the recommendations. For efficiency, we computed the MAP on a random subset of 100 users.
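The splitting step can be sketched as follows; the `history` structure, function name, and split fractions are hypothetical, not the actual pipeline code:

```python
import random

# Sketch of one cross-validation fold: hold out some test users, hide part
# of their history for MAP scoring, and train on everything visible.
# `history` maps user -> list of (song, play_count) pairs.

def split_fold(history, test_frac=0.2, hidden_frac=0.5, seed=0):
    rng = random.Random(seed)
    users = list(history)
    rng.shuffle(users)
    n_test = int(len(users) * test_frac)
    test_users, train_users = users[:n_test], users[n_test:]

    train_ratings, hidden = [], {}
    for u in train_users:
        train_ratings += [(u, s, c) for s, c in history[u]]
    for u in test_users:
        plays = history[u][:]
        rng.shuffle(plays)
        cut = int(len(plays) * hidden_frac)
        hidden[u] = {s for s, _ in plays[:cut]}               # scored via MAP
        train_ratings += [(u, s, c) for s, c in plays[cut:]]  # visible half
    return train_ratings, hidden
```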

Model Results

To tune the model’s parameters, we ran 5-fold cross-validation on a one-dimensional grid for each parameter: the rank, lambda, and alpha. We compared the results to a baseline popularity model, which simply recommends the most popular songs the user hasn’t already listened to. The cross-validation results are given in the plot below.


Cross-validation parameter tuning results.

We first ran the explicit model, shown in Figure a). It performs considerably worse than the popularity model, which is unsurprising: the data is not explicit, so this is not an appropriate model to use.

Figure b) shows the implicit model’s performance as a function of rank. Generally, the higher the rank the better the performance, since higher-rank matrices better approximate the user-item interactions. We chose rank = 50 as a good compromise between accuracy and computational efficiency.

We found that lambda didn’t have a significant effect on the model’s performance, as can be seen in Figure c). Once lambda becomes too large (> 1), the quality of the model degrades.

Conversely, alpha had a profound impact on the quality of the model. We found the optimal value to be alpha = 40, which is also the value suggested by the authors of the algorithm.

We found the optimal parameters to be rank = 50, lambda = 0.1, and alpha = 40, giving a final MAP score of 0.1.

Comparing this to the closed Kaggle competition, our score would have placed approximately 25th out of 150 teams. This gave us confidence that the model was performing well.

Aggregation Strategies

We now have a recommender system that works well for individual users. The next step is to make Orpheus recommend for a group of users. But how do we convert a single-user recommender into a group recommender?

One technique is to get each group member’s recommendation and combine all the recommendations using an aggregation strategy.

As much as possible, we want Orpheus to come up with a playlist that satisfies every member of the group. Given several possible aggregation strategies, which one works best for a small group? What if, in our road trip scenario, we had a minivan instead of a car: would the same strategy work on a larger group? More importantly, how would Orpheus recommend to a group with very dissimilar tastes?

These were the questions we needed to answer to come up with the best recommendation for the group.

To illustrate how aggregation strategies work, we picked some users from our dataset. Suppose User 193650 and his friends, User 84250 and User 92650, go for a drive. Orpheus knows their listening histories and has come up with individual recommendations for each of them. Table 1 shows a subset of recommended songs and how confident Orpheus is that each user will like each song; a confidence closer to 1.0 means the user will most likely enjoy the song. Which songs will Orpheus play first?


Table 1.

The least misery aggregation strategy has previously been used in a group recommendation system for movies. That movie recommender applied it to explicit ratings, while here we apply it to implicit confidence scores. For each song, we take the smallest confidence among the members and set it as the group’s confidence for that song. We then sort all songs from highest to lowest group confidence; this is the group’s recommended playlist.


Basically, least misery takes the happiness of the least happy member of the group. The other strategies we tested are: the average strategy, which takes the average happiness of all members; most pleasure, which takes the happiness of the happiest member; and multiplicative, which takes the product of all members’ happiness.
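The four strategies can be sketched as follows, using hypothetical confidence scores in the spirit of Table 1:

```python
from math import prod

# Sketch of the four aggregation strategies on per-user confidence scores.
# song_scores maps each song to one confidence value per group member;
# the values below are made up for illustration.

def aggregate(song_scores, strategy):
    combine = {
        "least_misery": min,                       # least happy member
        "most_pleasure": max,                      # happiest member
        "average": lambda s: sum(s) / len(s),      # mean happiness
        "multiplicative": prod,                    # product of happiness
    }[strategy]
    group = {song: combine(scores) for song, scores in song_scores.items()}
    # Playlist: songs sorted from highest to lowest group confidence.
    return sorted(group, key=group.get, reverse=True)

scores = {
    "song_a": [0.9, 0.8, 0.2],   # one member dislikes it
    "song_b": [0.6, 0.6, 0.6],   # everyone is lukewarm-positive
}
print(aggregate(scores, "least_misery"))   # song_b first: 0.6 > 0.2
print(aggregate(scores, "average"))        # song_a first: ~0.63 > 0.6
```

Note how least misery and the average strategy already disagree on this tiny example, which is why the choice of strategy matters for heterogeneous groups.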

To measure each member’s satisfaction with a playlist, we apply the formula below (taken from here), which in turn gives us the group’s satisfaction rating for the resulting playlist.



User and group rating equations.

Another variable we need to consider is the homogeneity of the group. K-means clustering on the dataset would uncover similarities between users and group them by taste. This way, we can draw members from the same cluster for a homogeneous group and members from different clusters for a heterogeneous group.

However, the main challenge with the dataset is its dimensionality and sparsity. Imagine running K-means on 1 million observations with close to 400,000 variables! Luckily, training the recommender already produces a reduced latent factor matrix, so we apply K-means to the users’ latent features to form the groups.
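As a toy illustration with made-up two-dimensional latent vectors (the real factors are rank 50, and a library implementation would be used in practice), the clustering idea looks like this:

```python
import random

# Minimal k-means on users' latent feature vectors: homogeneous groups
# draw members from one cluster, heterogeneous groups mix clusters.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each user to the nearest cluster center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Move each center to the mean of its assigned users.
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Two obvious taste groups in this toy data.
users = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
clusters = kmeans(users, k=2)
print(sorted(map(len, clusters)))  # each taste group forms its own cluster
```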

To test our aggregation strategies, we divided our groups into two categories: homogeneous and heterogeneous. For each category we wanted to see how group size affects group satisfaction, so we formed groups of 3, 5, and 7 members. We made 20 samples of each combination, for a total of 120 samples.

The statistical results below show that for homogeneous groups, group satisfaction does not differ significantly across aggregation strategies. This means any of the strategies will result in a playlist where all members are happy in the homogeneous case. For heterogeneous groups, we found the average strategy to be statistically significantly better than the other methods, as shown by the ANOVA and Tukey HSD post-hoc test results below.


The Flask App

We completed our project with a Flask application that generates a Spotify playlist from the tracks of up to six people, ordered by whichever feature they choose. The diagram below shows how the app works.


After the necessary inputs are made, the playlist can be launched with a player embedded inside the application. Users can order the playlist by metrics such as energy, mood, and tempo, as seen in the pictures below.




Orpheus gives people the opportunity to enjoy music together. A Flask app backed by an implicit collaborative filtering recommender system, combined with appropriate aggregation strategies, gives users the ultimate tool to accompany them on the road, at the zoo, or in the bedroom.

We encourage you to grab your friends and experience Orpheus today!


References

  • Carvalho, L., Macedo, H.: Users’ Satisfaction in Recommendation Systems for Groups: an Approach Based on Noncooperative Games (2013)
  • Hu, Y., Koren, Y., Volinsky, C.: Collaborative Filtering for Implicit Feedback Datasets
  • Masthoff, J.: Group Recommender Systems: Combining Individual Models (2011)
  • O’Connor, M., Cosley, D., Konstan, J.A., Riedl, J.: PolyLens: A Recommender System for Groups of Users. ECSCW, Bonn, Germany (2001)
  • Segaran, T.: Programming Collective Intelligence (2007) Chapter 2
  • Shani, G., Gunawardana, A.: Evaluating Recommendation Systems

About Author

Joshua Litven

Joshua Litven received his Master's degree in Computer Science at the University of British Columbia where he worked on developing parallel algorithms to simulate realistic collisions between highly deformable objects. In practice, this meant watching lots of virtual...
James Lee

James Lee just graduated from New York University with a B.A. in Economics with a minor in Mathematics. James diversified his interests by taking classes in various fields such as Analytical Statistics, Econometrics, Linear Algebra, Organic Chemistry, and...
Oamar Gianan

Oamar Gianan has about 15 years of experience in the information technology industry primarily in cloud computing. He developed a passion for data analysis by working on infrastructure where big data is processed. Before moving to New York,...
