Original article was published by Bram Adams on Artificial Intelligence on Medium
Using AI to Choose the Next Song that Plays on Spotify
You’re out at a club with your friends on a Saturday in the year 2035. But wait, this is no regular dance club. This club is called a data club, where you get a personalized experience to increase your enjoyment of the night.
One of your friends is having a bad day, and the club knows that she really likes Akon from the last time she was here.
It quickly cross-references all the other faces in the club and decides that playing Akon next is the best move to increase the overall energy of the room.
Sounds crazy right? But what if I told you that all the technology needed to make this possible exists today?
To make this demo work, I leveraged three different technologies: a webcam, the Spotify API, and a face expression detection ML model.
The humble webcam, used by streamers and for Zoom calls worldwide, can also be used to create Machine Learning applications. Because the frames are rendered in real time, we can capture different things on the camera, and then pass the images to our program to process.
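As a rough sketch, here is how a browser page might grab webcam frames and copy them to a canvas so each frame can be handed to a model. This is browser-only code (it relies on `navigator.mediaDevices` and the canvas API) and the element names are assumptions, not part of the original project.

```javascript
// Browser-only sketch: capture webcam frames so they can be fed to an
// ML model. Runs in a web page, not in Node.
function startWebcam(videoEl) {
  // Ask the browser for camera access; resolves once the video plays.
  return navigator.mediaDevices
    .getUserMedia({ video: true })
    .then((stream) => {
      videoEl.srcObject = stream;
      return videoEl.play();
    });
}

function grabFrame(videoEl, canvasEl) {
  // Draw the current video frame onto a canvas; the canvas (or its
  // pixel data) is what gets passed to the expression model.
  const ctx = canvasEl.getContext('2d');
  ctx.drawImage(videoEl, 0, 0, canvasEl.width, canvasEl.height);
  return canvasEl;
}
```

Calling `grabFrame` on a timer (say, a few times per second) gives the program a steady stream of images to process.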
Face Expression Detection
The Face Expression Detection algorithm is the most complicated part of this machine. Under the hood, it uses TensorFlow (Google's machine learning library) to predict what emotion an image is portraying.
There are many different implementations and libraries for this technology. For this project I used faceapi.js, because it works in the browser and is relatively straightforward to set up.
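A minimal faceapi.js setup looks roughly like this. It assumes the library is available globally as `faceapi` and that its pretrained model files are served from a `/models` path, which is a common convention but an assumption here.

```javascript
// Sketch of a faceapi.js setup (browser-only; assumes the library is
// loaded as `faceapi` and model weights are served from /models).
async function loadModels() {
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');
}

async function detectExpressions(videoEl) {
  // Returns a score per emotion, e.g. { happy: 0.93, sad: 0.01, ... },
  // or undefined when no face is found in the frame.
  const result = await faceapi
    .detectSingleFace(videoEl, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  return result ? result.expressions : undefined;
}
```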
I collected the emotions frame by frame, and then averaged them to get the most common emotion.
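The averaging step can be sketched as a small pure function: sum the per-frame scores for each emotion, then take the emotion with the highest total. The score objects here mirror what an expression model might return, but the exact shapes are illustrative.

```javascript
// Average per-frame expression scores and return the dominant emotion.
// Each frame is an object of scores, e.g. { happy: 0.9, sad: 0.1 }.
function dominantEmotion(frames) {
  const totals = {};
  for (const frame of frames) {
    for (const [emotion, score] of Object.entries(frame)) {
      totals[emotion] = (totals[emotion] || 0) + score;
    }
  }
  let best = null;
  for (const emotion of Object.keys(totals)) {
    if (best === null || totals[emotion] > totals[best]) best = emotion;
  }
  return best;
}

// dominantEmotion([{ happy: 0.9, sad: 0.1 }, { happy: 0.6, sad: 0.4 }])
// → 'happy'
```

Averaging over a window of frames smooths out single-frame misclassifications, so a brief flicker of "surprised" doesn't skip the song.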
This is where the fun comes in. We can then use the predictions from the ML model to trigger actions, such as…
The Spotify API is a way for our application to ask Spotify to do things by calling functions. To do this in the first place, Spotify needs to give you a key. This key allows you to act on behalf of a user.
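Once you have that key (an OAuth access token, obtained through Spotify's authorization flow; controlling playback requires the user-modify-playback-state scope), every API call just attaches it as a header. A minimal sketch, with the token as a placeholder:

```javascript
// Sketch of calling the Spotify Web API with an OAuth access token.
// `accessToken` is a placeholder obtained from Spotify's authorization
// flow, not a real credential.
async function getCurrentUser(accessToken) {
  const res = await fetch('https://api.spotify.com/v1/me', {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return res.json();
}
```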
The next step is to play the songs. Each song on Spotify has its own ID. To play the songs I wanted, I copied each ID from Spotify and plugged it into the Spotify API.
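Playing a track by ID goes through Spotify's playback endpoint, which takes track URIs of the form `spotify:track:<ID>`. A hedged sketch (again with a placeholder token):

```javascript
// Sketch: start playback of a specific track on the user's active
// device via Spotify's "Start/Resume Playback" endpoint.
async function playTrack(accessToken, trackId) {
  await fetch('https://api.spotify.com/v1/me/player/play', {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    // The endpoint accepts a list of track URIs to play.
    body: JSON.stringify({ uris: [`spotify:track:${trackId}`] }),
  });
}
```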
Whenever my script hits the API with a different emotion, it switches to playing another song. You can think of it as telling the computer to play the next song whenever you stop smiling.
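The emotion-to-song switch boils down to a lookup table. The track IDs below are hypothetical placeholders, not real Spotify tracks:

```javascript
// Hypothetical mapping from detected emotion to a Spotify track ID.
const MOOD_TRACKS = {
  happy: 'TRACK_ID_HAPPY',
  sad: 'TRACK_ID_SAD',
  neutral: 'TRACK_ID_NEUTRAL',
};

function trackForEmotion(emotion) {
  // Fall back to the neutral track for emotions we haven't mapped.
  return MOOD_TRACKS[emotion] || MOOD_TRACKS.neutral;
}
```

Each time the dominant emotion changes, the script looks up the matching track and asks the API to play it.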
It’s like you’re royalty, you can stop the party whenever you please!
After I wired all the pieces together, I had a working Spotify mood ring. It’s only configured to track one person’s face and play five songs, but hey, it’s a cool start.
What songs would you add to your mood playlist?