
Overview

In this project we developed a model that classifies the mood of a song based on a variety of audio features. Using this classification, we take a mood chosen by the user and build a tailored playlist that can be listened to directly through the user's Spotify account. This project was completed as part of Cal Poly's Knowledge Discovery through Data course (CPE 466) over a 6-week project period broken into three two-week sprints.

Since no pre-existing dataset classified music by mood, we built our own using Spotipy, a Python client for the Spotify Web API. By the end of data collection we had a set of 1,782 songs pulled from mood-based, Spotify-generated playlists. Each song came with associated audio features, including its danceability, tempo, valence, and energy, which we used to train our model.
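
As a rough illustration of what that collection step can look like, here is a minimal Spotipy sketch. It is not the project's actual code: the playlist ID, function name, and the exact set of features kept are placeholders.

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Authenticate with the Spotify Web API using client credentials
# (SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET are read from the environment).
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def collect_features(playlist_id, mood_label):
    """Fetch audio features for tracks in a playlist and tag each row with a mood."""
    # First page of up to 100 tracks, kept short for illustration.
    results = sp.playlist_items(playlist_id, additional_types=("track",))
    track_ids = [item["track"]["id"] for item in results["items"] if item["track"]]

    rows = []
    # audio_features accepts up to 100 track IDs per call.
    for features in sp.audio_features(track_ids):
        if features:
            rows.append({
                "danceability": features["danceability"],
                "tempo": features["tempo"],
                "valence": features["valence"],
                "energy": features["energy"],
                "mood": mood_label,
            })
    return rows

# Hypothetical usage: label every track in a "happy" playlist (placeholder ID).
happy_rows = collect_features("<happy_playlist_id>", "happy")
```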

Mood classification with music is an interesting challenge, since the same song can elicit different moods in different people. For our purposes we used psychologist Robert Thayer's traditional mood model, which divides moods into happy, sad, excited, and calm. After attempting a few different models, we developed one that classifies songs into these four moods with 76% accuracy. To read more about our development process, check out our Medium article here.
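
For illustration only, here is a minimal scikit-learn sketch of training a four-way mood classifier on those audio features. We tried a few different model types, so the random forest, feature columns, file name, and train/test split shown here are assumptions rather than a record of the final approach.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical CSV built from the collection step: one row per song,
# audio features plus a "mood" label (happy, sad, excited, calm).
df = pd.read_csv("songs_with_moods.csv")

X = df[["danceability", "tempo", "valence", "energy"]]
y = df["mood"]

# Hold out 20% of the songs for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```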
