Nowadays, most of us can recognize a song after hearing just a few seconds of its tune. Nevertheless, no matter how recognizable a song might initially be, its recollection level decays over time. Based on this ‘decreasing trajectory’, our project partner CERTH proposed a composite recognition model that aims to estimate song recognition levels based on ‘chart data, YouTube views, Spotify popularity of tracks and forgetting curve dynamics’.
In their paper ‘Data-Driven Song Recognition Estimation Using Collective Memory Dynamics Models’,
Christos Koutlis, Manos Schinas, Vasiliki Gkatziaki, Symeon Papadopoulos and Yiannis Kompatsiaris, define song recognition
as ‘the fraction of the audience that recognizes (comprehend that they have heard it before) a specific music track through audio exposure’. As they further explain, the concept of song recognition differs from that of song popularity, since a song may no longer be trending yet still be widely recognized. For example, an old song that no longer appears in the charts can still be recognized by many people.
In addition, CERTH’s method considers various recognition decay rates and initial recognition levels per song, based on the number of weeks the song has remained in the charts. As the authors point out, ‘the number-of-weeks feature is an indicator of how strongly the audience is exposed to a specific tune and as it increases, the forgetting procedure starts from a higher point and decelerates further’.
‘T-REC builds upon three main components: the recognition growth, that represents the level of recognition a track reaches during its initial prosperity time; the recognition decay, that represents the collective memory decay process; and the recognition proxy-based adjustment, that adjusts the recognition level of tracks, which is especially useful for tracks with no chart information.’
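The paper’s exact formulas are not reproduced in this post, but the interplay of the first two components can be sketched in a few lines of Python. Every constant and curve shape below is an illustrative assumption, not one of T-REC’s fitted parameters:

```python
import math

def recognition(t, weeks_on_charts, r_max=1.0, k=0.3, w0=10.0):
    """Toy forgetting curve in the spirit of T-REC (all constants
    are illustrative assumptions, not the paper's fitted values).

    t: weeks elapsed since the track left the charts.
    weeks_on_charts: how long the track stayed in the charts.
    """
    # Recognition growth: the initial recognition level rises with
    # chart tenure, following a logistic curve in the number of weeks.
    r0 = r_max / (1.0 + math.exp(-k * (weeks_on_charts - w0)))
    # Recognition decay: longer chart tenure slows the forgetting rate.
    decay_rate = 0.1 / (1.0 + 0.2 * weeks_on_charts)
    return r0 * math.exp(-decay_rate * t)

# A long-charting track stays better remembered than a one-hit wonder.
long_run = recognition(t=10, weeks_on_charts=30)
one_hit = recognition(t=10, weeks_on_charts=5)
```

The third component, the proxy-based adjustment, would then nudge such curves using current YouTube and Spotify signals, which matters most for tracks with little or no chart history.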
To compare T-REC with other competitive models, the team conducted a survey in Sweden measuring the recognition level of 100 songs, which were later used as ‘ground truth’ to evaluate the model. The full list included 39,466 tracks in total and was provided by another FuturePulse partner, the collaborating background music provider Soundtrack Your Brand.
In particular, the team used data from 211 charts (198 track charts and 13 singles charts) spanning long periods of time (in some cases from the 60s until today), from 62 countries, including Sweden. They also used the Spotify API to ‘annotate chart entries with the Spotify id
’ and the International Standard Recording Codes (ISRC)
for each song. Because most songs never make it into the charts, CERTH additionally employed ‘YouTube views and Spotify popularity of tracks as current track popularity proxies’. Knowing each track’s Spotify and YouTube ids, the team retrieved these two signals using the platforms’ public APIs.
Christos Koutlis, Manos Schinas, Vasiliki Gkatziaki, Symeon Papadopoulos and Yiannis Kompatsiaris explain that they used these two metrics because ‘they reflect the exposure of a song in two widely used platforms. The number of video views in YouTube is a direct measure of how many people heard a song. On the other hand, although Spotify popularity is a score generated internally by Spotify and the exact formula is not known, that score reflects the actual number of streams a song received recently’.
Most importantly, the experimental results showed that CERTH’s method accurately estimates the current recognition level of songs, performing far better than the competing models, with a high level of statistical significance. Two important conclusions emerged:
- According to T-REC’s parameters, a song needs nearly 7 weeks in the charts to achieve a very slow velocity towards oblivion and at least 25 weeks to achieve its highest contemporary recognition.
- The role of the number-of-weeks feature incorporated in T-REC through the logistic functions is extremely important for the accurate estimation of a song’s recognition level.
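The logistic shape behind these conclusions can be illustrated with a short sketch. The midpoints of 7 and 25 weeks follow the thresholds reported above, while the steepness values are arbitrary assumptions chosen for readability:

```python
import math

def logistic(w, midpoint, steepness):
    """Standard logistic curve mapping weeks-in-charts to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (w - midpoint)))

def decay_factor(weeks):
    # Forgetting slows down sharply once a track passes ~7 chart weeks.
    return 1.0 - logistic(weeks, midpoint=7, steepness=1.0)

def peak_recognition(weeks):
    # Contemporary recognition saturates around ~25 chart weeks.
    return logistic(weeks, midpoint=25, steepness=0.3)
```

Both curves are monotone in the number-of-weeks feature: the longer a track stays in the charts, the slower it is forgotten and the higher the recognition level it can reach.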
As the authors acknowledge, this feature will be a key component of future work to ‘include extensions that alleviate the deviation of recent tracks’ recognition estimation and also account for demographic-specific estimations’.
You can read the paper ‘Data-Driven Song Recognition Estimation Using Collective Memory Dynamics Models’.
* The ‘Data-Driven Song Recognition Estimation Using Collective Memory Dynamics Models’ paper was presented by CERTH at the 20th conference of the International Society for Music Information Retrieval (ISMIR 2019), held on November 4–8, 2019, in Delft, Netherlands.
Photo by freepik.com