Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags

Multimedia and Expo Workshops (2013)

Abstract
Social media services such as Last.fm provide crowd-sourced mood tags, which are a rich but often noisy source of information. In contrast, editorial annotations from production music libraries are meant to be incisive in nature. We compare the effectiveness of these two data sources in capturing semantic information on mood expressed by music. First, a semantic computing technique devised for mood-related tags in large datasets is applied to the Last.fm and I Like Music (ILM) corpora separately (250,000 tracks each). The resulting semantic estimates are then correlated with listener ratings of arousal, valence, and tension. High correlations (Spearman's rho) are found between the track positions in the dimensional mood spaces and the listener ratings for both data sources (0.60 < rs < 0.70). In addition, the use of curated editorial data provides a statistically significant improvement over crowd-sourced data for predicting moods perceived in music.
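The evaluation step described above reduces to computing Spearman's rank correlation between tag-derived positions on a mood dimension and listener ratings of the same tracks. The sketch below illustrates this with a small, entirely hypothetical dataset; the helper functions and all numbers are assumptions for illustration, not the paper's actual data or projection method.

```python
# Hedged sketch: Spearman's rho between hypothetical tag-based mood estimates
# and hypothetical listener ratings. Assumes no tied values for simplicity.

def rank(xs):
    # Assign ranks 1..n to the values (smallest value gets rank 1).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for position, i in enumerate(order, start=1):
        ranks[i] = float(position)
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative valence estimates for five tracks, derived from tags
tag_based_valence = [0.8, -0.2, 0.5, -0.6, 0.1]
# Illustrative mean listener ratings of valence for the same tracks
listener_valence = [0.7, 0.0, 0.6, -0.5, -0.1]

print(round(spearman_rho(tag_based_valence, listener_valence), 2))  # 0.9
```

In practice a library routine such as `scipy.stats.spearmanr` would be used, which also handles ties and reports a p-value.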
Key words
behavioural sciences, correlation methods, music, semantic Web, social networking (online), I Like Music corpora, ILM corpora, Last.fm corpora, Spearman rho, crowd-sourced data, crowd-sourced mood tags, curated editorial tags, data sources, editorial annotations, listener ratings, mood-related tags, musical mood, noisy information source, production music libraries, semantic computing technique, semantic information, semantic models, social media services, semantic computing, affective circumplex transformation, dimensional emotion model, music moods