Procedural music generation for videogames conditioned through video emotion recognition

2023 4th International Symposium on the Internet of Sounds (2023)

Abstract
Videogames have become one of the main forms of entertainment in recent years. As products that combine several technological and artistic aspects, such as computer graphics, video and audio design, and music composition, videogames have attracted a significant amount of research effort from various scientific fields. More recently, open-world videogames, characterized by nonlinear storylines and multiple gameplay scenarios and possibilities, have become increasingly popular. In such games, the music must account for an enormous number of events and variations, making it extremely hard for human composers to write the score for every potential situation by hand. Therefore, following improvements in deep learning techniques, we propose a method for procedural, emotion-conditioned music generation for videogames, especially suitable for open-world games. Specifically, we propose a pipeline in which emotions, represented on the valence-arousal plane, are extracted from gameplay videos, under the assumption that these values correspond to those felt by the player. We then use the emotion-related information to condition a music transformer architecture to generate MIDI tracks whose emotional content follows that of the game. We perform a perceptual experiment with human players to demonstrate the effectiveness of the proposed technique and to further investigate the applicability of this procedure to the videogame music generation research problem.
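The abstract describes conditioning a music transformer on valence-arousal values extracted from gameplay video. A minimal sketch of one plausible conditioning scheme is shown below: the (valence, arousal) estimate is quantized into one of four quadrant tokens, which is prepended to the model's MIDI-event token sequence. The quadrant labels, token names, and tokenization are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of valence-arousal conditioning for a music transformer.
# Token names and the four-quadrant quantization are assumptions for
# illustration; the paper's actual conditioning scheme may differ.

def va_to_emotion_token(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1]^2 to a quadrant token.

    Q1: high valence, high arousal (e.g. excited)
    Q2: low valence, high arousal  (e.g. tense)
    Q3: low valence, low arousal   (e.g. sad)
    Q4: high valence, low arousal  (e.g. calm)
    """
    if valence >= 0:
        return "<EMO_Q1>" if arousal >= 0 else "<EMO_Q4>"
    return "<EMO_Q2>" if arousal >= 0 else "<EMO_Q3>"


def condition_sequence(valence: float, arousal: float,
                       midi_tokens: list[str]) -> list[str]:
    """Prepend the emotion token so a token-based music transformer
    generates events whose emotional content tracks the gameplay."""
    return [va_to_emotion_token(valence, arousal)] + midi_tokens


# Example: a tense gameplay segment (low valence, high arousal)
seq = condition_sequence(-0.6, 0.8, ["NOTE_ON_60", "DUR_4", "NOTE_OFF_60"])
```

In a full pipeline, the emotion token would be re-estimated periodically from the video stream so the generated score can follow in-game events as they unfold.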
Key words
Videogame audio, procedural music generation, human-centered computing