Recognizing Speech Emotion Based on Acoustic Features Using Machine Learning

Abu Saleh Nasim, Rakibul Hassan Chowdory, Ashim Dey, Annesha Das

2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS) (2021)

Abstract
Detecting emotion from speech can help us understand an individual's state of mind, but accurately classifying emotion from speech is a very challenging task. In this work, we combined two datasets, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the Toronto Emotional Speech Set (TESS), to diversify the speech data. The resulting dataset contains 4048 audio files. Seven ke...
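The abstract's pipeline rests on acoustic features such as MFCCs extracted from each audio file. As a rough illustration of how MFCCs are computed (in practice a library such as librosa is typically used for this and for the chroma and mel-spectrogram features), here is a minimal numpy-only sketch; the frame size, hop length, filter count, and the synthetic test tone are all assumptions, not parameters from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_ceps=13):
    # frame the signal, apply a Hann window, take the power spectrum
    n_frames = 1 + max(0, (len(signal) - n_fft) // hop)
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), n_fft)) ** 2 / n_fft
    # mel filterbank energies -> log -> DCT-II yields the cepstral coefficients
    log_e = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return log_e @ dct.T

# usage: a synthetic 440 Hz tone stands in for a real RAVDESS/TESS clip
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
feats = mfcc(tone, sr)
print(feats.shape)  # one 13-coefficient vector per frame: (61, 13)
```

The per-frame coefficient matrix is typically averaged over time (or otherwise pooled) to obtain a fixed-length vector per file before feeding a classifier.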
Key words
Mel-Frequency Cepstral Coefficient (MFCC), Chroma, Mel Spectrogram, RAVDESS dataset, TESS dataset, Speech emotion recognition (SER), Human-computer interaction (HCI)