Publication Detail

Speech Emotion Recognition with Deep Learning

Original Title

Speech Emotion Recognition with Deep Learning

English Title

Speech Emotion Recognition with Deep Learning

Language

en

Original Abstract

This paper describes a method for Speech Emotion Recognition (SER) using a Deep Neural Network (DNN) architecture with convolutional, pooling, and fully connected layers. We used a 3-class subset (angry, neutral, sad) of a German corpus (the Berlin Database of Emotional Speech) containing 271 labeled recordings with a total length of 783 seconds. The raw audio data were standardized so that every audio file has zero mean and unit variance. Every file was split into 20-millisecond segments without overlap. We used a Voice Activity Detection (VAD) algorithm to eliminate silent segments and divided all data into TRAIN (80%), VALIDATION (10%), and TESTING (10%) sets. The DNN is optimized using Stochastic Gradient Descent. As input, we used raw data without any feature selection. Our trained model achieved an overall test accuracy of 96.97% on whole-file classification.
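
The preprocessing pipeline described above can be illustrated with a minimal sketch. The following Python code assumes a 16 kHz sampling rate and an arbitrary energy threshold standing in for the VAD step; neither detail is taken from the paper, and the function names are hypothetical.

```python
# Sketch of the preprocessing in the abstract: per-file standardization,
# non-overlapping 20 ms segmentation, a simple energy-based silence filter
# (stand-in for the VAD algorithm), and an 80/10/10 split.
# SAMPLE_RATE and the silence threshold are assumptions, not paper details.
import numpy as np

SAMPLE_RATE = 16000                      # assumed sampling rate
SEGMENT_LEN = int(0.020 * SAMPLE_RATE)   # 20 ms -> 320 samples at 16 kHz

def standardize(signal: np.ndarray) -> np.ndarray:
    """Scale one audio file to zero mean and unit variance."""
    return (signal - signal.mean()) / (signal.std() + 1e-8)

def segment(signal: np.ndarray) -> np.ndarray:
    """Split a signal into non-overlapping 20 ms segments (remainder dropped)."""
    n_segments = len(signal) // SEGMENT_LEN
    return signal[: n_segments * SEGMENT_LEN].reshape(n_segments, SEGMENT_LEN)

def drop_silence(segments: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Crude energy-based stand-in for the VAD step in the paper."""
    energy = (segments ** 2).mean(axis=1)
    return segments[energy > threshold]

def split_80_10_10(files: list, seed: int = 0):
    """Shuffle files and divide them into TRAIN/VALIDATION/TEST (80/10/10)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(files))
    n_train, n_val = int(0.8 * len(files)), int(0.1 * len(files))
    train = [files[i] for i in order[:n_train]]
    val = [files[i] for i in order[n_train:n_train + n_val]]
    test = [files[i] for i in order[n_train + n_val:]]
    return train, val, test
```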

English Abstract

This paper describes a method for Speech Emotion Recognition (SER) using Deep Neural Network (DNN) architecture with convolutional, pooling and fully connected layers. We used 3 class subset (angry, neutral, sad) of German Corpus (Berlin Database of Emotional Speech) containing 271 labeled recordings with total length of 783 seconds. Raw audio data were standardized so every audio file has zero mean and unit variance. Every file was split into 20 millisecond segments without overlap. We used Voice Activity Detection (VAD) algorithm to eliminate silent segments and divided all data into TRAIN (80%) VALIDATION (10%) and TESTING (10%) sets. DNN is optimized using Stochastic Gradient Descent. As input we used raw data without any feature selection. Our trained model achieved overall test accuracy of 96.97% on whole-file classification.
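
The kind of network the abstract describes (convolutional and pooling layers over raw 20 ms segments, followed by fully connected layers, trained with Stochastic Gradient Descent on 3 classes) might look like the PyTorch sketch below. Layer counts, channel widths, kernel sizes, and the learning rate are illustrative assumptions, not the configuration reported in the paper; whole-file predictions would then be obtained by aggregating segment-level outputs in some way, which the abstract does not specify.

```python
# Hedged sketch of a segment-level CNN on raw audio, assuming 320-sample
# (20 ms at 16 kHz) inputs and 3 emotion classes (angry, neutral, sad).
import torch
import torch.nn as nn

class SegmentCNN(nn.Module):
    def __init__(self, segment_len: int = 320, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool1d(2),                               # pooling layer
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(                   # fully connected layers
            nn.Flatten(),
            nn.Linear(32 * (segment_len // 4), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, segment_len) raw audio samples, no hand-crafted features
        return self.classifier(self.features(x))

model = SegmentCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # Stochastic Gradient Descent
criterion = nn.CrossEntropyLoss()
```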

BibTeX


@inproceedings{BUT133621,
  author="Pavol {Harár} and Radim {Burget} and Malay Kishore {Dutta} and Anushikha {Singh}",
  title="Speech Emotion Recognition with Deep Learning",
  annote="This paper describes a method for Speech Emotion Recognition (SER) using a Deep Neural Network (DNN) architecture with convolutional, pooling, and fully connected layers. We used a 3-class subset (angry, neutral, sad) of a German corpus (the Berlin Database of Emotional Speech) containing 271 labeled recordings with a total length of 783 seconds. The raw audio data were standardized so that every audio file has zero mean and unit variance. Every file was split into 20-millisecond segments without overlap. We used a Voice Activity Detection (VAD) algorithm to eliminate silent segments and divided all data into TRAIN (80%), VALIDATION (10%), and TESTING (10%) sets. The DNN is optimized using Stochastic Gradient Descent. As input, we used raw data without any feature selection. Our trained model achieved an overall test accuracy of 96.97% on whole-file classification.",
  booktitle="2017 4th International Conference on Signal Processing and Integrated Networks (SPIN)",
  chapter="133621",
  doi="10.1109/SPIN.2017.8049931",
  howpublished="electronic, physical medium",
  year="2017",
  month="february",
  pages="137--140",
  type="conference paper"
}