Simple Meta-optimization of the Feature MFCC for Public Emotional Datasets Classification

  • Conference paper

Hybrid Artificial Intelligent Systems (HAIS 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12886)

Abstract

A Speech Emotion Recognition (SER) system can be defined as a collection of methodologies that process and classify speech signals to detect the emotions embedded in them [2]. Among the most critical issues to consider in an SER system are: i) defining the kinds of emotions to classify, ii) finding suitable datasets, iii) selecting the proper input features, and iv) optimising those features. This work considers four well-known datasets from the literature: EmoDB, TESS, SAVEE and RAVDESS. The study focuses on designing a low-power SER algorithm that combines one prosodic feature with six spectral features to capture rhythm and frequency. The proposal compares eleven low-power Classical Machine Learning (CML) classification techniques; its main novelty is optimising the two main parameters of the MFCC spectral feature, n_mfcc and hop_length, through the Simulated Annealing (SA) meta-heuristic.

The resulting algorithm could be deployed on low-cost embedded systems with limited computational power, such as a smart speaker. In addition, the proposed SER algorithm is validated on the four well-known SER datasets. For all studied datasets, the models obtained for the eleven CML techniques with the optimised MFCC features clearly outperform (by more than 10%) the baseline models obtained with the non-optimised MFCC.
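The meta-optimisation described in the abstract can be sketched in pure Python. This is a minimal illustration, not the authors' implementation: the search ranges, perturbation sizes, cooling schedule, and the stand-in objective are all assumptions; in the paper the objective would be the validation accuracy of a CML classifier trained on MFCCs extracted with the candidate (n_mfcc, hop_length) pair.

```python
import math
import random

def simulated_annealing(objective, n_mfcc_range=(10, 60), hop_range=(128, 1024),
                        iters=200, t0=1.0, cooling=0.95, seed=0):
    """Meta-optimise the two MFCC parameters (n_mfcc, hop_length) with SA.

    `objective` returns a score to MAXIMISE (e.g. classifier accuracy
    on a validation split for MFCCs extracted with these parameters).
    """
    rng = random.Random(seed)
    # Random initial candidate inside the search ranges.
    cur = (rng.randint(*n_mfcc_range), rng.randint(*hop_range))
    cur_score = objective(*cur)
    best, best_score = cur, cur_score
    t = t0
    for _ in range(iters):
        # Propose a small random perturbation of both parameters,
        # clamped back into the search ranges.
        cand = (min(max(cur[0] + rng.randint(-4, 4), n_mfcc_range[0]), n_mfcc_range[1]),
                min(max(cur[1] + rng.randint(-64, 64), hop_range[0]), hop_range[1]))
        cand_score = objective(*cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(diff / t), which shrinks as t cools.
        if cand_score >= cur_score or rng.random() < math.exp((cand_score - cur_score) / t):
            cur, cur_score = cand, cand_score
        if cur_score > best_score:
            best, best_score = cur, cur_score
        t *= cooling  # geometric cooling schedule
    return best, best_score

# Hypothetical stand-in objective, peaked at (n_mfcc=40, hop_length=512),
# used here only so the sketch is runnable end to end.
def toy_objective(n_mfcc, hop_length):
    return -abs(n_mfcc - 40) - abs(hop_length - 512) / 64

(best_n, best_hop), score = simulated_annealing(toy_objective)
```

In a real SER pipeline the objective would wrap feature extraction (e.g. librosa's MFCC implementation, reference [18]) plus training and scoring of one of the eleven CML classifiers.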


References

  1. Ahsan, M., Kumari, M.: Physical features based speech emotion recognition using predictive classification. Int. J. Comput. Sci. Inf. Technol. 8(2), 63–74 (2016). https://doi.org/10.5121/ijcsit.2016.8205

  2. Akçay, M.B., Oğuz, K.: Speech emotion recognition: emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. Speech Commun. 116(October 2019), 56–76 (2020). https://doi.org/10.1016/j.specom.2019.12.001

  3. Anagnostopoulos, C.N., Iliou, T., Giannoukos, I.: Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011. Artif. Intell. Rev. 43(2), 155–177 (2012). https://doi.org/10.1007/s10462-012-9368-5

  4. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001). https://doi.org/10.1023/A:1010933404324

  5. Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W., Weiss, B.: A database of German emotional speech. In: 9th European Conference on Speech Communication and Technology, pp. 1517–1520 (2005)

  6. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3) (2011). https://doi.org/10.1145/1961189.1961199

  7. Chatterjee, S., Koniaris, C., Kleijn, W.B.: Auditory model based optimization of MFCCs improves automatic speech recognition performance. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH (January), pp. 2987–2990 (2009)

  8. Dzedzickis, A., Kaklauskas, A., Bucinskas, V.: Human emotion recognition: review of sensors and methods. Sensors (Switzerland) 20(3) (2020). https://doi.org/10.3390/s20030592

  9. Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001). https://doi.org/10.1214/aos/1013203451

  10. Geurts, P., Ernst, D., Wehenkel, L.: Extremely randomized trees. Mach. Learn. 63(1), 3–42 (2006). https://doi.org/10.1007/s10994-006-6226-1

  11. Haq, S., Jackson, P.J.B.: Speaker-dependent audio-visual emotion recognition. In: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 2008), Norwich, UK (2009)

  12. Haq, S., Jackson, P.J.B.: Machine Audition: Principles, Algorithms and Systems. chap. Multimodal, pp. 398–423. IGI Global, Hershey (2010)

  13. Haq, S., Jackson, P., Edge, J.: Audio-visual feature selection and reduction for emotion classification. Expert Syst. Appl. 39, 7420–7431 (2008)

  14. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, New York (2009)

  15. Kingma, D.P., Ba, J.L.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings (2015)

  16. Klapuri, A., Davy, M.: Signal Processing Methods for Music Transcription. Springer, Heidelberg (2007)

  17. Koolagudi, S.G., Rao, K.S.: Emotion recognition from speech: a review. Int. J. Speech Technol. 15(2), 99–117 (2012). https://doi.org/10.1007/s10772-011-9125-1

  18. Librosa.org: MFCC implementation (2021). https://librosa.org/doc/main/_modules/librosa/feature/spectral.html#mfcc

  19. Livingstone, S.R., Russo, F.A.: The Ryerson audio-visual database of emotional speech and song (RAVDESS): a dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5), e0196391 (2018). https://doi.org/10.1371/journal.pone.0196391

  20. Manning, C.D., Raghavan, P., Schuetze, H.: The Bernoulli model. In: Introduction to Information Retrieval, pp. 234–265 (2009)

  21. Pandey, S.K., Shekhawat, H.S., Prasanna, S.R.: Deep learning techniques for speech emotion recognition: a review. In: 2019 29th International Conference Radioelektronika, RADIOELEKTRONIKA 2019 - Microwave and Radio Electronics Week, MAREW 2019 (2019). https://doi.org/10.1109/RADIOELEK.2019.8733432

  22. Pichora-Fuller, M.K., Dupuis, K.: Toronto emotional speech set (TESS) (2020). https://doi.org/10.5683/SP2/E8H2MF

  23. Rahi, P.K.: Speech emotion recognition systems: review. Int. J. Res. Appl. Sci. Eng. Technol. 8(1), 45–50 (2020). https://doi.org/10.22214/ijraset.2020.1007

  24. Rao, K.S., Koolagudi, S.G., Vempada, R.R.: Emotion recognition from speech using global and local prosodic features. Int. J. Speech Technol. 16(2), 143–160 (2013). https://doi.org/10.1007/s10772-012-9172-2

  25. Rutenbar, R.A.: Simulated annealing algorithms: an overview. IEEE Circuits Dev. Mag. 5(1), 19–26 (1989). https://doi.org/10.1109/101.17235

  26. Sahidullah, M., Saha, G.: Design, analysis and experimental evaluation of block based transformation in MFCC computation for speaker recognition. Speech Commun. 54(4), 543–565 (2012). https://doi.org/10.1016/j.specom.2011.11.004

  27. Väyrynen, E.: Emotion recognition from speech using prosodic features. Ph.D. thesis (2014)

  28. Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31(1), 39–58 (2009). https://doi.org/10.1109/TPAMI.2008.52

  29. Zhu, J., Zou, H., Rosset, S., Hastie, T.: Multi-class AdaBoost*. Technical report (2009)

Acknowledgement

This research has been funded partially by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) under grant TIN2017-84804-R/PID2020-112726RB-I00.

Author information

Corresponding author

Correspondence to Enrique de la Cal.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

de la Cal, E., Gallucci, A., Villar, J.R., Yoshida, K., Koeppen, M. (2021). Simple Meta-optimization of the Feature MFCC for Public Emotional Datasets Classification. In: Sanjurjo González, H., Pastor López, I., García Bringas, P., Quintián, H., Corchado, E. (eds) Hybrid Artificial Intelligent Systems. HAIS 2021. Lecture Notes in Computer Science, vol 12886. Springer, Cham. https://doi.org/10.1007/978-3-030-86271-8_55

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-86271-8_55

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86270-1

  • Online ISBN: 978-3-030-86271-8

  • eBook Packages: Computer Science (R0)
