

Journal
2015 | no. 4, CD 3 | 9712-9721
Article title

Emotion Recognition from Natural Speech - Emotional Profiles

Title variants
Publication languages
EN
Abstracts
EN
An emotion recognition system can improve customer service, especially in call centers. Knowledge of the speaker's emotional state would allow the operator to adapt better and generally improve cooperation. Research in emotion recognition focuses primarily on speech analysis. Emotion classification algorithms designed for real-world applications must be able to interpret the emotional content of an utterance or dialog despite various limitations, i.e. speaker, context, personality, or culture. This paper presents research on an emotion recognition system for a spontaneous voice stream based on a multimodal classifier. Experiments were carried out on natural speech characterized by seven emotional states. The multimodal classification process was based on Plutchik's theory of emotion and on emotional profiles. (original abstract)
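To make the emotional-profile idea concrete, here is a minimal Python sketch in the spirit of the framework of Mower et al. [4]: one binary classifier per emotion yields a confidence score, the vector of scores forms the utterance's emotional profile, and the predicted emotion is the profile's maximum. The seven-emotion label set, the SVM classifiers, and the random stand-in features are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.svm import SVC

# Assumed seven emotional states; the abstract does not enumerate them.
EMOTIONS = ["anger", "joy", "sadness", "fear", "disgust", "surprise", "neutral"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))            # stand-in for acoustic feature vectors
y = rng.integers(0, len(EMOTIONS), 200)   # stand-in ground-truth labels

# One binary (one-vs-rest) classifier per emotion.
models = [SVC(probability=True).fit(X, (y == k).astype(int))
          for k in range(len(EMOTIONS))]

def emotional_profile(x):
    # Per-emotion confidence vector (the "emotional profile") for one utterance.
    return np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in models])

profile = emotional_profile(X[0])
print(dict(zip(EMOTIONS, profile.round(2))))
print("predicted:", EMOTIONS[int(profile.argmax())])

In a real system the feature matrix X would hold perceptual descriptors such as the MFCC, PLP, or RASTA-PLP coefficients surveyed in [12]-[16], extracted from the voice stream.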
Journal
Year
Issue
Pages
9712-9721
Physical description
Contributors
  • Politechnika Łódzka
  • Politechnika Łódzka
Bibliography
  • [1] Metallinou A., Katsamanis A., Narayanan S.: A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012.
  • [2] Gunes H., Piccardi M.: Bi-modal emotion recognition from expressive face and body gestures, Journal of Network and Computer Applications, vol. 30, no. 4, pp. 1334-1345, 2007.
  • [3] Garay N., Cearreta I., López J.M., Fajardo I.: Assistive technology and affective mediation, Assistive Technol., vol. 2, no. 1, 2006.
  • [4] Mower E., Mataric M.J., Narayanan S.S.: A Framework for Automatic Human Emotion Classification Using Emotion Profiles, IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 5, pp. 1057-1070, 2011.
  • [5] Plutchik R.: The nature of emotions, American Scientist, vol. 89, no. 4, 2001.
  • [6] Martin J.C., Niewiadomski R., Devillers L., Buisine S., Pelachaud C.: Gesture expressivity and blended facial expressions, International Journal of Humanoid Robotics, vol. 3, no. 3, 2006.
  • [7] Mower E., Metallinou A., Lee C., Kazemzadeh A., Busso C., Lee S., Narayanan S.: Interpreting ambiguous emotional expressions, International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), 2009.
  • [8] Klasmeyer G.: Emotions in Speech, Institut für Kommunikationswissenschaft, Technical University of Berlin, 1995.
  • [9] Obrębowski A.: Narząd głosu i jego znaczenie w komunikacji społecznej, Uniwersytet Medyczny im. Karola Marcinkowskiego w Poznaniu, 2008.
  • [10] Khulage A.A.: Extraction of pitch, duration and formant frequencies for emotion recognition system, Fourth International Conference on Advances in Recent Technologies in Communication and Computing, 2012.
  • [11] Zieliński T.: Cyfrowe przetwarzanie sygnałów, Wydawnictwa Komunikacji i Łączności, 2003.
  • [12] Skowronski M., Harris J.: Increased MFCC filter bandwidth for noise-robust phoneme recognition, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2002.
  • [13] Kamińska D., Sapiński T., Pelikant A.: Comparison of Perceptual Features Efficiency for Automatic Identification of Emotional States from Speech, 6th International Conference on Human System Interaction (HSI), 2013.
  • [14] Hermansky H.: Perceptual Linear Predictive (PLP) Analysis of Speech, Journal of the Acoustical Society of America, vol. 87, no. 4, 1990.
  • [15] Hermansky H., Morgan N.: RASTA processing of speech, IEEE Transactions on Speech and Audio Processing, vol. 2, no. 4, 1994.
  • [16] Kumar P., Biswas A., Mishra A.N., Chandra M.: Spoken Language Identification Using Hybrid Feature Extraction Methods, Journal of Telecommunications, vol. 1, no. 2, 2010.
  • [17] Kamińska D., Pelikant A.: Spontaneous emotion recognition from speech signal using multimodal classification, IAPGOS, vol. 2, no. 3, 2012.
  • [18] Mower E., Matarić M.J., Narayanan S.: A Framework for Automatic Human Emotion Classification Using Emotion Profiles, IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 5, 2011.
Document type
Identifiers
YADDA identifier
bwmeta1.element.ekon-element-000171564407
