Abstract: A system for recognizing emotions from speech
analysis can have interesting applications in human-robot
interaction. A robot should properly couple sound
recognition and perception in order to create the desired
emotional interaction with humans. Advanced research in this
field will rely on sound analysis and the recognition of emotions
in spontaneous dialogue.
In this paper, we report the results of an exploratory
study on a methodology for automatically recognizing and
classifying basic emotional states. The study investigated the
appropriateness of using acoustic and phonetic properties of
emotive speech with minimal signal processing.
The efficiency of the methodology was evaluated through
experimental tests on adult European speakers.
The speakers repeated six simple English sentences
designed to emphasize features of the pitch (peak, value,
and range), the intensity of the speech, the formants, and the
speech rate. The proposed methodology uses the freeware
program PRAAT and consists of generating and analyzing
graphs of the pitch, formants, and intensity of the speech
signals for classification. Ultimately, the proposed model
successfully recognized the basic emotion in most cases.
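As an illustration of two of the acoustic features mentioned above, the following sketch estimates fundamental frequency (pitch) by autocorrelation and intensity as RMS amplitude. This is not the paper's pipeline (the study uses PRAAT); it is a minimal stand-alone approximation, and the synthesized tone, sample rate, and pitch search range are assumptions chosen only for the demonstration.

```python
import math

SAMPLE_RATE = 16000  # assumed sample rate for this demonstration

def synth_tone(freq_hz, duration_s=0.5):
    """Generate a pure sine tone as a stand-in for a recorded utterance."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

def rms_intensity(samples):
    """Root-mean-square amplitude, a simple proxy for speech intensity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pitch(samples, fmin=75.0, fmax=500.0):
    """Estimate F0 by locating the autocorrelation peak within a
    plausible voice range [fmin, fmax] (bounds are an assumption)."""
    lag_min = int(SAMPLE_RATE / fmax)
    lag_max = int(SAMPLE_RATE / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return SAMPLE_RATE / best_lag

tone = synth_tone(220.0)
print(estimate_pitch(tone))   # close to the 220 Hz of the synthesized tone
print(rms_intensity(tone))    # near 1/sqrt(2) for a unit-amplitude sine
```

A real classifier would compute such features per utterance (together with formants and speech rate) and compare them across emotional states, e.g., elevated pitch range and intensity for anger versus flatter contours for sadness.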