Prosodic and Semantic Effects on the Perception of Mixed Emotions in Speech
Abstract
The current study examines the perception of mixed happy-sad emotions elicited by a combination of prosodic voice cues (pitch and tempo) and sentence content (semantics) in speech. In the first experiment, participants will rate sentences spoken by a female talker on happiness and sadness using a 7-point Likert scale. In the second experiment, the processing of emotions will be examined using eye-tracking. Participants will watch audio-visual recordings of a female talker speaking a series of sentences and will rate the emotional expressions using the same rating scale. When pitch and tempo cues are consistent with happy and sad expressions, we expect listeners to rate the expressions in accordance with these emotions. However, when voice cues signalling happy and sad emotions are in conflict, we expect them to yield intermediate happiness and sadness ratings, reflecting the perception of mixed happy-sad emotions. We also expect eye-tracking measures to reveal shorter looking times to purely happy or sad expressions than to mixed happy-sad expressions. Furthermore, we expect the semantics of the sentence content to reduce the perception of mixed happy-sad emotions evoked by vocal expressions. The findings of the current study are expected to extend our knowledge of the perception of mixed emotions in normal populations and in special populations with social-emotional deficits.
Discipline: Psychology Honours
Faculty Mentor: Dr. Tara Vongpaisal