The Twinkle of Emotional Feeling: Where Does It Come From?
In daily face-to-face interaction, emotional expressions play a vital role in creating social linkages, producing cultural exchanges, influencing relationships, and communicating experiences. Emotional information is transmitted and perceived simultaneously through verbal (the semantic content of a message) and nonverbal (facial expressions, vocal expressions, gestures, paralinguistic information) communicative tools, and contacts and interactions are highly affected by the way this information is communicated and perceived by the addresser and the addressee. Therefore, research devoted to understanding the relationship between verbal and nonverbal communication modes, to investigating the perceptual and cognitive processes involved in the perception of emotional states, and to the role played by communication impairments in their recognition is particularly relevant in the field of Human-Human and Human-Computer Interaction, both for building and strengthening human relationships and for developing friendly, emotionally colored assistive technologies.
A long research tradition has investigated emotions, and the perceptual cues used to infer them, through separate studies of the three fundamental expressive domains involved in their communication: facial expressions, speech, and body movements.
Whatever the domain under study, the data reported in the literature have typically referred to static facial expressions and static postures, in contrast with vocal stimuli, which are naturally dynamic. Moreover, in daily experience, emotional facial expressions and gestures also vary over time, since emotional states are intrinsically dynamic processes.
Is dynamic visual information still emotionally richer than auditory information? A recent study compared the power of the visual and auditory channels in conveying emotional information, exploiting dynamism in facial as well as vocal expressions. This was done through the definition of a cross-modal and cross-cultural database of dynamic verbal and nonverbal (gaze, facial expressions, and gestures) data, and of psychological experiments aimed at portraying the underlying meta-structure of affective communication (Esposito 2007, 2009). This database (Esposito et al. 2009; Esposito & Riviello 2010) made it possible to characterize the dynamic features of some basic emotions transmitted by the visual and auditory channels, considered either singly or in combination, with the aim of establishing whether there is a preferential channel for perceiving an emotional state, and whether this preference depends on the perceptual mode and/or on the cultural context. To this end, a series of perceptual experiments evaluating the subjective perception of emotional states, exploiting video and audio stimuli extracted from Italian and American English movies, was conducted on Italian and American English subjects (Esposito 2009; Esposito & Riviello, in progress).
From a cross-cultural perspective, the results of these experiments suggest the need for investigations into the role multimodality may play in communicating emotional feelings, considering instantiated forms of interaction, since perception is a strongly non-linear process highly affected by the context, the culture, the medium, and the mode through which an emotion is expressed. The open questions are:
Does multimodality increase our ability to feel and perceive emotional feelings?
Is one channel (e.g. the voice) more powerful than another (e.g. the visual one)?
Does cultural specificity have an effect on how emotional feeling is perceived?
What is the role of language specificity in this context?
The author wants to express her great appreciation to Dr. Maria Teresa Riviello for her collaboration, useful comments and suggestions.
Prof. Dr. Anna Esposito
Department of Psychology, Second University of Naples, and IIASS, Italy
Wednesday, July 6, 2011, 4 p.m. c.t.
Universität Ulm, N27, Room 2.033 (video transmission to Otto-von-Guericke-Universität Magdeburg, G26.1-010)