Information Fusion for the Recognition of Emotions and Dispositions
The aim of this research project is the multimodal recognition of emotions, dispositions, activities, and intentions of a human user in human-computer interaction. To this end, we consider spatio-temporal information fusion of multi-sensory inputs together with symbolic information sources from both a theoretical and an algorithmic perspective. The results of the information fusion are provided to the planning, decision-making, or dialog and interaction management level of the overall system. Classifiers are integrated into information fusion architectures in order to deal with incomplete or uncertain user data during the learning and testing phases, for instance in the case of noisy modalities. Machine learning from examples and pattern recognition are the major research directions of our project.
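One common way to combine classifier outputs while tolerating a missing or noisy modality is decision-level (late) fusion of per-modality class posteriors. The sketch below is only an illustration of that general idea, not the project's actual fusion architecture; the modality names, weights, and the `late_fusion` helper are hypothetical.

```python
import numpy as np

def late_fusion(posteriors, weights=None):
    """Fuse per-modality class posteriors by weighted averaging.

    posteriors: dict mapping modality name -> class-probability vector,
    or None when that modality is unavailable (e.g. sensor dropout or
    a channel judged too noisy). Missing modalities are simply skipped.
    """
    available = {m: np.asarray(p, dtype=float)
                 for m, p in posteriors.items() if p is not None}
    if not available:
        raise ValueError("no modality available for fusion")
    if weights is None:
        weights = {m: 1.0 for m in available}  # equal weighting by default
    total = sum(weights[m] for m in available)
    fused = sum(weights[m] * p for m, p in available.items()) / total
    return fused / fused.sum()  # renormalise to a probability vector

# Hypothetical example: face and speech classifiers over three emotion
# classes; the physiological channel is missing in this frame.
scores = {
    "face":   [0.6, 0.3, 0.1],
    "speech": [0.5, 0.4, 0.1],
    "bio":    None,  # modality dropped out
}
fused = late_fusion(scores)   # -> [0.55, 0.35, 0.10]
```

The fused posterior can then be handed to the decision-making or dialog management level; weighting per modality allows the fusion stage to down-weight channels known to be unreliable.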