Effective companion systems require robust methods for detecting faces and facial expressions, as well as for recognizing non-verbal approach behavior, i.e. cues from body posture and gaze. Under real-world conditions, these tasks currently cannot be carried out at a satisfactory level of quality. To achieve robust analysis of faces and body posture, new monocular and binocular image processing approaches will therefore be investigated.
For vision-based emotion recognition from faces, two methods will be studied that combine static and dynamic features extracted from the spatial and temporal structure of image sequences. In addition, neural computational mechanisms will be developed that aim to replicate key functionalities of processing in the visual cortex. Here, the segregated processing along the motion and form pathways and their fusion will be studied in order to analyze sensory signals in visual communication and in vision-based social interaction via head and body poses. This serves to derive a deeper understanding of early and mid-level visual processing in a biologically inspired architecture and its use for advanced human-computer interaction.
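The idea of segregated form and motion pathways whose outputs are fused can be sketched as follows. This is a minimal illustrative example under simplifying assumptions, not the project's actual method: the helper names (form_features, motion_features, fused_descriptor) are hypothetical, the "form" cue is reduced to mean intensity gradients, the "motion" cue to mean frame differencing, and fusion to plain concatenation.

```python
# Illustrative sketch (not the project's method): two separate feature
# streams computed from consecutive grayscale frames, fused into one
# descriptor. Frames are plain lists of lists of intensities (0-255).

def form_features(frame):
    """Static (form-pathway) cue: mean absolute intensity gradients
    in the horizontal and vertical direction of a single frame."""
    h, w = len(frame), len(frame[0])
    gx = sum(abs(frame[y][x + 1] - frame[y][x])
             for y in range(h) for x in range(w - 1)) / (h * (w - 1))
    gy = sum(abs(frame[y + 1][x] - frame[y][x])
             for y in range(h - 1) for x in range(w)) / ((h - 1) * w)
    return [gx, gy]

def motion_features(prev, curr):
    """Dynamic (motion-pathway) cue: mean absolute difference
    between two consecutive frames."""
    h, w = len(curr), len(curr[0])
    diff = sum(abs(curr[y][x] - prev[y][x])
               for y in range(h) for x in range(w)) / (h * w)
    return [diff]

def fused_descriptor(prev, curr):
    """Fuse the two pathways by simple concatenation."""
    return form_features(curr) + motion_features(prev, curr)

# Two tiny synthetic 3x3 frames: a vertical edge that shifts by one pixel.
f0 = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
f1 = [[0, 255, 255], [0, 255, 255], [0, 255, 255]]
print(fused_descriptor(f0, f1))  # form gradients + motion energy
```

In a real system, each stream would use far richer features (e.g. oriented filter banks for form, optical flow for motion), and the fusion stage would be learned rather than a fixed concatenation; the sketch only conveys the two-pathway structure.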