Subproject B3
Multimodal Interaction


The use of multimodal input and output channels opens up new possibilities for Companion Systems. Users should be able to employ the input and output modalities best suited to the current interaction. In the first funding period, an interaction model was developed that generates a multimodal user interface for any identified context of use.
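To make the idea of context-dependent interface generation concrete, the following minimal sketch maps an identified context of use to a choice of input and output channels. It is purely illustrative: the names (`ContextOfUse`, `select_modalities`), the context features, and the selection rules are assumptions for this example, not the project's actual interaction model.

```python
from dataclasses import dataclass

@dataclass
class ContextOfUse:
    ambient_noise_db: float   # measured environment noise level
    hands_free_needed: bool   # e.g. the user is driving or carrying something
    screen_available: bool    # a display is within the user's view

def select_modalities(ctx: ContextOfUse) -> dict:
    """Pick input/output channels suited to the identified context of use.

    Toy rules: noisy surroundings make speech I/O unreliable, so the
    sketch falls back to touch input and visual output where possible.
    """
    noisy = ctx.ambient_noise_db > 60
    inp = "speech" if ctx.hands_free_needed and not noisy else "touch"
    if ctx.screen_available and (noisy or not ctx.hands_free_needed):
        out = "visual"
    else:
        out = "speech"
    return {"input": inp, "output": out}

# Example: loud environment, hands free, screen present
print(select_modalities(ContextOfUse(75.0, False, True)))
```

A real interaction model would of course draw on many more context features and generate a full interface layout rather than a single channel pair; the point here is only the mapping from context of use to modality choice.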

The current focus is on the individual adaptation of the user interface by exploiting the interaction history between the user and the system. To this end, input and output decisions must be accompanied by learning mechanisms and decision modules that process relevant information from this history. In addition, great importance must be attached to the affective states of the user and their physiological and cognitive implications. Under varying affective states, one and the same output can be perceived differently, or even not at all. Moreover, the preferred forms of input and output as a whole can change. As a first step, the influence of affective states on the user's multimodal perception and behavior must be explored.
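The combination of learning from the interaction history and weighting by affective state can be sketched as a toy decision module. Everything here is a hedged assumption for illustration (the class name, the success-rate heuristic, and the fixed stress penalty on speech output are invented for this example); it only shows the shape of such a mechanism, not the project's implementation.

```python
from collections import defaultdict

class ModalityScorer:
    """Toy decision module: re-weights output modalities from the
    interaction history and the user's current affective state."""

    def __init__(self):
        self.success = defaultdict(int)  # outputs perceived/understood
        self.total = defaultdict(int)    # outputs attempted

    def record(self, modality: str, understood: bool) -> None:
        """Log whether an output in this modality reached the user."""
        self.total[modality] += 1
        self.success[modality] += int(understood)

    def choose(self, candidates, stress: float) -> str:
        """Pick the historically best modality, penalizing speech under
        high stress (0..1), where auditory output is more easily missed.
        The 0.6 penalty weight is an arbitrary illustrative constant."""
        def score(m: str) -> float:
            # Unseen modalities get a neutral prior of 0.5.
            rate = self.success[m] / self.total[m] if self.total[m] else 0.5
            penalty = 0.6 * stress if m == "speech" else 0.0
            return rate - penalty
        return max(candidates, key=score)

scorer = ModalityScorer()
scorer.record("speech", True)
scorer.record("speech", True)
scorer.record("visual", True)
scorer.record("visual", False)
# Under low stress the history favors speech; under high stress the
# penalty shifts the decision to visual output.
print(scorer.choose(["speech", "visual"], stress=0.0))
print(scorer.choose(["speech", "visual"], stress=1.0))
```

In a full system the affective state would itself be estimated from physiological and behavioral signals rather than passed in as a single number, and the learned statistics would be per user, reflecting the individual adaptation described above.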

Finally, this will enable a Companion System not only to adapt to general user preferences, but also to respond to the immediate and individual needs of the current user. This will help achieve high usability and a positive user experience.