News of the Institute of Media Informatics

Congratulations to Frank Honold on obtaining his PhD degree!

Ulm University

On 2 August 2018, Frank Honold completed his PhD studies by presenting his final work entitled

"Interaktionsmanagement und Modalitätsarbitrierung für eine adaptive und multimodale Mensch-Computer Interaktion" (Interaction Management and Modality Arbitration for Adaptive and Multimodal Human-Computer Interaction).

For many years, he has been an assiduous member of the research group Ubiquitous Computing, headed by Prof. Dr.-Ing. Michael Weber.

Frank Honold, a long-term collaborator in the Collaborative Research Centre (SFB) Companion-Technology for Cognitive Technical Systems, concluded his academic career by completing his Ph.D. project. It was supervised by Prof. Dr.-Ing. Michael Weber, Institute of Media Informatics, and Prof. Dr. Dr.-Ing. Wolfgang Minker, Institute of Communications Engineering.

On this occasion, many of his former colleagues gathered for a small celebration, excellently rounded off with homemade culinary specialties prepared by his family.

We wish you all the best, Frank, on the rough Alb or wherever life may take you!

For interested experts, here is the abstract of the Ph.D. thesis:

At present, a paradigm shift is taking place in the domain of human-computer interaction. So far, applications have been designed in a user-centered manner: they are intended to be operated in specific scenarios, and their implementations are based on broad analyses of specific contexts of use (CoU). We can currently observe a shift from these rather standalone applications to cloud-based applications, for which the user interfaces (UIs) are realized in a universal manner to serve a broad range of devices in any CoU. At the same time, widely applied techniques like responsive web design attest to the need and desire for device-specific adaptations that yield suitable UIs for heterogeneous devices. However, the process involved does not change the encoding concepts of the UI. There is no shift in modalities, e.g. from a graphical UI to a verbal UI. Despite the multitude of our everyday interactive devices, a change in the CoU can still result in a situation in which we can no longer operate an application, because the UI used until then cannot adapt to the required extent to the change in the CoU.


People, however, continuously adapt their communicative behavior to their environment and their communication partners. We use speech, gestures, or sketches, and we can use different complementary modalities in parallel, or sequentially, to express our meaning. Our decisions about the use of specific modalities are based on our contextual knowledge. Even though today's context-aware ubiquitous systems do analyze environmental facts in order to optimize their functionality, the versatility of adaptive UIs is almost never addressed.

Constantly changing scenarios may give rise to interactions with remote services. These services may be unknown at design time, yet necessitate a proper UI at runtime. Because such requirements originate from external components at runtime, no such UI exists: it was not provided by the referencing application in advance. In addition, future applications should be deployable on almost any device using multiple modalities. Most present approaches in this domain support model-driven UI generation on the one hand, but omit the use of context knowledge to adapt the UI in a user-specific way on the other. Independent thereof, currently prevalent approaches for modality arbitration use simple decision rules for reasoning and disregard the important fact that contextual parameters may be subject to uncertainty.

This work addresses these problems by presenting an architectural framework in combination with a methodology that enables dynamic and continuous interaction management for various applications. The presented approach allows a dynamic sub-system to run a continuous, user-specific, model-driven modality arbitration process with uncertain knowledge in various application domains.
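The core idea of modality arbitration under uncertain context knowledge can be illustrated with a minimal sketch. This is not the thesis's actual model; all names, context facts, and suitability weights below are illustrative assumptions. Each candidate output modality is scored by its expected suitability over uncertain context estimates, and the best-scoring modality is selected:

```python
# Minimal, hypothetical sketch of modality arbitration under uncertainty.
# All facts and weights are illustrative assumptions, not the thesis's model.

def arbitrate(modalities, context):
    """Pick the modality with the highest expected suitability.

    `context` maps context facts (e.g. "noisy", "hands_busy") to the
    probability that they hold -- contextual knowledge is uncertain.
    """
    def expected_suitability(m):
        # Expected value over uncertain context facts:
        # P(fact) * suitability given the fact holds,
        # plus (1 - P(fact)) * suitability given it does not.
        score = 0.0
        for fact, p in context.items():
            score += (p * m["suitability"][fact]
                      + (1 - p) * m["suitability"]["not_" + fact])
        return score

    return max(modalities, key=expected_suitability)

# Two candidate output modalities with illustrative suitability ratings.
speech = {"name": "speech",
          "suitability": {"noisy": 0.1, "not_noisy": 0.9,
                          "hands_busy": 0.9, "not_hands_busy": 0.6}}
gui = {"name": "gui",
       "suitability": {"noisy": 0.8, "not_noisy": 0.8,
                       "hands_busy": 0.2, "not_hands_busy": 0.9}}

# Sensors report a probably-noisy environment; hands are likely free.
best = arbitrate([speech, gui], {"noisy": 0.8, "hands_busy": 0.3})
print(best["name"])  # the graphical UI wins in a noisy environment
```

A rule-based arbiter, by contrast, would branch on a hard fact such as "the environment is noisy", discarding the probability information entirely; the expected-value formulation keeps that uncertainty in the decision.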

A prototypical implementation is used to analyze and discuss different reasoning-related parameters. The trade-off between the conflicting objectives of short reasoning time and perceived UI quality is analyzed under the influence of three different reasoning algorithms. The findings of this work can be used in future systems to facilitate seamless interaction across device boundaries using multiple modalities.