Explainability in human-robot interaction
In complex robot systems, many AI components interact, and their individual decisions together constitute the robot's observable behavior. Human interaction partners can hardly follow these processes in their entirety, so the robot's behavior is often not interpretable. Applying techniques from Explainable AI aims to make a robot's actions, skills, and knowledge transparent.
Theses in this research area are available. The robots TIAGo (http://pal-robotics.com/robots/tiago/) and Furhat (https://furhatrobotics.com) are available in the laboratory for practical work and empirical studies.
If you are interested, send an email to email@example.com
If you are interested in a thesis topic, please check with our staff for additional information. Even if there are currently no announcements, you are welcome to ask about available topics; it is best to contact a member of the research area that appeals to you most.
The following descriptions are mostly available in German. If you are, however, interested in writing a thesis in English, you are welcome to contact us.
We offer various theses in the areas of hybrid planning, hierarchical planning, and partial order causal link (POCL) planning. We focus our research on plan recognition, mixed-initiative planning with the user, plan explanations, heuristic search, modeling support, and complexity analyses.
We are happy to present our current thesis topics to you. Please contact Conny Olz in case you are interested.
Semantic Web Technologies and Automated Reasoning
- Evaluation of Stream Reasoning Systems
- Description (PDF)
- Comparison of OWL Reasoners for SPARQL Queries
- Description (PDF)
For further topics please contact Birte Glimm.
There is also the opportunity for a joint bachelor's or master's project with Dr. Thorsten Liebig of derivo GmbH, a spin-off of the institute.
Human-in-the-Loop Reinforcement Learning
Can robots and AI agents be trained with demonstrations and/or interactive feedback? In our research, we investigate how methods and concepts from the field of explainable AI (XAI) can be used to enable natural and fast training of robots and agents.
We therefore offer several thesis topics in the fields of Human-in-the-Loop Reinforcement Learning (HRL, Imitation Learning), Explainability, and Human-Robot Interaction.
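To illustrate the kind of learning from demonstrations mentioned above, here is a minimal, self-contained sketch of imitation learning via behavioral cloning: a policy simply copies the action of the nearest demonstrated state. All names (`demos`, `nearest_neighbor_policy`) and the toy 1-D task are illustrative assumptions, not part of our research projects.

```python
def nearest_neighbor_policy(demos):
    """demos: list of (state, action) pairs from a human demonstrator."""
    def policy(state):
        # Pick the action whose demonstrated state is closest (1-D distance).
        s, a = min(demos, key=lambda sa: abs(sa[0] - state))
        return a
    return policy

# Human demonstrates on a 1-D line: move right when left of the goal (5),
# move left otherwise.
demos = [(0, "right"), (2, "right"), (4, "right"), (6, "left"), (9, "left")]
policy = nearest_neighbor_policy(demos)

print(policy(1))  # imitates the demonstrator for unseen states
print(policy(8))
```

Real human-in-the-loop approaches replace this lookup with a trained model and add interactive feedback loops, but the core idea, generalizing a policy from human demonstrations, is the same.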
If you are interested, please send an email to jakob.karalus(at)uni-ulm.de