XAI demonstrator

Teaching the use of Explainable Artificial Intelligence in a practical way

Artificial Intelligence (AI) is increasingly used to support decision-making in companies and for private individuals. Current studies, however, show that automated decisions based on algorithms are viewed sceptically in Germany [1] – even when the algorithms achieve demonstrably better results than human decision-makers. The reason: in many cases, the complexity of intelligent systems makes it seem, from a human perspective, as if algorithmic decisions were being made in a "black box" and could no longer be traced and validated. The research field of Explainable Artificial Intelligence (XAI) addresses this issue and aims to generate automated, human-understandable explanations that help users comprehend the individual decisions made by AI systems. But what does XAI actually do, and how can it help create a basis for appropriate trust in an AI system?

The XAI demonstrator, which is being developed at Ulm University, aims to answer these questions with practical examples and thus make the still rather unknown research field of explainable artificial intelligence more accessible to both students and the general public. The project is funded by the Péter Horváth Foundation.

The purpose of the project is to develop a demonstrator that makes the methods and concepts of XAI accessible to the general public by means of easily understandable application examples (use cases). In two use cases, users can call on the help of an AI system whose recommendations and decisions are explained live by an XAI. In this way, the methods and concepts of XAI are imparted in a playful and applied manner. The XAI demonstrator is intended to serve as a demonstration object for teaching, for imparting knowledge about XAI in companies and administration, and for addressing the broader public. The demonstrator will also be used in science to collect user feedback, test new XAI approaches and, in particular, evaluate their effect on the user. The aim is to make a scientific contribution toward focusing XAI research more strongly on humans.
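To give a sense of the basic principle behind such live explanations, the following sketch trains a small classifier and then attributes one of its individual predictions to the input features using a simple perturbation-based (occlusion-style) method. The synthetic data, model choice, and function names are illustrative assumptions for this sketch only; they are not the XAI demonstrator's actual implementation.

```python
# Illustrative sketch (not the XAI demonstrator's code): explain a single
# AI decision by perturbing each input feature and measuring how much the
# model's prediction changes on average.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two informative features, one pure-noise feature.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(instance, model, X_background, n_samples=200):
    """Attribute the predicted probability for one instance to each feature
    by replacing that feature with values drawn from the background data
    and measuring the average change in the prediction."""
    base = model.predict_proba(instance[None, :])[0, 1]
    attributions = []
    for j in range(len(instance)):
        perturbed = np.tile(instance, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_background[:, j], size=n_samples)
        attributions.append(base - model.predict_proba(perturbed)[:, 1].mean())
    return np.array(attributions)

instance = X[0]
attr = explain(instance, model, X)
for j, a in enumerate(attr):
    print(f"feature {j}: contribution {a:+.3f}")
```

The noise feature receives an attribution near zero, while the informative features carry most of the explanation – the kind of per-decision feedback an XAI presents to the user.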

Funding body: Péter Horváth Foundation

Project period: June 2020 - January 2021

[1] Fischer & Petersen (2018): What Germany knows and thinks about algorithms, Bertelsmann Stiftung (ed.).


To the XAI-Demonstrator: https://www.erklaerbare-ki.de/xai-demonstrator

To the technical documentation: https://xai-demonstrator.github.io/xai-demonstrator

To the GitHub repository: https://github.com/XAI-Demonstrator/xai-demonstrator


The XAI demonstrator app makes XAI methods and concepts accessible to the general public by means of easily understandable application examples (use cases). In two exemplary use cases, users can try out the help of an AI system whose recommendations and decisions are explained live by an XAI.

In this way, the XAI demonstrator allows users to observe the functioning of various XAI algorithms in action and get to know the basic principle and application possibilities of XAI in a practical and understandable way. The XAI demonstrator will be used as a demonstration app for teaching, for educating companies and authorities about XAI, and for addressing a broad public. In addition, the demonstrator is used scientifically to collect user feedback, to test new XAI approaches and, in particular, to evaluate how they are experienced by the user.