Explainability, Fairness and Acceptability of Intelligent Systems (EFA)

The adoption of intelligent systems continues to grow, driven by the successful application of methods from artificial intelligence, in particular machine learning, but also from areas such as sensor technology, compressed sensing, big data and digitalisation. Such systems are data-driven and model-based. Through their ability to perceive their environment, to learn, to plan and to act, they are becoming increasingly autonomous. As a consequence, intelligent systems will take over more and more responsibility in our social and economic lives. They often process large datasets, for example to automatically classify customers, to assess the creditworthiness of an applicant or to generate prognoses about business processes. The risks involved are evident and the subject of widespread discussion.

The acceptance of intelligent systems today is limited by their insufficient ability to explain their decisions and actions to human users. Often, it is neither comprehensible how an automatic decision was reached (explainability) nor whether the decision is fair with respect to moral, ethical and legal criteria (fairness). Given the power of intelligent systems, their explainability, fairness and acceptability are essential for their widespread use.
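To make the fairness criterion mentioned above more concrete, one common check is whether an automated decision, such as a credit approval, is issued at similar rates across groups defined by a protected attribute (demographic parity). The following sketch is purely illustrative and not part of the centre's methodology; the data, variable names and the choice of metric are assumptions for demonstration only.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # e.g. approval rate for group 0
    rate_b = y_pred[group == 1].mean()  # e.g. approval rate for group 1
    return abs(rate_a - rate_b)

# Illustrative (made-up) binary credit decisions for two applicant groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = credit granted
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.00 would indicate parity
```

A small gap by this measure does not by itself establish fairness in the moral, ethical or legal sense discussed above; it is one of several quantitative criteria that such an analysis can draw on.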

The goal of the competence centre is to scientifically analyse the topics of explainability, fairness and acceptability from an interdisciplinary perspective and to advance the state of the art in this area.