Fairness and bias of AI

An interactive e-learning unit including a real case study

Many companies and public administrations already use data-driven, algorithmic decision-making systems. Soon, systems from the field of Artificial Intelligence (AI) will be omnipresent in almost all areas of industry and daily life. Negative headlines are mounting, however, because AI systems are used improperly and make discriminatory decisions. The U.S. justice system, for example, has been using software called COMPAS for years to assess the recidivism risk of offenders based on a variety of factors. It turns out, however, that COMPAS makes different kinds of mistakes depending on ethnicity: for African-American defendants, for example, the software tends to overestimate the risk of recidivism.
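To make this concrete: one way such a disparity can be quantified is by comparing false positive rates across groups, i.e. how often people who do not reoffend are nevertheless labelled "high risk". The following minimal Python sketch uses purely hypothetical data and function names – it illustrates the idea and is neither part of COMPAS nor of the course material.

def false_positive_rate(y_true, y_pred):
    # Share of actual negatives (no recidivism, 0) that were predicted positive ("high risk", 1).
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) > 0 else 0.0

# Hypothetical records: (group, actually reoffended, predicted "high risk")
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

for group in ("A", "B"):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    print(group, round(false_positive_rate(y_true, y_pred), 2))

# Unequal false positive rates between the groups are the kind of bias reported for COMPAS:
# members of one group are wrongly flagged as "high risk" more often than members of the other.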

The underlying problem is that many people – not only in the general population but also among those who develop and utilise AI systems – still assume across the board that algorithmic decisions are objective and neutral. However, this is neither a given, nor is a decision based solely on objective characteristics necessarily fair and non-discriminatory: seemingly neutral features such as a postal code can correlate strongly with a protected attribute and thus act as its proxy. In view of this situation, there is a great need for a deeper understanding of the challenges posed by the use and implementation of AI systems, as well as of possible solutions. Ulm University has therefore established the interactive e-learning course "Fairness and Bias of Algorithmic Decision Systems" to provide students, professionals and the interested public with a comprehensive, application-oriented understanding of fairness and bias in AI systems. In addition to learning the basics in an entertaining way, participants apply and deepen what they have learned through interactive exercises and a real-life case study. The project was funded by the Péter Horváth Foundation.

Funding body: Péter Horváth Foundation

Project period: January 2022 – August 2022

Related Links: https://bias-and-fairness-in-ai-systems.de/

Transfer

The e-learning course aims to contribute to the transfer of knowledge needed to ensure the use of ethically sound AI systems. It provides participants with entertaining basics and examples from practice, as well as prompts for critical thinking about the handling and implementation of ethically sound AI. First, participants learn about the concepts of "bias" and "fairness" in AI systems and deepen them through practical, interactive exercises. The newly acquired skills can then be applied in a real-world case study.
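To give a flavour of what such a fairness concept can look like formally, the sketch below illustrates one common criterion, demographic parity, which asks that the rate of positive decisions be similar across groups. The data and names are hypothetical and not taken from the course.

def positive_rate(decisions):
    # Share of positive decisions (1 = e.g. "application approved") in a group.
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    # Largest difference in positive-decision rates between any two groups.
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_A": [1, 1, 0, 1, 0],  # 60% positive decisions
    "group_B": [1, 0, 0, 0, 0],  # 20% positive decisions
}
print(demographic_parity_gap(decisions))  # approx. 0.4 – a large gap signals possible unfairness

Other criteria, such as equalised odds, instead compare error rates between groups, as in the false positive rate sketch above.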

The course is not only part of the course catalogue for students in the master's programme at Ulm University but is also available to the general public. It serves as a basis for knowledge transfer into practice and, subsequently, as a starting point for acquiring application-oriented research projects in cooperation with companies and public administrations. The course is intended to support a well-founded public discourse on the potential and limits of AI.