Prof. Dr. Mathias Klier
+49 (0) 7 31 50-3 23 12
mathias.klier(at)uni-ulm.de
Many companies and public administrations are already using data-driven, algorithmic decision-making systems. In the foreseeable future, there will hardly be an industry or area of daily life in which systems from the field of artificial intelligence (AI) are not present. Increasingly, however, negative headlines appear because AI systems are used improperly and make discriminatory decisions. For example, for years the U.S. justice system used software called COMPAS to calculate the recidivism risk of offenders based on various factors. It turned out, however, that COMPAS makes different kinds of errors depending on ethnicity: African-American defendants, for example, are more often incorrectly assessed as being at high risk of re-offending.
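To make the kind of disparity reported for COMPAS more tangible, the following minimal Python sketch (using invented toy data, not the actual COMPAS data) compares how often members of each group who did not re-offend were nevertheless labelled high risk, i.e. the group-wise false positive rate:

```python
# Minimal sketch with invented toy data (not the actual COMPAS data):
# compare how often each group is wrongly labelled "high risk"
# (false positive rate), one of the disparities reported for COMPAS.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

false_positives = defaultdict(int)  # predicted high risk, did not re-offend
negatives = defaultdict(int)        # all who did not re-offend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"Group {group}: false positive rate = {fpr:.2f}")
```

Differences of this kind between groups are one of the forms of bias discussed in the learning unit.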
The underlying problem is that many people, not only in the general population but also among those who develop and deploy AI systems, still assume across the board that algorithmic decisions are objective and neutral. Yet this is not a given, nor is a decision based solely on objective characteristics necessarily fair and non-discriminatory. Given this situation, there is a great need for a deep understanding of the challenges in the use and implementation of AI systems and of possible solutions. Against this background, the interactive e-learning unit "Fairness and Bias of Algorithmic Decision Systems" was developed at the University of Ulm to provide students, professionals and the interested public with a comprehensive, application-oriented understanding of fairness and bias in AI systems. In addition to teaching the fundamentals, the core of the course unit is the application and consolidation of what has been learned through interactive exercises and work on a real case study from practice. The project was funded by the Péter Horváth Foundation.
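The point that basing a decision only on seemingly objective characteristics does not by itself guarantee fairness can be illustrated with another small sketch (again with invented data and hypothetical feature names such as postal_code): a neutral-looking feature can act as a proxy for group membership, so a model trained without the sensitive attribute may still reproduce the disparity.

```python
# Minimal sketch (invented toy data, hypothetical feature names): even if
# the sensitive attribute "group" is dropped, a seemingly neutral feature
# such as "postal_code" can act as a proxy for it.
from collections import Counter, defaultdict

people = [
    {"group": "A", "postal_code": "89001", "income": 2100},
    {"group": "A", "postal_code": "89001", "income": 2300},
    {"group": "B", "postal_code": "89077", "income": 2200},
    {"group": "B", "postal_code": "89077", "income": 2400},
]

# Share of each group per postal code: in this toy example the proxy is
# almost perfectly informative about group membership.
by_code = defaultdict(Counter)
for p in people:
    by_code[p["postal_code"]][p["group"]] += 1

for code, counts in by_code.items():
    total = sum(counts.values())
    shares = {g: c / total for g, c in counts.items()}
    print(code, shares)
```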
Sponsor: Péter Horváth Foundation
Project period: January 2022 - August 2022
Related Links: https://bias-and-fairness-in-ai-systems.de/
The e-learning unit is intended to contribute to knowledge transfer and thus help ensure the use of ethically sound AI systems. It offers interested parties an accessible introduction to the fundamentals, examples from practice, and prompts for critical reflection on the handling and implementation of ethically sound AI. Participants first learn about the concepts of "bias" and "fairness" in AI systems and can deepen this knowledge through practical, interactive exercises. The newly acquired skills can then be applied in a real-world case study.
The learning unit is embedded in the course offerings for master's students at the University of Ulm, but is also available to the general public. It serves as a basis for transferring knowledge into practice and, subsequently, as a starting point for acquiring application-oriented research projects in cooperation with companies and public administrations. The learning unit is intended to support a well-founded public discourse on the potential and limits of AI.