Evaluating explainability in the context of predictive process monitoring

Universität Ulm

Faculty-public presentation of the PhD project ("Grüner Vortrag"), Ghada ElKhawaga, Location: O27 / Room 5202, Date: November 22, 2023, Time: 11:00 a.m.

Predictive Process Monitoring (PPM) has proven to be a fundamental use case of process mining. In PPM, the future of a running business process instance is predicted in terms of the process instance outcome, the next process activity to be executed, time-related information about this activity, and key performance indicators of the process. Providing decision makers with such information enables them to take the right actions to enhance process performance or to prevent undesired process outcomes.
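To make the prediction task concrete, the following minimal sketch illustrates outcome-oriented PPM on a toy event log. The activity names, the set-based prefix encoding, and the classifier choice are illustrative assumptions for this example, not the techniques studied in the project.

```python
# Minimal sketch of outcome-oriented PPM on a toy event log (all names
# and the encoding are illustrative assumptions).
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy event log: each trace is a sequence of activities plus a binary outcome.
traces = [
    (["register", "check", "approve"], 1),
    (["register", "check", "reject"], 0),
    (["register", "recheck", "check", "approve"], 1),
    (["register", "recheck", "reject"], 0),
]

# A running instance is only partially observed, so train on trace prefixes.
prefixes, outcomes = [], []
for trace, outcome in traces:
    for k in range(1, len(trace) + 1):
        prefixes.append(trace[:k])
        outcomes.append(outcome)

# Set-based (aggregation) encoding: one binary feature per activity seen so far.
encoder = MultiLabelBinarizer()
X = encoder.fit_transform(prefixes)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, outcomes)

# Predict the outcome of a running instance from its current prefix.
running = [["register", "recheck"]]
print(model.predict_proba(encoder.transform(running)))
```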

A large number of PPM approaches apply predictive models that are based on Machine Learning (ML). Consequently, PPM inherits the challenges faced by ML approaches, one of which concerns the need to gain user trust in the generated predictions. With the user at the center of attention, various eXplainable Artificial Intelligence (XAI) methods have emerged to generate explanations of the reasoning process of an ML model. With the growing interest in XAI methods, it becomes crucial to ensure the validity and interpretability of the obtained explanations.
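As an illustration of how a local XAI method attributes a single prediction to input features, the following sketch implements a simple perturbation-based attribution in the spirit of methods such as LIME or SHAP. The synthetic features and the occlusion strategy are assumptions made for this example only.

```python
# Hedged sketch of a local, perturbation-based explanation: how much does
# each feature of one instance change the predicted outcome probability
# when it is occluded? (Synthetic data; an assumption for illustration.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)  # 4 binary features
y = (X[:, 0] * X[:, 2] > 0).astype(int)              # outcome driven by features 0 and 2

model = LogisticRegression().fit(X, y)

instance = np.array([1.0, 1.0, 1.0, 0.0])
baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]

# Occlusion: zero out one feature at a time and record the probability drop.
for j in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[j] = 0.0
    delta = baseline - model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    print(f"feature {j}: local contribution ~ {delta:+.3f}")
```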

In the context of this PhD project, an analysis framework is developed to study how the different techniques applied in the course of a PPM workflow affect the generated predictions. Furthermore, two approaches to evaluating XAI methods are proposed. The first approach compares the consistency of global XAI methods with the ground truth learned about the data, whereas the second approach evaluates the interpretability of explanations generated by local XAI methods. Finally, the potential of using explanations to address major challenges faced by process mining is investigated.
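The first evaluation approach can be illustrated on synthetic data whose informative features are known by construction: a global XAI method that is consistent with the ground truth should rank exactly those features highest. The sketch below uses permutation importance as a stand-in for a global XAI method; the data set and the top-k consistency criterion are simplifying assumptions, not the evaluation protocol of the project.

```python
# Sketch of checking a global explanation against a known ground truth.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# shuffle=False keeps the informative features in the first columns (0-2).
X, y = make_classification(
    n_samples=500, n_features=8, n_informative=3, n_redundant=0,
    shuffle=False, random_state=0,
)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: permutation importance as one example of a global XAI method.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]

ground_truth = {0, 1, 2}
top_k = set(ranking[:3])
print("top-3 features:", sorted(top_k),
      "| consistent with ground truth:", top_k == ground_truth)
```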