Review-based explanations for recommendations in e-commerce

Modern e-commerce platforms increasingly rely on sophisticated AI-based recommender systems that help users cope with information overload through personalised product suggestions. These systems achieve particularly high recommendation quality when they process unstructured, textual customer reviews using deep learning or modern text-mining techniques. This technological progress has a downside, however: the underlying models usually function as highly complex ‘black boxes’. For users, the system’s logic is opaque, which breeds scepticism and a lack of trust in algorithmic decisions. To ensure the acceptance of such systems, it is essential to make them explainable from a user-centred perspective. This is precisely where the emerging research field of Explainable Artificial Intelligence (XAI) comes in. Yet research has so far lacked XAI methods capable of adequately and comprehensibly explaining complex recommendations based on unstructured text data.

Against this backdrop, this research project is dedicated to the algorithmic development of innovative, model-agnostic explanation methods for review-based recommender systems. The methodological focus lies on designing novel XAI approaches for unstructured text data. The challenge is not only to understand the recommendation process technically, but also to generate explanations that align with human explanatory patterns. To this end, the newly developed method first automatically structures unstructured reviews into human-interpretable concepts (e.g. aspect/opinion pairs relating to specific product characteristics). By systematically varying these text modules, the causal influence of individual review elements on the recommender system’s output can be quantified. On this basis, the algorithm generates user-centred explanations – for example, counterfactual explanations or visualised SHAP values – which show users transparently why a specific product was suggested instead of another.
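The perturbation idea described above – structure a review into segments, then remove each segment and measure how the black-box score changes – can be sketched as follows. Everything here is a simplified assumption for illustration: `recommender_score` is a toy keyword-weighted stand-in for the actual review-based model, and the segment splitter is a crude placeholder for genuine aspect/opinion extraction.

```python
import re

# Toy stand-in for the black-box recommender (assumption): scores a candidate
# product from the text of a user's review. The real project would use a deep
# review-based model; any callable text -> float works, since the method is
# model-agnostic.
WEIGHTS = {"battery": 0.6, "camera": 0.3, "screen": 0.1}

def recommender_score(review_text: str) -> float:
    tokens = re.findall(r"[a-z]+", review_text.lower())
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

# Step 1: structure the review into segments. A real system would extract
# aspect/opinion pairs with NLP tooling; splitting on punctuation is purely
# illustrative.
def extract_segments(review_text: str) -> list[str]:
    return [s.strip() for s in re.split(r"[,.;]", review_text) if s.strip()]

# Step 2: systematically remove each segment and record the score drop --
# a model-agnostic estimate of that segment's influence on the output.
def segment_influences(review_text: str) -> dict[str, float]:
    base = recommender_score(review_text)
    influences = {}
    for seg in extract_segments(review_text):
        perturbed = review_text.replace(seg, "")
        influences[seg] = base - recommender_score(perturbed)
    return influences

review = "great battery, decent camera, screen scratches easily"
for seg, infl in sorted(segment_influences(review).items(), key=lambda x: -x[1]):
    print(f"{infl:+.2f}  {seg}")
# -> +0.60  great battery
#    +0.30  decent camera
#    +0.10  screen scratches easily
```

The same loop generalises to any black-box recommender, which is what makes the approach model-agnostic: only input perturbations and output scores are needed, never model internals.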

A key objective of the project is to iteratively evaluate and further develop these new algorithmic XAI artefacts. Functional analyses and controlled online experiments will investigate the extent to which the algorithmic design decisions (e.g. textual vs. visual representation, counterfactual vs. feature importance explanations) meet requirements such as relevance, comprehensibility and model fidelity, and measurably increase users’ trust in the system. Finally, the developed explanation method will be integrated into a real-world e-commerce platform in cooperation with an industry partner, in order to validate, through a field experiment, the actual cumulative influence of the generated explanations on users’ economic decision-making behaviour.
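The counterfactual explanation type compared in these experiments answers "why product A instead of product B?" by finding a minimal change to the review that would flip the recommendation. A minimal sketch, again under toy assumptions – `PRODUCT_WEIGHTS` and the two phone products are hypothetical, and exhaustive subset search stands in for a real counterfactual optimiser:

```python
import re
from itertools import combinations

# Hypothetical two-product recommender: each product scores the user's
# review text via its own keyword weights (assumption for illustration).
PRODUCT_WEIGHTS = {
    "PhoneA": {"battery": 0.6, "camera": 0.3},
    "PhoneB": {"screen": 0.5, "camera": 0.2},
}

def score(product: str, text: str) -> float:
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(PRODUCT_WEIGHTS[product].get(t, 0.0) for t in tokens)

def counterfactual_segments(text: str, winner: str, runner_up: str):
    """Smallest set of review segments whose removal flips the ranking,
    supporting explanations like: 'PhoneA was recommended because you
    praised its battery; without that, PhoneB would have won.'"""
    segments = [s.strip() for s in text.split(",") if s.strip()]
    for k in range(1, len(segments) + 1):          # smallest subsets first
        for subset in combinations(segments, k):
            reduced = ", ".join(s for s in segments if s not in subset)
            if score(runner_up, reduced) > score(winner, reduced):
                return list(subset)
    return None  # no counterfactual exists

review = "great battery, nice screen, decent camera"
print(counterfactual_segments(review, "PhoneA", "PhoneB"))
# -> ['great battery']
```

A feature-importance explanation would instead report a score contribution per segment; the experiments described above would then compare which of the two forms users find more relevant, comprehensible, and trustworthy.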

Cooperation partner: German Research Foundation (DFG)

Project period: 2024–2026