Review-based explanations for recommendations in e-commerce
Research project
Prof. Dr. Steffen Zimmermann
+49 (0)731 50-32300
steffen.zimmermann@uni-ulm.de
On e-commerce platforms, users are often confronted with an overwhelming variety of products and services, which can lead to information overload. Recommender systems offer a solution by generating personalised product recommendations. Customer reviews play an important role here: they are a rich source of information about the quality and characteristics of products, and state-of-the-art recommender systems exploit them to achieve high recommendation accuracy. Despite these technological advances, recommender systems are often met with scepticism and a lack of trust. This is frequently due to the opacity of the recommendation logic and the fear that commercial interests could dominate the recommendations. To reinforce trust in these systems, approaches are needed that make their functioning comprehensible and promote acceptance. For review-based recommender systems in particular, however, there are still no adequate explanation methods that exploit the trust-building potential of reviews, although reviews are indispensable in e-commerce and are perceived by users as more trustworthy than information provided by companies.
Against this background, the project will develop an innovative explanation method for review-based recommender systems that focuses on relevant information from reviews. The method builds on concepts from Explainable Artificial Intelligence (XAI) and makes the recommendation process transparent for users by explaining how and why specific products or services are suggested. Reviews thus serve not only as input for the algorithms of the recommender systems but also directly as explanations of the recommendations. This allows the trust-building potential of reviews to be fully exploited, as users can see which information from reviews influenced the recommendations they receive. Thanks to a model-agnostic design, the method can be applied flexibly to a wide range of review-based recommender systems.
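To illustrate what a model-agnostic, review-based explanation can look like, the following minimal sketch (Python) uses perturbation-based, leave-one-out attribution, a common XAI technique: the recommender is treated as a black box, individual review sentences are removed one at a time, and the sentences whose removal lowers the recommendation score the most are shown to the user as the explanation. This is a hypothetical illustration under that assumption, not the project's actual method; all names (score_fn, explain_recommendation, toy_score_fn) are invented for the example.

    from typing import Callable, List, Tuple


    def explain_recommendation(
        score_fn: Callable[[List[str]], float],
        review_sentences: List[str],
        top_k: int = 3,
    ) -> List[Tuple[str, float]]:
        # Leave-one-out attribution: remove each sentence in turn and
        # measure how much the black-box recommendation score drops.
        baseline = score_fn(review_sentences)
        attributions = []
        for i, sentence in enumerate(review_sentences):
            perturbed = review_sentences[:i] + review_sentences[i + 1:]
            # Positive delta: this sentence supported the recommendation.
            delta = baseline - score_fn(perturbed)
            attributions.append((sentence, delta))
        # Surface the most influential sentences as the explanation.
        attributions.sort(key=lambda pair: pair[1], reverse=True)
        return attributions[:top_k]


    # Toy usage: a stand-in recommender that rewards positive keywords.
    def toy_score_fn(sentences: List[str]) -> float:
        positive = {"excellent", "durable", "comfortable"}
        return sum(1.0 for s in sentences for w in positive if w in s.lower())


    reviews = [
        "The sound quality is excellent.",
        "Shipping took two weeks.",
        "Very comfortable to wear for hours.",
    ]
    for sentence, weight in explain_recommendation(toy_score_fn, reviews, top_k=2):
        print(f"{weight:+.1f}  {sentence}")

Because the attribution only queries the scoring function, the same procedure works with any review-based recommender, which is what makes the approach model-agnostic.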
A central goal of the project is to comprehensively evaluate the effects of the explanation method on trust, acceptance and user behaviour. To this end, controlled online experiments are being conducted to investigate how the new explanation method affects users' trust in the system and their willingness to use it. In addition, the explanation method is to be integrated into the system of one of our cooperation partners and evaluated in field experiments.
Project period: 2024–2026