Seminar Customer Relationship Management and Social Media (Bachelor)

The seminar Customer Relationship Management and Social Media builds on the course "Customer Relationship Management and Customer Analytics" and is assigned to the specialisation "Business Analytics".

As part of the seminar, solution approaches to specific issues in the areas of customer relationship management and social media are developed and refined. As a rule, a structured literature review on the topic is compiled first and best practices are researched. A critical comparison of theory and practice, the students' own ideas and recommendations for action and, where appropriate, the use or evaluation of software tools round off the seminar.

Topics

Economic inequality, education, income, or unemployment – all these factors influence how much people trust their governments. However, such relationships are rarely linear and defy simple models. New approaches from AI research – in particular explainable AI – make it possible to visualise complex economic relationships and better understand them politically and socially.
This seminar paper aims to provide a structured overview of the scientific literature on XAI in economic and political analyses. It will then show how such methods can be used to make economic data easier to understand and better explain why certain developments occur.

Literature reference: Bellantuono, L., Palmisano, F., Amoroso, N., Monaco, A., Peragine, V., & Bellotti, R. (2023). Detecting the socio-economic drivers of confidence in government with eXplainable Artificial Intelligence. Scientific Reports, 13(1), 839.
https://www.nature.com/articles/s41598-023-28020-5.pdf
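To give a flavour of how such methods surface non-linear relationships, the following minimal sketch trains a non-linear model on purely synthetic data and ranks the drivers via permutation importance (scikit-learn is assumed; the feature names, threshold and effect sizes are invented for illustration):

```python
# Probing a non-linear model of "confidence in government" with
# permutation importance. All data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))  # columns: inequality, education, unemployment
# Non-linear ground truth: trust drops sharply once inequality passes a threshold.
y = -2.0 * (X[:, 0] > 0.5) + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0.0, 0.3, n)

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["inequality", "education", "unemployment"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```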

Many AI systems are not only trained once, but are further developed during operation through human feedback. Users mark errors, correct suggestions or label new cases – and thus directly influence how the model improves. In practice, however, this feedback often remains inaccurate or inconsistent because users cannot understand why the AI arrives at a particular output. Explainable AI addresses this issue by making the decision-making basis of a model understandable, thus providing guidance for targeted feedback.
This seminar paper aims to provide a structured overview of the scientific literature on explainable AI in the context of human-in-the-loop learning systems and, based on this, to identify how explainable AI can be used in concrete terms to make human feedback more informative and consistent.

Literature reference: Kulesza, T., Burnett, M., Wong, W. K., & Stumpf, S. (2015). Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 126-137).
https://doi.org/10.1145/2678025.2701399
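The feedback loop described above can be made concrete in a few lines. In this minimal sketch (scikit-learn is assumed; the "user" is simulated by a hidden ground truth), a classifier is updated online whenever the user flags a wrong prediction:

```python
# Human-in-the-loop sketch: a linear classifier is updated incrementally
# with simulated user corrections. All data is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # ground truth known to the "user"

model = SGDClassifier(loss="log_loss", random_state=1)
model.partial_fit(X[:100], y[:100], classes=[0, 1])  # initial training

for i in range(100, 500):
    pred = model.predict(X[i:i + 1])[0]
    if pred != y[i]:                               # the user flags the error ...
        model.partial_fit(X[i:i + 1], y[i:i + 1])  # ... and the model updates

print("accuracy after feedback:", model.score(X, y))
```

Explanations enter this loop as guidance: when users can see why the model erred, their corrections become more targeted, which is exactly the argument of explanatory debugging.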

AI is increasingly being used in critical and complex applications (e.g. medicine, security). In these areas, the relevant expertise usually lies with users and domain experts – not programmers. This is precisely why the question of how non-technical users can help shape AI systems without programming is becoming increasingly important. In other words, users influence a system through interaction, corrections or feedback so that it better suits the real working context, without having to write code or understand machine learning methods in detail.
This seminar paper aims to use a structured literature review to identify which approaches enable ‘AI development without programming’ (e.g. interactive learning and human-in-the-loop) and which design principles are necessary to enable non-technical users to train AI systems in a targeted and responsible manner.

Literature reference: Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: a state of the art. Artificial Intelligence Review, 56(4), 3005-3054.
https://doi.org/10.1007/s10462-022-10246-w
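One concrete mechanism from this literature is active learning: the system asks the domain expert for labels only on the cases it is least certain about, so the expert steers the model purely by answering questions. A minimal sketch, assuming scikit-learn and an expert simulated by a hidden ground truth:

```python
# Active-learning sketch (uncertainty sampling): the system repeatedly asks
# the "expert" to label the example it is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_pool = rng.normal(size=(1000, 2))
true_labels = (X_pool[:, 0] - X_pool[:, 1] > 0).astype(int)  # "expert" knowledge

labelled = list(range(20))                # start with a few labelled examples
for _ in range(20):                       # twenty rounds of querying the expert
    model = LogisticRegression().fit(X_pool[labelled], true_labels[labelled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labelled] = np.inf        # never ask about the same case twice
    query = int(np.argmin(uncertainty))   # most uncertain case
    labelled.append(query)                # the expert provides its label

print("accuracy:", model.score(X_pool, true_labels))
```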

The film Mercy depicts a justice system in which AI acts as a judge, deciding on guilt and punishment. It thus picks up on a real development: AI-supported systems are already being discussed or used as decision-making aids in various areas of the justice system. At the same time, the idea of ‘AI as a judge’ touches on central constitutional issues such as transparency, the obligation to provide reasons, accountability and fairness. This raises the key question of whether judicial decision-making can be automated in principle – or whether it remains inextricably linked to human judgement.
This seminar paper will use a scientifically sound literature review to analyse the opportunities and risks associated with the use of AI in the administration of justice and to identify the limits of delegating judicial decisions to AI from the perspective of the rule of law.

Literature reference: Borgesano, F., De Maio, A., Laghi, P., & Musmanno, R. (2025). Artificial intelligence and justice: a systematic literature review and future research perspectives on Justice 5.0. European Journal of Innovation Management, 28(11), 349-385.
https://doi.org/10.1108/EJIM-01-2025-0117

AI can recognise patterns in data and use them to make predictions. However, these patterns are often mere correlations and say nothing about the real cause. A well-known example: in many data sets, drinking a glass of wine a day correlates with a higher life expectancy. It would be premature to conclude from this that wine prolongs life. Instead, wine drinking may simply be more common among people who have more social contacts or a higher income – and these factors may explain the difference. AI models are therefore strong at making predictions, but only of limited use for deriving reliable recommendations for action. This is where causal AI comes in, aiming to reveal causal relationships in addition to correlations.
This seminar paper uses a structured literature review to show why correlation-based AI often does not allow for reliable cause-and-effect statements and which methods of causal AI are used to derive causal effects for robust decisions and recommendations.

Literature reference: Jiao, L., Wang, Y., Liu, X., Li, L., Liu, F., Ma, W., ... & Hou, B. (2024). Causal inference meets deep learning: A comprehensive survey. Research, 7, 0467.
https://doi.org/10.34133/research.0467
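The wine example can be reproduced in a few lines. In this sketch (all data and effect sizes are synthetic), income drives both wine consumption and life expectancy, so the naive correlation is clearly positive even though wine has no causal effect by construction; adjusting for the confounder recovers an effect of roughly zero:

```python
# Confounding sketch: "income" drives both wine consumption and life
# expectancy. Wine has NO causal effect here, yet it correlates with life
# expectancy until the confounder is adjusted for. All data is synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
income = rng.normal(size=n)                   # the confounder
wine = 0.8 * income + rng.normal(size=n)      # wine consumption tracks income
life = 0.9 * income + rng.normal(size=n)      # life expectancy tracks income only

print("naive corr(wine, life):", np.corrcoef(wine, life)[0, 1])   # clearly > 0

# Multiple regression with the confounder included recovers the true
# causal effect of wine, which is ~0 by construction.
X = np.column_stack([wine, income, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, life, rcond=None)
print("wine coefficient after adjusting for income:", beta[0])    # ~0
```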

Applications such as ChatGPT, which are based on generative artificial intelligence (GenAI), can disruptively change everyday life and the professional world. GenAI comprises algorithms that are capable of generating new content in the form of audio, code, images, text, simulations and videos. With the increasing complexity and multi-layered nature of the solutions developed by GenAI, new challenges and opportunities are constantly emerging – including for the field of Explainable Artificial Intelligence (XAI) research. XAI methods for GenAI could provide explanations that enable a better understanding of how GenAI works, whether through analysing responses to specific inputs or understanding the model as a whole.
The seminar paper should provide an overview of the scientific literature on XAI specifically for GenAI. Subsequently, a scenario should be developed for how GenAI systems can be designed in such a way that they are understandable and comprehensible for users.
Literature reference: https://arxiv.org/abs/2404.09554

In Germany, more than half of the population rejects fully automated decisions. The myth of the uncontrollable intelligent machine persists. On the other hand, algorithmic decision-making (support) systems are already being used successfully in many areas of everyday life, not least in medical diagnostics or in assessing creditworthiness. In order to increase user acceptance of algorithms and intelligent systems, the field of Explainable Artificial Intelligence (XAI) aims to provide automatic explanations for algorithmic decisions. Counterfactual explanations are being hyped in science and practice – but how good are they really?
The seminar paper should provide a structured overview of the scientific literature on XAI and, in particular, on counterfactual explanations. It should then critically assess the added value that counterfactual explanations bring to users of intelligent systems.

Literature reference: https://arxiv.org/abs/2010.10596
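The core idea of a counterfactual explanation – the smallest change to an input that would flip the model's decision ("had your income been higher, the loan would have been approved") – fits into a short sketch. Everything below is invented for illustration; the methods surveyed in the reference above perform this search far more carefully:

```python
# Counterfactual-explanation sketch: find a small single-feature change
# that flips a toy credit decision. Model and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))             # features: income, debt (standardised)
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy "creditworthy" label
clf = LogisticRegression().fit(X, y)

def counterfactual(x, steps=np.arange(0.1, 3.0, 0.1)):
    """Greedy search for the smallest single-feature change that flips the decision."""
    for step in steps:
        for i, direction in [(0, +1.0), (1, -1.0)]:  # raise income or lower debt
            candidate = x.copy()
            candidate[i] += direction * step
            if clf.predict([candidate])[0] == 1:
                return i, direction * step
    return None

x = np.array([-0.5, 0.4])     # a rejected applicant
print(clf.predict([x])[0])    # 0: credit denied
print(counterfactual(x))      # e.g. (0, 1.0): "raise income by one unit"
```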

People analytics, as a form of algorithmic management, is gaining popularity in the wake of the decentralisation of work and the search for tools to support (self-)organisation and leadership in distributed teams. However, alongside the supposed potential of people analytics, there is also the inherent danger that it will serve as surveillance software and reinforce existing prejudices and discrimination.

This seminar paper aims to provide a comprehensive overview of the scientific literature on employee perceptions of people analytics. Subsequently, a scenario will be designed to measure employee perceptions regarding the introduction of people analytics in the work environment.

Literature reference: Klöpper, M. (2023). Every break you take, every click you make – empirical insights on employees' perception of people analytics. ECIS 2023 Research Papers, 356.
https://aisel.aisnet.org/ecis2023_rp/356/


Artificial intelligence (AI) has surpassed humans in diagnosing X-rays and playing chess. At the same time, AI systems make headlines because they are used improperly and make discriminatory decisions. For example, job applications filtered by an AI system may result in only male applicants being selected. Consideration of fairness and bias in the development of such systems has therefore gained considerable importance. There are various ways to define when an AI system is fair, including the concept of ‘counterfactual fairness’.
This seminar paper aims to provide a comprehensive overview of the scientific literature on counterfactual fairness. Subsequently, a scenario will be designed to demonstrate how counterfactual fairness could contribute to the fairness (with a focus on traceability) of AI systems.

Literature reference: Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Advances in Neural Information Processing Systems, 30.
https://proceedings.neurips.cc/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf
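For orientation, the defining condition from Kusner et al. can be stated compactly: a predictor Ŷ is counterfactually fair if, given an individual's observed features X = x and protected attribute A = a, the prediction would have been the same had A counterfactually taken any other value a′ (U denotes the latent background variables of the underlying causal model):

```latex
% Counterfactual fairness (Kusner et al., 2017), for all y and all a':
P\big(\hat{Y}_{A \leftarrow a}(U) = y \,\big|\, X = x,\, A = a\big)
  \;=\;
P\big(\hat{Y}_{A \leftarrow a'}(U) = y \,\big|\, X = x,\, A = a\big)
```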


Post-merger integration (PMI) encompasses the process of merging at least two companies into a single legal entity. This process continues until a uniform structure has been established between the companies involved. A key success factor in PMI is effective collaboration between new colleagues. The so-called ‘relationship risk’ represents a decisive integration risk in this context.
This seminar paper will first provide an overview of the scientific literature on analytics solutions for evidence-based PMI decisions. Building on this, a scenario will be designed that shows how the characteristics and developments of relationship risks can generally be assessed from the employee's perspective.

Literature references:
Woehler et al. (2021). Turnover during a corporate merger: How workplace network change influences staying.
Frantz, T. L. (2017). Dissecting Post-Merger Integration Risk: The PMI Risk Framework.

https://scholarworks.umt.edu/cgi/viewcontent.cgi?article=1013&context=manmark_pubs

Retrieval-Augmented Generation (RAG) models combine the advantages of information retrieval systems with the capabilities of Generative Artificial Intelligence (GenAI) to provide accurate and context-sensitive answers to complex questions. 
This innovative technology makes it possible to dynamically integrate external knowledge sources into the generation process, opening up a wide range of new applications, particularly in data-intensive areas such as education, research and corporate communications. At the same time, RAG models raise important questions about data protection, bias and the reliability of the retrieved information.
This seminar paper will examine the current scientific literature on RAG models in order to analyse their potential and challenges. The aim is to identify practical applications and the limitations of this technology and to evaluate its significance for future-oriented knowledge processing.

Literature reference: Gao, Y., et al. (2023). Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997.
arxiv.org/pdf/2312.10997
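The basic RAG pipeline – retrieve the most relevant documents, then generate an answer grounded in them – can be sketched as follows. TF-IDF retrieval via scikit-learn is assumed, the documents are invented, and `generate` is a placeholder rather than a real API call:

```python
# Minimal RAG sketch: retrieve relevant documents, then hand them to a
# generator as context. The LLM call itself is deliberately left abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Seminar papers are due one week after the final presentation.",
    "The seminar belongs to the specialisation Business Analytics.",
    "Registration runs via the central seminar allocation tool.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

query = "When do I have to submit my seminar paper?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = generate(prompt)   # placeholder for any LLM call
print(prompt)
```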


Energy supply is at the heart of many social and political discussions – it is fundamental to the economy, everyday life and climate protection. However, with the increasing use of artificial intelligence (AI) to control and optimise modern energy systems, the challenge of making these complex models comprehensible and trustworthy is also growing. This is exactly where Explainable Artificial Intelligence (XAI) comes in: it makes the decisions of AI models understandable – a crucial factor for security, transparency and acceptance.
The aim of this seminar paper is to provide a structured overview of the scientific literature on XAI in the energy sector. This will be followed by a critical assessment of current XAI approaches based on typical application examples.

Literature reference: www.sciencedirect.com/science/article/pii/S2666546822000246


Autonomous agents have long been one of the fundamental concepts of artificial intelligence. However, recent developments – particularly large language models and modern learning methods – have led to a new class of autonomous systems, which are discussed under the term ‘agentic AI’. These systems are capable of pursuing complex goals, planning independently, using external tools and acting autonomously over long periods of time.
The seminar paper should provide a structured overview of the scientific literature on Agentic AI. Building on this, the central characteristics of agentic AI systems and their potential for specific applications – for example, in knowledge work, software development, or organisational processes – should be identified and critically evaluated.

Literature reference: ieeexplore.ieee.org/stamp/stamp.jsp
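The plan-act-observe loop at the heart of such systems can be sketched in a few lines. Here the planner (`policy`) is a hard-coded stand-in for an LLM and `calculator` is a toy tool; real agentic systems replace both with far richer components:

```python
# Agent-loop sketch: the agent repeatedly picks a tool, executes it, and
# feeds the observation back into its history until the goal is reached.
def calculator(expression: str) -> str:
    return str(eval(expression))   # toy tool; never eval untrusted input

TOOLS = {"calculator": calculator}

def policy(goal: str, history: list) -> tuple[str, str] | None:
    """Stand-in planner: decide the next (tool, input), or None when done."""
    if not history:
        return ("calculator", "40 + 2")
    return None                    # goal reached after one step in this toy case

def run_agent(goal: str) -> list:
    history = []
    while (action := policy(goal, history)) is not None:
        tool, tool_input = action
        observation = TOOLS[tool](tool_input)             # act
        history.append((tool, tool_input, observation))   # observe
    return history

print(run_agent("What is 40 + 2?"))
```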

Generative artificial intelligence (GenAI) is increasingly being used as an assistance system in the workplace to support employees in knowledge-intensive activities. Empirical studies show that the use of GenAI can significantly increase individual productivity, especially among less experienced or less qualified employees. At the same time, the effects on experienced employees are mixed and may be accompanied by changes in work quality and learning processes.
This seminar paper aims to provide an overview of the scientific literature on the productivity effects of generative AI. It will then critically assess the conditions under which GenAI systems can contribute to sustainable productivity gains and the risks to work quality and skills development.

Literature reference: academic.oup.com/qje/article/140/2/889/7990658

Large language models (LLMs) have made significant progress in a short period of time and are now used in a wide range of applications, including text generation, programming, knowledge work, decision support and automated communication. Due to their high performance, these models are also increasingly being used in sensitive and socially relevant fields of application. At the same time, LLMs are considered difficult to understand, as both their internal decision-making processes and the fundamentals of their training are only transparent to a limited extent. Especially with large models, it is often unclear what data they were trained on and how this data affects behaviour, biases and generated content.
This seminar paper aims to provide a structured overview of the current state of research on explainable large language models. Existing approaches to the explainability of LLMs and typical application scenarios will be presented. Building on this, the central limitations of current explainability methods and open research questions will be critically discussed, particularly with regard to the use of LLMs in sensitive areas of application.

Literature reference: dl.acm.org/doi/pdf/10.1145/3639372

Large language models (LLMs) are increasingly being used in socially and economically relevant areas of application, such as decision support, knowledge management and automated dialogue systems. At the same time, there is often limited knowledge about the data sets – which are often very large and heterogeneous – on which these models have been trained. This creates the risk that LLMs will reproduce or reinforce existing social biases, even if these are not explicitly contained in the training data.
The spread of retrieval-augmented generation (RAG) systems further exacerbates this problem, as biases can result not only from the language model itself, but also from the connected external data sources. Bias and fairness thus become central challenges for data-driven generative AI systems.

The seminar paper should provide a structured overview of the scientific literature on bias detection and bias mitigation in generative AI systems. Existing approaches to detecting and reducing bias in large language models and RAG systems should be presented and critically evaluated. Particular attention will be paid to the underlying causes of bias, such as the composition and lack of transparency of training data, model architecture, and the selection and structure of external data sources in RAG systems. Finally, open research questions and implications for the responsible use of such systems in sensitive areas of application will be discussed.

Literature reference: https://dl.acm.org/doi/pdf/10.1145/3637528.3671458
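One simple detection idea from this literature is the paired-prompt probe: compare a model's outputs on inputs that differ only in a demographic term. In the following sketch, `score` is a placeholder for whatever model output is under test (e.g. a sentiment score or a hiring-recommendation probability); the template and groups are invented:

```python
# Paired-prompt bias probe: a systematic score gap across many templates
# that differ only in the demographic term signals bias.
TEMPLATE = "The {group} engineer presented the quarterly results."
GROUPS = ["male", "female"]

def score(text: str) -> float:
    # Placeholder: in practice, call the LLM or classifier under test here.
    return float(len(text))       # dummy stand-in so the sketch runs

scores = {g: score(TEMPLATE.format(group=g)) for g in GROUPS}
gap = max(scores.values()) - min(scores.values())
print(scores, "gap:", gap)
```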


With the increasing use of complex AI systems, there is a growing need for transparent and trustworthy decisions. While many approaches to Explainable Artificial Intelligence (XAI) rely on post-hoc explanations, the idea of Explainability by Design is increasingly coming to the fore, whereby explainability is already taken into account when designing models and architectures.
This seminar paper will provide an overview of the scientific literature on explainability by design. Explainable model and system architectures will be presented and compared with classic post-hoc explanation methods. Finally, the paper will discuss the use cases in which explainability by design offers added value.

Literature reference: https://www.jair.org/index.php/jair/article/view/17970/27222
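As a minimal contrast to post-hoc methods: a shallow decision tree is explainable by design, because its printed decision rules are the explanation; no separate explanation method is needed. The sketch assumes scikit-learn and synthetic data:

```python
# Explainability-by-design sketch: the tree's rules ARE the explanation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 2))
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["feature_a", "feature_b"]))
```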

Lecturers

Professor Dr Mathias Klier, Institute for Business Analytics
Dr Maximilian Förster, Institute for Business Analytics
Mike Rothenhäusler, Institute for Business Analytics
Maximilian Buck, Institute for Business Analytics

Content Information

In this module, students acquire the ability to work on a topic from the field of customer relationship management and social media independently and according to scientific criteria. Writing a seminar paper and subsequently presenting and discussing the results fosters the participating students' rhetorical skills and social competence.

This module covers the following subject areas:

  • Social Media - Digital Platforms
  • Social Media - Fake News
  • CRM - (Explainable) and (Generative) Artificial Intelligence

Depending on the topic area, individual literature is recommended.

Organisational Information

Next course start: winter semester 2025/26

Location: kick-off session (60 minutes at the beginning of the semester) and final presentation (2-3 hours at the end of the semester), both held in person.

Dates:

  • Final presentation: time and place will be announced in good time in consultation with the students
  • Submission of seminar papers: one week after the final presentation

ECTS: 4

Seminar (2 SWS, i.e. two weekly contact hours): written seminar paper, presentation slides, presentation as part of a seminar talk

Registration via the central seminar allocation tool for economics (Wirtschaftswissenschaften): econ.mathematik.uni-ulm.de/semapps/stud_de

Topics can only be worked on individually. To obtain the course credit, students must write a seminar paper and give a presentation (10 minutes) followed by a discussion (5 minutes).

Specialisation subjects: Technology and Process Management, Business Analytics, and Corporate Management and Controlling; compulsory elective in Business Administration (BWL)

Degree programmes: B.Sc. Wirtschaftswissenschaften, B.Sc. Wirtschaftsmathematik, B.Sc. Wirtschaftschemie, B.Sc. Wirtschaftsphysik, and degree programmes with a minor in Wirtschaftswissenschaften