Seminar Customer Relationship Management and Social Media (Bachelor)
The seminar Customer Relationship Management and Social Media builds on the course "Customer Relationship Management and Customer Analytics" and is assigned to the specialisation "Business Analytics".
The seminar aims to examine and (further) develop solutions for specific issues in the areas of customer relationship management and social media. As a rule, students first compile a structured literature review on the topic and research best practices. A critical comparison of theory and practice, the students' own ideas and recommendations for action and, where appropriate, the use or evaluation of software tools round off the seminar.
Topics
Applications such as ChatGPT, which are based on generative artificial intelligence (GenAI), can disruptively change everyday life and the professional world. GenAI comprises algorithms that are capable of generating new content in the form of audio, code, images, text, simulations and videos. With the increasing complexity and diversity of the solutions developed by GenAI, new challenges and opportunities are constantly emerging – including for the field of Explainable Artificial Intelligence (XAI) research. XAI methods for GenAI could provide explanations that enable a better understanding of how GenAI works, whether through analysing responses to specific inputs or understanding the model as a whole.
The seminar paper should provide an overview of the scientific literature on XAI specifically for GenAI. Subsequently, a scenario should be developed showing how GenAI systems can be designed so that they are understandable and comprehensible for users.
Literature assistance: https://arxiv.org/abs/2404.09554
In Germany, more than half of the population rejects fully automated decisions; the myth of the uncontrollable intelligent machine persists. On the other hand, algorithmic decision-making (and decision-support) systems are already being used successfully in many areas of everyday life, not least in medical diagnostics or in assessing creditworthiness. To increase user acceptance of algorithms and intelligent systems, the field of Explainable Artificial Intelligence (XAI) aims to provide automatic explanations for algorithmic decisions. Counterfactual explanations are being hyped in science and practice – but how good are they really?
The seminar paper should provide a structured overview of the scientific literature on XAI and, in particular, on counterfactual explanations. It should then critically assess the added value that counterfactual explanations bring to users of intelligent systems.
Literature assistance: https://arxiv.org/abs/2010.10596
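To make the idea concrete, here is a minimal illustrative sketch (not part of the seminar material): a counterfactual explanation answers the question "what minimal change to the input would have flipped the model's decision?". The credit-scoring rule, its weights and the threshold below are toy assumptions chosen purely for illustration.

```python
def approve_loan(income, debt):
    """Toy credit model (assumed weights): approve if the score crosses 30."""
    score = 0.5 * income - 0.8 * debt
    return score >= 30

def counterfactual_income(income, debt, step=1.0, max_steps=1000):
    """Greedy search for the smallest income increase that flips a rejection."""
    if approve_loan(income, debt):
        return None  # already approved, no counterfactual needed
    for i in range(1, max_steps + 1):
        if approve_loan(income + i * step, debt):
            return i * step  # minimal increase found
    return None  # no counterfactual within the search budget

# A rejected applicant (income 50, debt 20): score = 25 - 16 = 9 < 30.
# The counterfactual tells them how much more income would suffice.
delta = counterfactual_income(50, 20)
print(f"Increase income by {delta} to be approved")  # delta = 42.0
```

Real counterfactual methods search over all features under plausibility and sparsity constraints rather than varying a single feature; the one-dimensional search here only illustrates the underlying question.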
People analytics, as a form of algorithmic management, is gaining popularity in the wake of the decentralisation of work and the search for tools to support (self-)organisation and leadership in distributed teams. However, alongside the supposed potential of people analytics, there is also the inherent danger that it will serve as surveillance software and reinforce existing prejudices and discrimination.
This seminar paper aims to provide a comprehensive overview of the scientific literature on employee perceptions of people analytics. Subsequently, a scenario will be designed to measure employee perceptions regarding the introduction of people analytics in the work environment.
Artificial intelligence (AI) has surpassed humans in diagnosing X-rays and playing chess. At the same time, AI systems make headlines when they are used improperly and reach discriminatory decisions, for example when an AI system filters job applications and, as a result, selects only male applicants. Consideration of fairness and bias in the development of such systems has therefore gained considerable importance. There are various ways to define when an AI system is fair, including the concept of "counterfactual fairness".
This seminar paper aims to provide a comprehensive overview of the scientific literature on counterfactual fairness. Subsequently, a scenario will be designed to demonstrate how counterfactual fairness could contribute to the fairness of AI systems, with a particular focus on traceability.
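As a minimal illustrative sketch (not part of the seminar material): counterfactual fairness asks whether a decision would stay the same if a protected attribute were different. The two toy hiring models and the binary "gender" feature below are assumptions for illustration only.

```python
def biased_model(applicant):
    """Toy model that uses the protected attribute directly."""
    return applicant["experience"] >= 3 and applicant["gender"] == "m"

def blind_model(applicant):
    """Toy model that ignores the protected attribute."""
    return applicant["experience"] >= 3

def is_counterfactually_fair(model, applicant, attr="gender", values=("m", "f")):
    """Check that the decision is identical under every counterfactual
    value of the protected attribute."""
    decisions = set()
    for v in values:
        counterfactual = dict(applicant, **{attr: v})  # flip the attribute
        decisions.add(model(counterfactual))
    return len(decisions) == 1  # fair only if all decisions agree

applicant = {"experience": 5, "gender": "f"}
print(is_counterfactually_fair(biased_model, applicant))  # False
print(is_counterfactually_fair(blind_model, applicant))   # True
```

Note that the formal definition of counterfactual fairness relies on a causal model: simply hiding the attribute does not suffice when other features act as proxies for it. The check above therefore only captures direct use of the attribute.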
Post-merger integration (PMI) encompasses the process of merging at least two companies into a single legal entity. This process continues until a uniform structure has been established between the companies involved. A key success factor in PMI is effective collaboration between new colleagues. The so-called "relationship risk" represents a decisive integration risk in this context.
This seminar paper will first provide an overview of the scientific literature on analytics solutions for evidence-based PMI decisions. Based on this, a scenario will be developed that shows how the characteristics and developments of relationship risks can generally be assessed from the employee's perspective.
The development of language models such as OpenAI's Codex and technologies based on it, such as GitHub Copilot, has fundamentally changed the way software is developed. These AI-powered tools can generate, analyse, debug and optimise code, redefining the role of software developers. At the same time, controversial discussions are emerging about the risks, limitations and ethical aspects of their use.
This seminar paper will first examine the potential and challenges of GitHub Copilot and comparable technologies. The focus will be on analysing scientific literature in order to gain a sound understanding of current developments. Based on this, a scenario will be designed that outlines possible future perspectives for software development and highlights the importance of AI-supported tools in this context.
Deepfakes have blurred the line between reality and fiction and represent one of the greatest challenges of the digital age. From a supposed call for surrender by the Ukrainian president to images showing Chancellor Scholz in prison, artificially generated content can spread disinformation in a targeted manner and cause considerable damage.
This seminar paper first analyses the risks of deepfakes and current approaches to detecting and combating them. The focus is on examining scientific literature to gain in-depth insights into existing challenges and solution strategies. The aim is to identify strategies that help to ensure the integrity of digital content and curb the misuse of this technology.
The spread of generative artificial intelligence (GenAI) and technologies based on it, such as ChatGPT, opens up new perspectives for school education. Such technologies can support pupils in researching, writing texts and understanding complex topics by offering personalised and interactive learning opportunities. At the same time, there are legitimate concerns about possible negative effects, such as the impairment of independent thinking or the threat to the integrity of school performance through unreflective use and excessive dependence on GenAI.
This seminar paper aims to provide a well-founded overview of the scientific literature on the opportunities and challenges of GenAI-supported learning technologies. On this basis, a scenario will be developed that outlines possible solutions for the responsible and effective integration of ChatGPT and similar GenAI applications into school teaching.
Retrieval-Augmented Generation (RAG) models combine the advantages of information retrieval systems with the capabilities of Generative Artificial Intelligence (GenAI) to provide accurate and context-sensitive answers to complex questions. This innovative technology makes it possible to dynamically integrate external knowledge sources into the generation process, opening up a wide range of new applications, particularly in data-intensive areas such as education, research and corporate communications. At the same time, RAG models raise important questions about data protection, bias and the reliability of the retrieved information.
This seminar paper will examine the current scientific literature on RAG models in order to analyse their potential and challenges. The aim is to identify practical applications and the limitations of this technology and to evaluate its significance for future-oriented knowledge processing.
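The core RAG loop can be sketched in a few lines (an illustrative toy, not part of the seminar material): retrieve the documents most relevant to a query, then condition generation on them. Retrieval here is naive keyword overlap, and "generation" is a placeholder template standing in for a real language model call; the sample documents are invented.

```python
DOCUMENTS = [
    "The seminar kick-off takes place at the beginning of the semester.",
    "RAG combines information retrieval with generative models.",
    "Counterfactual explanations describe minimal input changes.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query, context):
    """Stand-in for an LLM call: a real system would prompt the model
    with the query plus the retrieved context."""
    return f"Answer to '{query}' based on: {' '.join(context)}"

context = retrieve("What does RAG combine?", DOCUMENTS)
print(generate("What does RAG combine?", context))
```

Production systems replace the keyword matcher with dense vector search over an embedding index, which is exactly where the data-protection and reliability questions mentioned above arise: the quality of the answer hinges on what is retrieved.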
The advent of generative artificial intelligence (GenAI) is changing not only work processes, but also the distribution of roles between humans and machines. AI models such as ChatGPT and GitHub Copilot no longer act merely as tools, but as creative partners that generate text, code or images together with humans – a process known as "co-creation". This new form of collaboration raises fundamental questions: Who has control over the creative process? How is the role of humans in creative and cognitive professions changing? And how is responsibility distributed in such hybrid work processes? The challenge is to better understand the dynamics of this human-AI interaction: What skills do users need in the co-creation process? How are creative processes changing as a result of AI assistance? And what impact does this have on autonomy, creativity and decision-making?
This seminar paper aims to analyse the current state of research on human-AI collaboration. The aim is to examine typical fields of application (e.g. education, software development, design), systematise existing concepts of co-creation and discuss the associated opportunities and challenges. In addition, open questions regarding the distribution of roles, responsibility and the design of such hybrid systems will be examined.
In education, human resources, the justice system and lending, decisions that affect people are increasingly being supported, either partially or completely, by artificial intelligence (AI). AI-supported systems are used to award grades, filter job applications and assess credit risks. This raises a key question: can these systems make fair and transparent judgements, or do they unconsciously reinforce existing inequalities? Although AI models often make decisions faster and supposedly more objectively than humans, they are subject to criticism: they reproduce existing biases in the training data, are non-transparent ("black boxes") and difficult to control. Legal and ethical questions of responsibility and explainability are also gaining in importance – especially when AI systems evaluate people or decide on opportunities and exclusions.
This seminar paper will examine the conditions under which AI-based evaluation systems can be considered fair. The aim is to outline the technical basis of algorithmic decisions, analyse current case studies from education, justice and human resources management, and discuss criteria for fair, transparent and responsible AI evaluation systems.
Energy supply is at the heart of many social and political discussions – it is fundamental to the economy, everyday life and climate protection. However, with the increasing use of artificial intelligence (AI) to control and optimise modern energy systems, the challenge of making these complex models comprehensible and trustworthy is also growing. This is exactly where Explainable Artificial Intelligence (XAI) comes in: it makes the decisions of AI models understandable – a crucial factor for security, transparency and acceptance.
The aim of this seminar paper is to provide a structured overview of the scientific literature on XAI in the energy sector. This will be followed by a critical assessment of current XAI approaches based on typical application examples.
Who gets a loan? How is stock market risk assessed? Such decisions are increasingly being made by AI models – but many of them are incomprehensible to customers and experts alike. To ensure that trust and fairness in finance do not fall by the wayside, researchers are developing methods that make machine decisions transparent and explainable – known as Explainable AI (XAI).
The seminar paper should provide a structured overview of the scientific literature on XAI in the financial sector. This will be followed by an analysis of typical fields of application and explanation methods based on selected case studies.
Content information
In this module, students acquire the ability to independently research a topic in the field of customer relationship management and social media according to scientific criteria. Writing a seminar paper followed by a presentation and discussion of the results promotes the rhetorical skills and social competence of the participating students.
This module covers the following technical content:
- Social media – digital platforms
- Social media – fake news
- CRM – (explainable) and (generative) artificial intelligence
Depending on the subject area, individual literature is recommended.
Organisational information
Next event start date: WiSe 25/26
Format: Kick-off event (60 minutes at the beginning of the semester) and final presentation (2-3 hours at the end of the semester) in person.
Dates:
- Final presentation: The time and place will be announced in good time in consultation with the students.
- Submission of seminar papers: One week after the final presentation.
ECTS: 4
Seminar (2 SWS): Written assignment, presentation materials, presentation as part of a seminar lecture
Registration via the central seminar allocation tool for economics: econ.mathematik.uni-ulm.de/semapps/stud_de
The topics can only be worked on individually. To obtain the credit, students must write a seminar paper and give a presentation (10 minutes) followed by a discussion (5 minutes).
Main subjects: Technology and Process Management, Business Analytics, Business Management and Controlling, compulsory elective Business Administration
Degree programmes: B.Sc. Economics, B.Sc. Business Mathematics, B.Sc. Business Chemistry, B.Sc. Business Physics and degree programmes with Economics as a minor subject