Office Hours

For reliable meetings, please arrange an appointment via e-mail.
Otherwise, you are welcome to drop by my office.

Dr. rer. nat. Benjamin Erb

Benjamin Erb is a research associate at the Institute of Distributed Systems. He holds a Diploma in Computer Science in Media and a Bachelor's degree in Psychology from Ulm University. In 2019, he received his doctoral degree for his work on a novel live graph computing approach that combines concepts of traditional graph computing with features of event-driven architectures.

His current research focuses on distributed data systems with special requirements, such as strong privacy guarantees for input data and history-aware data collection and processing.

Research Interests 

  • Data-intensive Systems and Data Engineering
    • processing on evolving graphs and offline graphs
    • programming models for data processing
    • distributed processing platforms
    • architectures for data systems with special requirements and capabilities
  • Software Architectures for Distributed Systems
    • event-driven architectures
    • event sourcing & CQRS
    • scalable web architectures
    • concurrency and parallelism
  • Privacy
    • psychological aspects of privacy
    • privacy aspects in empirical research
    • user-centered privacy


Projects

  • PePER (2021/07 – 2022/09; completed): PePER – A Privacy-Enhanced Platform for Empirical Research. Funding: ProTrainU start-up funding (Ulm University).
  • ReSense (2020/11 – 2023/02; completed): Retrospective Sensor Networks and Edge Computing for Secure Event Detection and Monitoring. Funding: BMBF/DAAD-GERF.
  • SIDGRAPH (2014/08 – 2017/07; completed): Development of scalability and distribution mechanisms for graph-based and event-driven computations and simulations. Funding: Industry project.
  • PRIPARE (2013/10 – 2015/09; completed): Design and implementation of a collaborative web portal for patterns and best practices for privacy.
  • diretto / diretto.resc (2009/10 – 2011/08; completed): The main goal of this student project was the design and prototypical implementation of a platform for distributed reporting. Use cases include collaboration in disaster scenarios and the live coverage of large-scale public events. The second stage of the project was funded by MFG Stiftung Baden-Württemberg as part of a Karl-Steinbuch scholarship.


Activities

  • Member of the academic staff group on the Computer Science Study Committee of Ulm University.


Publications

2024

Schillings, C., Meißner, E., Erb, B., Bendig, E., Schultchen, D., Pollatos, O. et al. 2024. Effects of a Chatbot-Based Intervention on Stress and Health-Related Parameters in a Stressed Sample: Randomized Controlled Trial. JMIR Mental Health. 11, 1 (May 2024), e50454.


2023

Volpert, S., Erb, B., Eisenhart, G., Seybold, D., Wesner, S. and Domaschka, J. 2023. A Methodology and Framework to Determine the Isolation Capabilities of Virtualisation Technologies. Proceedings of the 2023 ACM/SPEC International Conference on Performance Engineering (Coimbra, Portugal, Apr. 2023), 149–160.
The capability to isolate system resources is an essential characteristic of virtualisation technologies and is therefore important for research and industry alike. It allows the co-location of experiments and workloads, enables the partitioning of system resources, and underpins multi-tenant business models such as cloud computing. Poor isolation among tenants bears the risk of noisy-neighbour and contention effects, which negatively impact all of these use cases. These effects describe the negative impact of one tenant on another through the use of shared resources. Both industry and research provide many different concepts and technologies to realise isolation. Yet, the isolation capabilities of all these different approaches are not well understood, nor is there an established way to measure the quality of their isolation capabilities. Such an understanding, however, is of utmost importance in practice to make an informed decision for a suitable implementation. Hence, in this work, we present a novel methodology to measure the isolation capabilities of virtualisation technologies for system resources that fulfils all requirements of benchmarking, including reliability. It relies on an immutable approach based on Experiment-as-Code. The complete process holistically includes everything from bare-metal resource provisioning to the actual experiment enactment. The results determined by this methodology help in the decision for a virtualisation technology regarding its capability to isolate given resources. Such results are presented here as a closing example in order to validate the proposed methodology.
Schillings, C., Meißner, E., Erb, B., Schultchen, D., Bendig, E. and Pollatos, O. 2023. A chatbot-based intervention with ELME to improve stress and health-related parameters in a stressed sample: Study protocol of a randomised controlled trial. Frontiers in Digital Health. 5, (Mar. 2023), 14.
Background: Stress levels in the general population had already been increasing in recent years and have subsequently been exacerbated by the global pandemic. One approach for innovative online-based interventions is the “chatbot” – a computer program that can simulate a text-based interaction with human users via a conversational interface. Research on the efficacy of chatbot-based interventions in the context of mental health is sparse. The present study is designed to investigate the effects of a three-week chatbot-based intervention with the chatbot ELME, aiming to reduce stress and to improve various health-related parameters in a stressed sample. Methods: In this multicenter, two-armed randomised controlled trial with a parallel design, a three-week chatbot-based intervention group with two daily interactive intervention sessions via smartphone (10–20 min each) is compared to a treatment-as-usual control group. A total of 130 adult participants with medium to high stress levels will be recruited in Germany. Assessments will take place pre-intervention, post-intervention (after three weeks), and at follow-up (after six weeks). The primary outcome is perceived stress. Secondary outcomes include self-reported interoceptive accuracy, mindfulness, anxiety, depression, personality, emotion regulation, psychological well-being, stress mindset, intervention credibility and expectancies, affinity for technology, and attitudes towards artificial intelligence. During the intervention, participants undergo ecological momentary assessments. Furthermore, satisfaction with the intervention, the usability of the chatbot, potential negative effects of the intervention, adherence, potential dropout reasons, and open feedback questions regarding the chatbot are assessed post-intervention. Discussion: To the best of our knowledge, this is the first chatbot-based intervention addressing interoception, as well as the first in the context of the target variables stress and mindfulness.
The design of the present study and the usability of the chatbot were successfully tested in a previous feasibility study. To counteract low adherence to the chatbot-based intervention, high guidance by the chatbot, short sessions, individually chosen and flexible time points for the intervention units and the ecological momentary assessments, reminder messages, and the opportunity to postpone single units were implemented.
Kargl, F., Erb, B. and Bösch, C. 2023. Defining Privacy. Digital Phenotyping and Mobile Sensing: New Developments in Psychoinformatics. C. Montag and H. Baumeister, eds. Springer International Publishing. 461–463.


2022

Bauer, A., Leznik, M., Iqbal, M.S., Seybold, D., Trubin, I., Erb, B., Domaschka, J. and Jamshidi, P. 2022. SPEC Research — Introducing the Predictive Data Analytics Working Group. Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (Beijing, China, 2022), 13–14.
The research field of data analytics has grown significantly with the increase of gathered and available data. Accordingly, a large number of tools, metrics, and best practices have been proposed to make sense of this vast amount of data. To this end, benchmarking and standardization are needed to understand the proposed approaches better and continuously improve them. For this purpose, numerous associations and committees exist. One of them is SPEC (Standard Performance Evaluation Corporation), a non-profit corporation for the standardization and benchmarking of performance and energy evaluations. This paper gives an overview of the recently established SPEC RG Predictive Data Analytics Working Group. The mission of this group is to foster interaction between industry and academia by contributing research to the standardization and benchmarking of various aspects of data analytics.


2021

Herbert, C., Marschin, V., Erb, B., Meißner, E., Aufheimer, M. and Boesch, C. 2021. Are you willing to self-disclose for science? Effects of Privacy Awareness (PA) and Trust in Privacy (TIP) on self-disclosure of personal and health data in online scientific studies – an experimental study. Frontiers in Big Data. (Dec. 2021). [accepted for publication]
Digital interactions via the internet have become the norm rather than the exception in our global society. Concerns have been raised about human-centered privacy and the often unreflected self-disclosure behavior of internet users. This study on human-centered privacy follows two major aims: first, to investigate the willingness of university students as digital natives to self-disclose private data and information from psychological domains including their person, social and academic life, their mental health as well as their health behavior habits when taking part as volunteers in a scientific online survey. Second, to examine to what extent the participants’ self-disclosure behavior can be modulated by experimental induction of Privacy Awareness (PA) or Trust in Privacy (TIP) or a combination of both (PA and TIP). In addition, the role of human factors such as personality traits, gender or mental health (e.g., self-reported depressive symptoms) on self-disclosure behavior was explored and the influence of PA and TIP induction was considered. Participants were randomly assigned to four experimental groups. In group A (n = 50, 7 males), privacy awareness (PA) was induced implicitly by the inclusion of privacy concern items. In group B (n = 43, 6 males), trust in privacy (TIP) was experimentally induced by buzzwords and by visual TIP primes promising safe data storage. Group C (n = 79, 12 males) received both PA and TIP induction, while group D (n = 55, 9 males) served as control group. Participants had the choice to answer the survey items by agreeing to one of a number of possible answers, including the options to refrain from self-disclosure by choosing the response options “don’t know” or “no answer”. Self-disclosure among participants was high irrespective of experimental group and irrespective of psychological domains of the information provided.
The results of this study suggest that willingness of volunteers to self-disclose private data in a scientific online study cannot simply be overruled or changed by any of the chosen experimental privacy manipulations. The present results extend the previous literature on human-centered privacy and despite limitations can give important insights into self-disclosure behavior of young people and the privacy paradox.
Erb, B., Bösch, C., Herbert, C., Kargl, F. and Montag, C. 2021. Emerging Privacy Issues in Times of Open Science. (Jun. 2021). PsyArXiv preprint.
The open science movement has taken up the important challenge to increase transparency of statistical analyses, to facilitate reproducibility of studies, and to enhance reusability of data sets. To counter the replication crisis in the psychological and related sciences, the movement also urges researchers to publish their primary data sets alongside their articles. While such data publications represent a desirable improvement in terms of transparency and are also helpful for future research (e.g., subsequent meta-analyses or replication studies), we argue that such a procedure can worsen existing privacy issues that are insufficiently considered so far in this context. Recent advances in de-anonymization and re-identification techniques render privacy protection increasingly difficult, as prevalent anonymization mechanisms for handling participants' data might no longer be adequate. When exploiting publicly shared primary data sets, data from multiple studies can be linked with contextual data and eventually, participants can be de-anonymized. Such attacks can either re-identify specific individuals of interest, or they can be used to de-anonymize entire participant cohorts. The threat of de-anonymization attacks can endanger the perceived confidentiality of responses by participants, and ultimately, lower the overall trust of potential participants in the research process due to privacy concerns.
Meißner, E., Kargl, F. and Erb, B. 2021. WAIT: Protecting the Integrity of Web Applications with Binary-Equivalent Transparency. Proceedings of the 36th Annual ACM Symposium on Applied Computing (Virtual Event, Republic of Korea, 2021), 1950–1953. (acceptance rate: 29%)
Modern single page web applications require client-side executions of application logic, including critical functionality such as client-side cryptography. Existing mechanisms such as TLS and Subresource Integrity secure the communication and provide external resource integrity. However, the browser is unaware of modifications to the client-side application as provided by the server, and the user remains vulnerable to malicious modifications carried out on the server side. Our solution makes such modifications transparent and empowers the browser to validate the integrity of a web application based on a publicly verifiable log. Our Web Application Integrity Transparency (WAIT) approach requires (1) an extension for browsers for local integrity validations, (2) a custom HTTP header for web servers that host the application, and (3) public log servers that serve the verifiable logs. With WAIT, the browser can disallow the execution of undisclosed application changes. Also, web application providers cannot dispute their authorship for published modifications anymore. Although our approach cannot prevent every conceivable attack on client-side web application integrity, it introduces a novel sense of transparency for users and an increased level of accountability for application providers, which is particularly effective against targeted insider attacks.
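The central mechanism behind WAIT — an append-only, publicly verifiable log of application versions — can be illustrated with a toy sketch (all names hypothetical; the actual system involves a browser extension, a custom HTTP header, and dedicated log servers rather than a single in-process object):

```python
import hashlib

def leaf_hash(resource: bytes) -> str:
    """Digest of a published application version (e.g., a JS bundle)."""
    return hashlib.sha256(resource).hexdigest()

class TransparencyLog:
    """Append-only log; a hash chain links entries so that published
    modifications cannot be silently retracted or reordered."""
    def __init__(self):
        self.entries = []  # list of (leaf_digest, chain_digest)

    def append(self, resource: bytes) -> str:
        prev = self.entries[-1][1] if self.entries else ""
        leaf = leaf_hash(resource)
        chain = hashlib.sha256((prev + leaf).encode()).hexdigest()
        self.entries.append((leaf, chain))
        return chain

    def contains(self, resource: bytes) -> bool:
        """Browser-side check: is the served version disclosed in the log?"""
        leaf = leaf_hash(resource)
        return any(l == leaf for l, _ in self.entries)

log = TransparencyLog()
log.append(b"app-v1.js")
log.append(b"app-v2.js")
assert log.contains(b"app-v2.js")    # disclosed version: allow execution
assert not log.contains(b"evil.js")  # undisclosed modification: block
```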
Meißner, E., Engelmann, F., Kargl, F. and Erb, B. 2021. PeQES: A Platform for Privacy-Enhanced Quantitative Empirical Studies. Proceedings of the 36th Annual ACM Symposium on Applied Computing (Virtual Event, Republic of Korea, 2021), 1226–1234. (acceptance rate: 29%)
Empirical sciences and in particular psychology suffer a methodological crisis due to the non-reproducibility of results, and in rare cases, questionable research practices. Pre-registered studies and the publication of raw data sets have emerged as effective countermeasures. However, this approach represents only a conceptual procedure and may in some cases exacerbate privacy issues associated with data publications. We establish a novel, privacy-enhanced workflow for pre-registered studies. We also introduce PeQES, a corresponding platform that technically enforces the appropriate execution while at the same time protecting the participants' data from unauthorized use or data repurposing. Our PeQES prototype proves the overall feasibility of our privacy-enhanced workflow while introducing only a negligible performance overhead for data acquisition and data analysis of an actual study. Using trusted computing mechanisms, PeQES is the first platform to enable privacy-enhanced studies, to ensure the integrity of study protocols, and to safeguard the confidentiality of participants' data at the same time.
Al-Momani, A., Wuyts, K., Sion, L., Kargl, F., Joosen, W., Erb, B. and Bösch, C. 2021. Land of the Lost: Privacy Patterns’ Forgotten Properties: Enhancing Selection-Support for Privacy Patterns. Proceedings of the 36th Annual ACM Symposium on Applied Computing (Virtual Event, Republic of Korea, 2021), 1217–1225. (acceptance rate: 29%)
Privacy patterns describe core aspects of privacy-enhancing solutions to recurring problems and can, therefore, be instrumental to the privacy-by-design paradigm. However, the privacy patterns domain is still evolving. While the main focus is currently put on compiling and structuring high-quality privacy patterns in catalogs, the support for developers to select suitable privacy patterns is still limited. Privacy patterns selection-support means, in essence, the quick and easy scoping of a collection of patterns to the most applicable ones based on a set of predefined criteria. To evaluate patterns against these criteria, a thorough understanding of the privacy patterns landscape is required. In this paper, (i) we show that there is currently a lack of extensive support for privacy patterns selection due to the insufficient understanding of pattern properties, (ii) we propose additional properties that need to be analyzed and can serve as a first step towards a robust selection criteria, (iii) we analyze and present the properties for 70 privacy patterns, and (iv) we discuss a potential approach of how such a selection-support method can be realized.
Bendig, E., Erb, B., Meißner, E., Bauereiß, N. and Baumeister, H. 2021. Feasibility of a Software agent providing a brief Intervention for Self-help to Uplift psychological wellbeing (“SISU”). A single-group pretest-posttest trial investigating the potential of SISU to act as therapeutic agent. Internet Interventions. 24, (2021), 100377.
Background: Software agents are computer-programs that conduct conversations with a human. The present study evaluates the feasibility of the software agent “SISU” aiming to uplift psychological wellbeing. Methods: Within a one-group pretest-posttest trial, N = 30 German-speaking participants were recruited. Assessments took place before (t1), during (t2) and after (t3) the intervention. The ability of SISU to guide participants through the intervention, acceptability, and negative effects were investigated. Data analyses are based on intention-to-treat principles. Linear mixed models will be used to investigate short-term changes over time in mood, depression, anxiety. Intervention: The intervention consists of two sessions. Each session comprises writing tasks on autobiographical negative life events and an Acceptance- and Commitment Therapy-based exercise respectively. Participants interact with the software agent on two consecutive days for about 30 min each. Results: All participants completed all sessions within two days. User experience was positive, with all subscales of the user experience questionnaire (UEQ) M > 0.8. Participants experienced their writings as highly self-relevant and personal. However, 57% of the participants reported at least one negative effect attributed to the intervention. Results on linear mixed models indicate an increase in anxiety over time (β = 1.33, p = .001). Qualitative User Feedback revealed that the best thing about SISU was its innovativeness (13%) and anonymity (13%). As worst thing about SISU participants indicated that the conversational style of SISU often felt unnatural (73%). Conclusion: SISU successfully guided participants through the two-day intervention. Moreover, SISU has the potential to enter the inner world of participants. However, intervention contents have the potential to evoke negative effects in individuals. 
Expectable short-term symptom deterioration due to writing about negative autobiographical life events could not be prevented by the acceptance and commitment therapy-based exercises. Hence, results suggest a revision of the intervention contents as well as of the conversational style of SISU. The good adherence rate indicates that the format of SISU as a mental health chatbot is useful and acceptable. Overall, little is known about the effectiveness of software agents in the context of psychological wellbeing. Results of the present trial underline that the innovative technology bears the potential for SISU to act as a therapeutic agent, but it should not be used with its current intervention content. Trial registration: The trial is registered at the WHO International Clinical Trials Registry Platform via the German Clinical Studies Register (DRKS): DRKS00014933 (date of registration: 20.06.2018).


2020

Erb, B. 2020. Distributed computing on event-sourced graphs. Dissertation, Ulm University.
Modern applications with increasingly connected domain topologies require processing and programming abstractions that reflect the network structure inherent to these applications. At the same time, data-intensive applications necessitate more and more online processing capabilities when consuming incoming streams of events to execute continuous computations and provide fast results. Only very few systems have taken into account the combined challenge of executing graph processing on a dynamically evolving graph. However, this is a critical capability, as timely computations enable reactive application behaviors upon graph changes. In addition, no existing system supports processing on a live graph and on past versions of that evolving graph at the same time. The distinct characteristics of event processing and graph computing, as well as batch processing and live processing, yield specific requirements for any computing platform that unifies these approaches. Solutions require (i) data management strategies to keep track of the continuous graph evolution, (ii) appropriate graph processing models that can simultaneously handle computations and graph updates, and (iii) an efficient platform implementation that provides the necessary performance at runtime. To this end, this thesis suggests a combination of an adapted actor model, an event-sourced persistence layer, and a vertex-based, asynchronous live programming model. The event-sourced actor model enables highly concurrent computations in which the full application history is implicitly persisted. This model is then refined into a live graph processing model with a particular focus on asynchronicity, liveness, and parallel execution support. At the same time, the use of event sourcing enables the model to reconstruct global and consistent graph representations from arbitrary points of the timeline. These graph representations form the basis for decoupled, non-live graph processing models.
The Chronograph platform represents an implementation of the event-sourced graph model. The platform ingests external update streams and maintains a live graph representation as part of a user-specified graph application. It thus enables live and offline computations on event-driven, history-aware graphs and supports different processing models on the evolving graph. This thesis provides the following contributions: (i) a distributed computing approach with history support based on the actor model and event sourcing, as well as corresponding and supplementary concepts, (ii) a data management approach for evolving graphs that builds on the event-sourced actor model, (iii) a set of novel and adapted programming and processing models that integrate well with event-sourced graphs, (iv) a distributed platform architecture that implements the event-sourced graph model, and (v) an evaluation framework for such live graph processing systems.
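A core idea of the thesis — reconstructing a past graph version by replaying the event log up to an arbitrary point of the timeline — can be sketched as follows (a strong simplification with hypothetical names; Chronograph itself uses event-sourced actors with per-vertex logs rather than one global log):

```python
from dataclasses import dataclass

@dataclass
class Event:
    ts: int       # logical timestamp
    vertex: str
    kind: str     # "add_edge" or "remove_edge"
    target: str

def reconstruct(log, until_ts):
    """Rebuild the adjacency of all vertices as of logical time
    `until_ts` by replaying the event log from the beginning."""
    graph = {}
    for e in sorted(log, key=lambda e: e.ts):
        if e.ts > until_ts:
            break
        adj = graph.setdefault(e.vertex, set())
        if e.kind == "add_edge":
            adj.add(e.target)
        elif e.kind == "remove_edge":
            adj.discard(e.target)
    return graph

log = [Event(1, "a", "add_edge", "b"),
       Event(2, "a", "add_edge", "c"),
       Event(3, "a", "remove_edge", "b")]
assert reconstruct(log, 2)["a"] == {"b", "c"}  # past graph version
assert reconstruct(log, 3)["a"] == {"c"}       # live graph
```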


2019

Bendig, E., Erb, B., Schulze-Thuesing, L. and Baumeister, H. 2019. The Next Generation: Chatbots in Clinical Psychology and Psychotherapy to Foster Mental Health – A Scoping Review. Verhaltenstherapie. 29, 4 (2019), 266–280.
Background and Purpose: The present age of digitalization brings with it progress and new possibilities for health care in general and clinical psychology/psychotherapy in particular. Internet- and mobile-based interventions (IMIs) have often been evaluated. Chatbots are a fully automated variant of IMIs: automated computer programs that are able to hold, e.g., a script-based conversation with a human being. Chatbots could contribute to the extension of health care services. The aim of this review is to conceptualize the scope and to work out the current state of the art of chatbots fostering mental health. Methods: The present article is a scoping review on chatbots in clinical psychology and psychotherapy. Studies that utilized chatbots to foster mental health were included. Results: The technology of chatbots is still experimental in nature, and most studies are pilot studies. The field lacks high-quality evidence derived from randomized controlled studies. Results with regard to practicability, feasibility, and acceptance of chatbots to foster mental health are promising but not yet directly transferable to psychotherapeutic contexts. Discussion: The rapidly increasing research on chatbots in the field of clinical psychology and psychotherapy requires corrective measures. Issues like effectiveness, sustainability, and especially safety and subsequent tests of technology are elements that should be instituted as a corrective for future funding programs of chatbots in clinical psychology and psychotherapy.
Kargl, F., van der Heijden, R.W., Erb, B. and Bösch, C. 2019. Privacy in mobile sensing. Digital Phenotyping and Mobile Sensing. H. Baumeister and C. Montag, eds. Springer. 3–12.
In this chapter, we discuss the privacy implications of mobile sensing and modern psycho-social sciences. We aim to raise awareness of the multifaceted nature of privacy, describing the legal, technical, and applied aspects in some detail. Not least since the European GDPR, these aspects lead to a broad spectrum of challenges of which data processors cannot be absolved by a simple consent form from their users. Instead, appropriate technical and organizational measures should be put in place through a proper privacy engineering process. Throughout the chapter, we illustrate the importance of privacy protection through a set of examples and also technical approaches to address these challenges. We conclude this chapter with an outlook on privacy in mobile sensing, digital phenotyping, and psychoinformatics.


2018

Lukaseder, T., Maile, L., Erb, B. and Kargl, F. 2018. SDN-Assisted Network-Based Mitigation of Slow DDoS Attacks. Proceedings of the 14th EAI International Conference on Security and Privacy in Communication Networks (Singapore, 2018), 102–121.
Slow-running attacks against network applications are often not easy to detect, as the attackers behave according to the specification. The servers of many network applications are not prepared for such attacks, either due to missing countermeasures or because their default configurations ignore such attacks. The pressure to secure network services against such attacks is shifting more and more from the service operators to the network operators of the servers under attack. Recent technologies such as software-defined networking offer the flexibility and extensibility to analyze and influence network flows without the assistance of the target operator. Based on our previous work on network-based mitigation, we have extended a framework to detect and mitigate slow-running DDoS attacks within the network infrastructure, but without requiring access to the servers under attack. We developed and evaluated several identification schemes to identify attackers in the network solely based on network traffic information. We showed that by measuring the packet rate and the uniformity of the packet distances, a reliable identification scheme can be built, given a training period in the deployment network.
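As a toy illustration of the packet-distance idea (not the authors' actual detector), a uniformity score over packet inter-arrival times separates regularly paced keep-alive traffic, as produced by slow-running attack tools, from bursty interactive traffic:

```python
from statistics import mean, pstdev

def interarrival_uniformity(timestamps):
    """Coefficient of variation of packet inter-arrival times.
    Slow-running attackers that send minimal traffic at regular
    intervals yield very uniform distances (score close to 0)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

attacker = [0.0, 5.0, 10.0, 15.0, 20.0]  # evenly spaced keep-alives
human    = [0.0, 0.2, 3.1, 3.3, 9.8]     # bursty interactive traffic
assert interarrival_uniformity(attacker) == 0.0
assert interarrival_uniformity(human) > 0.5
```

In a deployment, the threshold separating the two classes would be learned during the training period mentioned in the abstract.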
Meißner, E., Erb, B., Kargl, F. and Tichy, M. 2018. retro-λ: An Event-sourced Platform for Serverless Applications with Retroactive Computing Support. Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems (Hamilton, New Zealand, 2018), 76–87. (acceptance rate: 39%)
State changes over time are inherent characteristics of stateful applications. So far, there are almost no attempts to make the past application history programmatically accessible or even modifiable. This is primarily due to the complexity of temporal changes and a difficult alignment with prevalent programming primitives and persistence strategies. Retroactive computing enables powerful capabilities though, including computations and predictions of alternate application timelines, post-hoc bug fixes, or retroactive state explorations. We propose an event-driven programming model that is oriented towards serverless computing and applies retroaction to the event sourcing paradigm. Our model is deliberately restrictive, but therefore keeps the complexity of retroactive operations in check. We introduce retro-λ, a runtime platform that implements the model and provides retroactive capabilities to its applications. While retro-λ only shows negligible performance overheads compared to similar solutions for running regular applications, it enables its users to execute retroactive computations on the application histories as part of its programming model.
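The relation between event sourcing and retroaction that retro-λ builds on can be sketched in a few lines (hypothetical names; the platform itself targets serverless functions and persisted logs, not plain in-memory folds):

```python
def replay(events, reducer, initial):
    """Event sourcing: the current state is a left fold over the event log."""
    state = initial
    for e in events:
        state = reducer(state, e)
    return state

def retroactive_insert(events, index, new_event, reducer, initial):
    """Retroaction: inject an event into the past and recompute the
    alternate timeline by replaying the modified log."""
    patched = events[:index] + [new_event] + events[index:]
    return replay(patched, reducer, initial)

balance = lambda s, e: s + e          # toy reducer: an account balance
log = [100, -30, -30]                 # deposits and withdrawals
assert replay(log, balance, 0) == 40  # actual history
# "What if a deposit of 50 had happened before the withdrawals?"
assert retroactive_insert(log, 1, 50, balance, 0) == 90
```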
Meißner, E., Erb, B. and Kargl, F. 2018. Performance Engineering in Distributed Event-sourced Systems. Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems (Hamilton, New Zealand, 2018), 242–245. (acceptance rate: 39%)
Distributed event-sourced systems adopt a fairly new architectural style for data-intensive applications that maintains the full history of the application state. However, the performance implications of such systems are not yet well explored, let alone how the performance of these systems can be improved. A central issue is the lack of systematic performance engineering approaches that take into account the specific characteristics of these systems. To address this problem, we suggest a methodology for performance engineering and performance analysis of distributed event-sourced systems based on specific measurements and subsequent, targeted optimizations. The methodology blends well into existing software engineering processes and helps developers to identify bottlenecks and to resolve performance issues. Using our structured approach, we improved an existing event-sourced system prototype and increased its performance considerably.
Erb, B., Meißner, E., Ogger, F. and Kargl, F. 2018. Log Pruning in Distributed Event-sourced Systems. Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems (Hamilton, New Zealand, 2018), 230–233. (acceptance rate: 39%)
Event sourcing is increasingly used and implemented in event-based systems for maintaining the evolution of application state. However, unbounded event logs are impracticable for many systems, as it is difficult to align scalability requirements and long-term runtime behavior with the corresponding storage requirements. To this end, we explore the design space of log pruning approaches suitable for event-sourced systems. Furthermore, we survey specific log pruning mechanisms for event-sourced logs. In a brief evaluation, we point out the trade-offs when applying pruning to event logs and highlight the applicability of log pruning to event-sourced systems.
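One of the surveyed trade-offs, snapshot-based pruning, can be sketched as follows (hypothetical names): the log prefix is folded into a snapshot, which preserves the current state but gives up the older part of the timeline.

```python
def prune_with_snapshot(log, reducer, initial, keep_last):
    """Fold the prefix of the event log into a snapshot and keep only
    the `keep_last` most recent events. State reconstruction then
    starts from the snapshot instead of the full history."""
    cut = max(len(log) - keep_last, 0)
    snapshot = initial
    for e in log[:cut]:
        snapshot = reducer(snapshot, e)
    return snapshot, log[cut:]

add = lambda s, e: s + e
log = [1, 2, 3, 4, 5]
snapshot, tail = prune_with_snapshot(log, add, 0, keep_last=2)
assert (snapshot, tail) == (6, [4, 5])
# The current state is unchanged, but states older than the snapshot
# can no longer be reconstructed:
assert add(snapshot, sum(tail)) == sum(log)
```

This is exactly the trade-off the paper evaluates: bounded storage in exchange for a truncated reconstructable history.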
Erb, B., Meißner, E., Kargl, F., Steer, B.A., Cuadrado, F., Margan, D. and Pietzuch, P. 2018. GraphTides: A Framework for Evaluating Stream-Based Graph Processing Platforms. Proceedings of the 1st ACM SIGMOD Joint International Workshop on Graph Data Management Experiences & Systems (GRADES) and Network Data Analytics (NDA) (Houston, Texas, 2018). (acceptance rate: 38%)
Stream-based graph systems continuously ingest graph-changing events via an established input stream, performing the required computation on the corresponding graph. While there are various benchmarking and evaluation approaches for traditional, batch-oriented graph processing systems, there are no common procedures for evaluating stream-based graph systems. We, therefore, present GraphTides, a generic framework which includes the definition of an appropriate system model, an exploration of the parameter space, suitable workloads, and computations required for evaluating such systems. Furthermore, we propose a methodology and provide an architecture for running experimental evaluations. With our framework, we hope to systematically support system development, performance measurements, engineering, and comparisons of stream-based graph systems.
Lukaseder, T., Stölze, K., Kleber, S., Erb, B. and Kargl, F. 2018. An SDN-based Approach for Defending Against Reflective DDoS Attacks. 2018 IEEE 43rd Conference on Local Computer Networks (2018). (acceptance rate: 28%)
Distributed Reflective Denial of Service (DRDoS) attacks are an imminent threat to Internet services. The potential scale of such attacks became apparent in March 2018 when a memcached-based attack peaked at 1.7 Tbps. Novel services built upon UDP increase the need for automated mitigation mechanisms that react to attacks without prior knowledge of the actual application protocols used. With the flexibility that software-defined networks offer, we developed a new approach for defending against DRDoS attacks; it not only protects against arbitrary DRDoS attacks but is also transparent for the attack target and can be used without assistance of the target host operator. The approach provides a robust mitigation system which is protocol-agnostic and effective in the defense against DRDoS attacks.


Erb, B., Meißner, E., Habiger, G., Pietron, J. and Kargl, F. 2017. Consistent Retrospective Snapshots in Distributed Event-sourced Systems. Conference on Networked Systems (NetSys’17) (Göttingen, Germany, Mar. 2017).
An increasing number of distributed, event-based systems adopt an architectural style called event sourcing, in which entities keep their entire history in an event log. Event sourcing enables data lineage and allows entities to rebuild any previous state. Restoring previous application states is a straightforward task in event-sourced systems with a global and totally ordered event log. However, the extraction of causally consistent snapshots from distributed, individual event logs is rendered non-trivial due to causal relationships between communicating entities. High dynamicity of entities increases the complexity of such reconstructions even more. We present approaches for retrospective and global state extraction of event-sourced applications based on distributed event logs. We provide an overview of historical approaches to distributed debugging and breakpointing, which are closely related to event log-based state reconstruction. We then introduce and evaluate our approach for non-local state extraction from distributed event logs, which is specifically adapted for dynamic and asynchronous event-sourced systems.
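The causal-consistency requirement can be illustrated with vector clocks (a simplified sketch under classic distributed-snapshot assumptions, not the paper's algorithm; names are invented): a cut of per-entity log prefixes is consistent iff no included event causally depends on an excluded one.

```python
def consistent_cut(clocks, cut):
    """clocks[p][k] = vector clock of the k-th event at process p.
    cut[p] = number of events from p included in the snapshot.
    The cut is causally consistent iff no included event depends on
    an event that the cut excludes (no "received but never sent" message)."""
    for p, n in cut.items():
        if n == 0:
            continue
        vc = clocks[p][n - 1]          # clock of the last included event at p
        for q, count in vc.items():
            if count > cut.get(q, 0):  # depends on an excluded event
                return False
    return True

# Two processes; p1's second event has observed p0's first event.
clocks = {
    "p0": [{"p0": 1}],
    "p1": [{"p1": 1}, {"p1": 2, "p0": 1}],
}
assert consistent_cut(clocks, {"p0": 1, "p1": 2})      # consistent snapshot
assert not consistent_cut(clocks, {"p0": 0, "p1": 2})  # effect without cause
```

The second check fails because the cut would record p1 having seen p0's event while excluding that event itself, which is exactly the inconsistency that makes snapshotting distributed event logs non-trivial.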
Erb, B., Meißner, E., Pietron, J. and Kargl, F. 2017. Chronograph: A Distributed Processing Platform for Online and Batch Computations on Event-sourced Graphs. Proceedings of the 11th ACM International Conference on Distributed and Event-Based Systems (Barcelona, Spain, 2017), 78–87. (acceptance rate: 37%)
Several data-intensive applications take streams of events as a continuous input and internally map events onto a dynamic, graph-based data model which is then used for processing. The differences between event processing, graph computing, as well as batch processing and near-realtime processing yield a number of specific requirements for computing platforms that try to unify these approaches. By combining an altered actor model, an event-sourced persistence layer, and a vertex-based, asynchronous programming model, we propose a distributed computing platform that supports event-driven, graph-based applications in a single platform. Our Chronograph platform concept enables online and offline computations on event-driven, history-aware graphs and supports different processing models on the evolving graph.


Seybold, D., Wagner, N., Erb, B. and Domaschka, J. 2016. Is elasticity of scalable databases a Myth? 2016 IEEE International Conference on Big Data (Big Data) (Dec. 2016), 2827–2836. (acceptance rate: 18.7%)
The age of cloud computing has introduced all the mechanisms needed to elastically scale distributed, cloud-enabled applications. At roughly the same time, NoSQL databases have been proclaimed as the scalable alternative to relational databases. Since then, NoSQL databases are a core component of many large-scale distributed applications. This paper evaluates the scalability and elasticity features of the three widely used NoSQL database systems Couchbase, Cassandra and MongoDB under various workloads and settings using throughput and latency as metrics. The numbers show that the three database systems have dramatically different baselines with respect to both metrics and also behave unexpectedly when scaling out. For instance, while Couchbase's throughput increases by 17% when scaled out from 1 to 4 nodes, MongoDB's throughput decreases by more than 50%. These surprising results show that not all tested NoSQL databases scale as expected and, even worse, in some cases scaling harms performance.
Erb, B. and Kargl, F. 2016. Chronograph: A Distributed Platform for Event-Sourced Graph Computing. Proceedings of the Posters and Demos Session of the 17th International Middleware Conference (Trento, Italy, Dec. 2016), 15–16.
Many data-driven applications require mechanisms for processing interconnected or graph-based data sets. Several platforms exist for offline processing of such data, while fewer solutions address online computations on dynamic graphs. We combined a modified actor model, an event-sourced persistence layer, and a vertex-based, asynchronous programming model in order to unify event-driven and graph-based computations. Our distributed Chronograph platform supports both near-realtime and batch computations on dynamic, event-driven graph topologies, and enables full history tracking of the evolving graphs over time.
Lukaseder, T., Bradatsch, L., Erb, B. and Kargl, F. 2016. Setting Up a High-Speed TCP Benchmarking Environment - Lessons Learned. 41st Conference on Local Computer Networks (Nov. 2016), 160–163. (acceptance rate: 33%)
There are many high-speed TCP variants with different congestion control algorithms, which are designed for specific settings or use cases. Distinct features of these algorithms are meant to optimize different aspects of network performance, and the choice of TCP variant strongly influences application performance. However, setting up tests to help with the decision of which variant to use can be problematic, as many systems are not designed to deal with high bandwidths, such as 10 Gbps or more. This paper provides an overview of pitfalls and challenges of realistic network analysis to help in the decision making process.
Kraft, R., Erb, B., Mödinger, D. and Kargl, F. 2016. Using Conflict-free Replicated Data Types for Serverless Mobile Social Applications. Proceedings of the 8th ACM International Workshop on Hot Topics in Planet-scale mObile Computing and Online Social neTworking (Paderborn, Germany, 2016), 49–54.
A basic reason for backend systems in mobile application architectures is the centralized management of state. Mobile clients synchronize local states with the backend in order to maintain an up-to-date view of the application state. As not all mobile social applications require strong consistency guarantees, we survey an alternative approach using special data structures for mobile applications. These data structures only provide eventual consistency, but allow for conflict-free replication between peers. Our analysis collects the requirements of social mobile applications for being suitable for this approach. Based on exemplary mobile social applications, we also point out the benefits of serverless architecture or architectures with a thin backend layer.
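A grow-only counter is the textbook example of such a conflict-free replicated data type (a generic CRDT illustration, not code from the paper; names are invented): replicas update independently and converge once they exchange state, which is what makes a serverless or thin-backend architecture feasible.

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot;
    merging takes the element-wise maximum, so replicas converge to the
    same value regardless of sync order (eventual consistency, no server)."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Commutative, associative, idempotent: safe for peer-to-peer gossip.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("phone"), GCounter("tablet")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)                     # sync in any order, any number of times
assert a.value() == b.value() == 5
```

Because `merge` is idempotent, repeated or reordered synchronization between peers never corrupts the count, at the cost of offering only eventual consistency.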
Bösch, C., Erb, B., Kargl, F., Kopp, H. and Pfattheicher, S. 2016. Tales from the dark side: Privacy dark strategies and privacy dark patterns. Proceedings on Privacy Enhancing Technologies. 2016, 4 (2016), 237–254. (acceptance rate: 23.8% for volume 2016)
Privacy strategies and privacy patterns are fundamental concepts of the privacy-by-design engineering approach. While they support a privacy-aware development process for IT systems, the concepts used by malicious, privacy-threatening parties are generally less understood and known. We argue that understanding the “dark side”, namely how personal data is abused, is of equal importance. In this paper, we introduce the concept of privacy dark strategies and privacy dark patterns and present a framework that collects, documents, and analyzes such malicious concepts. In addition, we investigate from a psychological perspective why privacy dark strategies are effective. The resulting framework allows for a better understanding of these dark concepts, fosters awareness, and supports the development of countermeasures. We aim to contribute to an easier detection and successive removal of such approaches from the Internet to the benefit of its users.
Erb, B., Habiger, G. and Hauck, F.J. 2016. On the Potential of Event Sourcing for Retroactive Actor-based Programming. First Workshop on Programming Models and Languages for Distributed Computing (Rome, Italy, 2016), 1–5.
The actor model is an established programming model for distributed applications. Combining event sourcing with the actor model allows the reconstruction of previous states of an actor. When this event sourcing approach for actors is enhanced with additional causality information, novel types of actor-based, retroactive computations are possible. A globally consistent state of all actors can be reconstructed retrospectively. Even retroactive changes of actor behavior, state, or messaging are possible, with partial recomputations and projections of changes in the past. We believe that this approach may provide beneficial features to actor-based systems, including retroactive bugfixing of applications, decoupled asynchronous global state reconstruction for recovery, simulations, and exploration of distributed applications and algorithms.
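The core idea can be sketched with a minimal event-sourced actor (an illustrative toy, not the proposed system; names are invented): since state is derived solely from the message log, past states can be rebuilt and the past can even be changed retroactively.

```python
class EventSourcedCounterActor:
    """Actor whose state is derived purely from its retained message log,
    so any previous state can be reconstructed by partial replay."""
    def __init__(self):
        self.log = []

    def receive(self, msg):
        self.log.append(msg)   # event sourcing: persist the message itself

    def state(self, upto=None):
        """Replay the log (or a prefix of it) to reconstruct state."""
        total = 0
        for msg in self.log[:upto]:
            total += msg
        return total

    def retroactive_fix(self, index, msg):
        """Replace a past message and recompute the present state.
        Only possible because the full history is retained."""
        self.log[index] = msg
        return self.state()

actor = EventSourcedCounterActor()
for m in (5, -2, 7):
    actor.receive(m)
assert actor.state() == 10
assert actor.state(upto=2) == 3           # reconstruct an earlier state
assert actor.retroactive_fix(1, 2) == 14  # retroactively change the past
```

In a distributed setting the hard part, as the abstract notes, is the additional causality information needed so that such replays and retroactive changes remain globally consistent across actors.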
Meißner, E., Erb, B., van der Heijden, R., Lange, K. and Kargl, F. 2016. Mobile triage management in disaster area networks using decentralized replication. Proceedings of the Eleventh ACM Workshop on Challenged Networks (2016), 7–12. (acceptance rate: 52%)
In large-scale disaster scenarios, efficient triage management is a major challenge for emergency services. Rescue forces traditionally respond to such incidents with a paper-based triage system, but technical solutions can potentially achieve improved usability and data availability. We develop a triage management system based on commodity hardware and software components to verify this claim. We use a single-hop, ad-hoc network architecture with multi-master replication, a tablet-based device setup, and a mobile application for emergency services. We study our system in cooperation with regional emergency services and report on experiences from a field exercise. We show that state-of-the-art commodity technology provides the means necessary to implement a triage management system compatible with existing emergency service procedures, while introducing additional benefits. This work highlights that powerful real-world ad-hoc networking applications do not require unreasonable development effort, as existing tools from distributed systems, such as replicating NoSQL databases, can be used successfully.
Lukaseder, T., Bradatsch, L., Erb, B., van der Heijden, R.W. and Kargl, F. 2016. A comparison of TCP congestion control algorithms in 10G networks. 41st Conference on Local Computer Networks (2016), 706–714. (acceptance rate: 28%)
The increasing availability of 10G Ethernet network capabilities challenges existing transport layer protocols. As 10G connections gain momentum outside of backbone networks, the choice of appropriate TCP congestion control algorithms becomes even more relevant for networked applications running in environments such as data centers. Therefore, we provide an extensive overview of relevant TCP congestion control algorithms for high-speed environments leveraging 10G. We analyzed and evaluated six TCP variants using a physical network testbed, with a focus on the effects of propagation delay and significant drop rates. The results indicate that, of the algorithms compared, BIC is most suitable when no legacy variant is present; otherwise, CUBIC is suggested.


Erb, B. 2015. Towards Distributed Processing on Event-sourced Graphs. Doctoral Symposium, Ulm University.
The processing of large-scale data sets and streaming data challenges traditional computing platforms, which lack increasingly relevant features such as data lineage and inherent support for retrospective and predictive analytics. By combining concepts from event processing and graph computing, an actor-related programming model, and an event-based, time-aware persistence approach into a unified distributed processing solution, we suggest a novel processing approach that embraces the idea of graph-based computing with built-in support for application history.
Erb, B. and Kargl, F. 2015. A Conceptual Model for Event-sourced Graph Computing. Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems (Oslo, Norway, 2015), 352–355.
Systems for highly interconnected application domains are increasingly taking advantage of graph-based computing platforms. Existing platforms employ a batch-oriented computing model and neglect near-realtime processing or temporal analysis. We suggest an extended conceptual model for event-driven computing on graphs. It takes into account the evolution of a graph and enables temporal analyses, processing on previous graph states, and retroactive modifications.
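What "processing on previous graph states" means can be shown with a minimal replay function (an illustrative sketch of event-driven graph evolution, not the proposed model's API; names are invented):

```python
def graph_at(events, t):
    """Reconstruct the edge set of a graph as it existed at time t,
    given a time-ordered stream of (timestamp, op, edge) events."""
    edges = set()
    for ts, op, edge in events:
        if ts > t:
            break                  # later events are not part of this state
        if op == "add":
            edges.add(edge)
        elif op == "remove":
            edges.discard(edge)
    return edges

events = [
    (1, "add", ("a", "b")),
    (2, "add", ("b", "c")),
    (3, "remove", ("a", "b")),
]
assert graph_at(events, 2) == {("a", "b"), ("b", "c")}  # an earlier graph state
assert graph_at(events, 3) == {("b", "c")}              # the current state
```

Retaining the event stream rather than only the latest topology is what enables the temporal analyses and retroactive modifications the abstract describes.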


Domaschka, J., Hauser, C.B. and Erb, B. 2014. Reliability and Availability Properties of Distributed Database Systems. 18th International Enterprise Distributed Object Computing Conference (Sep. 2014), 226–233. (acceptance rate: 22%)
Distributed database systems represent an essential component of modern enterprise application architectures. If the overall application needs to provide reliability and availability, the database has to guarantee these properties as well. Non-functional database features such as replication, consistency, conflict management, and partitioning entail subsequent challenges for successfully designing and operating an available and reliable database system. In this document, we identify why these concepts are important for databases and classify their design options. Moreover, we survey how eleven modern database systems implement these reliability and availability properties.
Engelmann, F., Lukaseder, T., Erb, B., van der Heijden, R. and Kargl, F. 2014. Dynamic packet-filtering in high-speed networks using NetFPGAs. Third International Conference on Future Generation Communication Technologies (FGCT 2014) (Aug. 2014), 55–59.
Computational power for content filtering in high-speed networks is reaching its limit, yet many applications, such as intrusion detection systems, rely on such processing. Signature-based methods in particular require the extraction of header fields. We therefore created a parallel protocol-stack parser module on the NetFPGA 10G architecture, together with a framework for simple adaptation to custom protocols. Our measurements show that the appliance operates at 9.5 Gb/s with a delay on the order of any other active hop. This work provides the foundation for application-specific projects in the NetFPGA context.
Erb, B., Kargl, F. and Domaschka, J. 2014. Concurrent Programming in Web Applications. it-Information Technology. 56, 3 (2014), 119–126.
Modern web applications are concurrently used by many users and provide increasingly interactive features. Multi-core processors, highly distributed backend architectures, and new web technologies force a reconsideration of approaches for concurrent programming in order to fulfil scalability demands and to implement modern web application features. We provide a survey on different concepts and techniques of concurrency inside web architectures and guide through viable concurrency alternatives for architects and developers.
Erb, B. and Kargl, F. 2014. Combining Discrete Event Simulations and Event Sourcing. Proceedings of the 7th International ICST Conference on Simulation Tools and Techniques (Lisbon, Portugal, 2014), 51–55.
Discrete event simulations (DES) represent the status quo for many different types of simulations. There are still open challenges, such as designing distributed simulation architectures, providing development and debugging support, or analyzing and evaluating simulation runs. In the area of scalable, distributed application architectures, there exists an architectural style called event sourcing, which shares the same inherent idea as DES. We believe that both approaches can benefit from each other and provide a comparison of the two. Next, we point out how event sourcing concepts can address DES issues. Finally, we suggest a hybrid architecture that allows simulations and real applications to execute together, enabling seamless transitions between both.


Erb, B., Kaufmann, S., Schlecht, T., Schaub, F. and Weber, M. 2011. diretto: A Toolkit for Distributed Reporting and Collaboration. Mensch & Computer 2011: überMEDIEN ÜBERmorgen (Chemnitz, 2011), 151–160.
The goal of the diretto project is the creation of an extensible infrastructure and an easy-to-use toolset for distributed on-site media reporting and collaborative event coverage in real time. It empowers collocated users to participate dynamically in event reporting and facilitates collaboration with remote users, for example to cover public events or to support disaster relief missions with on-site information. The diretto platform focuses on scalability to support large crowd participation. Our platform currently supports smartphone clients, a reporting solution for SLR cameras, and a rich web application for remote collaborators. diretto is easily extensible and can be tailored to mission-specific requirements.


Erb, B., Elsholz, J.-P. and Hauck, F.J. 2009. Semantic Mashup: Mashing up Information in Today's World Wide Web - An Overview. Technical Report #VS-R08-2009. Institut für Verteilte Systeme, Universität Ulm.

Teaching in Summer Term 2024
  • Architectures for Distributed Internet Services (ADIS)
  • Concepts for Concurrent, Parallel and Distributed Programming (CCPDP)
  • Computer Networks and IT-Security (RNSEC)
  • Selected Topics in Distributed Systems  (ATVS)
  • Research Trends in Distributed Systems (RTDS)
Teaching in Winter Term 2023/2024
  • Networked Systems (VNS)
  • Computer Networks and IT-Security (RNSEC)
  • Selected Topics in Distributed Systems  (ATVS)
  • Research Trends in Distributed Systems (RTDS)


Compulsory Courses (Lectures)

Networked Systems (Vernetzte Systeme; VNS)
Lecture with exercises, 3V+2Ü, 6LP
WiSe 2023
Introduction to Computer Networks (Grundlagen der Rechnernetze; GRN)
Lecture with exercises, 2V+2Ü, 5LP
WiSe 2023, WiSe 2022, WiSe 2021, WiSe 2020

Elective Courses (Lectures)

Architectures for Distributed Internet Services (Architekturen für Verteilte Internetdienste; ADIS/AVID)
Lecture with exercises, 3V+1Ü, 6 LP
SoSe 2024, SoSe 2023, SoSe 2022, SoSe 2021, SoSe 2020
Concepts for Concurrent, Parallel and Distributed Programming (Konzepte für nebenläufige, parallele und verteilte Programmierung; CCPDP)
Lecture with exercises, 3V+1Ü, 6 LP
SoSe 2024, SoSe 2023, SoSe 2022, SoSe 2021
Distributed Computing Platforms in Practice (Verteilte Berechnungsplattformen in der Praxis; VBP)
Lecture with exercises, 1V+2Ü, 6 LP
WiSe 2022, SoSe 2020, SoSe 2019
Practical IT-Security (Praktische IT-Sicherheit; PSEC)
Lecture with exercises, 1V+2Ü, 6 LP
SoSe 2021, SoSe 2020

Lab Courses

Compulsory Courses (Lab Courses)

Introduction to Computer Networks (Grundlagen der Rechnernetze; GRN)
Lecture with exercises, 2V+2Ü, 5LP
WiSe 2019, WiSe 2018, WiSe 2017, WiSe 2016, WiSe 2015, WiSe 2014, WiSe 2013, WiSe 2012

Elective Courses (Lab Courses)

Advanced Concepts of Communication Networks (Fortgeschrittene Konzepte der Rechnernetze; FKR)
Lecture with exercises, 2V+2Ü, 6LP
SoSe 2016, SoSe 2015, SoSe 2014, SoSe 2013, SoSe 2012


Proseminars (Bachelor)

Privacy in the Internet (Privacy im Internet; PRIV)
Proseminar, 2S, 4LP
SoSe 2023, SoSe 2022, SoSe 2021, SoSe 2020, WiSe 2018, WiSe 2017, WiSe 2016, WiSe 2015, WiSe 2014, WiSe 2013, WiSe 2012
Effective Java (Kniffe, Tricks und Techniken für Java; KTT)
Proseminar, 2S, 4LP
SoSe 2015, SoSe 2013

Seminars (Bachelor/Master)

Selected Topics in Distributed Systems (Ausgewählte Themen in Verteilten Systemen; ATVS)
Seminar, 2S, 4LP
SoSe 2024, WiSe 2023, SoSe 2023, WiSe 2022, SoSe 2022, WiSe 2018, SoSe 2018, WiSe 2017, SoSe 2017, WiSe 2016, SoSe 2016, WiSe 2015, SoSe 2015, WiSe 2014, SoSe 2014, WiSe 2013, SoSe 2013, WiSe 2012
Research Trends in Distributed Systems (Forschungstrends in Verteilten Systemen; RTDS)
Seminar, 2S, 4LP
SoSe 2024, WiSe 2023, SoSe 2023, WiSe 2022, SoSe 2022, WiSe 2018, SoSe 2018, WiSe 2017, SoSe 2017, WiSe 2016, SoSe 2016, WiSe 2015, SoSe 2015, WiSe 2014, SoSe 2014, WiSe 2013, SoSe 2013, WiSe 2012

Student Projects

Individual Master Projects

Computer Networks and IT-Security (Rechnernetze und IT-Sicherheit; RNSEC)
Project, 4S, 8LP
(individual topics each term)

Joint Master Projects

Interactive Driving Simulator (Interaktiver Fahrsimulator)
Project, 4S, 8LP
WiSe 2013, SoSe 2013

Open Theses

“Replication Strategies for Offloading Computations on Rapidly Changing Data Structures,” Master's thesis, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2024 – Open.
The aim of this thesis is the analysis and prototypical evaluation of different replication strategies in which computations on highly volatile data structures are outsourced to different remote nodes. The thesis should explore the solution space in terms of consistency and latency properties, timeliness, and migration capabilities. As a concrete example, the work should examine the scenario of an automotive application that replicates its local application state onto nearby multi-access edge computing nodes, which then run computationally heavy calculations.

Supervised Theses

“State of Event Sourcing Application Development,” Master's thesis VS-2023-04M, B. Erb and E. Meißner (Supervisor), F. Kargl and F. J. Hauck (Examiner), Inst. of Distr. Sys., Ulm Univ., 2023 – Completed.
The event sourcing storage architecture is increasingly used for developing applications. However, previous work shows that developers encounter a couple of challenges when applying the pattern. One such challenge is the lack of mature tools and solutions, which help developers in implementing event-sourced applications. No detailed and methodological comparison of the tools already available on the market existed at the time of writing. This thesis introduces a methodology on how to compare and categorize such tools and applies it to three solutions (EventStoreDB, Axon, and Akka), which are selected according to a set of requirements. To remove subjective opinions from the assessment of the qualitative aspects, quality gates are defined, in addition to benchmarks, which are used to evaluate some quantitative aspects. Two example applications which cover a selection of event sourcing features are defined and implemented using the three selected tools, providing insight into how they aid in the development process. In the end, a detailed comparison of the capabilities of the evaluated tools is given and recommendations for when to use each tool are provided.
“Security Mechanisms for Multi-Tenancy Event-Sourced Graphs,” Master's thesis VS-2023-13M, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2023 – Completed.
This thesis aims to investigate and address the security challenges that arise when applying multi-tenancy to a graph-based processing platform that is characterized by computational entities that exchange messages and whose behavior depends on user-defined code. Using threat modeling techniques, we enumerate relevant threats and discuss adequate security mechanisms. The more promising ones are then deployed on a prototype platform. We compare the performance costs of access control using an attribute-based policy language implementation (XACML, Authzforce) against extending the computational entities with this functionality and find that, in our case, the former is slower but may provide other benefits. We also measure the performance costs introduced by using a strategy against denial-of-service attacks through user-submitted code on the application level and determine that this introduces significant overhead. The general considerations in this thesis and the results obtained from the evaluations may prove useful when implementing a similar system, aiding in the detection of threats and in the selection of an adequate access control method.
“Confidential Computing via Multiparty Computation and Trusted Computing,” Master's thesis VS-2023-05M, E. Meißner and B. Erb (Supervisor), F. Kargl and F. J. Hauck (Examiner), Inst. of Distr. Sys., Ulm Univ., 2023 – Completed.
In the wake of the social sciences' so-called replication crisis, researchers increasingly strive to adopt methods preventing questionable research practices in empirical studies, e.g., study preregistration and full publication of survey datasets. However, publication of survey responses poses a serious threat to the privacy of study participants. Previous work has addressed this issue while maintaining protection against questionable research practices, but either relies on Trusted Execution Environments (TEEs), which have been shown to be susceptible to various kinds of attacks, or on Secure Multiparty Computation (SMPC), requiring an honest majority of participating parties. In this work, we combine TEEs with SMPC in a platform for conducting empirical studies that provides strong guarantees for the privacy of participants. Survey responses are split into secret shares, which are distributed among a number of TEE-protected computation parties. Statistical analysis of responses is performed as an SMPC. The platform is secure against a wider range of attackers than related work, i.e., against attackers either able to circumvent the utilised TEE or controlling a majority of the computation parties. We implement a prototype of this platform and evaluate its computational performance against alternative approaches. We show that it is suitable for conducting real-world privacy-preserving empirical studies, placing only minimal computational load on survey participants. Its performance in conducting statistical analysis is inferior to its alternatives, requiring ca. 10 min for performing one two-sample t-test. However, we argue that this is sufficient for real-world settings. Additionally, we list several approaches with which performance can be enhanced.
“PsyArXiv Data Analyzer,” individual lab project VS-2022-16P, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2022 – Completed.
This project focuses on practical anonymity in the publication of psychological study material by using software to detect data sets that are likely to contain identifying information. Under HIPAA and GDPR, health-related information is considered highly sensitive and should not be disclosed to the public under normal circumstances. In 2000, Latanya Sweeney pointed out that simple demographics often identify people uniquely, noting that 87% of the U.S. population could be uniquely identified by the combination of age, gender, and zip code alone, and that about half of the U.S. population would be identifiable by the combination of age, gender, and location. A manual examination of study material revealed that some studies still contain quasi-identifiers: sets of attributes that in combination can be used to uniquely identify an individual under certain assumptions (e.g., the attacker has access to a voter list, health records, or data sets on individuals acquired from a data broker). To address this issue, we developed a software tool that helps find data sets in psychological study materials that are likely to contain identifying information, using keywords or patterns configured by the user; by default, it provides the most predominant column headers we discovered by manually analyzing study material. With our tool, we can automatically prepare and analyze large amounts of data crawled from PsyArXiv prior to this project, and evaluate and score the results, focusing on files in CSV format and formats that can be converted to CSV. We hope that our work will bring more attention to the problem of anonymity in the release of study material, or even be used to identify privacy issues before material is published.
“Investigation of Noisy Neighbour Isolation Capabilities for Virtualization Approaches,” Master's thesis VS-2022-26M, J. Domaschka and B. Erb (Supervisor), S. Wesner and F. Kargl (Examiner), Inst. f. Organisation und Management von Informationssystemen, Ulm Univ., 2022 – Completed.
The isolation capabilities of virtualization technologies pose a challenge for many researchers and businesses. Isolation among processes, containers, virtual machines (VMs), or other containing units is significant due to a multitude of demands. Businesses want to divide their own infrastructure into individual parts in order to sell them to potential customers. Researchers want to isolate their experiments so that unintentional noise does not interfere with them. Service providers want to consolidate their infrastructure to keep the total cost of ownership as low as possible. Poor isolation would negatively impact these use cases. Since these demands are enabled by isolation technologies, there is an incentive and demand for a methodology to measure the isolation capabilities of those technologies. This thesis presents such a methodology, enriched with further considerations around this problem.
“Interaktive Demos für Grundlagen der Rechnernetze,” Bachelor's thesis VS-2022-09B, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2022 – Completed.
“Exploring Linkability of Psychological Research Data using Socio-Demographic Attributes,” Master's thesis VS-2022-01M, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2022 – Completed.
“Exploring Linkability of Psychological Research Data Sets using Psychological Scales,” Master's thesis VS-2022-02M, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2022 – Completed.
“Impact of HTTP/3 on Microservice Architectures,” Master's thesis VS-2021-16M, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2021 – Completed.
“Performance Comparison of Chronograph and Actor-Based Platforms,” individual lab project VS-2020-08P, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2020 – Completed.
“Causality-aware Log Pruning in Distributed event-sourced Systems,” individual lab project VS-P21-2019, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2019 – Completed.
“Revisited: A platform architecture for retroactive programming using event sourcing,” individual lab project VS-R07-2018, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2018 – Completed.
“Online Text Processing for Chatting Applications,” Bachelor's thesis VS-B19-2018, E. Meißner and B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2018 – Completed.
“Event-Sourced Graph Processing in Internet of Things Scenarios,” Master's thesis VS-M03-2018, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2018 – Completed.
“Bringing Height to the Chronograph Platform,” individual lab project VS-R08-2018, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2018 – Completed.
“Querying and Processing Event-sourced Graphs,” Master's thesis VS-M06-2017, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2017 – Completed.
“Design and Implementation of a Web-based API and Interactive Dashboard,” Bachelor's thesis VS-B07-2017, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2017 – Completed.
“Asynchrones latentes Snapshotting von dynamischen event-sourced Systemen,” Bachelor's thesis VS-B05-2017, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2017 – Completed.
“A Platform Architecture for Retroactive Programming using Event Sourcing,” individual lab project VS-R23-2017, B. Erb and E. Meißner (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2017 – Completed.
E. Meißner, “A Methodology for Performance Analysis and Performance Engineering of Distributed Event-sourced Systems,” Master's thesis VS-M22-2017, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2017 – Completed.
Distributed event-sourced systems adopt a fairly new architectural style for data-intensive applications that maintains the complete history of the application state. However, the performance implications of such systems are not yet well explored, let alone how the performance of these systems can be improved. A central issue is the lack of systematic performance engineering approaches that incorporate the specific properties of distributed event-sourced systems, such as messaging and event persistence. To address this problem, we developed a methodology for performance engineering and performance analysis of distributed event-sourced systems as part of a software engineering process. This approach helps developers to identify bottlenecks and resolve performance issues based on specific micro-benchmarks and subsequent targeted optimizations. To show the practicality of our methodology, we applied it to the Chronograph platform to improve the overall performance of its current research prototype. Using our structured approach, we improved the performance of the prototype system and made it more than twice as fast for certain workloads.
“Vergleich und Evaluierung von Time Series Databases,” Bachelor's thesis VS-B07-2016, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2016 – Completed.
“Performance Engineering in verteilten, polyglotten Berechnungsplattformen,” Master's thesis VS-M08-2016, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2016 – Completed.
“Enabling Retroactive Computing Through Event Sourcing,” Master's thesis VS-M01-2016, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2016 – Completed.
Event sourcing is a style of software architecture wherein state-altering operations to an application are captured as immutable events. Each event is appended to an event log, and the current state of the system is derived from this series of events. This thesis addresses the utilization of retroactive capabilities in event-sourced systems: for example, computing alternate application states, applying post hoc bug fixes, or supporting algorithms that have access to their own history. The possibility of retroactively accessing and modifying the event log is a potential capability of an event-sourced system, but a detailed exploration of how these operations can be facilitated and supported has not yet been conducted. We examine how retroaction can be applied to event-sourced systems and discuss conceptual considerations. Furthermore, we demonstrate how different architectures can be used to provide retroaction and describe the prototypical implementation of an appropriate programming model. These findings are applied in the Chronograph research project in order to utilize the potential temporal aspects of this platform.
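The core mechanism described in this abstract can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not code from the thesis or the Chronograph platform; all names are hypothetical:

```python
# Minimal event-sourcing sketch: state is never stored directly but
# derived by folding the immutable event log. Retroaction then means
# rewriting the log and replaying it to obtain an alternate state.

def apply(state, event):
    # A trivial account model: deposits add, withdrawals subtract.
    kind, amount = event
    return state + amount if kind == "deposit" else state - amount

def replay(log, initial=0):
    state = initial
    for event in log:
        state = apply(state, event)
    return state

log = [("deposit", 100), ("withdraw", 30), ("deposit", 50)]
current = replay(log)        # current state derived from the log: 120

# Retroactive modification: suppose the withdrawal was recorded
# incorrectly; patch the log and recompute an alternate state.
patched = [log[0], ("withdraw", 20), log[2]]
alternate = replay(patched)  # alternate application state: 130
```

Because the log is the single source of truth, a full replay after a retroactive change is always sufficient to reach a consistent alternate state, though real platforms would use snapshots to avoid replaying from the beginning.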
“A Persistence Layer for Distributed Event-Sourced Architectures,” Master's thesis VS-M09-2016, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2016 – Completed.
Due to the increasingly large amounts of data collected and processed each day, fast, reliable, and scalable distributed computing on very large datasets has become more important than ever. However, distributed computation on large, inhomogeneous datasets remains time-consuming, and evaluations and predictions are difficult. Event sourcing and graph computing both address these issues: event sourcing provides techniques to persist data in a way that enables evaluations and makes predictions possible, while graph computing provides a way to distribute computation over large datasets. Although a conceptual design combining the two exists, there is no practical experience of how such a concept can be implemented with respect to persistence and communication. This thesis therefore creates a prototype system to measure and evaluate different persistence and communication implementations for distributed event-sourced architectures based on event sourcing and graph computing. Such a system can be used to determine how to persist and operate on large, distributed, inhomogeneous datasets efficiently.
“Verwendung von CRDTs in mobilen verteilten Anwendungen,” Bachelor's thesis VS-B07-2015, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
Conflict-free Replicated Data Types (CRDTs) offer an approach to asynchronous data management in distributed systems. They favor availability over strict consistency, yet the states of the data types eventually converge with the help of simple mathematical properties such as commutativity or the properties of a semilattice. This thesis explains the principles, classifications, mechanisms, use cases, and problems of CRDTs and then transfers their concepts to the mobile context. Suitable application scenarios are first examined on a theoretical basis under various criteria; subsequently, a framework is developed that allows mobile application developers to use CRDT instances of various data types that are automatically replicated across multiple devices.
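The convergence argument in this abstract can be made concrete with the simplest state-based CRDT, a grow-only counter. This is an illustrative sketch, not the framework developed in the thesis; the class and its methods are hypothetical:

```python
# State-based G-Counter CRDT: each replica increments only its own
# slot; merging takes the element-wise maximum. Because max is
# commutative, associative, and idempotent (a join-semilattice),
# replicas converge regardless of the order or number of merges.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.slots = [0] * n_replicas

    def increment(self):
        self.slots[self.id] += 1       # only touch our own slot

    def value(self):
        return sum(self.slots)         # global count = sum of slots

    def merge(self, other):
        # Join: element-wise max never loses an increment.
        self.slots = [max(a, b) for a, b in zip(self.slots, other.slots)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()           # replica A counts 2 offline
b.increment()                          # replica B counts 1 offline
a.merge(b); b.merge(a)                 # anti-entropy, either order
# both replicas now report a value of 3
```

Merging twice, or in a different order, yields the same state, which is exactly the property that makes CRDTs attractive for the mobile, intermittently connected setting the thesis targets.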
“Verhalten von TCP-Varianten in Hochgeschwindigkeitsnetzwerken,” Bachelor's thesis VS-B08-2015, B. Erb and T. Lukaseder (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
“Ereignisorientierte, diskrete Netzwerksimulation mit Pregel,” Bachelor's thesis VS-B05-2015, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
This thesis investigates whether the architecture of Pregel, a framework for distributed computations on large graphs, is suitable for implementing a discrete-event network simulation. To this end, a simulator was designed that models the behavior of a transport protocol within a computer network. The design phase showed that the concepts constituting a discrete-event simulation can be realized in Pregel. The design was then implemented in order to evaluate how the simulation behaves as the input network graphs grow.
“Distributed Versioning and Snapshot Mechanisms on Event-Sourced Graphs,” Master's thesis VS-M13-2015, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
Two interesting approaches to tackle many of today's problems in large-scale data processing and live query resolution on big graph datasets have emerged in recent years. Firstly, after Google's presentation of its graph computing platform Pregel in 2010, an influx of similar platforms could be observed. These platforms all share the goal of providing highly performant data mining and analysis capabilities to users, enabling a wide variety of today's technologies, such as ranking web pages in the web graph of the WWW or analyzing user interactions in social networks. Secondly, the old concept of message logging for failure recovery was rediscovered and combined with event-based computing in the early 2000s and is now known as event sourcing. This approach to system design keeps persistent logs of every single change of all entities in a computation, providing highly interesting options such as state restoration by replaying old events, retroactive event modifications, and powerful debugging capabilities. A recently published paper suggests merging these two approaches to create a hybrid event-sourced graph computing platform. This platform would show unique characteristics compared to other known solutions. For example, computations on temporal data can yield information about the evolution of a graph and not only its current state. Furthermore, for backups or to enable offline analysis on large compute clusters, snapshot extraction from the event logs produced by event-sourced graph computations is possible, i.e., reproducing any consistent global state the graph has ever been in. This thesis provides one of the first major works related to this proposed hybrid platform and provides background knowledge on the aforementioned topics.
It presents a thorough overview of the current state of the art in graph computing platforms and causality tracking in distributed systems, and finally develops an efficient mechanism for extracting arbitrary, consistent global snapshots from a distributed event log produced by an event-sourced graph computation.
“Design und Implementierung eines interaktiven Explorers für räumlich-zeitliche Trace-Daten,” individual lab project VS-R12-2015, R. van der Heijden and B. Erb (Supervisor), F. Kargl (Examiner), Inst. f. Vert. Sys., Univ. Ulm, 2015 – Completed.
Spatio-temporal trace data records the movement patterns of entities over time. Exploring such spatio-temporal data is of interest for a variety of purposes, including the visualization of public-transport timetable data and the analysis of completed VANET simulation runs. This project develops a performant web-based client/server application. The server imports trace files from various sources and stores them efficiently, so that clients can replay them in interactive exploration sessions.
“Designing a Disaster Area Network for First Responders in Disastrous and Emergency Scenarios,” Bachelor's thesis VS-B18-2015, B. Erb and R. van der Heijden (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
Man-made disasters, earthquakes, floods, and other natural disasters come with a great number of casualties, which have to be treated as quickly as possible by emergency services to minimize fatalities. Due to the large number of casualties and aid workers, it is difficult to maintain an accurate overview of the situation. To improve situational awareness, a comprehensive support system can be used by the forces on-site, supporting them in gathering information and distributing it to all involved parties. Previous work has failed to achieve independence from public infrastructure (e.g., power grid, cellular network) or suffers data loss due to single-node failures. To solve this problem, we propose a fault-tolerant design that fully distributes information to all devices in a mobile ad hoc network, while allowing offline work outside of it. We present a proof-of-concept prototype for the proposed design and show in a series of trials that its data distribution component behaves as designed. To the best of our knowledge, there is currently no DAN system that uses multi-master replication to fully distribute data, where every node has an individual copy of every piece of information.
“Communication Patterns for Concurrent and Distributed Computations,” Bachelor's thesis VS-B04-2015, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
This bachelor's thesis presents a catalog of communication patterns for concurrent and distributed computations. We compose this catalog by reviewing the inter-process communication in common concurrency models and surveying existing pattern resources, such as professional books and weblogs. In addition to the selection and composition of patterns, we define our own pattern template structure and an appropriate visualization, specifically matching the requirements of communication patterns. The catalog itself consists of a variety of patterns, intended to give the reader a grasp of proven solutions for recurring problems in the field of concurrent programming. We provide simplified examples for every solution by means of message passing.
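One classic pattern from such a catalog, the pipeline, can be sketched purely with message passing. This is my own minimal illustration, not an example from the thesis; the stage layout and sentinel convention are assumptions:

```python
# Pipeline pattern: stages communicate exclusively via queues
# (message passing, no shared mutable state). A None sentinel is
# propagated downstream to shut the pipeline down cleanly.

import threading
import queue

def stage(inbox, outbox, fn):
    while True:
        msg = inbox.get()
        if msg is None:            # sentinel: forward it and stop
            outbox.put(None)
            return
        outbox.put(fn(msg))        # transform and pass downstream

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)).start()
threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)).start()

for i in range(3):                 # feed messages 0, 1, 2
    q1.put(i)
q1.put(None)                       # signal end of input

results = []
while (msg := q3.get()) is not None:
    results.append(msg)
# results == [1, 3, 5]
```

Because each stage owns no state shared with the others, the pattern scales by adding stages or replicating a slow stage behind its queue, which is the kind of trade-off such a catalog entry would discuss.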
“An Evaluation of Distributed Approaches to Large-Scale Graph Computing,” Bachelor's thesis VS-B09-2015, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2015 – Completed.
This thesis examines several considerations for developers and users of distributed graph computing platforms. Two popular computing platforms, Apache Giraph on Hadoop and the GraphX library in Apache Spark, are analyzed and tested through a benchmarking process. We examine basic PageRank and ConnectedComponents algorithms for a variety of input graphs and cluster sizes. We show how strongly the parameters of distributed graph computations, such as graph size and topology properties, impact execution time. In conclusion, we identify the application fields for which each platform is practical and where trade-offs have to be made.
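The PageRank workload benchmarked here follows the vertex-centric superstep model that both Giraph and GraphX implement. The following is a toy single-process sketch of that model under my own simplifications (no distribution, fixed iteration count); it is not code from either platform:

```python
# Vertex-centric PageRank in the Pregel style: in each superstep,
# every vertex distributes its rank evenly along its out-edges
# ("messages"), then each vertex recomputes its rank from the
# messages it received, with damping factor d.

def pagerank(edges, n, iters=20, d=0.85):
    out = {v: [] for v in range(n)}
    for src, dst in edges:
        out[src].append(dst)
    rank = {v: 1.0 / n for v in range(n)}
    for _ in range(iters):                    # one superstep per pass
        incoming = {v: 0.0 for v in range(n)}
        for v in range(n):                    # send rank/out-degree
            if out[v]:
                share = rank[v] / len(out[v])
                for dst in out[v]:
                    incoming[dst] += share
        rank = {v: (1 - d) / n + d * incoming[v] for v in range(n)}
    return rank

ranks = pagerank([(0, 1), (1, 2), (2, 0)], n=3)
# on a symmetric 3-cycle, every vertex keeps rank 1/3
```

The benchmarking observation of the thesis maps directly onto this loop: graph size determines the message volume per superstep, while topology (e.g., skewed degree distributions) determines how evenly that volume spreads across workers.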
“Sicherheitsanalyse von NoSQL-Datenbanken,” Bachelor's thesis VS-B15-2014, R. van der Heijden and B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
NoSQL databases are increasingly used in production to cope with growing data volumes. It is currently unclear, however, whether they meet the necessary security requirements. To this end, this thesis presents a methodology for the security analysis of NoSQL databases. The most important security risks are identified, and it is shown how a NoSQL database can be checked against them. The methodology is applied to the NoSQL representatives Neo4j and CouchDB. Both databases turn out to exhibit serious security deficits. Recommendations for increasing information security are therefore given which, if followed, remedy the identified weaknesses.
“Intrusion Detection in Software Defined Networks,” Bachelor's thesis VS-B02-2014, R. van der Heijden and B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
Intrusion detection systems are valuable tools for improving security in a network. Due to growing network bandwidths, not all packets can be inspected because of resource limitations. Special traffic filters can be used to forward only traffic that is suspected of containing intrusions. Software-defined networking (SDN) is an architecture that allows interacting with a network in a programmable way. With OpenFlow, a switch can be programmed reactively, where flows are created dynamically, or proactively, where flows are created statically. This work evaluates the impact of filtering traffic proactively and reactively, measured by the number of alerts generated by the Snort IDS. An emulated SDN testbed was used for the evaluation. Compared to forwarding without filtering, the traffic can be reduced by more than half. The results show that supporting an IDS is possible with OpenFlow, in either a reactive or a proactive way.
“Evaluation von Distributed Event Processing Frameworks für Zeitreihenanalysen,” Bachelor's thesis VS-B03-2014, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
“Entwurf & Implementierung einer kollaborativen Web-Plattform zur Dokumentation von Design Patterns,” Bachelor's thesis VS-B07-2014, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
In this thesis, a platform for the collaborative documentation of patterns was designed and prototypically implemented. To this end, the requirements for a generic platform were collected and concepts for collaborative use were discussed. The implementation uses various Web 2.0 concepts for collaboration and prototypically demonstrates the feature set of a platform for patterns from different application domains.
“Distributed Architecture using Event Sourcing & Command Query Responsibility Segregation,” Bachelor's thesis VS-B04-2014, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
In a typical software system, we occasionally want to know how the current application state came about, without making the system more complicated. Usually this question cannot be answered, because the software only stores the latest application state. Even if the software architects later implement their own history support, it cannot report anything about earlier changes. This is where Event Sourcing and Command Query Responsibility Segregation (CQRS) come into play: Event Sourcing stores every change as an event, and CQRS helps to handle the increased complexity. Together, they allow us to build a system that provides history support and is still maintainable. Starting from a traditional architecture, only small changes are needed to adopt Event Sourcing and CQRS, and by storing all changes as events, arbitrary evaluations become possible.
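The split described in this abstract can be sketched compactly. This is a minimal illustration of the ES + CQRS idea under my own assumptions, not the architecture built in the thesis; all names are hypothetical:

```python
# CQRS on top of event sourcing: commands append events to the
# append-only log (write side); queries read a denormalized view
# that is kept in sync by projecting the same events (read side).

events = []                      # write model: the full history
balance_view = {"balance": 0}    # read model: query-optimized view

def project(event):
    # Read side: fold each event into the view.
    kind, amount = event
    balance_view["balance"] += amount if kind == "deposited" else -amount

def handle_command(kind, amount):
    # Write side: validate, record what happened, then update views.
    event = (kind, amount)
    events.append(event)
    project(event)

handle_command("deposited", 100)
handle_command("withdrew", 40)
# balance_view["balance"] is now 60, while `events` still holds
# the complete history of how that balance came about
```

The point the abstract makes falls out directly: the query side answers "what is the balance?" cheaply, while the event log answers "how did we get here?", and new read models can be built later simply by replaying the existing events.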
“Design und Implementierung eines skalierenden Database-as-a-Service Systems,” Master's thesis VS-M05-2014, J. Domaschka and B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
Database systems are the central component for persisting data in applications. Data can be stored using different data models, which the literature divides into relational and NoSQL categories. These data models offer different functionality in areas such as consistency, distribution, and scalability. Scalability is a particularly important requirement for databases in web applications, since the number of users keeps growing and web applications must cope with strong load fluctuations. Handling these load fluctuations requires the flexible resources that cloud computing promises. This thesis considers the cloud-computing architecture of Database-as-a-Service (DBaaS), which provides databases as an abstract resource, with a focus on scalable DBaaS systems. The goal of this thesis is the design and implementation of a DBaaS service that offers automated scaling and is based on freely available software. To this end, the requirements of such a DBaaS service are derived from a use case, and the scalability of existing databases is examined by means of benchmarks. Based on these results, a prototypical DBaaS system is implemented.
“Design und Implementierung eines zuverlässigen und verfügbaren (NoSQL) Datenbanksystems,” Master's thesis OMI-2014-M-02, J. Domaschka and B. Erb (Supervisor), S. Wesner and F. Kargl (Examiner), Inst. f. Organisation und Management von Informationssystemen, Ulm Univ., 2014 – Completed.
Databases form the backbone of many applications. Because of this central role, reliability and fault tolerance are essential for them. This thesis first examines and compares existing fault-tolerance approaches of relational and non-relational databases. Building on this, a system is re-implemented with the help of the Virtual Nodes framework.
“A Collection of Privacy Patterns,” Bachelor's thesis VS-B06-2014, B. Erb and H. Kopp (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2014 – Completed.
This thesis examined the documentation of patterns in the context of privacy. Requirements for the structure of privacy patterns were discussed, and the categorization of privacy patterns in a catalog was considered. Based on a custom pattern structure, a small, exemplary catalog of privacy patterns was then presented by means of sample privacy patterns.
“Evaluation von existierenden Lösungen zur Simulation von Netzwerken,” Bachelor's thesis VS-B05-2013, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2013 – Completed.
The subject of this thesis is the creation of an up-to-date survey of existing network simulators, in particular for VANET simulations.
“Design & Durchführung einer Benutzerstudie zur Nutzung von Netzwerksimulatoren,” Bachelor's thesis VS-B06-2013, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2013 – Completed.
As part of this thesis, a user study was conducted that addressed questions of the usability and complexity of network simulators and identified open problems for users of network simulators.
F. Engelmann, “Content-Inspection in Hochgeschwindigkeitsnetzen,” Bachelor's thesis VS-B17-2013, R. van der Heijden and B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2013 – Completed.
Computational power for content filtering in high-speed networks is reaching a limit, yet many applications, such as intrusion detection systems, rely on such processing. Signature-based methods in particular require the extraction of header fields. We therefore created a parallel protocol-stack parser module on the NetFPGA 10G architecture, with a framework for simple adaptation to custom protocols. Our measurements show that the appliance operates at 9.5 Gb/s with a delay on the order of any active hop. This work provides the foundation for application-specific projects in the NetFPGA context.
“Comparison of Concurrency Frameworks for the JVM,” Bachelor's thesis VS-B13-2013, B. Erb (Supervisor), F. Kargl (Examiner), Inst. of Distr. Sys., Ulm Univ., 2013 – Completed.
Due to multi-core CPUs, concurrency is becoming an increasingly important part of programming performant and scalable applications. For Java, various frameworks exist that offer higher-level abstractions for concurrency and thus simplify concurrent programming. In this bachelor's thesis, important frameworks were presented and compared, and it was shown which frameworks are particularly well suited to which use cases.