Topics for bachelor and master theses as well as projects

Topics for bachelor and master theses as well as projects come from the areas of software engineering and programming languages. The concrete topics for theses are oriented towards our research areas. The basics are taught in the corresponding courses of the institute.

The following list offers a selection of topics and subject areas. More detailed information can be obtained from the respective contact persons. In addition, we are open to your own suggestions for topics.

(Legend - B: Bachelor Thesis, M: Master Thesis, P: Project)

Domain-Specific Languages

Analysis Methods for Complex Model Transformations

  • P/M: Approach to Suggest Performance-Improving Refactorings for ATL Transformations (Groner, Tichy)

    Context

    Model Driven Engineering (MDE) is one approach to handle the continuous growth in size and complexity of software systems to be developed. In MDE, models are used to describe the software system in an abstract way. Transformations are applied to these models. If, for example, the naming of a software component changes, the affected models do not need to be updated manually, but can be updated automatically by a transformation. To describe the task of a transformation, so-called model transformation languages are used, which contain special language concepts to be able to operate on models as comfortably as possible.

    Problem

    In general-purpose languages (GPLs), like Java, it is possible to define the same program using different language concepts. However, these language concepts can affect the performance of a program differently. In Java, for example, one can access an element in an ArrayList in constant time, whereas with a LinkedList one must first navigate through the list. This results in the problem that a developer must take care to use the most suitable language concepts. However, this problem occurs not only in the context of GPLs, but also in model transformation languages. The problem is aggravated by the fact that there is hardly any tool support to detect expensive language constructs in transformations, let alone support to replace them.
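
    The cost difference described above can be mirrored in a short sketch (in Python rather than Java, with a hand-rolled linked list standing in for LinkedList; the data structures here are illustrative only):

```python
class LinkedList:
    """Minimal singly linked list that counts traversal steps on access."""
    def __init__(self, items):
        self.head = None
        for item in reversed(items):
            self.head = (item, self.head)
        self.steps = 0  # number of nodes visited by get()

    def get(self, index):
        node = self.head
        for _ in range(index):   # must walk the chain: O(index)
            self.steps += 1
            node = node[1]
        return node[0]

array_backed = list(range(1000))   # analogous to Java's ArrayList
linked = LinkedList(range(1000))   # analogous to Java's LinkedList

assert array_backed[999] == linked.get(999) == 999
# array_backed[999] touches one element; linked.get(999) visited 999 nodes
```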

    Task / Goal

    • Analyze which expensive language constructs occur in transformation definitions
    • Develop refactorings to optimize transformation definitions by replacing expensive language constructs
    • Develop different analyses that detect expensive language concepts based on static and/or dynamically obtained information in model transformations in order to suggest suitable refactorings

    Related Work

    • The work focuses on the Atlas Transformation Language (ATL)
    • Wimmer, M., Perez, S. M., Jouault, F., & Cabot, J. (2012). A Catalogue of Refactorings for Model-to-Model Transformations. J. Object Technol., 11(2), 2-1.

    Contact

    Raffaela Groner

  • P/B/M: Performance Visualizations for ATL Transformations (Groner, Tichy)

    Context

    Model Driven Engineering (MDE) is one approach to handle the continuous growth in size and complexity of software systems to be developed. In MDE, models are used to describe the software system in an abstract way. Transformations are applied to these models. If, for example, the naming of a software component changes, the affected models do not need to be updated manually, but can be updated automatically by a transformation. To describe the task of a transformation, so-called model transformation languages are used, which contain special language concepts to be able to operate on models as comfortably as possible.

    Problem

    Performance issues can also occur when using model transformation languages, just as with general-purpose languages (GPLs) such as Java. Unfortunately, tool support for these languages is much more limited than for GPLs. For example, there are few profilers for model transformation languages, and the information they provide is difficult to interpret for users without expert knowledge of language details. There is currently a lack of visualizations that present the necessary information to users in a comprehensible way.

    Task / Goal

    • Develop different visualizations that help to understand the execution of a transformation and to find possible causes for performance issues
    • Conduct a study to evaluate the visualizations

    Related Work

    • The work focuses on the Atlas Transformation Language (ATL)
    • ATL Profiler
    • Isaacs, K. E., Giménez, A., Jusufi, I., Gamblin, T., Bhatele, A., Schulz, M., Hamann, B., & Bremer, P. T. (2014, June). State of the Art of Performance Visualization. In EuroVis (STARs).

    Contact

    Raffaela Groner

  • P/B/M: Control Flow Visualizations for Complex Model Transformations (Ege, Tichy)

    Context

    Model-driven software engineering (MDSE) is centered around the transformation of models through complex transformation chains that are constructed by composing basic transformation steps. These basic steps apply a transformation rule on an input model to create an output model. We focus on the Henshin framework for declarative model transformations.

    Problem

    Defects in either the transformation rule or the input model can cause a basic transformation step to not be applicable at all or produce wrong output. At a higher level, defects in the structure of a transformation chain might schedule the application of basic steps incorrectly, thus leading to applicability issues or unexpected behavior. Due to the declarative nature of transformation steps and nondeterminism in pattern matching or scheduling of steps, these bugs are often difficult to find and fix.

    One possible approach to assist developers with understanding and debugging complex transformations is to present a visualization of the control flow through the transformation and offer features to edit this control flow and interact with it.

    Task / Goal

    • Development of a control flow visualization for Henshin transformations.
    • Integration in the Henshin tool (based on Eclipse Rich Client Platform).
    • Evaluation of the implementation with regard to
      • performance: what are the costs in time and memory space?
      • usability: how does the approach compare to existing features concerning the identification and fixing of defects in transformations?

    Contact

    Florian Ege

Enhancing Collaborative Modeling

  • M: Improving the Comprehensibility of Evolving Graphical Models (Pietron, Tichy)

    Context
    We are convinced that (graphical) modeling is a cooperative and explorative process done by domain experts. However, several studies identified insufficient collaboration support in graphical modeling tools. Therefore, we develop a new collaboration approach for graphical modeling tools that is based on the persistence of sequential edit operations (also called an operation-based approach). We aim to enable both synchronous and asynchronous collaboration and to improve the comprehensibility of evolving models.

    Problem
    At the present stage of our research, we persist every edit operation a user performs to alter the model and present the result in a simple list. We see two main challenges: 1) an unfiltered listing of all recorded operations, such as moving the same block twenty times or renaming a block, does not necessarily help a user understand how a model evolved. Therefore, it must be possible to filter, process, or rearrange the presented edit operations. 2) A listing is not always the appropriate form of presentation. Do further suitable visualizations exist?
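
    As a sketch of challenge 1), repeated operations on the same element could be collapsed so that only the net effect remains (the operation format below is hypothetical, not the data model of our tool):

```python
def compact(operations):
    """Keep only the last 'move' and last 'rename' per element,
    preserving the relative order of the surviving operations."""
    last = {}
    for i, op in enumerate(operations):
        if op["kind"] in ("move", "rename"):
            last[(op["kind"], op["element"])] = i  # later operations win
        else:
            last[("other", i)] = i                 # keep all other kinds
    return [operations[i] for i in sorted(last.values())]

history = [
    {"kind": "move", "element": "BlockA", "to": (10, 10)},
    {"kind": "move", "element": "BlockA", "to": (20, 15)},
    {"kind": "rename", "element": "BlockA", "name": "Controller"},
    {"kind": "move", "element": "BlockA", "to": (42, 7)},
]
compacted = compact(history)
assert [op["kind"] for op in compacted] == ["rename", "move"]
```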

    Tasks

    • Identify the user's needs regarding exploration of a model's history
    • Develop filtering mechanisms for recorded model edit operations
    • Develop multiple visualizations for recorded (and filtered) edit operations with the intention to support the comprehension of evolving models
    • Evaluate your result with a user study

    Related Work

    Contact

    Jakob Pietron

  • P/B/M: Reducing visually interfering edit operations (Pietron, Tichy)

    Context
    We are convinced that (graphical) modeling is a cooperative and explorative process done by domain experts. However, several studies identified insufficient collaboration support in graphical modeling tools. Therefore, we develop a new collaboration approach for graphical modeling tools that is based on the persistence of sequential edit operations (also called an operation-based approach). We aim to enable both synchronous and asynchronous collaboration and to improve the comprehensibility of evolving models.

    Problem
    In the context of synchronous collaboration, it can happen that two or more users make changes within the same area of a graphical model. This might lead to visually interfering edit operations, e.g., Alice edits the value of a property Prop1 while Bob moves BlockA, which contains Prop1. Semantically conflicting edit operations, such as Bob deleting BlockA instead of moving it, can also have a negative impact on the user experience.

    We assume that the negative impact of such interfering edit operations can be avoided by implicitly creating a temporary branch that delays interfering edit operations of other users.
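
    A first approximation of such interference detection could be sketched as follows (hypothetical operation and containment model, not our tool's actual one):

```python
def interferes(op_a, op_b, contains):
    """Two edit operations interfere if they touch the same element,
    or if one touches an element contained in the other's target.
    `contains` maps a container element to the set of its children."""
    a, b = op_a["element"], op_b["element"]
    return (a == b
            or b in contains.get(a, set())
            or a in contains.get(b, set()))

contains = {"BlockA": {"Prop1"}}
alice = {"user": "Alice", "kind": "edit-property", "element": "Prop1"}
bob = {"user": "Bob", "kind": "move", "element": "BlockA"}
carol = {"user": "Carol", "kind": "move", "element": "BlockB"}

assert interferes(alice, bob, contains)        # Prop1 lies inside BlockA
assert not interferes(alice, carol, contains)  # unrelated elements
```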

    Tasks

    • Systematically identify different types of interfering edit operations
    • Develop an algorithm to detect interfering edit operations
    • Implement a solution that uses your algorithm and is able to temporarily delay other users' edit operations.
    • Conduct a user study that evaluates the impact of your approach on the user experience.

    Related Work

    Contact

    Jakob Pietron

Quadcopter Lab

  • P/B/M: rosslt library implementation in Python (Witte, Tichy)

    Context
    rosslt is a C++ library plus an integration into ROS (Robot Operating System) that enables setting and modifying values across component boundaries. In a simple robotics application, for example, waypoints could be created in one component, a trajectory through these waypoints planned in another component, and these trajectories visualized in a third component. Every computed value thereby depends on the original waypoints. The goal is to make these computed values modifiable by inverting the computations and tracing them back to the original data. For this purpose, the data flows are tracked at runtime by the rosslt library and inverted when needed.

    Problem
    The rosslt client library has so far only been implemented as a prototype in C++. An implementation in Python could potentially greatly reduce the code changes required for tracking. Python is a dynamically typed language that offers far better introspection capabilities than C++. Possibly, the required changes could even be reduced to (almost) zero by using aspect-oriented programming.
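
    The tracking idea behind rosslt can be sketched in plain Python (a simplification for illustration, not the actual rosslt API): every derived value remembers its source and the operations applied to it, so a change to the derived value can be propagated back.

```python
class Tracked:
    """A value that records invertible operations back to its source."""
    def __init__(self, value, source=None, ops=None):
        self.value = value
        self.source = source if source is not None else self
        self.ops = ops or []   # list of (forward, inverse) pairs

    def apply(self, forward, inverse):
        return Tracked(forward(self.value), self.source,
                       self.ops + [(forward, inverse)])

    def set(self, new_value):
        """Push a change on a derived value back to the source."""
        for _, inverse in reversed(self.ops):
            new_value = inverse(new_value)
        self.source.value = new_value

waypoint = Tracked(5.0)                                    # original value
shifted = waypoint.apply(lambda x: x + 2, lambda y: y - 2)
scaled = shifted.apply(lambda x: x * 3, lambda y: y / 3)

assert scaled.value == 21.0
scaled.set(30.0)               # edit the derived value...
assert waypoint.value == 8.0   # ...and the source is updated (30/3 - 2)
```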

    Contact
    Thomas Witte

  • P/B/M: rosslt graph visualization and debugging (Witte, Tichy)

    Context
    rosslt is a C++ library plus an integration into ROS (Robot Operating System) that enables setting and modifying values across component boundaries. In a simple robotics application, for example, waypoints could be created in one component, a trajectory through these waypoints planned in another component, and these trajectories visualized in a third component. Every computed value thereby depends on the original waypoints. The goal is to make these computed values modifiable by inverting the computations and tracing them back to the original data. For this purpose, the data flows are tracked at runtime by the rosslt library and inverted when needed.

    Problem
    Location tracking of ROS messages makes it possible to determine the origin (node, parameter) of individual messages. The goal of this thesis/project is to process this information graphically and to visualize it interactively. Such an interface should allow subscribing to ROS topics at runtime, displaying messages, and modifying them. The source location of a message should be used to apply changes in the originating node (e.g., setting parameters) such that the change also affects subsequent messages accordingly. In addition, the origin and history of every value in a message can be visualized to interactively debug data flows and computations.

    Contact
    Thomas Witte

  • P/B/M: Quadcopter Safety Monitoring (Witte, Tichy)

    Context
    To avoid hazards and injuries, safety is of crucial importance when developing software for quadcopters. This is contrasted by a very heterogeneous software architecture composed of many different communicating components. These components may have been developed for an entirely different application context, or safety-critical states may only emerge from the interplay of several components. By monitoring the system state and tracking data flows at runtime, a watchdog component shall be enabled to detect safety-critical situations, explain their cause, and, if necessary, automatically bring the system into a safe state.

    Problem
    First, the existing quadcopter system shall be systematically analyzed and safety-critical states identified. These could include, for example, value ranges, rate and latency of messages, internal states of software components (pre-/postconditions, assertions), as well as the state of hardware components (e.g., battery level, processor load, network throughput).
    In a second step, an extensible software component shall be developed that monitors these safety-relevant states and can react when the permissible range is exceeded. By analyzing the origin and history of invalid messages as well as the ROS node graph, debugging shall additionally be facilitated, explaining to the developer where the problem probably lies and how it might be solved.

    Contact
    Thomas Witte

  • P/B/M: ROS node graph reconfiguration at runtime (Witte, Tichy)

    Context
    ROS (Robot Operating System) is a software framework for developing robotics applications. The software consists of many independent components (often separate processes that can even run distributed across several machines) that exchange messages with each other. This component-based architecture promotes the reuse of existing code and allows parts of the application to be started, stopped, or replaced at runtime.

    Problem
    As part of this thesis, it shall first be described and classified which reconfigurations are possible at runtime at all, ranging from parameter changes, which only allow predetermined behavior changes foreseen by the developer, to the replacement of entire components, which enables arbitrary changes.
    Subsequently, a software component shall be developed that provides reconfigurations of the application via services of the application itself, so that the application can trigger changes to its own behavior. The execution of a reconfiguration shall be monitored to ensure that a consistent state exists afterwards or, if necessary, that a rollback restoring the old system state is possible. As part of the evaluation, the performance and correctness of such reconfigurations (e.g., time, jitter, lost messages) shall be analyzed.

    Literature
    Davide Brugali: Runtime reconfiguration of robot control systems: a ROS-based case study

    Contact
    Thomas Witte

Relaxed Conformance Editing

  • P/M: Developing Visualization Concepts for Relaxed Conformance Editing (Stegmaier, Tichy)

    Context

    Probably everyone has had to create graphical models as part of their studies. Class diagrams, ER diagrams, state machines, or sequence diagrams are popular. There is also a variety of modeling tools for this purpose, although their usability usually "takes some getting used to". Among other things, this is due to the fact that many modeling tools only allow the creation of correct models. This means, for example, that arrows must always be connected to a start and an end node. However, this type of modeling often restricts users or makes the tools more cumbersome to use.

    In graphical drawing tools such as Illustrator, Inkscape, or PowerPoint, this restriction does not exist. However, the drawn shapes are not recognized as model elements and cannot be processed as such. One idea to counter this problem is so-called "Relaxed Conformance Editing", which, as the name suggests, does not always require metamodel-conformant graphical representations.

    Problem

    As part of a master's thesis, an implementation of the concepts of relaxed conformance editing has been developed. This implementation, called CouchEdit, consists of a backend that implements the basic concepts and a prototypical frontend that serves to demonstrate the functions of the backend. The backend takes arbitrary graphical elements and attempts to abstract or parse the model from them, for example by considering spatial positions relative to each other. The gathered information is used to generate suggestions such as "Make this a component of the other element". Currently, these suggestions can only be accessed via the context menus of the elements. There are no visual hints that suggestions are available, nor is there a visualization of which elements a suggestion refers to. If there are multiple equal suggestions, the context menu shows the same entry multiple times, and there is no way to tell what effect each of them would have. In summary, proper visualizations for interacting with suggestions are lacking.
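
    The spatial-abstraction step that produces such suggestions can be sketched as follows (hypothetical element format; CouchEdit's actual data model is richer):

```python
def containment_suggestions(elements):
    """Suggest making A a component of B whenever A's bounding box
    lies completely inside B's. Boxes are (x, y, width, height)."""
    def inside(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return (ax >= bx and ay >= by
                and ax + aw <= bx + bw and ay + ah <= by + bh)

    return [f"Make {a} a component of {b}"
            for a, box_a in elements.items()
            for b, box_b in elements.items()
            if a != b and inside(box_a, box_b)]

elements = {
    "StateA": (10, 10, 200, 150),
    "Label1": (20, 20, 50, 20),   # drawn inside StateA
    "StateB": (300, 10, 100, 100),
}
assert containment_suggestions(elements) == ["Make Label1 a component of StateA"]
```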

    Task / Goal

    • Develop visualization concepts for the interaction with relaxed conformance editing
    • Implement the developed visualization concepts as a prototype or into the frontend of CouchEdit
    • Evaluate the developed visualization concepts

    Related Work

    • Leander Nachreiner, Alexander Raschke, Michael Stegmaier, Matthias Tichy
      CouchEdit: A Relaxed Conformance Editing Approach
      MLE ’20: 2nd International Workshop on Modeling Language Engineering and Execution 
      DOI 10.1145/3417990.3421401

    Contact

    Michael Stegmaier

Software Configuration

Analyzing Product Lines with #SAT

  • P/B/M: Parameterizations for Approximate #SAT Solvers (Sundermann, Thüm)

    Context
    A #SAT solver computes the number of satisfying assignments of a propositional formula. This number is required for a variety of problems in different domains. For example, #SAT solvers enable a high number of analyses on feature models. However, #SAT is a computationally complex task and widely regarded as harder than SAT.

    Problem
    Due to the hardness of #SAT, state-of-the-art solvers often do not scale for complex formulas. One considered solution is approximate #SAT solvers, which estimate the number of satisfying assignments. The required runtime of such solvers heavily depends on the parameters used for the approximation. The goal is to identify effective parameters that achieve a strong trade-off between runtime and result quality.
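
    A naive Monte Carlo counter illustrates how a parameter (here the sample count) trades runtime against result quality; real approximate #SAT solvers, e.g. hashing-based ones, are far more sophisticated:

```python
import random

def estimate_count(clauses, num_vars, samples, seed=0):
    """Estimate #SAT of a CNF (clauses as lists of signed integers)
    by sampling random assignments and extrapolating the hit rate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        assignment = [rng.random() < 0.5 for _ in range(num_vars + 1)]
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            hits += 1
    return hits / samples * 2 ** num_vars

cnf = [[1, 2]]   # (x1 or x2): exactly 3 of 4 assignments satisfy it
estimate = estimate_count(cnf, 2, samples=10_000)
assert abs(estimate - 3) < 0.3   # more samples -> tighter estimate
```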

    Tasks

    • Research approximate #SAT solvers
    • Analyze parameters allowed by the solvers
    • Find heuristic(s) for parameterization based on a feature model and the propositional formula representing it
    • Evaluate heuristic(s)

    Related Work

    Contact

    Chico Sundermann, Thomas Thüm

  • P/M: Incremental #SAT for Feature Models (Sundermann, Thüm)

    Context
    A #SAT solver computes the number of satisfying assignments of a propositional formula, which can be applied to feature models to compute the number of valid configurations. This enables a large variety of analyses (e.g., for supporting the development process). However, #SAT is a computationally expensive task.

    Problem
    Feature models typically evolve. With state-of-the-art tools, even after tiny changes to a feature model, the #SAT solver needs to re-evaluate the entire resulting formula. Re-using information from a previous computation may significantly accelerate sequential #SAT invocations. Your task is to develop a concept for an incremental #SAT solver and implement a prototype.

    Tasks

    • Research incremental regular SAT
    • Research #SAT solvers
    • Develop concept for an incremental #SAT solver
    • Implement a prototype


    Related Work


    Contact

    Chico Sundermann, Thomas Thüm

Knowledge Compilation

  • P/M: Efficiency of CNF Translation Techniques (Sundermann, Thüm)

    Context
    Propositional formulas are prevalent in a variety of computer science domains. Such formulas are typically evaluated with SAT solvers, which commonly operate on conjunctive normal form (CNF) due to its beneficial properties.

    Problem
    A large variety of translation techniques to CNF has been considered in the literature, which typically result in semantically equivalent but different CNFs. It has been shown that the performance of SAT/#SAT solvers is heavily dependent on the resulting CNF. The goal of this work is to compare several CNF translation techniques. Furthermore, the runtime of SAT and #SAT solvers when given the different CNFs shall be examined.
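
    To illustrate why the choice of translation matters, the sketch below encodes a small disjunction of conjunctions once by naive distribution and once Tseitin-style with auxiliary variables; the resulting CNFs are equisatisfiable but grow very differently:

```python
from itertools import product

def distribute(dnf):
    """Naive CNF of an OR of AND-terms: one clause per combination."""
    return [list(combo) for combo in product(*dnf)]

def tseitin(dnf, next_var):
    """Tseitin-style CNF: one auxiliary variable per AND-term."""
    clauses, aux = [], []
    for term in dnf:
        t, next_var = next_var, next_var + 1
        aux.append(t)
        for lit in term:
            clauses.append([-t, lit])                 # t implies each literal
        clauses.append([t] + [-lit for lit in term])  # the term implies t
    clauses.append(aux)   # at least one term must hold
    return clauses

# (x1 and x2) or (x3 and x4) or (x5 and x6), variables 1..6
dnf = [[1, 2], [3, 4], [5, 6]]
assert len(distribute(dnf)) == 8   # 2*2*2: exponential in the number of terms
assert len(tseitin(dnf, 7)) == 10  # 3*(2+1)+1: linear in the number of terms
```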

    Tasks

    • Research on CNF translation techniques
    • Implement CNF translation techniques
    • Implement comparison benchmark for CNFs

    Related Work

    Contact

    Chico Sundermann, Thomas Thüm

  • P/M: Incremental Changes to d-DNNF (Sundermann, Thüm)

    Context
    A d-DNNF is a normal form for propositional formulas that allows faster queries for certain analyses. For example, a d-DNNF representing a feature model allows computing the number of valid configurations in polynomial time. However, compiling any formula and, thus, a feature model to d-DNNF is computationally expensive.

    Problem
    Feature models typically evolve over time. After a change, it may be beneficial to adapt the existing d-DNNF instead of compiling the entire formula again. Your task is to incrementally adapt an existing d-DNNF according to changes in the feature model.
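
    The polynomial-time model count that makes d-DNNFs attractive can be sketched on a toy node representation (an assumed tuple format, not a real d-DNNF library; smoothing is done on the fly):

```python
def count(node):
    """Return (model count, mentioned variables) for a d-DNNF node.
    Nodes: ('lit', v), ('and', *children), ('or', *children)."""
    if node[0] == "lit":
        return 1, frozenset({abs(node[1])})
    results = [count(child) for child in node[1:]]
    seen = frozenset().union(*(v for _, v in results))
    if node[0] == "and":   # decomposable: children share no variables
        total = 1
        for c, _ in results:
            total *= c
        return total, seen
    # 'or': deterministic (disjoint branches); smooth each branch
    return sum(c * 2 ** len(seen - v) for c, v in results), seen

# (x1 and x2) or (not x1 and x3) over the variables {1, 2, 3}
ddnnf = ("or", ("and", ("lit", 1), ("lit", 2)),
               ("and", ("lit", -1), ("lit", 3)))
models, mentioned = count(ddnnf)
models *= 2 ** (3 - len(mentioned))   # unmentioned variables are free
assert models == 4
```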

    Tasks

    • Handle changes performed on CNF incrementally
    • Handle changes performed on feature model incrementally
    • Implement prototype
    • Empirically evaluate the benefits


    Related Work

    Contact

    Chico Sundermann, Thomas Thüm

  • P/B/M: Implementation and Evaluation of Static Variable Ordering Heuristics (Heß, Thüm)

    Context
    Binary decision diagrams (BDDs) represent propositional formulas as directed, acyclic graphs. The order of the variables in this graph is crucial to its size. Some orderings may result in a linear size of the graph, while others can cause an exponential size. For the practical use of BDDs, for instance for the analysis of product lines, one is interested in a near-optimal variable order. Many heuristics have been proposed, implemented, and analyzed in the literature, where one differentiates between static variable ordering heuristics, which run before the BDD compilation, and dynamic heuristics, which run during and after the compilation.
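
    The effect described above can be reproduced with a tiny reduced-OBDD construction (a sketch via Shannon expansion and hash-consing, not how OBDDimal is implemented):

```python
def bdd_size(f, order):
    """Count the inner nodes of the reduced OBDD of f under `order`.
    f takes a dict mapping variable names to booleans."""
    nodes = {}  # (var, low_id, high_id) -> node id

    def build(i, env):
        if i == len(order):
            return "T" if f(env) else "F"  # terminal nodes
        low = build(i + 1, {**env, order[i]: False})
        high = build(i + 1, {**env, order[i]: True})
        if low == high:                    # redundant test: skip the node
            return low
        key = (order[i], low, high)
        return nodes.setdefault(key, len(nodes))  # share equal nodes

    build(0, {})
    return len(nodes)

f = lambda e: ((e["x1"] and e["y1"]) or (e["x2"] and e["y2"])
               or (e["x3"] and e["y3"]))

good = bdd_size(f, ["x1", "y1", "x2", "y2", "x3", "y3"])
bad = bdd_size(f, ["x1", "x2", "x3", "y1", "y2", "y3"])
assert good == 6 and bad == 14   # same function, very different sizes
```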

    Problem
    In this thesis, the scope will be limited to static variable ordering heuristics. It is currently unknown which heuristics scale to real-world instances and how big their impact on the resulting size of the BDDs is. Many of the proposed heuristics have not been evaluated on current real-world instances.

    Tasks

    • Survey the literature on static variable ordering heuristics.
    • For a representative selection of heuristics:
      • Describe them in detail.
      • Implement them into OBDDimal*.
      • Evaluate them on real-world instances.

    Related Work

    Contact

    Tobias Heß, Thomas Thüm

    * OBDDimal is implemented in Rust; knowledge of Rust is helpful but not required.

  • P/B/M: Implementation and Evaluation of Dynamic Variable Ordering Heuristics (Heß, Thüm)

    Context
    Binary decision diagrams (BDDs) represent propositional formulas as directed, acyclic graphs. The order of the variables in this graph is crucial to its size. Some orderings may result in a linear size of the graph, while others can cause an exponential size. For the practical use of BDDs, for instance for the analysis of product lines, one is interested in a near-optimal variable order. Many heuristics have been proposed, implemented, and analyzed in the literature, where one differentiates between static variable ordering heuristics, which run before the BDD compilation, and dynamic heuristics, which run during and after the compilation.

    Problem
    In this thesis, the scope will be limited to dynamic variable ordering heuristics. It is currently unknown which heuristics scale to real-world instances and how big their impact on the resulting size of the BDDs is. Many of the proposed heuristics have not been evaluated on current real-world instances.

    Tasks

    • Survey the literature on dynamic variable ordering heuristics and trigger mechanics invoking them.
    • For a representative selection of heuristics:
      • Describe them in detail.
      • Implement them into OBDDimal*.
      • Evaluate them on real-world instances.

    Related Work

    Contact

    Tobias Heß, Thomas Thüm

    * OBDDimal is implemented in Rust; knowledge of Rust is helpful but not required.

Constraint Programming and Constraint Handling Rules

Constraint Handling Rules

Empirical Software Engineering

Conducting and Reporting Empirical Studies

  • B: Descriptive and Inferential Statistics in Empirical Software Engineering (Juhnke, Tichy)

    Context
    Empirical Software Engineering (ESE) plays an increasingly important role in software engineering (SE) with regard to the evaluation of software artifacts, technologies, and processes. For reporting significant results, it is fundamentally important to select and correctly apply appropriate statistical methods, in addition to an appropriate research method (e.g., experiment, case study), for the particular research question to be answered.

    Problem
    It can be assumed that many researchers who use statistical methods in the context of ESE research are not trained statisticians. In order to investigate this and to improve the quality of ESE research with respect to the descriptive and inferential statistical methods used, it is first necessary to understand a) to what extent statistical methods are currently applied, b) in which context they are applied (e.g., in the context of which research methods), c) whether and how their selection is reported, and d) whether they are applied correctly. Findings on these questions should provide insight into possible sources of error. Based on this, suggestions could be developed on how to support researchers in the application of statistical methods specifically related to ESE research.

    Tasks

    • Conducting a systematic literature review (SLR), i.e., selecting and analyzing existing work related to ESE research
    • Development of a taxonomy of typical deficiencies regarding the use of statistical methods in ESE research papers
    • In conjunction with this, building up a relevant literature collection and indexing it (supplementary material).
    • Design and development of supportive guidance for the appropriate selection and application of statistical methods related to ESE research

    Related Work

    Contact

    Katharina Juhnke

Quality Assurance

Safety and Security

  • B/M: Approach to Model Jointly Safety, Security and Behavior of Self-Adaptive Systems (Groner, Tichy)

    Context

    The Internet of Things (IoT) is a very popular application field for self-adaptive systems. However, such systems also entail safety and security risks. Consider, for example, a system in which a drone performs a task. When this drone detects a low battery level, it initiates an emergency landing and sends a message to another drone, which then completes the task. In this scenario, an attacker could intercept the message to the second drone, causing the task not to be completed, or cancel the initiated emergency landing, causing the first drone to crash.

    Problem

    Safety and security risks are often connected to each other, but are not yet considered in combination in self-adaptive systems. In addition, their ability to reconfigure themselves offers further opportunities for attack that must also be considered. Therefore, a modeling language is needed to describe not only the behavior of a self-adaptive system, but also possible safety and security risks. A hazard analysis can then identify failures and risks using these models at an early stage of the system development.

    Task / Goal

    • Develop a modelling language to describe self-adaptive systems and their behavior
    • Extend your modeling language to describe possible safety and security risks in the context of self-adaptive systems
    • Develop a hazard analysis to identify failures and hazards in a modeled self-adaptive system
    • Design a case study in which you apply your approach to a self-adaptive system (Quadcopter Lab)

    Related Work

    • Kriaa, S., Bouissou, M., & Laarouchi, Y. (2015). A model based approach for SCADA safety and security joint modelling: S-Cube.
    • Giese, H., Tichy, M., & Schilling, D. (2004, September). Compositional hazard analysis of uml component and deployment models. In International Conference on Computer Safety, Reliability, and Security (pp. 166-179). Springer, Berlin, Heidelberg.

    Contact

    Raffaela Groner

Software Evolution

Development of Multi-Variant Software

  • P/B/M: Incremental Variability Mining (Bittner, Thüm)

    Context

    Feature traces identify the implementation of features in the source code of a software system. Variability mining is the process of recovering lost information on feature traces by (semi-)automatically inspecting the code base(s), for example by investigating dependencies between types in the used programming language.

    Problem

    Feature trace recording is another methodology that documents feature traces while developers are programming (instead of recovering them retroactively, as variability mining does). However, some heuristics in variability mining can operate completely automatically (i.e., without requiring any user interaction). Extending feature trace recording with these automated recovery mechanisms could reduce the manual effort of developers by using user knowledge to a greater extent.

    Task / Goal

    • Implement variability mining in our VariantSync prototype IDE.
    • Investigate to what extent mining feature traces can support developers before or while recording feature traces in the IDE.
    • Evaluate your results on existing software projects as in previous work on variability mining.

    Related Work

    Recommended but Optional Prerequisites

    • Having attended the lecture on Software Product Lines

    Contact

    Paul Bittner

    Thomas Thüm

  • M: Recommending or Automating Feature Contexts (Bittner, Thüm)

    Context

    Feature traces identify the implementation of certain program features in the implementation of the software. Feature Trace Recording is a novel methodology to document feature traces while developers are programming. To this end, developers specify the feature they are working on as a propositional formula, referred to as the feature context. For instance, source code that communicates with the Linux operating system would belong to the feature interaction Linux ∧ ¬Windows ∧ ¬MacOS. For example, when a developer works on the pop method of a stack in Java, they describe with the feature context, for each edit to the source code, which feature they are changing.
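
    The role of the feature context can be illustrated with a small sketch (hypothetical data model and names, not the actual Feature Trace Recording tooling): each recorded edit carries a propositional feature context, and a variant keeps exactly the edits whose context is satisfied by the configuration.

```python
# Each edit stores its feature context as a predicate over the
# set of selected features (all identifiers here are made up).
edits = [
    {"code": "linux_specific_call();",
     "context": lambda c: "Linux" in c and "Windows" not in c},
    {"code": "windows_specific_call();",
     "context": lambda c: "Windows" in c},
    {"code": "pop();",
     "context": lambda c: True},   # core code, present in every variant
]

def variant(edits, config):
    """Source lines belonging to the variant selected by `config`."""
    return [e["code"] for e in edits if e["context"](config)]

linux_variant = variant(edits, {"Linux"})
assert linux_variant == ["linux_specific_call();", "pop();"]
```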

    Problem

    Specifying the feature context for each interaction with the IDE or for each commit may become tedious for developers. We could potentially increase developers' acceptance of specifying the feature context by making it easier for them to do so.

    Task / Goal

    • Find ways for recommending a feature context based on the source code developers are editing.
    • Investigate if specifying a feature context can be (partially) automated for some use cases.

    Recommended but Optional Prerequisites

    • Having attended the lecture on Software Product Lines
    • Knowledge of artificial intelligence might be helpful

    Contact

    Paul Bittner

    Thomas Thüm

Differencing and Merging

  • M: Semantic Lifting of Abstract Syntax Tree Edits (Bittner, Thüm, Kehrer)

    Context

    Many tasks need to assess the changes made to development artefacts. A common way to detect changes is to use diff operators that take the old and new state of an artefact as input and yield an edit script (i.e., a list of edits that transform the old state into the new state). For example, git diff computes a list of source code lines that were inserted and a list of lines that were removed.
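
    For instance, Python's difflib computes exactly such an edit script from the old and the new state (here on lists of source lines):

```python
import difflib

old = ["int pop() {", "  return data[--top];", "}"]
new = ["int pop() {", "  assert(top > 0);", "  return data[--top];", "}"]

# Turn the matcher's opcodes into an edit script from old to new.
script = []
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
    if op == "insert":
        script.append(("insert", new[j1:j2]))
    elif op == "delete":
        script.append(("delete", old[i1:i2]))
    elif op == "replace":
        script.append(("replace", old[i1:i2], new[j1:j2]))

assert script == [("insert", ["  assert(top > 0);"])]
```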

    Problem

    When computing such an edit script, the edits are indeed a valid transformation but might not reflect the actual changes made by developers. In this sense, the edits might diverge from what developers actually did. This becomes more prominent when looking at diffs of abstract syntax trees (ASTs), where edit scripts consist of possibly very fine-grained tree operations.

    To this end, we classified semantic edits as edit operations that describe developers' intents more accurately when editing ASTs.

    Questions

    • Can we lift edit scripts to semantic-edit scripts?
    • Do semantic edits represent developers' actions more accurately?
    • Do existing tree diffing algorithms yield edits impossible for programmers to execute on the concrete syntax?
    • Are semantic edits sufficient to assess developers’ intents?
    • Does semantic lifting increase diffing accuracy?
    • Can we use state-based diff (based on set operations) to lift edit scripts?

    Goal

    Create an algorithm for semantic lifting of edit scripts obtained from off-the-shelf tree differs to semantic edits. Optionally, extend the notion of semantic edits for increased accuracy. Evaluate your results on existing diffs.

    Related Work

    Contact

    Paul Bittner

    Thomas Thüm

    Timo Kehrer