Research Focus Areas

In recent years, the hardware landscape has changed dramatically. With the introduction of multi-core processors, a shift in the overall programming mindset is taking place: developers must now exploit the parallelism of their programs instead of being able to rely on ever-higher processor performance. Moreover, not only is the number of cores in computing systems increasing, but the heterogeneity of resources is growing at the same time. This affects not only the cores themselves, which appear in different variants even on a single chip, but also other components of the computer architecture such as caches and memory systems or the network connections between computing systems.

Similar problems arise in modern cloud systems, where applications run on multiple parallel instances, potentially hosted by different cloud providers. Here, too, not only have the heterogeneity of the available resources and the need for data locality and synchronicity increased, but applications are also distributed dynamically across the available (virtualized) resources.

Together with partners from research and industry, the institute is currently investigating concepts and solutions for these questions within several national and European projects. Further information can be found on our project page.

Current Research Fields

The institute is currently building a test cluster comprising different hardware elements, ranging from energy-efficient CPUs via accelerators to many-core systems, in order to evaluate different approaches and methods for optimizing cloud image placement and programming models.

Research Fields

  • Automated Evaluations of Distributed DBMS on Elastic Infrastructures

    • Modern NoSQL and NewSQL database management systems (DBMS) promise to deliver the non-functional features of high performance, scalability, elasticity, and availability
    • Elastic infrastructures such as the cloud have become the preferred option for operating distributed DBMS, as the cloud provides scalability and elasticity at the resource level
    • Meaningful evaluation of the non-functional DBMS features requires the consideration of multiple DBMS- and cloud-specific properties, calling for supportive DBMS evaluation frameworks that automate the evaluation process and ensure reproducible evaluations
    • First research results have led to the Mowgli framework, which enables automated DBMS evaluations for the evaluation objectives performance, scalability, elasticity, and availability
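    The kind of metric such an evaluation framework derives can be illustrated with a minimal sketch (function name and throughput numbers are invented for this example; Mowgli's actual interface is not shown here): computing speedup and parallel efficiency from throughput measured at different cluster sizes.

```python
def scalability_metrics(throughput_by_nodes):
    """Derive speedup and parallel efficiency from throughput
    measurements taken at different cluster sizes.

    throughput_by_nodes: dict mapping node count -> measured ops/s.
    Returns a dict mapping node count -> (speedup, efficiency),
    relative to the smallest measured cluster size.
    """
    base_nodes = min(throughput_by_nodes)
    base_tp = throughput_by_nodes[base_nodes]
    metrics = {}
    for nodes, tp in sorted(throughput_by_nodes.items()):
        speedup = tp / base_tp
        # Efficiency 1.0 would mean perfectly linear scaling.
        efficiency = speedup / (nodes / base_nodes)
        metrics[nodes] = (speedup, efficiency)
    return metrics

# Made-up throughput series from three hypothetical benchmark runs.
print(scalability_metrics({1: 100.0, 2: 190.0, 4: 360.0}))
```

    An automated framework would repeat such measurements across cloud configurations and workloads to make the results reproducible.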
  • Knowledge Representation & Stream Reasoning in Smart Interconnected Environments

    • Meaningful abstractions for streaming data while evaluating different pattern extraction techniques
    • Temporal representation of events and their effects in streaming environments
    • Dealing with uncertainty and incompleteness in Stream Reasoning using probabilistic graphical models
    • Hypothetical extrapolation of knowledge, to model unobservables or alternatives for sensing, via non-monotonic/abductive reasoning
    • As a first result of this work, a logic-based abductive framework for indirect sensing is being developed, currently named ATOPO.
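    As a toy illustration of pattern extraction over event streams (not the ATOPO framework itself; the event names and window semantics below are invented for this sketch), consider detecting one event type followed by another within a bounded time window:

```python
from collections import deque

def detect_sequence(stream, first, second, window):
    """Toy stream-pattern detector: report each time an event of type
    `second` follows an event of type `first` within `window` time units.

    stream: iterable of (timestamp, event_type) pairs, ordered by time.
    Yields (t_first, t_second) pairs, each `first` event matched at most once.
    """
    pending = deque()  # timestamps of not-yet-matched `first` events
    for t, ev in stream:
        # Expire `first` events that have fallen out of the window.
        while pending and t - pending[0] > window:
            pending.popleft()
        if ev == first:
            pending.append(t)
        elif ev == second and pending:
            yield (pending.popleft(), t)

events = [(1, "door_open"), (2, "motion"), (9, "motion"),
          (10, "door_open"), (12, "motion")]
print(list(detect_sequence(events, "door_open", "motion", window=5)))
```

    Real stream-reasoning systems add temporal logics, uncertainty handling, and abduction on top of such low-level pattern matching.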
  • Autonomous Infrastructure Management

    • Collection of data and metrics for cluster management based on the co-developed TIMACS framework
  • Energy efficient Computing

    • Context-Aware Topology Optimisation and Virtual Machine Placement for Cloud Environments
    • Energy efficient Compute System architectures
      • Energy-efficient component integration, e.g. low-power CPUs
      • Integration with the facility environment (heat re-use)
  • Cloud Computing

    • Cloud executionware that deals with the platform-specific mapping of the application to the architectural model and the Application Programming Interfaces (APIs) of the Cloud provider's execution infrastructure, and with monitoring the running application and possibly reconfiguring it to optimise its behaviour, in particular within the EU project PaaSage (see paasage.eu)
    • Future Cloud Architectures
  • Heterogeneous Computing Systems

    • Programming models for heterogeneous systems in embedded and high-performance computing, incorporating notions of cost for communication, data usage and access, and algorithmic description
    • Operating systems for large-scale heterogeneous infrastructures that create minimal overhead for the system and thus exploit the resources best
    • Real-Time Systems & Scheduling with adaptive resource reservations and quality management for dynamic environments with unpredictable and fluctuating computational loads
  • Applied Machine Learning

    • The vast collection of monitoring data across various levels of our infrastructure has led us to look into several aspects of working with such data, mainly in the form of time series. The activities of the institute are concentrated on two subtopics in this area:
      • Time series analysis in the form of forecasting and anomaly detection as well as time series quality measurement.
      • Time series synthesis, which can be used to generate arbitrary amounts of data to assist in analyzing imbalanced classes, or to circumvent restrictions on sensitive data by generating statistically similar data that preserves its fidelity but is not bound by regulations such as the GDPR.
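    A minimal baseline for the anomaly-detection subtopic, assuming a plain rolling z-score over a fixed history window (the institute's actual methods are not described here; function name and data are illustrative):

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points whose z-score relative to the preceding `window`
    values exceeds `threshold`.

    series: list of numeric observations.
    Returns the indices of flagged points.
    """
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        # Skip constant windows (sigma == 0) to avoid division by zero.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

data = [10, 10, 11, 10, 10, 11, 10, 50, 10, 11]
print(zscore_anomalies(data, window=5, threshold=3.0))
```

    Forecasting-based detectors generalize this idea by replacing the rolling mean with a model prediction and flagging large residuals.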
  • Distributed Flow Graph Scheduling

    A convenient way to implement parallel computations is to describe them in the form of a flow graph, i.e. a graph that may contain branches and loops.
    During the execution of the application, this flow graph can be unrolled into a directed acyclic graph, depending on the execution flow of the application.
    Traditionally, these applications can only be executed on shared-memory systems.
    We are developing methods to execute flow-graph-based applications on distributed-memory systems. This includes the distributed unrolling into a directed acyclic graph and the scheduling of the generated tasks. This scheduling uses data locality as a metric to prevent excessive and unnecessary data transfers among nodes.
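    The locality-aware scheduling idea can be sketched as follows, assuming an already unrolled directed acyclic graph (a greedy heuristic with invented function and node names, not the method under development):

```python
from graphlib import TopologicalSorter

def schedule_with_locality(dag, data_location, nodes):
    """Greedy locality-aware placement of DAG tasks onto cluster nodes.

    dag: dict task -> set of predecessor tasks (graphlib convention).
    data_location: dict task -> node holding that task's input data.
    nodes: list of node names.
    Each source task is placed on the node holding its input data;
    every other task goes to the node where most of its predecessors
    ran, so their outputs need not be transferred.
    Returns dict task -> node.
    """
    placement = {}
    for task in TopologicalSorter(dag).static_order():
        preds = dag.get(task, set())
        if preds:
            # Prefer the node that already holds most predecessor outputs.
            counts = {n: sum(1 for p in preds if placement[p] == n)
                      for n in nodes}
            placement[task] = max(counts, key=counts.get)
        else:
            placement[task] = data_location.get(task, nodes[0])
    return placement

dag = {"load_a": set(), "load_b": set(),
       "join": {"load_a", "load_b"}, "report": {"join"}}
print(schedule_with_locality(dag, {"load_a": "n1", "load_b": "n2"},
                             ["n1", "n2"]))
```

    A distributed scheduler additionally has to unroll the graph incrementally at runtime and weigh locality against load balance, which this sketch ignores.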

  • Context-Awareness in the IoT

    • Employing Context-Awareness in the field of the IoT
    • In particular, researching how long-established formalisms like:
      • context types
      • context categorisation schemes
      • context acquisition
      • context modelling
      • context reasoning
      • context distribution
    • can be optimally combined and exploited in the IoT
    • As an example, this additional knowledge can in turn be used to deliver tailor-made, individually configured services to users interacting with the IoT infrastructure
Image: Victorgrigas, CC BY-SA 3.0