Research Trends in the Internet of Things

Learning Goals

Based on up-to-date examples, students learn and deepen their skills in working independently and responsibly with scientific literature, as well as in the written and oral presentation of scientific and technical content.

Students reflect on the presented content and practise expressing their opinions in discussions among peers.

Depending on their topic, students get to know a concrete system, a generic concept, or one or more technical implementations. At the end of the semester they are able to place their topic in a wider context and to judge its pros and cons autonomously.

Content

At the beginning, students are introduced to the basic principles of scientific work, including literature research, writing techniques, and presentation styles. This is meant to help and guide students from a methodological point of view. The actual work (writing the paper and preparing the presentation) happens in close, individual interaction with the respective advisor. The results of the research are then presented to the plenum and discussed in the group. Topic-wise, the seminar covers all aspects of the IoT, ranging from the operation of a data centre and specialised operating systems to big data analytics and middleware.

General Information

Institute: Information Resource Management

Prerequisites: None

Intended audience: Students of the Master Communication Technology

Credit points (ECTS): 3

Weekly hours (V/Ü/P/S): 0/0/0/1

Language: English or German

Course type: Seminar

Grading: Based on self-initiative, quality of the paper, quality of the talk, attendance, and activity in discussions

First meeting: 21.10.2016, 13:00

Number of participants: 4 - 12

Room: As announced in Moodle

Topics

Along with the evolution of the IoT, the growing number of highly distributed applications puts new challenges on specific application types, especially with respect to scalability and elasticity. Regarding databases, the NoSQL movement has already produced a large set of distributed databases (e.g. MongoDB, Apache Cassandra). They promise a highly scalable architecture and elasticity to cope with dynamic Cloud and IoT environments. The current offering of distributed databases comprises a wide range of relational, NoSQL and even NewSQL databases with different architectures and characteristics, which makes them hard to compare. This seminar topic should provide an overview of existing frameworks for evaluating distributed databases with respect to their performance, scalability and elasticity. A starting point could be the YCSB benchmarking framework [1] and its extended version YCSB++ [2].
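To give a feeling for how such benchmark frameworks operate, the following minimal sketch measures throughput and tail latency for a YCSB-style read/update mix. It assumes a hypothetical key-value client object with get and put methods; it is not the actual YCSB code.

    import random
    import string
    import time

    def run_workload(client, num_ops=10_000, read_ratio=0.95):
        """Run a YCSB-style read/update mix against a key-value store client
        (hypothetical interface: client.get(key) and client.put(key, value))."""
        keys = [f"user{i}" for i in range(1_000)]
        latencies = []
        start = time.perf_counter()
        for _ in range(num_ops):
            key = random.choice(keys)
            t0 = time.perf_counter()
            if random.random() < read_ratio:
                client.get(key)                                  # read operation
            else:
                value = "".join(random.choices(string.ascii_letters, k=100))
                client.put(key, value)                           # update operation
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        return {
            "throughput_ops_per_s": num_ops / elapsed,
            "p95_latency_ms": latencies[int(0.95 * num_ops)] * 1000,
        }

Scalability and elasticity experiments then repeat such a workload while the cluster size or the request rate is varied.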

The evolution of the IoT has led to a rethinking of application architectures towards highly distributed applications built from microservices. Hence, new challenges are put on specific application types, especially regarding scalability and elasticity in combination with lightweight runtime environments. Whereas in cloud computing virtual machines are a common approach, in the IoT microservice platforms have gained growing attention. Common microservice platforms are Docker and CoreOS. This seminar topic should provide a comparison of existing microservice platforms against VMs with respect to their performance. A starting point could be [1].
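As one possible angle for such a comparison, the following minimal sketch measures container start-up latency, a metric where containers and virtual machines typically differ strongly. It assumes a local Docker installation; the image name and the number of runs are arbitrary examples.

    import statistics
    import subprocess
    import time

    def container_startup_times(image="alpine", runs=10):
        """Measure the wall-clock time to start and remove a throw-away container."""
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            # 'docker run --rm <image> true' starts a container, runs a no-op and removes it
            subprocess.run(["docker", "run", "--rm", image, "true"], check=True,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            times.append(time.perf_counter() - t0)
        return statistics.mean(times), statistics.stdev(times)

    if __name__ == "__main__":
        mean, stdev = container_startup_times()
        print(f"container start-up: {mean * 1000:.0f} ms +/- {stdev * 1000:.0f} ms")

The equivalent measurement for a virtual machine (boot time of a small VM image) would complete the comparison.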

IoT systems generally consist of many autonomous devices that offer a diversity of computing capabilities. Containers are a technology for processing workflows in an isolated environment with shared resources. For this topic, you will research existing container technologies and compare them to each other. The analysis will be done with respect to how they could solve issues that occur in IoT systems, such as workload balancing between nearby devices by means of job migration. A starting point could be the following:


[1] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7501691

[2] http://link.springer.com/chapter/10.1007%2F978-3-319-33313-7_3

[3] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7471374

[4] https://aaltodoc.aalto.fi/handle/123456789/21599

In IoT applications such as Smart City or Ambient Assisted Living, the use of a multitude of sensors creates a large amount of data that has to be processed further. Following the principle of data locality, the most efficient way of processing is to keep it close to the data source. In this topic you will research IoT systems and how they deal with data locality. To this end, you will investigate load balancing and distribution mechanisms suitable for IoT systems and discuss these mechanisms as the outcome of your work.

 

A starting point could be [1].


[1] http://link.springer.com/chapter/10.1007/978-3-662-47895-0_49

Software Defined Networks (SDN) offer a flexible approach for controlling network flows dynamically. A global view of the network and its devices (e.g. sensors or virtual machines in a cloud) allows reacting to actual needs: flows between heavily loaded devices can be optimised, or flows can be blocked in case of intrusions. While SDN offers the control mechanisms, sFlow, for example, defines a standard for monitoring and analysing network devices.

The research question to be answered in this topic is: how can shortcomings and network issues in IP-based networks be detected in order to react to them using SDN functionality?

As a starting point, the following paper might be helpful: http://www.sciencedirect.com/science/article/pii/S1389128613004003

M.Sc. Christopher Hauser

From the perspective of a server which hosts applications (optionally encapsulated in virtual machines or containers), knowledge about the hardware resource affinity of the hosted workload can be used for optimisation. A (Linux-based) server in a data centre can improve the overall performance by knowing which hardware resources (e.g. disk bandwidth, memory frequency, ...) are the most important for an application. The research question of this topic is: how can this affinity be detected and adapted to at runtime without harming the execution of an application? Existing approaches and tools, such as those from Allinea, should be reviewed and compared.
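As a very rough illustration of how such an affinity might be detected at runtime, the following sketch classifies a running process from operating-system counters. It assumes the Python psutil library; the thresholds are made-up examples, not values from the literature.

    import psutil  # third-party library for process and system statistics

    def classify_affinity(pid, interval=1.0):
        """Roughly classify a process as CPU-, memory- or I/O-bound based on
        operating-system counters (Linux). Thresholds are arbitrary examples."""
        proc = psutil.Process(pid)
        io_before = proc.io_counters()
        cpu = proc.cpu_percent(interval=interval)   # samples CPU usage over 'interval' seconds
        io_after = proc.io_counters()
        mem = proc.memory_percent()
        io_rate = (io_after.read_bytes + io_after.write_bytes
                   - io_before.read_bytes - io_before.write_bytes) / interval
        if cpu > 80:
            return "cpu-bound"
        if io_rate > 50 * 1024 * 1024:              # more than ~50 MB/s of disk traffic
            return "io-bound"
        if mem > 50:
            return "memory-bound"
        return "mixed"

A real tool would additionally track cache misses, memory bandwidth and similar hardware counters, and would adapt the placement of the workload accordingly.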

M.Sc. Christopher Hauser

With an increasing number of network devices, the wish to also increase the knowledge about their resource needs grows. Monitoring the network traffic and analysing it is the first step towards this goal. Yet, monitoring and analysis are only as good as the model which describes the data to be extracted from these historical and online traces.

The research question of this topic targets traffic models and how to create and update them online. 

M.Sc. Christopher Hauser

Making decisions on one device is rather simple, but how can something be decided in a distributed setup? This topic should survey existing algorithms, identify common metrics for comparison, and finally compare the algorithms with each other. Such algorithms might be theoretical works or practically implemented solutions.

M.Sc. Christopher Hauser

Distributed file systems behave like local file systems (ext4, NTFS) towards an application, but store the data distributed across multiple servers. This allows larger data volumes and faster processing times, since several servers operate in parallel. In addition, replication increases fault tolerance.

This seminar topic should examine three distributed file systems in more detail and compare them with each other. Comparison criteria can be the architecture of the system, available performance measurements, or the offered functionality.

M.Sc. Christopher Hauser

The evolution of cloud computing and the IoT has led to the continuous evolution of data centres on the software and hardware level. On the hardware level, a common trend in recent years is the use of heterogeneous hardware to cope with the broad variety of compute-intensive application workloads. In this context, heterogeneous hardware can be GPUs, FPGAs or similar computing resources. In order to facilitate the provisioning and sharing of such resources across multiple tenants, a similar approach as in cloud computing (e.g. OpenStack, Amazon EC2) could be beneficial. This topic should provide an overview of existing resource frameworks for provisioning heterogeneous hardware in a service-based manner. A starting point could be [1], [2].

The evolution of cloud computing and the IoT has led to the continuous evolution of data centres on the software and hardware level. On the hardware level, a common trend in recent years is the use of heterogeneous hardware to cope with the broad variety of compute-intensive application workloads. In this context, heterogeneous hardware can be GPUs, FPGAs or similar computing resources. On the software level, database applications can benefit from such heterogeneous hardware by using it as an accelerator for compute-intensive tasks. Current research trends especially focus on accelerating distributed (NoSQL) databases with heterogeneous hardware. This seminar topic should provide an overview of existing approaches to accelerate NoSQL databases by using accelerators such as GPUs or FPGAs. A starting point could be [1].

Modern data centres typically provide the physical infrastructure for a wide range of applications and service models. One service model that is currently in the focus of research and industry is Big Data. Big Data applications process enormous amounts of data by splitting the processing into multiple distributed jobs, which in turn are distributed across the resources of the data centre. The goal of this seminar topic is to compare existing Big Data frameworks on the basis of their features and to work out their data-centre requirements on the hardware level. The Big Data frameworks Apache Spark [1], Apache Storm [2] and Apache Flink [3] are chosen as a starting point. An introduction to the topic of Big Data and the resulting challenges is given in [4].

The rise of cloud computing requires a change in the way business applications are deployed to infrastructure. The sheer number of possible configurations and the error-proneness of manual deployments necessitate deployment and management automation. A common way to achieve this kind of automation is the use of model-driven approaches, where the user starts the deployment with a model describing the application, which is then automatically deployed to the target (cloud) infrastructure and further managed at runtime.

The Topology and Orchestration Specification for Cloud Applications (TOSCA) [1] is an OASIS standard for describing cloud applications in a platform independent way. Around TOSCA an ecosystem has evolved offering tool support for managing applications described using TOSCA, e.g. OpenTOSCA [2], Cloudify [3] or Alien4Cloud [4].

The goal of this topic is to provide an overview of the concepts of the TOSCA modelling language and its tool support, analysing the offered features as well as the tools' adherence to the TOSCA standard and cross-tool aspects such as interoperability.

M.Sc. Daniel Baur

Cloud computing has emerged as the leading technology to provide on-demand computing services that can be represented as Software (SaaS), Infrastructure (IaaS) or Platforms (PaaS). To be able to continually improve this already wide-spread ecosystem, methods to evaluate new algorithms prior to real world usage have to be found. As the usage of real-world testbeds is not only costly with respect to money and time resources, but also limits experiments to the scale of the hardware, the usage of simulation environments becomes necessary. Using e.g. discrete event simulation, the time and resource usage for evaluation can be greatly reduced.

The task of this topic is to compare and evaluate existing simulation environments for cloud computing, such as CloudSim [1]. An overview of existing simulation software is given in [2]. The result should evaluate how well each tool supports the different layers of cloud computing (from the physical layer to the application layer) and discuss the tools' features along those layers.

M.Sc. Daniel Baur

While Cloud Computing, due to its nearly unlimited on-demand resources, allows unhindered adaptation of one's application to the end users' needs, this only holds true if the application is designed and programmed in a way that it can take advantage of those resources. One architectural style achieving the required loose coupling between application components is Representational State Transfer (REST) [1]. To increase performance and reduce the amount of data transferred for the ever increasing number of mobile devices, Facebook recently published GraphQL [2] as open-source software. The task of this topic is to give a brief introduction to the GraphQL fundamentals and to compare it to traditional REST, focusing on the important aspects of loose coupling, implementation complexity and performance. Additionally, it should be researched whether real-world implementations for well-established programming languages exist.
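To illustrate the difference this topic is about, the sketch below contrasts the two styles (the endpoints and fields are purely hypothetical; the Python requests library is assumed): REST needs one round trip per resource and returns full representations, while GraphQL lets the client ask for exactly the fields it needs in a single request.

    import requests  # third-party HTTP client

    # Hypothetical endpoints, used only for illustration.
    API = "https://api.example.com"

    # REST: several round trips, each returning a full resource representation.
    user = requests.get(f"{API}/users/42").json()
    posts = requests.get(f"{API}/users/42/posts").json()
    titles = [p["title"] for p in posts]

    # GraphQL: one round trip, the client states exactly which fields it needs.
    query = """
    {
      user(id: 42) {
        name
        posts { title }
      }
    }
    """
    resp = requests.post(f"{API}/graphql", json={"query": query}).json()
    titles = [p["title"] for p in resp["data"]["user"]["posts"]]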

M.Sc. Daniel Baur

A data centre operator needs to run the infrastructure as energy-efficiently as possible. Due to energy costs and consumption, it becomes a challenge for an operator to find a solution that monitors and provides fine-grained insights on the physical level, but also correlates them with the virtual level. Additionally, the operator can optimise the data centre by migrating virtual machines to another host in order to save energy and turn off a physical machine. This seminar topic should provide an overview of current solutions and focus on correlating the energy consumption of virtual machines with that of the physical machines.

Resource overprovisioning (overbooking) is a technique data centre operators use to increase utilisation in cloud environments. However, it can cause serious issues when an unpredictable application workload occurs and might result in deteriorated application performance. Several research works strive to eliminate these negative effects of overbooking. The research question of this topic is how, and with which mechanisms, a cloud operator can enable overbooking.

During the past few years, with the proliferation of the Internet of Things (IoT), billions of different devices and sensors have become interconnected, with a big part of them connected to the Internet. IoT devices range from high-end devices (e.g. Raspberry Pi, smartphones, tablets) running Linux to low-end devices (e.g. Arduino, TelosB motes) that cannot run such traditional OSs. This seminar topic should identify and describe the characteristics of such an IoT operating system and survey which OSs exist for such low-end devices.

With tens of billions of devices producing and sharing information within private or public networks, it becomes evident that centralised data gathering and analysis is no longer feasible. Hence, new mechanisms are required that introduce decentralised analysis rather than gathering everything at the same sink and analysing it either offline or online. Distributing the analysis and enabling devices to perform it as a pre-processing phase eliminates the broadcasting of useless data and might also help identify why the data was produced in the first place. This seminar topic should outline the restrictions and the enablers that exist so that such a system can be realised. The work should be well-founded, with references to existing research works or use cases that move in this direction.

Software Defined Networking (SDN) is a trending concept based on separating the data plane from the control plane. In SDN systems such as large-scale data centre networks, an essential part of network management is the continuous monitoring of different performance metrics; one example is link utilisation, used for faster adaptation of forwarding rules to dynamic workloads. The statistical results from monitoring have to be accurate and timely. Current flow-based network monitoring tools produce extreme overhead, since the statistics for the overall network are generated at the central controller. To achieve high accuracy and low overhead, several concepts have been developed: OpenSketch, PayLess, MicroTE and OpenSample.

The evaluation should consist of an analysis of the above options, including their pros and cons, and should draw a conclusion by determining the best solution for reduced overhead and high accuracy. The question to be tackled after investigating the above options is: 'How can accuracy be increased and network overhead decreased when the network statistics are aggregated by the SDN controller or a monitoring device?'

A good starting point can be this paper.

https://www.usenix.org/system/files/conference/nsdi13/nsdi13-final116.pdf
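To make the accuracy-versus-overhead trade-off tangible, here is a minimal sketch of adaptive polling in the spirit of PayLess (not its actual algorithm); query_flow_counters stands for a hypothetical controller API call that returns a flow's byte counter.

    import time

    MIN_INTERVAL, MAX_INTERVAL = 1.0, 30.0   # polling bounds in seconds

    def adaptive_monitor(query_flow_counters, flow_id):
        """Poll a flow's byte counter, stretching the interval while the flow is
        quiet and shrinking it when traffic changes quickly.
        query_flow_counters(flow_id) is a hypothetical controller API call."""
        interval = MIN_INTERVAL
        last = query_flow_counters(flow_id)
        while True:
            time.sleep(interval)
            current = query_flow_counters(flow_id)
            delta = current - last
            last = current
            if delta > 1_000_000:                 # busy flow: poll more often for accuracy
                interval = max(MIN_INTERVAL, interval / 2)
            else:                                 # quiet flow: back off to save overhead
                interval = min(MAX_INTERVAL, interval * 2)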

 

 

M.Sc. Mitalee Sarker

Energy efficiency plays one of the most important roles in almost every aspect of modern life. Cloud and High Performance Computing also try to be not only powerful but also efficient, and to optimise energy consumption especially when the load is low. That is why there is a trend towards using low-power devices in data centres. Is it possible? Is it economically reasonable? Which low-power technologies exist today, and could they be combined with traditional powerful solutions? All these questions should be answered in the investigation of this topic.

These papers could serve as a starting point:

[1] http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6676682

[2] http://linkinghub.elsevier.com/retrieve/pii/S1877750313000148

[3] http://www.mdpi.com/2079-9292/5/4/61

 

Schedulers in HPC can be considered a communication mechanism between users and clusters: to run an application on the cluster, the user has to submit a job; the scheduler then puts this job into a queue, where it waits until suitable resources are available. When that happens, the scheduler starts the job on a compute node and, once the job has finished, reports back to the user.

Different scheduling systems are available on the market. For example, the bwForCluster JUSTUS in Ulm [1] uses Moab as scheduler and Torque as resource manager, while on the bwUniCluster the role of the resource manager is played by Slurm [2].

The main idea of this topic is to gain knowledge about scheduling systems in HPC, to work out the difference between a scheduler and a resource manager, and to analyse which scheduling systems are available on the market, which are actually used, and where.

As an extra point, it should be researched whether it is possible to use common HPC scheduling systems on low-power computers such as the Raspberry Pi, and what kind of adaptations would be necessary for that.
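To illustrate the user's side of this interaction, the sketch below submits a job to Slurm. The job script is a minimal, hypothetical example; the resource values and the application name are placeholders.

    import getpass
    import subprocess
    import tempfile

    # A minimal Slurm batch script: resource requests as #SBATCH directives,
    # followed by the command to run on the allocated compute node(s).
    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name=example",
        "#SBATCH --ntasks=4",
        "#SBATCH --time=00:10:00",
        "srun ./my_application",
    ]) + "\n"

    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    # 'sbatch' hands the job to the scheduler, which queues it until resources are free;
    # 'squeue' lists the user's queued and running jobs.
    subprocess.run(["sbatch", script_path], check=True)
    subprocess.run(["squeue", "-u", getpass.getuser()], check=True)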

Useful links:

[1] https://www.bwhpc-c5.de/wiki/index.php/Category:BwForCluster_Chemistry

[2] https://www.bwhpc-c5.de/wiki/index.php/Category:BwUniCluster

[3] http://ieeexplore.ieee.org/document/6184961/?arnumber=6184961

[4] www.sciencedirect.com/science/article/pii/S016781911300121X

Monitoring of High Performance Computing (HPC) systems plays an important role in cluster maintenance. Knowing the state of the cluster allows administrators to keep the whole system under control. This control brings optimal usage of resources and reduced queueing times for jobs, and as a result keeps HPC providers and HPC users satisfied.

In the HPC area there are traditional monitoring solutions such as Ganglia [1, 2, 3] and Nagios, but also many alternative solutions such as Zabbix [4].

The main idea of this topic is to analyse the market of monitoring solutions for HPC, to understand the main trends and challenges [5], and to compare the most popular products. It should also be analysed which monitoring systems are used in the top 10 clusters of the TOP500 list (June 2016) [6] and in the top 10 German clusters. As a conclusion, based on this knowledge, the student should briefly investigate which monitoring system would best suit the bwForCluster JUSTUS in Ulm and why [7].

Useful links and papers:

[1] http://www.sciencedirect.com/science/article/pii/S0167819104000535

[2] http://link.springer.com/chapter/10.1007/978-3-540-24680-0_12

[3] http://ieeexplore.ieee.org/document/1253327/?arnumber=1253327

[4] http://www.zabbix.com/

[5] http://dl.acm.org/citation.cfm?id=2063378

[6] https://www.top500.org/list/2016/06/

[7] https://www.bwhpc-c5.de/wiki/index.php/Category:BwForCluster_Chemistry

Typical data centers consist of a mixture of different servers. They may have different processor architectures or disks and come with or without a GPU. The execution times of applications and individual threads depend on the specific hardware on which they are executed. Therefore, the scheduler has the potential to improve an application's performance by choosing the "right" hardware for each thread.

The research question to be answered in this topic is: How can the heterogeneity in data centers be exploited by the scheduler?

One possible starting point could be [1].

[1] www.sciencedirect.com/science/article/pii/S002002551400228X

In heterogeneous computing systems, it might sometimes benefit an application's performance if a task that is already executing on one processor is moved to a different processor. This could be done, e.g., to improve data locality or to exploit the communication patterns of tasks. However, this migration comes with an associated cost that has to be incorporated into migration decisions.

The research question to be answered in this topic is: What metrics need to be considered to estimate the cost of task migration and which models exist to describe this cost?

One possible starting point could be [1].

[1] dl.acm.org/citation.cfm
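As an illustration of what such a cost estimate could look like, here is a deliberately simple sketch. The chosen metrics and the linear model are illustrative assumptions, not a model taken from the literature.

    def migration_cost(state_bytes, link_bandwidth_bytes_s, restart_overhead_s,
                       cache_warmup_s=0.0):
        """Estimate the time penalty (seconds) of moving a task to another processor:
        transfer time of the task state plus fixed restart and cache/NUMA warm-up
        overheads on the destination."""
        transfer_time = state_bytes / link_bandwidth_bytes_s
        return transfer_time + restart_overhead_s + cache_warmup_s

    def migrate_if_worthwhile(expected_speedup_s, **cost_params):
        """Migrate only if the expected runtime saving exceeds the estimated cost."""
        return expected_speedup_s > migration_cost(**cost_params)

    # Example: 256 MB of state over a 10 Gbit/s link, 50 ms restart, 20 ms warm-up
    print(migration_cost(256e6, 10e9 / 8, 0.05, 0.02))   # roughly 0.27 s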

For both data centers and IoT applications, distributed operating systems promise high performance combined with high convenience for the user. Each node in the system runs an instance of the OS. These instances appear as a single operating system to the user, while the effort can be distributed among the nodes.

This topic should give an overview of different distributed operating systems and their architectures and compare them with each other.

Possible starting points could be [1] [2] [3].

This topic is intended to be done by a bachelor student.

[1] www.sigops.org/sosp/sosp09/papers/baumann-sosp09.pdf
[2] www.usenix.org/legacy/event/osdi08/tech/full_papers/boyd-wickizer/boyd_wickizer.pdf
[3] groups.csail.mit.edu/carbon/

The best way to understand, analyse and evaluate an experiment is to visualise its results, especially when these results consist of many gigabytes of data, which today is absolutely normal for typical HPC applications. That is why visualisation applications are considered a "must-have" on most HPC systems.

The main idea of this topic is to analyse which visualisation applications are available and used in HPC [1, 2, 3], which of them are free and which are commercial, and whether they are specialised for particular scientific fields or can suit every type of experiment. The student should also put special emphasis on a detailed analysis of the functions, features and capabilities of COVISE, the COllaborative VIsualization and Simulation Environment, an application that integrates simulation, post-processing and visualisation functionalities in a seamless manner [4].

References:

[1] http://ieeexplore.ieee.org/document/4426922/?arnumber=4426922

[2] http://ieeexplore.ieee.org/document/1185578/?arnumber=1185578&tag=1

[3] http://dl.acm.org/citation.cfm?id=1836065

[4] http://www.hlrs.de/de/covise/

The Transmission Control Protocol (TCP) is a connection-oriented and highly reliable protocol between hosts in a computer communication network. It operates at the 4th layer, the transport layer, of the Open Systems Interconnection (OSI) model. TCP uses the Internet Protocol (IP) for datagram delivery.


As the Internet is a packet-oriented network operating with a best-effort policy and without end-to-end quality-of-service mechanisms, traffic is subject to congestion during transmission. In order to cope with this situation, TCP implements congestion control mechanisms in the end-systems and reacts to implicit congestion signals such as packet loss or an increase in the inter-arrival times of packets of the same flow. Additionally, there are mechanisms such as ECN (see RFC 3168) where congestion information is signalled by switches and routers.


There is a wide range of TCP variants that differ mostly in terms of congestion control and management. They can be classified into packet-loss-driven, delay-/inter-arrival-time-driven and hybrid approaches. An example of a modern approach is TCP CUBIC, designed for fast, long-distance network infrastructures. It uses a cubic function to adjust the congestion window at the sender side in order to enhance stability and scalability, and it is currently the default mechanism on Linux systems.
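For concreteness, the cubic window-growth function used by TCP CUBIC can be sketched as follows (constants C and beta as suggested in RFC 8312). This only illustrates the shape of the growth curve, not a complete implementation.

    def cubic_window(t, w_max, c=0.4, beta=0.7):
        """Congestion window of TCP CUBIC as a function of the time t (seconds)
        elapsed since the last loss event. w_max is the window size (in segments)
        just before that loss; constants follow RFC 8312."""
        k = ((w_max * (1 - beta)) / c) ** (1 / 3)   # time needed to grow back to w_max
        return c * (t - k) ** 3 + w_max

    # Example: after a loss at w_max = 100 segments, the window first grows concavely
    # towards w_max and then convexly beyond it, probing for new bandwidth.
    for t in range(10):
        print(t, round(cubic_window(t, 100), 1))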

Additionally there are mechanisms replacing TCP and offering alternative solutions for reliable transfer. Examples are:

  • SCTP: The Stream Control Transmission Protocol (SCTP) offers reliable transport services and runs on top of a connectionless packet network. It supports the sequential delivery of data in multiple streams. It is more secure and redundant than TCP; for example, a single SCTP endpoint can be bound to several IP addresses (multi-homing).

  • QUIC: QUIC (Quick UDP Internet Connections) was developed by Google. It is a modern transport protocol which overcomes some limitations of TCP. As it is built on top of UDP, it has a lower connection establishment time. Other features of QUIC are improved congestion control, forward error correction, connection migration, etc.


Based on a literature and standards review of different TCP variants such as TCP Vegas, High-Speed TCP, Scalable TCP and FAST TCP, and a summary of their key differences, the seminar report should discuss why the interoperation of different TCP variants such as TCP Vegas and TCP Reno leads to unfairness, and how TCP is positioned against alternative solutions such as QUIC and SCTP for high-speed networks of 100 Gbps and beyond.


Some helpful references are given below. They can be a good starting point.

(1) https://tools.ietf.org/html/draft-rhee-tcp-cubic-00

(2) https://tools.ietf.org/html/rfc793

(3) https://tools.ietf.org/html/rfc4960

(4) https://www.chromium.org/quic



M.Sc. Mitalee Sarker

Recently, GPGPUs have been employed to process large amounts of data. The hardware, the GPU, can already be virtualised and delivered as a service in cloud infrastructures (see the Amazon EC2 instance type G2). Such hardware fits machine learning systems very well.

For this topic you will analyse machine learning libraries (among others [1] and [2]) and how they can be used elastically in a cloud infrastructure. Based on multiple Big Data use cases that you choose on your own, you will sketch whether and how they could benefit from the integration of cloud resources.


[1] https://www.tensorflow.org/

[2] https://developer.nvidia.com/deep-learning

[3] http://ieeexplore.ieee.org/document/6428800/?arnumber=6428800

[4] https://deeplearning4j.org/gpu

In this topic, you will examine why and how SDN can be used to enable and support Big Data applications. You will research the requirements of Big Data use cases and the abilities and possibilities of SDN. You will select two Big Data use cases that do not make use of SDN and sketch how SDN could be used for them and what this would look like. The results will allow you to discuss SDN in the area of Big Data applications.

An overview of the pros and cons of SDN for Big Data can be found in [1]. In [2], SDN is used to support Big Data applications based on Hadoop.


[1] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389832

 

[2] http://dl.acm.org/citation.cfm?id=2959059


Databases are commonly known and appreciated for the ACID properties of their transactions. With respect to consistency, these properties are usually referred to as transactional consistency. In reality, however, multiple kinds of transactional consistency exist that allow different interleavings of transactions and require different ways of implementing applications on top of the database.

 

The outcome of this topic shall be a description of the transaction concept, an overview of available consistency concepts, and an in-depth comparison of at least two of them, one being Snapshot Isolation.
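As an illustration of why the chosen consistency concept matters for application code, the classic write-skew anomaly, which Snapshot Isolation permits but serialisability forbids, can be sketched as follows. The on-call example and the in-memory "database" are purely illustrative.

    # Two doctors are on call; the invariant is that at least one must remain on call.
    db = {"alice_on_call": True, "bob_on_call": True}

    def snapshot(database):
        """Each transaction works on its own consistent snapshot of the data."""
        return dict(database)

    # Both transactions start concurrently and see the same snapshot.
    t1_view, t2_view = snapshot(db), snapshot(db)

    # T1: Alice goes off call because, in her snapshot, Bob is still on call.
    if t1_view["bob_on_call"]:
        db["alice_on_call"] = False

    # T2: Bob goes off call because, in his snapshot, Alice is still on call.
    if t2_view["alice_on_call"]:
        db["bob_on_call"] = False

    # Under Snapshot Isolation both transactions commit (they wrote disjoint rows),
    # yet the invariant is now violated: nobody is on call any more.
    print(db)   # {'alice_on_call': False, 'bob_on_call': False}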

Infrastructure as Code (IaC) is the practice of specifying computing system configurations through code and managing them with traditional software engineering methods. These methods utilise configuration management tools like Puppet, Chef, CFEngine and Ansible. In this seminar topic you will compare the different approaches of these tools.

This topic is intended to be done by a bachelor student.

Some starting points could be:

[1] Does Your Configuration Code Smell?: http://dl.acm.org/citation.cfm?id=2901761&CFID=855003261&CFTOKEN=75244774
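To illustrate the core idea shared by these tools, the following minimal sketch (generic Python, not the syntax of any particular tool) applies a declared desired state idempotently, i.e. running it twice has the same effect as running it once.

    # Desired state, declared as data rather than as a sequence of commands.
    desired_packages = {"nginx", "openssh-server"}

    def installed_packages():
        """Placeholder: a real tool would query the system's package manager here."""
        return {"openssh-server"}

    def apply(desired):
        """Idempotent reconciliation: only install what is missing. Configuration
        management tools apply the same principle to packages, files, services and users."""
        missing = desired - installed_packages()
        for package in sorted(missing):
            print(f"installing {package}")   # a real tool would call the package manager
        if not missing:
            print("system already in desired state, nothing to do")

    apply(desired_packages)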

Serverless computing, or Function as a Service (FaaS), is the idea of not managing servers or containers, but instead focusing on raw functions which are executed on demand. In this seminar topic you will give an introduction to serverless computing and compare it with more classic approaches in terms of scalability, cost efficiency and performance.

This topic is intended to be done by a bachelor student.

Some starting points could be:

[1] https://www.usenix.org/system/files/conference/hotcloud16/hotcloud16_hendrickson.pdf

[2] https://acloud.guru/course/serverlessconf-nyc-2016/learn/f38ba073-7e1f-ab03-9e46-a93e42de4cca/41a35d14-4eeb-23fc-ee88-40d66d829def/watch
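To make the notion of "raw functions executed on demand" concrete, a FaaS deployment unit is essentially just a handler function like the one sketched below, written in the style of AWS Lambda's Python handler convention; the event fields are made-up examples.

    import json

    def lambda_handler(event, context):
        """Entry point invoked by the FaaS platform for each request/event.
        There is no server process to manage: the platform starts an instance
        on demand, runs this function and may freeze or discard it afterwards."""
        name = event.get("name", "world")          # example input field
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }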

An important aspect when storing data at large scale (e.g. at Facebook, Google, or YouTube) is to ensure its durability. Hence, the failure of individual storage devices (disks and SSDs) should not lead to the loss of data. Traditionally, this was ensured by replicating each block of data several times (e.g. three times). The drawback of this approach is that tolerating two failures triples the needed amount of storage, which is both hard to manage and unattractive from an economic point of view.

 

The use of erasure coding has proven to be a good solution for reducing the redundancy in the system while still ensuring availability. In recent years, several algorithms have been proposed that offer different properties and trade-offs (degree of compaction vs. time to recover from a failed disk).

 

The outcome of this seminar topic shall be an overview of the trade-off dimensions of Reed-Solomon-based erasure codes developed in recent years, as well as the identification of preferred usage scenarios. In addition, up to three of these algorithms shall be compared in detail.
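The storage argument can be made concrete with a little arithmetic: a Reed-Solomon code with k data blocks and m parity blocks tolerates the loss of any m blocks at a storage overhead of (k + m) / k. A minimal sketch (the parameter choices are common examples, not tied to a specific system):

    def storage_overhead(k, m):
        """Raw storage needed per byte of user data for a Reed-Solomon code
        with k data blocks and m parity blocks (tolerates any m lost blocks)."""
        return (k + m) / k

    # 3-way replication: tolerates 2 lost copies at 3x storage.
    print("3x replication:", 3.0)
    # RS(10, 4): tolerates 4 lost blocks at only 1.4x storage.
    print("RS(10, 4):     ", storage_overhead(10, 4))
    # RS(6, 3): tolerates 3 lost blocks at 1.5x storage.
    print("RS(6, 3):      ", storage_overhead(6, 3))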

Since Fischer, Lynch and Paterson's seminal work on agreement in a distributed system was presented in the 1980s, the agreement/consensus problem has been heavily revisited by theoretical computer scientists, and the insights only slowly find their way into common knowledge.

 

The goal of this seminar topic is, on the one hand, to revisit why consensus may not be possible under certain circumstances and, on the other hand, to show how consensus may be solved by stacking primitive registers.