Projects - Visual Computing Group

Reconstruction and Visualization of Virtual Reality Content

Together with Immersight we work on the reconstruction and visualization of indoor virtual reality scenes. The goal of the project is to use consumer camera systems to capture an indoor environment, which is subsequently reconstructed to obtain a 3D model. To modify the model, intuitive interaction paradigms are developed, such that the user is able to sketch design alternatives based on the data acquired from the real-world environment. By developing visualization techniques tailored to virtual reality, these design alternatives become explorable in an immersive fashion.
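
The capture-and-reconstruct step can be illustrated with a minimal sketch, assuming an RGB-D consumer camera and Open3D's TSDF integration pipeline; the frame list, camera intrinsics, and file names below are placeholders for illustration, not details of the actual project setup.

    import numpy as np
    import open3d as o3d

    # Placeholder: fill with (color_path, depth_path, 4x4 camera-to-world pose) tuples from the capture.
    frames = []

    # Assumed consumer RGB-D intrinsics (PrimeSense-style defaults).
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

    # Truncated signed distance volume into which all frames are fused.
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01, sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    for color_path, depth_path, pose in frames:
        color = o3d.io.read_image(color_path)
        depth = o3d.io.read_image(depth_path)
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=4.0, convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))  # fuse the frame into the volume

    mesh = volume.extract_triangle_mesh()  # colored 3D model of the captured room
    mesh.compute_vertex_normals()
    o3d.io.write_triangle_mesh("room.ply", mesh)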

Cooperation Partners: Immersight GmbH
Funding Source: BMWi. Project Duration: 2017-2019.

Neural Network Visualization in Biology

Together with the Institute of Protein Biochemistry we work towards visual interfaces that enable life science domain experts to use deep neural networks for image classification in biology. While the latest progress in deep learning makes this family of techniques valuable for many imaging-based research disciplines, domain experts often lack the in-depth knowledge required to use these techniques effectively. Within this project we focus on the reconstruction of microfibrils, where the goal is to develop a visual interface that enables biologists to exploit deep neural networks for this task.
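
As a rough illustration of the classification side such an interface builds on, a minimal convolutional classifier is sketched below; the patch size, channel count, and class labels are assumptions for illustration, not the architecture or data used in the project.

    import torch
    import torch.nn as nn

    # Minimal sketch: a small CNN for 1-channel 64x64 microscopy patches with two assumed
    # classes (e.g. "fibril" vs. "background"); not the network actually used in the project.
    class PatchClassifier(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = PatchClassifier()
    logits = model(torch.randn(8, 1, 64, 64))  # a batch of eight dummy patches
    probs = torch.softmax(logits, dim=1)       # per-class scores a visual interface could expose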

Cooperation Partners: Institute of Protein Biochemistry (Ulm University)
Funding Source: Carl Zeiss Stiftung. Project Duration: 2018-2019.

Interactive Visualization of the Lion Man in a Digital Exhibition

The Lion Man is one of the oldest man-made works of art. The sculpture was carved about 40,000 years ago from mammoth ivory and was found just before the outbreak of the Second World War in the Lonetal, at the foot of the Swabian Alb. Within this project we visualize digital scans of the Lion Man in close cooperation with the Ulmer Museum and in connection with the successful "Ice Age Art" application for UNESCO World Cultural Heritage status. The aim of the project is the development of an interactive media station that primarily supports an interactive exploration of the Lion Man, but also of the Lonetal and its caves. Through the media station, the Lion Man and its place of discovery can be explored interactively via a touch screen. In this context, comprehensible 3D visualization is a central aspect, which for the first time allows museum visitors to interact with the Lion Man, viewing it from all sides and at any magnification level. In addition, virtual cuts can be made through the sculpture to explore its inner structures. The direct touch interaction creates an unprecedented proximity to the sculpture, which today can only be viewed in a glass display case. The 3D visualizations are additionally enriched with textual and visual information that illuminates the origin and discovery history of the Lion Man.
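
The virtual cut essentially amounts to slicing the scanned surface with an interactively positioned plane. A minimal sketch of this idea, using trimesh and a stand-in mesh rather than the actual museum scan, could look as follows.

    import numpy as np
    import trimesh

    # Stand-in geometry; the scan of the sculpture itself is of course not bundled here.
    mesh = trimesh.creation.icosphere(subdivisions=3)

    # Cutting plane, e.g. positioned and oriented via a touch gesture on the media station.
    plane_origin = np.array([0.0, 0.0, 0.0])
    plane_normal = np.array([0.0, 0.0, 1.0])

    # Keep the half of the mesh on the positive side of the plane and cap the cut,
    # so the structure along the cut surface becomes visible.
    cut = mesh.slice_plane(plane_origin, plane_normal, cap=True)
    cut.export("cut.ply")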

Cooperation Partners: Ulmer Museum
Funding Source: Schwenk, Ravensburger Stiftung. Project Duration: 2015-2018.

Multi-Touch Powerwall Visualization

With funding from the Carl Zeiss Stiftung, we are currently setting up our interactive visualization lab, which features a high-resolution, multi-touch powerwall as its centerpiece. The powerwall, built around a Stewart StarGlas 60 screen, has a screen size of 179 inches, with a width of 3.88 m and a height of 2.58 m. Content is back-projected with 24 full HD projectors, which generate a stereoscopic projection by means of polarization filters. To enable multi-user touch input, the powerwall is equipped with a DEFI X Series Multi-Touch Module, which allows us to track 10 individual touch points. Interactive 2D and 3D content is generated by our high-end rendering cluster equipped with the latest NVIDIA graphics hardware. The lab supports demos, multi-user collaboration, and public outreach.

Funding Source: Carl Zeiss Stiftung. Project Duration: 2014-2017.

CAMILIS - Computer-Aided Minimally Invasive Liver Surgery

Within the CAMILIS project, an approach and a system for image-guided laparoscopic liver surgery are developed. In close cooperation between surgeons, engineers, and computer scientists, all aspects of image guidance in laparoscopy are addressed. Thus, the project covers research in navigation technology, medical image processing, interactive visualization, and augmented reality. In our subproject, we specifically aim at multimodal medical visualization and multimodal data fusion. By incorporating pre- and intraoperative data, we generate a fused data set, which is visualized interactively to support surgeons during liver interventions. The visualization is enhanced such that occlusion-free views of inner lesions become possible without compromising depth perception. All work is conducted in the context of an existing navigation system, and the final goal is to bring the novel navigation system to clinical trials.
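
One ingredient of such a fusion is bringing the pre- and intraoperative images into a common frame. The sketch below illustrates this with a rigid, mutual-information-based registration in SimpleITK; the file names, transform model, and parameters are illustrative assumptions, not the method used in CAMILIS.

    import SimpleITK as sitk

    # Placeholder file names; not data from the project.
    fixed = sitk.ReadImage("preop_ct.nii", sitk.sitkFloat32)      # preoperative volume
    moving = sitk.ReadImage("intraop_scan.nii", sitk.sitkFloat32) # intraoperative volume

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal similarity
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)

    # Resample the intraoperative image into the preoperative frame to obtain a fused data set.
    fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
    sitk.WriteImage(fused, "intraop_in_preop_frame.nii")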

Cooperation Partners: Cascination AG, ARTORG, University of Bern, Karolinska University Hospital, Sectra Medical Systems
Funding Source: EUREKA/EuroStars. Project Duration: 2014-2016.

Collaborative Visual Exploration and Presentation

Visualization plays a crucial role in many areas of e-Science. In this project, we aim at integrating visualization early in the discovery process in order to reduce data movement and computation times. In many research projects, visualization is currently established as the last step of a long pipeline of compute- and data-intensive processing stages. While the importance of this use of visualization is well known, employing visualization only as a final step is not sufficient when dealing with e-Science applications. We address the challenges arising from the early collaborations enabled by such in-situ visualization, and we investigate what role visualization plays in strengthening these collaborations by enabling more direct interaction between researchers.
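
The contrast between end-of-pipeline and in-situ visualization can be sketched in a few lines, assuming a toy simulation loop; the solver update and the rendering callback below are placeholders for real e-Science codes.

    import numpy as np

    def render_slice(field, step):
        # Stand-in for a real renderer: reduce the 3D field to a 2D summary image.
        return field.mean(axis=0)

    def simulate(steps=100, shape=(64, 64, 64), visualize_every=10, callback=render_slice):
        field = np.zeros(shape)
        previews = []
        for step in range(steps):
            field += 0.01 * np.random.randn(*shape)     # placeholder for the actual solver update
            if step % visualize_every == 0:
                previews.append(callback(field, step))  # in-situ: data stays in memory, no full dump
        return previews

    previews = simulate()
    print(f"generated {len(previews)} in-situ previews")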

Funding Source: Vinnova, SeRC e-Science Center. Project Duration: 2014-2016.