Projects - Visual Computing Group
The purpose of this project is to develop and analyze new mixed reality techniques for teaching in higher education. In collaboration with psychologists and engineers, we work on the applicability of different mixed reality scenarios, learning analytics, and mixed reality authoring.
Cooperation Partners: Prof. Dr. Tina Seufert (Ulm University), Prof. Dr. Enrico Rukzio (Ulm University), Prof. Dr. Steffi Zander (Bauhaus-Universität Weimar)
Funding Source: Federal Ministry of Education and Research (BMBF).
Project Duration: 2018-2022.
Collaborative Virtual Reality
In this project, a virtual collaboration lab for the analysis of scientific data is planned, designed, implemented, and rolled out. The focus is on biomedical analysis, but also on numerical simulation and design verification. In phase one of the project, we will work on the conception and realization at the Universities of Ulm and Stuttgart. For phase two, a roll-out to the Universities of Konstanz and Heidelberg and the Karlsruhe Institute of Technology is planned. Besides generalizability, which is targeted by addressing orthogonal application scenarios, the main goals are usability, sustainability, and scalability.
Cooperation Partners: Dr. Guido Reina (University of Stuttgart), Prof. Dr. Daniel Weiskopf (University of Stuttgart), Prof. Dr. Stefan Wesner (Ulm University)
Funding Source: Ministry of Science, Research and the Arts of Baden-Württemberg (MWK).
Project Duration: 2018-2019.
Visual Analysis of Protein-Ligand Interactions
The visual analysis of protein structures has been researched in several projects over the past few years. While molecular structures are relevant, understanding protein interactions requires focusing on both interaction partners and also taking their physico-chemical properties into account. Thus, within this research project, we plan to enable the visual analysis of protein-ligand interactions as captured in state-of-the-art simulations by focusing on these properties. The main goal is to make these time-dependent data sets more accessible to protein designers and to help them develop adaptations that enable a more efficient interaction. In particular, we will develop novel visualization techniques that convey the relevant properties by means of abstract representations as well as structural embeddings. We will extend these techniques toward a visual comparison of different interactions, which will eventually enable us to develop domain-centered immersive visual analytics approaches. We will evaluate our methods together with domain experts in order to ensure their effectiveness for the visual analysis of protein-ligand interactions.
Cooperation Partners: Dr. Michael Krone (University of Stuttgart), Assoc.-Prof. Barbora Kozlikova (Masaryk University, Czech Republic)
Funding Source: DFG. Project Duration: 2018-2021.
Interoperable Extension of the Inviwo Visualization Software
Within this project, we plan to enable the interoperable extension of the Inviwo visualization software. The main pillar of our strategy is to lower the barrier for users and developers to create and distribute their own Inviwo extensions. Therefore, we will ease the development of custom functionalities and, together with the Communication and Information Centre, establish a system for distributing these functionalities to others. Thus, in contrast to modern source code version management systems, non-programmers will also be able to distribute their creations and thus directly extend the software. To fulfill the quality assurance requirements that emerge from such an approach, we will further realize and establish automated testing procedures.
Cooperation Partners: Institute of Software Engineering and Programming Languages (Ulm University)
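A common form of automated testing for visualization software is image-based regression testing: a freshly rendered image is compared pixel by pixel against a stored reference. The following is a minimal, hypothetical sketch of such a check; the function name, the flat gray-level lists standing in for real rendered images, and the tolerance values are illustrative assumptions, not Inviwo's actual test API:

```python
def images_match(rendered, reference, tolerance=2, max_bad_fraction=0.001):
    """Return True if at most `max_bad_fraction` of the pixels deviate by
    more than `tolerance` gray levels from the reference image."""
    assert len(rendered) == len(reference), "images must have equal size"
    # Count pixels whose deviation exceeds the per-pixel tolerance.
    bad = sum(1 for r, g in zip(rendered, reference) if abs(r - g) > tolerance)
    return bad / len(rendered) <= max_bad_fraction

# Identical images always match.
reference = [128] * 10000
assert images_match(reference, reference)

# A single outlier pixel stays within the allowed bad-pixel budget.
noisy = reference[:]
noisy[0] = 255
assert images_match(noisy, reference)
```

Allowing a small bad-pixel budget rather than demanding exact equality makes such tests robust against harmless differences between GPU drivers and platforms, while still catching genuine rendering regressions.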
Funding Source: DFG. Project Duration: 2018-2021.
Reconstruction and Visualization of Virtual Reality Content
Together with Immersight, we work on the reconstruction and visualization of indoor virtual reality scenes. The goal of the project is to use consumer camera systems to capture an indoor environment, which is subsequently reconstructed to obtain a 3D model. To modify the model, intuitive interaction paradigms are developed so that the user is able to sketch design alternatives based on the data acquired from the real-world environment. By developing visualization techniques tailored for virtual reality, these design alternatives will become explorable in an immersive fashion.
Cooperation Partners: Immersight GmbH
Funding Source: BMWi. Project Duration: 2017-2019.
Neural Network Visualization in Biology
Together with the Institute of Protein Biochemistry, we work towards visual interfaces that enable life science domain experts to use deep neural networks for image classification in biology. While the latest progress in deep learning makes this family of techniques valuable for many imaging-based research disciplines, domain experts often lack the required in-depth knowledge to use these techniques effectively. Within this project, we focus on the reconstruction of microfibrils; the goal is to develop a visual interface that enables biologists to exploit deep neural networks for this task.
Cooperation Partners: Institute of Protein Biochemistry (Ulm University)
Funding Source: Carl Zeiss Stiftung. Project Duration: 2018-2019.
Interactive Visualization of the Lion Man in a Digital Exhibition
The Lion Man is one of the oldest man-made works of art. The sculpture was carved about 40,000 years ago from mammoth ivory and was found just before the outbreak of the Second World War in the Lonetal, at the foot of the Swabian Alb. Within this project, we visualize digital scans of the Lion Man in close cooperation with the Ulmer Museum and in connection with the successful "Ice Age Art" application for the UNESCO World Cultural Heritage site. The aim of the project is the development of an interactive media station which primarily supports an interactive exploration of the Lion Man, but also of the Lonetal and the Lonetal caves. Through the media station, the Lion Man and its place of discovery are interactively explorable via a touch screen. In this context, "comprehensible" 3D visualization is a central aspect, which allows museum visitors for the first time to interact with the Lion Man in order to view it from all sides and at any magnification level. In addition, virtual cuts can be made through the sculpture to explore its inner structures. The direct touch interaction creates an unprecedented proximity to the sculpture, which today can only be viewed in a glass display case. The 3D visualizations are additionally enriched with textual and visual information to illuminate the origin and discovery history of the Lion Man.
Cooperation Partners: Ulmer Museum
Funding Source: Schwenk, Ravensburger Stiftung. Project Duration: 2015-2018.
Multi-Touch Powerwall Visualization
With funding from the Carl Zeiss Stiftung, we are currently setting up our interactive visualization lab, which contains a high-resolution, multi-touch powerwall as its centerpiece. The powerwall, built around a Stewart StarGlas 60 screen, has a screen size of 179 inches, with a width of 3.88 m and a height of 2.58 m. Content is back-projected with 24 full HD projectors, which generate a stereoscopic projection by means of polarization filters. To enable multi-user touch input, the powerwall is equipped with a DEFI X Series Multi-Touch Module, which enables us to track 10 individual touch events. Interactive 2D and 3D content is created by our high-end rendering cluster equipped with the latest NVIDIA graphics hardware. The lab enables demos and multi-user collaboration, as well as public outreach.
Funding Source: Carl Zeiss Stiftung. Project Duration: 2014-2017.
Advanced Augmented Reality Based Visualization Techniques for Surgery
Within this project, an approach and system for image-guided laparoscopic liver surgery is developed. In close cooperation between surgeons, engineers, and computer scientists, all image-guidance aspects related to laparoscopy are in focus. Thus, the project covers research in navigation technology, medical image processing, interactive visualization, as well as augmented reality. In our subproject, we specifically aim at multimodal medical visualization and multimodal data fusion. By incorporating pre- and intraoperative data, we are able to generate a fused data set, which is visualized interactively to support surgeons during liver interventions. The visualization is enhanced such that occlusion-free views of inner lesions become possible without compromising depth perception. All work is conducted in the context of an existing navigation system, and the final goal is to conduct clinical trials of a novel navigation system.
Cooperation Partners: Cascination AG, ARTORG, University of Bern, Karolinska University Hospital, Sectra Medical Systems
Funding Source: VINNOVA. Project Duration: 2014-2016.
Collaborative Visual Exploration and Presentation
Visualization plays a crucial role in many areas of e-Science. In this project, we aim at integrating visualization early in the discovery process in order to reduce data movement and computation times. Within many research projects, visualization is currently established as the last step of a long pipeline of compute- and data-intensive processing stages. While the importance of this use of visualization is well known, employing visualization only as a final step is not enough when dealing with e-Science applications. We address the challenges arising from the early collaborations enabled by such in-situ visualization, and we investigate what role visualization plays in strengthening these collaborations by enabling a more direct interaction between researchers.
Funding Source: Vinnova, SeRC e-Science Center. Project Duration: 2014-2016.