News of the Institute of Media Informatics

PhD exposition/Engel, D.: Deep Learning in Volume Rendering

Ulm University

Introduction of a dissertation project | Wednesday, 10 April 2024, 10:00 am | O27/331

 

Dominik Engel, member of the research group Visual Computing, gives an introduction to his dissertation topic, titled Deep Learning in Volume Rendering.

Abstract:

A variety of scientific fields commonly acquire volumetric data, for example using computed tomography (CT) or magnetic resonance imaging (MRI) in medicine. Such volumetric data is often complex and requires visualization to gain understanding. However, rendering volumetric data entails several challenges, such as filtering the data to reveal the structures of interest. Meaningful filtering of 3D data is difficult to implement in 2D graphical user interfaces. Furthermore, rendering volumetric data is generally compute-intensive, especially when volumetric shading is considered. Lastly, volume rendering needs to be interactive and achieve reasonable frame rates in order to fully explore the 3D data from different views while adapting the filtering.
To address these challenges, this work explores how volume rendering and its individual aspects can be assisted by deep neural networks. Deep neural networks have recently proven very capable in many disciplines, such as natural language processing, vision, and also computer graphics. They excel at approximating complex functions and can learn relevant features and representations when trained with sufficient data. In the scope of this dissertation, we show how these capabilities can be used throughout the volume rendering pipeline. This pipeline consists of filtering the structures of interest, shading those structures, and compositing the light transmitted through the volume. In the filtering step, we leverage the strong representations learned by self-supervised neural networks to enable an interactive click-to-select workflow that segments structures annotated by users within slice views. For shading, we propose a volume-to-volume network to predict volumetric ambient occlusion that respects how the volume is filtered. Lastly, we employ a neural net to invert the compositing step, separating the structures in an already composited, semi-transparent volume-rendered image into a modifiable layered representation.
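To illustrate the compositing step of the pipeline mentioned above, the following is a minimal sketch of classic front-to-back alpha compositing along a single ray. The transfer function shown is a hypothetical placeholder standing in for the filtering step; the dissertation's actual methods (and its neural inversion of compositing) are not reproduced here.

```python
import numpy as np

def composite_ray(densities, transfer_function, step_size=1.0):
    """Front-to-back alpha compositing along one ray.

    densities: 1D array of volume samples along the ray.
    transfer_function: maps a density sample to (rgb, alpha);
        this plays the role of the filtering step, assigning
        color and opacity to structures of interest.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for d in densities:
        rgb, alpha = transfer_function(d)
        alpha = 1.0 - (1.0 - alpha) ** step_size  # opacity correction for step size
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early ray termination
            break
    return color, 1.0 - transmittance  # accumulated color and opacity

# Hypothetical transfer function: samples above a threshold become opaque white
tf = lambda d: ((1.0, 1.0, 1.0), 0.8 if d > 0.5 else 0.0)
ray = np.array([0.1, 0.2, 0.7, 0.9, 0.3])
rgb, alpha = composite_ray(ray, tf)
```

Inverting this accumulation, i.e. recovering the individual layers from the final `rgb` and `alpha`, is ill-posed, which is what motivates using a neural network for the decompositing described in the abstract.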

We look forward to numerous participants and a lively discussion.