Dennis Wolf, M.Sc.
Dennis Wolf was a student at Universität Ulm from 2010 to 2016. Since 2016, he has been pursuing his PhD in the Human-Computer Interaction research group.
Dennis holds a B.Sc. in Media Informatics (2013) and an M.Sc. in Media Informatics (2016) from Universität Ulm. His Master's thesis is titled: "OctiCam: An Immersive and Mobile Video Communication System for Relatives and Children".
In winter 2016, he completed a three-month research internship at the University of Cambridge, UK, in the Intelligent Interactive Systems Group led by Dr Per Ola Kristensson. This led to a collaboration on the publication "Performance Envelopes of In-Air Direct and Smartwatch Indirect Control for Head-Mounted Augmented Reality", which was presented at the IEEE VR 2018 conference.
- Mixed Reality
- Multi-Modal Feedback
- Biometric Data
- User Adaptation
Interaction with head-mounted displays (HMDs) can be seen as a continuous loop of user input and system output. In its most fundamental form, this loop is driven by head rotation: head movement is measured by internal and external sensors, the virtual camera in the virtual scene is updated accordingly, and a new image is rendered from the new perspective and presented to the user. To improve user experience and presence in the scene, a virtual environment should address more than just the users' visual and auditory channels and adapt to their specific needs.
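The tracking-and-rendering loop described above can be sketched in a few lines of Python. This is a deliberately minimal illustration of my own (yaw-only rotation, a "scene" of bare points, invented function names), not code from any of the systems discussed on this page:

```python
import math

def yaw_matrix(theta):
    """Rotation matrix about the vertical (y) axis for a head yaw angle."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def to_camera_space(point, head_yaw):
    """Transform a world-space point into camera space by applying the
    inverse of the head rotation (the world rotates opposite to the head)."""
    m = yaw_matrix(-head_yaw)
    return [sum(m[r][k] * point[k] for k in range(3)) for r in range(3)]

def render_frame(head_yaw, scene):
    """One loop iteration: sensed head pose in, a new 'image' out
    (here simply the scene points expressed in camera space)."""
    return [to_camera_space(p, head_yaw) for p in scene]

# One object a meter in front of the user; after a 90-degree head turn
# to the left, it appears at the user's right side in camera space.
scene = [(0.0, 0.0, -1.0)]
frame = render_frame(math.pi / 2, scene)
```

A real HMD pipeline additionally handles pitch and roll, stereo projection, and latency compensation, but the structure (sense pose, update camera, re-render) is the same.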
To explore the potential of user adaptivity in mixed-reality environments, my research focuses on multi-modal input and output concepts and on dynamic application content driven by biometric feedback loops. Successful first projects evaluated multi-modal input for augmented reality (AR), an AR framework for cognitively impaired users, first steps towards cognitive-load adaptation in AR, and immersive multi-modal output for virtual environments.
Lectures, Projects, and Seminars
- Don't Sweat It!: Exploring Asymmetrical Biometric Feedback in Collaborative Gaming (MA, Shyukryan Karaosmanoglu, 2019)
- The Impact of Discrete Rotations on Locomotion in VR (BA, Laura Bottner, 2019)
- FootVR: Exploring Foot-Controlled Locomotion in Virtual Reality (BA, Anja Schikorr, 2019)
- Fall Prevention in VR (BA, Sarah Schaupp, 2019)
Open Thesis Topics
A list of all open theses can be found on our overview page.
If you are interested, simply send me an email or drop by my office. Further possible topics include:
- Haptic feedback in VR/AR
- Integration of biometric feedback into VR environments
- Transitions between virtual and physical reality
- Integration of vibro-tactile, thermal, and EMS actuators into a virtual reality head-mounted display for increased immersion (MA, Leo Hnatek, 2017)
- Exploring Hand-Tracking and Smartwatch-Based Pointing in Virtual Reality (MA, Suhasaleem Holalkere, 2018)
- An Augmented Reality Framework for Assisted Activities in Dementia Therapy (MA, Daniel Besserer, 2018)
- C-AR: Improving Situational Awareness for Automated Driving Level 3 through Multi-User Augmented Reality Interaction (MA, David Klein, 2018)
- Adaptive AR Interfaces Via EEG (BA, Tobias Wagner, 2019)
- An In-Depth Evaluation of and Compensation for the Heisenberg Effect (BA, Marco Combosch, 2019)
- An Evaluation of Path Visualization and Positional Tracking for the Purpose of Indoor Navigation Using the HoloLens and Bluetooth Low Energy Beacons (BA, Linus Hunziger)
- VRJumping: Exploring Jump-Based Locomotion in Virtual Reality (MA, Christopher Kunder, 2019)
One of the great benefits of virtual reality (VR) is the implementation of features that go beyond realism. Common "unrealistic" locomotion techniques (like teleportation) can avoid the spatial limitations of tracking, but forfeit the potential benefits of more realistic techniques (e.g. walking). As an alternative that combines realistic physical movement with a hyper-realistic virtual outcome, we present JumpVR, a jump-based locomotion augmentation technique that virtually scales users' physical jumps...
Virtual and augmented reality head-mounted displays (HMDs) currently rely heavily on spatially tracked input devices (STIDs) for interaction. These STIDs are all prone to the phenomenon that a discrete input (e.g. a button press) disturbs the position of the tracker, resulting in a different selection point during ray-cast interaction (the Heisenberg Effect of Spatial Interaction). Beyond the knowledge that it exists, there is currently a lack of deeper understanding of its severity, structure, and impact on throughput and angular error during selection tasks. In this work...
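The angular error caused by such a disturbance can be quantified as the angle between the ray direction before and after the button press. The following sketch is my own illustration of that metric, not the study's actual analysis code:

```python
import math

def angular_error_deg(intended_dir, disturbed_dir):
    """Angle (degrees) between the intended ray direction and the
    perturbed direction after a discrete input disturbs the tracker."""
    dot = sum(a * b for a, b in zip(intended_dir, disturbed_dir))
    na = math.sqrt(sum(a * a for a in intended_dir))
    nb = math.sqrt(sum(b * b for b in disturbed_dir))
    # Clamp to guard against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

# Example: the button press tilts the controller so that a ray aimed
# straight ahead now lands 1 cm off-target at 1 m distance.
err = angular_error_deg((0.0, 0.0, -1.0), (0.01, 0.0, -1.0))
```

For small offsets this is close to atan(offset/distance), i.e. roughly 0.57 degrees for 1 cm at 1 m.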
Cognitive impairments such as memory loss, impaired executive function, and decreasing motivation can gradually undermine instrumental activities of daily living (IADL). With a growing older population, previous works have explored assistive technologies (ATs) to automate repetitive components of therapy and thereby increase patients' autonomy and reduce dependence on carers. While most ATs were built around screens and projection-based augmented reality (AR), the potential of head-mounted displays (HMDs) for therapeutic assistance is still under-explored. As a contribution to this effort, we present cARe, an HMD-based AR framework that uses in-situ instructions and a guidance mechanism to assist patients with manual tasks.
While the real world provides humans with a huge variety of sensory stimuli, virtual worlds communicate their properties primarily through visual and auditory feedback due to the design of current head-mounted displays (HMDs). Since HMDs offer sufficient contact area to integrate additional actuators, prior works utilised a limited number of haptic actuators to convey respective information about the virtual world.
The scarcity of established input methods for augmented reality (AR) head-mounted displays (HMD) motivates us to investigate the performance envelopes of two easily realisable solutions: indirect cursor control via a smartwatch and direct control by in-air touch.
Since 360 degree movies are a fairly new medium, creators face several challenges, such as controlling the viewer's attention. In traditional movies this is done with cuts and tracking shots, which is neither possible nor advisable in VR, since rotating the virtual scene in front of the user's eyes will lead to simulator sickness. One of the reasons this effect occurs is that the physical movement (measured by the vestibular system) and the visual movement are not coherent.
GyroVR uses head-worn flywheels designed to render inertia in virtual reality (VR). Motions such as flying, diving, or floating in outer space exert kinesthetic forces on our body which impede movement and are currently not represented in VR. GyroVR simulates these kinesthetic forces by attaching flywheels to the user's head, leveraging the gyroscopic resistance that arises when the spinning axis of rotation is changed.
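The resistance GyroVR exploits follows directly from the magnitude of the gyroscopic reaction torque, tau = Omega x L. This back-of-the-envelope sketch is my own illustration; the disc mass, radius, and spin speed below are assumed values, not parameters from the paper:

```python
import math

def flywheel_angular_momentum(mass_kg, radius_m, spin_rpm):
    """Angular momentum L = I * omega for a solid disc (I = 1/2 m r^2)."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = spin_rpm * 2.0 * math.pi / 60.0  # rpm -> rad/s
    return inertia * omega

def gyroscopic_torque(L, head_turn_rate_rad_s):
    """Magnitude of the reaction torque tau = Omega x L when the head
    (and thus the spin axis) turns at rate Omega perpendicular to L."""
    return L * head_turn_rate_rad_s

# Assumed example: a 200 g disc of 5 cm radius spinning at 10,000 rpm,
# while the wearer turns their head at 90 degrees per second.
L = flywheel_angular_momentum(0.2, 0.05, 10_000)
tau = gyroscopic_torque(L, math.radians(90))  # ~0.41 N*m of resistance
```

Even this small flywheel produces a clearly perceivable torque of roughly 0.4 N·m against the head turn, which is the effect the device renders as virtual inertia.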
OctiCam is a mobile and child-friendly device that consists of a stuffed toy octopus on the outside and a communication proxy on the inside. Relying only on two squeeze buttons in the tentacles, we simplified the interaction with OctiCam to a child-friendly level. A built-in microphone and speaker allow audio chats, while a built-in camera streams a 360 degree video using a fish-eye lens.
This project deals with a novel multi-screen interactive TV setup (smarTVision) and its enhancement through Companion-Technology. Due to their flexibility and the variety of interaction options, such multi-screen scenarios are hardly intuitive for the user. While prior research focuses on technology and features, the users themselves are often not considered adequately. Companion-Technology has the potential to make such interfaces truly user-friendly. Building upon smarTVision, its extension via concepts of Companion-Technology is envisioned. This combination represents a versatile test bed that can not only be used for evaluating the usefulness of Companion-Technology in a TV scenario, but can also serve to evaluate Companion-Systems in general.
ColorSnakes is an authentication mechanism based solely on software modification which provides protection against shoulder surfing and, to some degree, against video attacks. A ColorSnakes PIN consists of a starting colored digit followed by four consecutive digits. From the starting colored digit, users indirectly draw a path (selection path) consisting of their PIN. The input path can be drawn anywhere on the grid.
- Conference Reviewer: CHI '17, '18, '19, '20, MobileHCI '18, PerDis '17, GI '19
- Conference Presenter: IEEE VR '18, UIST '18, ISMAR '19, MUM '19