
<bib>
<comment>
This file was created by the TYPO3 extension publications
--- Timezone: CEST
Creation date: 2026-04-14
Creation time: 18:33:30
--- Number of references: 315
</comment>
<reference>
<bibtype>article</bibtype>
<title>eHMI for All - Investigating the Effect of External Communication of Automated Vehicles on Pedestrians, Manual Drivers, and Cyclists in Virtual Reality</title>
<year>2026</year>
<month>4</month>
<DOI>10.1145/3772318.3790585</DOI>
<journal>Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems</journal>
<publisher>ACM</publisher>
<web_url2>https://github.com/M-Colley/ehmi-for-all-chi26-data</web_url2>
<file_url>t3://file?uid=539770</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Simon</fn>
<sn>Kopp</sn>
</person>
<person>
<fn>Debargha</fn>
<sn>Dey</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>MIRAGE: Enabling Real-Time Automotive Mediated Reality</title>
<year>2026</year>
<month>4</month>
<DOI>10.1145/3772318.3791195</DOI>
<journal>Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems</journal>
<publisher>ACM</publisher>
<web_url2>https://github.com/J-Britten/MIRAGE</web_url2>
<file_url>t3://file?uid=539771</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Sasalovici</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>10.1145/3772363.3799272</citeid>
<title>BotaXplore: Enhancing Visitor Engagement and Learning in Botanical Gardens Through Mobile Technology</title>
<year>2026</year>
<isbn>9798400722813</isbn>
<DOI>10.1145/3772363.3799272</DOI>
<booktitle>Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems</booktitle>
<publisher>Association for Computing Machinery</publisher>
<address>New York, NY, USA</address>
<series>CHI EA '26</series>
<keywords>Self-Assessment, Behavior Change, Interaction Design, Self-Efficacy, Sustainability</keywords>
<web_url>https://doi.org/10.1145/3772363.3799272</web_url>
<file_url>t3://file?uid=539775</file_url>
<authors>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Johanna</fn>
<sn>Grüneberg</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>10.1145/3772318.3790711</citeid>
<title>Investigating the Effects of Eco-Friendly Service Options on Rebound Behavior in Ride-Hailing</title>
<year>2026</year>
<isbn>9798400722783</isbn>
<DOI>10.1145/3772318.3790711</DOI>
<booktitle>Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems</booktitle>
<publisher>Association for Computing Machinery</publisher>
<address>New York, NY, USA</address>
<series>CHI '26</series>
<keywords>Carbon Emissions, Rebound Effects, CO2 Emissions, Eco-Feedback, Ride-Hailing, Design Interventions, Automobiles, Behavioral Science</keywords>
<file_url>t3://file?uid=539777</file_url>
<authors>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>AirClick: Modularized Interactive Inflatables for On-Demand Room Transformation</title>
<year>2025</year>
<month>12</month>
<DOI>10.1145/3771882.3771888</DOI>
<journal>24th International Conference on Mobile and Ubiquitous Multimedia (MUM '25)</journal>
<publisher>ACM</publisher>
<web_url2>https://github.com/Pascal-Jansen/AirClick</web_url2>
<file_url>t3://file?uid=535451</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Benno</fn>
<sn>Hölz</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Longitudinal Effects of Visualizing Uncertainty of Situation Detection and Prediction of Automated Vehicles on User Perceptions</title>
<year>2025</year>
<month>5</month>
<DOI>10.1016/j.trf.2025.05.013</DOI>
<journal>Transportation Research Part F: Traffic Psychology and Behaviour</journal>
<note>Joint First Authors</note>
<publisher>Elsevier</publisher>
<address>Amsterdam, The Netherlands</address>
<file_url>t3://file?uid=529834</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Max</fn>
<sn>Rädler</sn>
</person>
<person>
<fn>Jonas</fn>
<sn>Schwedler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>BumpyRideCHI25</citeid>
<title>Bumpy Ride? Understanding the Effects of External Forces on Spatial Interactions in Moving Vehicles</title>
<abstract>As the use of Head-Mounted Displays in moving vehicles increases, passengers can immerse themselves in visual experiences independent of their physical environment. However, interaction methods are susceptible to physical motion, leading to input errors and reduced task performance. This work investigates the impact of G-forces, vibrations, and unpredictable maneuvers on 3D interaction methods. We conducted a field study with 24 participants in both stationary and moving vehicles to examine the effects of vehicle motion on four interaction methods: (1) Gaze&amp;Pinch, (2) DirectTouch, (3) Handray, and (4) HeadGaze. Participants performed selections in a Fitts' Law task. Our findings reveal a significant effect of vehicle motion on interaction accuracy and duration across the tested combinations of Interaction Method x Road Type x Curve Type. We found a significant impact of movement on throughput, error rate, and perceived workload. Finally, we propose future research considerations and recommendations on interaction methods during vehicle movement.</abstract>
<status>1</status>
<year>2025</year>
<month>4</month>
<DOI>10.1145/3706598.3714077</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525741</file_url>
<authors>
<person>
<fn>Markus</fn>
<sn>Sasalovici</sn>
</person>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Robin Connor</fn>
<sn>Schramm</sn>
</person>
<person>
<fn>Oscar Javier</fn>
<sn>Ariza Nuñez</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Jann Philipp</fn>
<sn>Freiwald</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<title>Improving External Communication of Automated Vehicles Using Bayesian Optimization</title>
<year>2025</year>
<month>4</month>
<DOI>10.1145/3706598.3714187</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525199</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mugdha</fn>
<sn>Keskar</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>RoadsCHI25</citeid>
<title>Introducing ROADS: A Systematic Comparison of Remote Control Interaction Concepts for Automated Vehicles at Road Works</title>
<abstract>As vehicle automation technology continues to mature, there is a necessity for robust remote monitoring and intervention features. These are essential for intervening during vehicle malfunctions, challenging road conditions, or in areas that are difficult to navigate. This evolution in the role of the human operator—from a constant driver to an intermittent teleoperator—necessitates the development of suitable interaction interfaces. While some interfaces have been suggested, a comparative study is missing. We designed, implemented, and evaluated three interaction concepts (pathPlanning, trajectory, and waypoint) with up to four concurrent requests of automated vehicles in a within-subjects study with N=23 participants.
The results showed a clear preference for the pathPlanning concept. It also led to the highest usability but lower satisfaction. With trajectory, the fewest requests were resolved. The study's findings contribute to the ongoing development of HMIs focused on the remote assistance of automated vehicles.</abstract>
<status>1</status>
<year>2025</year>
<month>4</month>
<DOI>10.1145/3706598.3713476</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525742</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Jonathan</fn>
<sn>Westhauser</sn>
</person>
<person>
<fn>Jonas</fn>
<sn>Andersson</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Mirnig</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<title>OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience</title>
<year>2025</year>
<month>4</month>
<DOI>10.1145/3706598.3713514</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525200</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Svenja</fn>
<sn>Krauß</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Hirschle</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<title>When Do We Feel Present in a Virtual Reality? Towards Sensitivity and User Acceptance of Presence Questionnaires</title>
<abstract>Presence is an important and widely used metric to measure the quality of virtual reality (VR) applications. Given the multifaceted and subjective nature of presence, the most common measures for presence are questionnaires. However, there is little research on their validity regarding specific presence dimensions and their responsiveness to differences in perception among users. We investigated four presence questionnaires (SUS, PQ, IPQ, Bouchard) regarding their responsiveness to intensity variations of known presence dimensions and asked users about their consistency with their experience. To this end, we created five VR scenarios, each designed to emphasize a specific presence dimension. Our findings showed heterogeneous sensitivity of the questionnaires depending on the different dimensions of presence, highlighting a context-specific suitability of presence questionnaires. Participants also rated the questionnaires' sensitivity as lower than what they actually perceived. Based on our findings, we offer guidance on selecting these questionnaires based on their suitability for particular use cases.</abstract>
<type>Paper</type>
<year>2025</year>
<month>4</month>
<reviewed>1</reviewed>
<DOI>10.1145/3706598.3714204</DOI>
<institution>Ulm University</institution>
<institute>Institute for Media Informatics</institute>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<journal>ACM</journal>
<publisher>ACM</publisher>
<event_place>Yokohama, Japan</event_place>
<web_url>https://doi.org/10.48550/arXiv.2504.10162</web_url>
<file_url>t3://file?uid=526290</file_url>
<authors>
<person>
<fn>Annalisa</fn>
<sn>Degenhard</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<title>Fly Away: Evaluating the Impact of Motion Fidelity on Optimized User Interface Design via Bayesian Optimization in Automated Urban Air Mobility Simulations</title>
<abstract>Automated Urban Air Mobility (UAM) can improve passenger transportation and reduce congestion, but its success depends on passenger trust. While initial research addresses passengers' information needs, questions remain about how to simulate air taxi flights and how these simulations impact users and interface requirements.
We conducted a between-subjects study (N=40), examining the influence of motion fidelity in Virtual-Reality-simulated air taxi flights on user effects and interface design. Our study compared simulations with and without motion cues using a 3-Degrees-of-Freedom motion chair. Optimizing the interface design across six objectives, such as trust and mental demand, we used multi-objective Bayesian optimization to determine the most effective design trade-offs.
Our results indicate that motion fidelity decreases users' trust, understanding, and acceptance, highlighting the need to consider motion fidelity in future UAM studies to approach realism. However, minimal evidence was found for differences or equality in the optimized interface designs, suggesting personalized interface designs.</abstract>
<status>1</status>
<year>2025</year>
<month>1</month>
<DOI>10.1145/3706598.3713288</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525237</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Clara</fn>
<sn>Schramm</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>HUD-SUMO: Simulacra of In-Vehicle Head-Up Displays Using SUMO To Study Large-Scale Effects</title>
<year>2025</year>
<DOI>10.5555/3721488.3721614</DOI>
<journal>10th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI)</journal>
<publisher>ACM/IEEE</publisher>
<address>New York, NY, USA</address>
<web_url2>https://github.com/Pascal-Jansen/HUD-SUMO</web_url2>
<file_url>t3://file?uid=526881</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Elisabeth</fn>
<sn>Wimmer</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Maresch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<title>Light My Way: Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated Vehicles</title>
<abstract>The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three previously developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factorial within-between-subject study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario compared to the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation during exiting.</abstract>
<year>2025</year>
<DOI>10.1145/3706598.3713454</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<file_url>t3://file?uid=525232</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Lina</fn>
<sn>Wilke</sn>
</person>
<person>
<fn>Maryam</fn>
<sn>Elhaidary</sn>
</person>
<person>
<fn>Julia</fn>
<sn>von Abel</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Fink</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>PlantPalCHI25</citeid>
<title>PlantPal: Leveraging Precision Agriculture Robots to Facilitate Remote Engagement in Urban Gardening</title>
<abstract>Technological progress has often been measured by the extent to which it shields and protects us from the harshness of nature. At the same time, it has long been recognised that our resulting disengagement from nature negatively affects our wellbeing and impedes awareness of our vital dependence on natural environments. To understand how HCI has considered the possibilities that digital technology offers for engaging with nature, we conducted a scoping review encompassing more than 20 years of HCI research on nature engagement. We compare the orientations, motivations, and methodologies of different threads within this growing body of work. We show how HCI research has enabled varied forms of direct and indirect engagement with nature, and we develop a typology of the roles proposed for technology in this work. We highlight promising and under-utilised approaches to designing for nature engagement and discuss directions for future research.</abstract>
<status>1</status>
<year>2025</year>
<month>1</month>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525683</file_url>
<authors>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Clara</fn>
<sn>Schramm</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<title>Scrolling in the Deep: Analysing Contextual Influences on Intervention Effectiveness during Infinite Scrolling on Social Media</title>
<abstract>Infinite scrolling on social media platforms is designed to encourage prolonged engagement, leading users to spend more time than desired, which can provoke negative emotions.
Interventions to mitigate infinite scrolling have shown initial success, yet users become desensitized due to the lack of contextual relevance.
Understanding how contextual factors influence intervention effectiveness remains underexplored.
We conducted a 7-day user study (N=72) investigating how these contextual factors affect users' reactance and responsiveness to interventions during infinite scrolling.
Our study revealed an interplay, with contextual factors such as being at home, sleepiness, and valence playing significant roles in the intervention's effectiveness. Low valence coupled with being at home slows down the responsiveness to interventions, and sleepiness lowers reactance towards interventions, increasing user acceptance of the intervention.
Overall, our work contributes to a deeper understanding of user responses toward interventions and paves the way for developing more effective interventions during infinite scrolling.</abstract>
<status>1</status>
<year>2025</year>
<month>1</month>
<DOI>10.1145/3706598.3713187</DOI>
<booktitle>Proceedings of the CHI 2025 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<publisher>ACM</publisher>
<series>CHI '25</series>
<file_url>t3://file?uid=525236</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Maryam</fn>
<sn>Elhaidary</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Aditya Kumar</fn>
<sn>Purohit</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>UAM-SUMO: Simulacra of Urban Air Mobility Using SUMO To Study Large-Scale Effects</title>
<year>2025</year>
<DOI>10.5555/3721488.3721610</DOI>
<journal>10th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI)</journal>
<publisher>ACM/IEEE</publisher>
<address>New York, NY, USA</address>
<web_url2>https://github.com/M-Colley/uam-sumo</web_url2>
<file_url>t3://file?uid=526880</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Czymmeck</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Patrick</fn>
<sn>Ebel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>VRCreatIn</citeid>
<title>VRCreatIn: Taking In-Situ Pen and Tablet Interaction Beyond Ideation to 3D Modeling Lighting and Texturing</title>
<abstract>Mixed reality (MR) in-situ authoring has demonstrated advantages for 3D content design regarding perception, understanding, and accessibility, satisfying the growing demand for MR content in industry, education, and entertainment. However, existing MR tools mostly focus on ideation and sketching, making further steps such as 3D modeling, lighting, and texturing not sufficiently researched yet. This research gap raises the need to explore end-to-end 3D content creation workflows in MR environments. We introduce VRCreatIn, an all-in-one virtual reality (VR) solution for 3D content creation informed by expert interviews (N=6) and a design space analysis. It pioneers an integrated multimodal workflow through all stages of 3D modeling, lighting, and texturing based on pen and tablet interaction. Our usability walkthrough (N=10) confirms that VRCreatIn transfers the benefits of pen and tablet to the whole content creation workflow, broadening the scope of 3D content creation in VR. These contributions pave the way for future research, establishing VRCreatIn as a cornerstone for comprehensive 3D design in VR environments that can be transferred to the whole MR continuum.</abstract>
<year>2024</year>
<month>12</month>
<day>2</day>
<reviewed>1</reviewed>
<DOI>10.1145/3701571.3701580</DOI>
<journal>MUM '24: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia</journal>
<publisher>ACM</publisher>
<pages>24 - 35</pages>
<web_url2>https://www.youtube.com/watch?v=YU1WljMbNWY</web_url2>
<file_url>t3://file?uid=524075</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Nico</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Karlbauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Story-Driven: Exploring the Impact of Providing Real-time Context Information on Automated Storytelling</title>
<abstract>A vehicle's interior offers a unique, mobile design space for human-computer interaction with fast-changing interaction parameters due to the vehicle's variable movement. Traditional in-vehicle interactive systems often lack the necessary flexibility to adjust to these parameters. Fortunately, recent research in generative AI has shown tremendous potential for the personalization of custom-tailored generated content that comes with the benefit of great situation-dependent adaptability. With audio books being a popular choice of entertainment for car rides, generative storytelling offers the possibility to provide a custom and immersive driving experience that can adapt to the continuously changing context during a car ride. We propose a prototype solution for real-time-adaptive story narration tailored to the variable environment of any car ride. In a real-world field study (n = 30), we evaluate the prototype regarding user experience, immersion, and environment perceptibility. Participants' feedback shows a significant improvement over traditional storytelling and highlights the importance of context information for automotive interfaces.</abstract>
<status>2</status>
<year>2024</year>
<month>10</month>
<day>13</day>
<reviewed>1</reviewed>
<journal>Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=496068</file_url>
<note>Conditionally Accepted</note>
<authors>
<person>
<fn>Jan Henry</fn>
<sn>Belz</sn>
</person>
<person>
<fn>Lina</fn>
<sn>Weilke</sn>
</person>
<person>
<fn>Anton</fn>
<sn>Winter</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Hallgarten</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Grosse-Puppendahl</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Examining Psychological Conflict-Handling Strategies for Highly Automated Vehicles to Resolve Legal User-Vehicle Conflicts</title>
<year>2024</year>
<month>9</month>
<DOI>10.1145/3678511</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=495724</file_url>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Ann-Kathrin</fn>
<sn>Knuth</sn>
</person>
<person>
<fn>Cagla</fn>
<sn>Tasci</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Exploring Passenger-Automated Vehicle Negotiation Utilizing Large Language Models for Natural Interaction</title>
<year>2024</year>
<month>9</month>
<journal>Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’24)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=495721</file_url>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Bettina</fn>
<sn>Girst</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Move, Connect, Interact: Introducing a Design Space for Cross-Traffic Interaction</title>
<year>2024</year>
<month>9</month>
<DOI>10.1145/3678580</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=495723</file_url>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Sasalovici</sn>
</person>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Giss</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>UnitEye: Introducing a User-Friendly Plugin to Democratize Eye Tracking Technology in Unity Environments</title>
<year>2024</year>
<month>9</month>
<DOI>10.1145/3670653.3670655</DOI>
<journal>Mensch und Computer 2024</journal>
<note>Joint First Authors</note>
<file_url>t3://file?uid=490604</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Breckel</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Kösel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Law and Order: Investigating the Effects of Conflictual Situations in Manual and Automated Driving in a German Sample</title>
<status>1</status>
<year>2024</year>
<month>7</month>
<DOI>10.1016/j.ijhcs.2024.103260</DOI>
<journal>International Journal of Human-Computer Studies</journal>
<publisher>Elsevier</publisher>
<address>Amsterdam, The Netherlands</address>
<file_url>t3://file?uid=495725</file_url>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Ann-Kathrin</fn>
<sn>Knuth</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Wall I Enjoy: Motivating Gentle Full-Body Movements Through Touchwall Interaction Compared to Standing and Sitting Smartphone Usage</title>
<abstract>Sedentary occupations and recreational activities carried out primarily while seated promote extended periods spent in unhealthy sitting postures, contributing to physical and mental health issues. While apps and reminders can be effective, they often fail to sustain enjoyment and motivation or do not target stationary settings. In our work, we investigate whether sedentary waiting periods could be broken up by gentle full-body movements on a large touchwall instead of remaining seated or standing. In a mixed-methods study (N=18), we compared a Match-3 game played (1) on a full-body touchwall, (2) on a smartphone while standing, and (3) on a smartphone while sitting, investigating user experience, performance, and acceptance. Compared to the smartphone conditions, the touchwall game subtly motivated people to move, stretch, and bend their bodies without performance loss while still enjoying the game. We suggest that full-body touchwall interaction has the potential to fill occasional waiting time while encouraging breaking up sedentary behavior.</abstract>
<year>2024</year>
<month>6</month>
<DOI>10.1145/3639701.3656302</DOI>
<journal>IMX '24: Proceedings of the 2024 ACM International Conference on Interactive Media Experiences</journal>
<address>New York, NY, USA</address>
<web_url>https://dl.acm.org/doi/abs/10.1145/3639701.3656302</web_url>
<file_url>t3://file?uid=490664</file_url>
<authors>
<person>
<fn>Jana</fn>
<sn>Funke</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Christian</fn>
<sn>van Onzenoodt</sn>
</person>
<person>
<fn>Katja</fn>
<sn>Rogers</sn>
</person>
<person>
<fn>Timo</fn>
<sn>Ropinski</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Where do you exercise? The Impact of Different Virtual Environments on Exergame Performance and User Experience: A Preliminary Study</title>
<abstract>Environments can affect mood, motivation, and productivity. Green spaces, for example, are known to have calming effects on people’s moods. In virtual reality (VR), we could take advantage of these effects, as we have full visual control over the environment. In this paper, we explore how such potential effects caused by the environment impact performance and user experience (UX) when playing an exergame. We created four virtual environments (VEs) differing in their level of detail and visual realism: (1) a white room, (2) outer space, (3) an abstract space, and (4) a forest environment. In a user study (N=26) in which participants played an exergame in all four environments, we found evidence that VEs influence enjoyment and performance. The simulation of green spaces or abstract VEs with enjoyable background sounds has a particularly positive impact. We discuss how environmental features impact performance and UX and present promising avenues for future work investigating specific parts of environmental features.</abstract>
<year>2024</year>
<month>6</month>
<DOI>10.1145/3639701.3663632</DOI>
<journal>IMX '24: Proceedings of the 2024 ACM International Conference on Interactive Media Experiences</journal>
<address>New York, NY, USA</address>
<web_url>https://dl.acm.org/doi/abs/10.1145/3639701.3663632</web_url>
<file_url>t3://file?uid=490665</file_url>
<authors>
<person>
<fn>Jana</fn>
<sn>Funke</sn>
</person>
<person>
<fn>Julia</fn>
<sn>Spahr</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<title>DungeonMaker: Embedding Tangible Creation and Destruction in Hybrid Board Games through Personal Fabrication Technology</title>
<status>1</status>
<year>2024</year>
<month>5</month>
<reviewed>1</reviewed>
<isbn>979-8-4007-0330-0/24/05</isbn>
<DOI>10.1145/3613904.3642243</DOI>
<booktitle>Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems</booktitle>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>20</pages>
<event_name>CHI 2024</event_name>
<event_place>Honolulu</event_place>
<keywords>DungeonMaker</keywords>
<web_url>https://arxiv.org/abs/2403.09592</web_url>
<web_url2>https://www.youtube.com/watch?v=MOF6NQ3J8iE</web_url2>
<file_url>t3://file?uid=487097</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Jessica</fn>
<sn>Janek</sn>
</person>
<person>
<fn>Omid</fn>
<sn>Rajabi</sn>
</person>
<person>
<fn>Anja</fn>
<sn>Schikorr</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of a Gaze-Based 2D Platform Game on User Enjoyment, Perceived Competence, and Digital Eye Strain</title>
<status>1</status>
<year>2024</year>
<month>5</month>
<reviewed>1</reviewed>
<journal>In Proc. of CHI 2024 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=485718</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Beate</fn>
<sn>Wanner</sn>
</person>
<person>
<fn>Max</fn>
<sn>Rädler</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Rötzer</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Investigating the Effects of External Communication and Platoon Behavior on Manual Drivers at Highway Access</title>
<status>1</status>
<year>2024</year>
<month>5</month>
<journal>In Proc. of CHI 2024 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=485716</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Omid</fn>
<sn>Rajabi</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<title>pARam: Leveraging Parametric Design in Extended Reality to Support the Personalization of Artifacts for Personal Fabrication</title>
<status>1</status>
<year>2024</year>
<month>5</month>
<reviewed>1</reviewed>
<isbn>979-8-4007-0330-0/24/05</isbn>
<DOI>10.1145/3613904.3642083</DOI>
<booktitle>Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems</booktitle>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>22</pages>
<event_name>CHI 2024</event_name>
<event_place>Honolulu</event_place>
<keywords>param</keywords>
<web_url>https://arxiv.org/abs/2403.09607</web_url>
<web_url2>https://www.youtube.com/watch?v=a_6UfJWtY2E</web_url2>
<file_url>t3://file?uid=487098</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Simon</fn>
<sn>Demharter</sn>
</person>
<person>
<fn>Max</fn>
<sn>Rädler</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Visualizing Imperfect Situation Detection and Prediction in Automated Vehicles: Understanding Users’ Perceptions via User-Chosen Scenarios</title>
<year>2024</year>
<month>5</month>
<journal>Transportation Research Part F: Traffic Psychology and Behaviour; Joint First Authors</journal>
<publisher>Elsevier</publisher>
<address>Amsterdam, The Netherlands</address>
<file_url>t3://file?uid=490081</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Tim</fn>
<sn>Pfeifer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>10.1145/3643558</citeid>
<title>'Eco Is Just Marketing': Unraveling Everyday Barriers to the Adoption of Energy-Saving Features in Major Home Appliances</title>
<abstract>Energy-saving features (ESFs) represent a simple way to reduce the resource consumption of home appliances (HAs), yet they remain under-utilized. While prior research focused on increasing the use of ESFs through behavior change interventions, there is currently no clarity on the barriers that restrict their utilization in the first place. To bridge this gap, we conducted a qualitative analysis of 349 Amazon product reviews and 98 Reddit discussions, yielding three qualitative themes that showcase how users perceive, interact with, and evaluate ESFs in HAs. Based on these themes, we derived frequent barriers to ESF adoption, which guided a subsequent expert focus group (N=5) to assess the suitability of behavior change interventions and potential alternative strategies for ESF adoption. Our findings deepen the understanding of everyday barriers surrounding ESFs and enable the targeted design and assessment of interventions for future HAs.</abstract>
<year>2024</year>
<month>3</month>
<DOI>10.1145/3643558</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT)</journal>
<volume>8</volume>
<publisher>ACM</publisher>
<pages>27</pages>
<number>1</number>
<keywords>energy-saving mechanisms, home appliances, sustainability</keywords>
<file_url>t3://file?uid=486903</file_url>
<authors>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>AutoTherm: A Dataset and Benchmark for Thermal Comfort Estimation Indoors and in Vehicles</title>
<year>2024</year>
<DOI>10.1145/3678503</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT); Joint First Authors</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=491825</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Hartwig</sn>
</person>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Timo</fn>
<sn>Ropinski</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>10.1145/3675094.3678487</citeid>
<title>Exploring Contextual Feature Combinations for Prediction of Subjective Thermal Perceptions</title>
<abstract>Thermal attributes in the environment impact well-being, but their inclusion in standard well-being monitoring is challenging due to complex measurement requirements. Industry standards like the Predicted Mean Vote (PMV) index need numerous measures and specialized setups, making large-scale applications impractical. This study investigates predicting thermal perception ratings using only contextual factors. We conducted an ablation study using the Chinese Thermal Comfort Dataset (CTCD) and a Random Forest (RF) classifier to evaluate prediction performance with different contextual feature combinations on five labeling scales. Results showed that omitting measures required for PMV index calculation and relying on contextual features exclusively achieved F1 scores similar to those when including PMV measures. Key predictive factors included daily outdoor temperature and a person's clothing, weight, and age. These findings suggest that leveraging more accessible contextual data to estimate thermal perception ratings is promising, and further research should explore more contextual factors to enhance prediction accuracy and support well-being assessments.</abstract>
<year>2024</year>
<isbn>9798400710582</isbn>
<DOI>10.1145/3675094.3678487</DOI>
<booktitle>Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing</booktitle>
<publisher>Association for Computing Machinery</publisher>
<address>New York, NY, USA</address>
<series>UbiComp '24</series>
<pages>371–376</pages>
<keywords>context-based estimation, thermal perception, ubiquitous computing</keywords>
<file_url>t3://file?uid=521915</file_url>
<authors>
<person>
<fn>Albin</fn>
<sn>Zeqiri</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Hey, What’s Going On? Conveying Traffic Information to People with Visual Impairments in Highly Automated Vehicles: Introducing OnBoard</title>
<year>2024</year>
<DOI>10.1145/3659618</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=487924</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Maximilian</fn>
<sn>Rück</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Zähnle</sn>
</person>
<person>
<fn>Maryam</fn>
<sn>Elhaidary</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>PedSUMO: Simulacra of Automated Vehicle-Pedestrian Interaction Using SUMO To Study Large-Scale Effects</title>
<year>2024</year>
<journal>9th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI)</journal>
<publisher>ACM/IEEE</publisher>
<address>New York, NY, USA</address>
<web_url>https://github.com/M-Colley/pedsumo</web_url>
<file_url>t3://file?uid=484761</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Czymmeck</sn>
</person>
<person>
<fn>Mustafa</fn>
<sn>Kücükkocak</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Exploring the Effects of Head-Mounted Augmented Reality on Helping Behaviour</title>
<abstract>Augmented Reality (AR) can alter environments and steer attention. While prior work dominantly focuses on exploring performances of augmentations, this work aims to understand the societal impact of AR in complex social situations. Focusing on prosocial helping behaviour, we created two scenarios and designed five augmentations aiming to motivate a user to help. We wanted to understand (1) the impact on situation perception and (2) the impact on the social structure. In an online video experiment (n=294), we found that augmenting can impact anxiety about the situation and significantly increase the perceived reason to help being directed towards the augmentation. Similarly, we found that the helped rated the "reason" and "thankfulness" significantly higher towards AR than the helper, creating a disagreement around agency and responsibility. We discuss the implications of AR in complex social structures and how responsibility and agency will become important when embedding AR in our social lives.</abstract>
<year>2023</year>
<month>12</month>
<day>03</day>
<reviewed>1</reviewed>
<DOI>10.1145/3626705.3627969</DOI>
<journal>Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia</journal>
<publisher>ACM</publisher>
<address>New York, NY, United States</address>
<file_url>t3://file?uid=485402</file_url>
<authors>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Jan Henry</fn>
<sn>Belz</sn>
</person>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of 3D Displays on Mental Workload, Situation Awareness, Trust, and Performance Assessment in Automated Vehicles</title>
<year>2023</year>
<month>12</month>
<DOI>10.1145/3626705.3627786</DOI>
<journal>International Conference on Mobile and Ubiquitous Multimedia (MUM ’23)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=482940</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>William</fn>
<sn>Fischer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Uncertain Trajectory Prediction Visualization in Highly Automated Vehicles on Trust, Situation Awareness, and Cognitive Load</title>
<abstract>Automated vehicles are expected to improve safety, mobility, and inclusion. User acceptance is required for the successful
introduction of this technology. One essential prerequisite for acceptance is appropriately trusting the vehicle’s capabilities.
System transparency via visualizing internal information could calibrate this trust by enabling the surveillance of the vehicle’s
detection and prediction capabilities, including its failures. Additionally, concurrently increased situation awareness could
improve take-overs in case of emergency. This work reports the results of two online comparative video-based studies on
visualizing prediction and maneuver-planning information. Effects on trust, cognitive load, and situation awareness were
measured using a simulation (N=280) and state-of-the-art road user prediction and maneuver planning on a pre-recorded
real-world video using a real prototype (N=238). Results show that color conveys uncertainty best, that the planned trajectory
increased trust, and that the visualization of other predicted trajectories improved perceived safety.</abstract>
<year>2023</year>
<month>12</month>
<DOI>10.1145/3631408</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>23</pages>
<file_url>t3://file?uid=483418</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Oliver</fn>
<sn>Speidel</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Strohbeck</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Jan Henry</fn>
<sn>Belz</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>stemasov_brickstart_2023</citeid>
<title>BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality</title>
<year>2023</year>
<month>11</month>
<day>05</day>
<DOI>10.1145/3626465</DOI>
<booktitle>Proceedings of the ACM on Human-Computer Interaction</booktitle>
<journal>Proceedings of the ACM on Human-Computer Interaction (PACM ISS)</journal>
<edition>ISS</edition>
<volume>7</volume>
<publisher>ACM</publisher>
<series>Proceedings of the ACM on Human-Computer Interaction</series>
<pages>23</pages>
<web_url>https://arxiv.org/abs/2310.03700</web_url>
<web_url2>https://youtu.be/sR9SbgtNHDQ</web_url2>
<file_url>t3://file?uid=482286</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Jessica</fn>
<sn>Hohn</sn>
</person>
<person>
<fn>Maurice</fn>
<sn>Cordts</sn>
</person>
<person>
<fn>Anja</fn>
<sn>Schikorr</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Demonstration of AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies</title>
<year>2023</year>
<month>9</month>
<DOI>10.1145/3581961.3610374</DOI>
<journal>Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’23 Adjunct)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://autovis-demo.onrender.com/</web_url2>
<file_url>t3://file?uid=480184</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Häusele</sn>
</person>
<person>
<fn>Thilo</fn>
<sn>Segschneider</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Urgency and Cognitive Load on Interaction in Highly Automated Vehicles</title>
<status>1</status>
<year>2023</year>
<month>9</month>
<DOI>10.1145/3604254</DOI>
<journal>Proceedings of the 25th International Conference on Mobile Human-Computer Interaction (MobileHCI '23)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=481873</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Cristina</fn>
<sn>Evangelista</sn>
</person>
<person>
<fn>Tito Daza</fn>
<sn>Rubiano</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Stairway to Heaven: A Demonstration of Different Trajectories and Weather Conditions in Automated Urban Air Mobility</title>
<year>2023</year>
<month>9</month>
<DOI>10.1145/3581961.3610372</DOI>
<journal>Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’23 Adjunct); Joint First Authors</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=481874</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Fassbender</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>The Loop and Ways to Break It: Investigating Infinite Scrolling Behaviour in Social Media Applications and Reasons to Stop</title>
<status>1</status>
<year>2023</year>
<month>9</month>
<DOI>10.1145/3604275</DOI>
<journal>Proceedings of the 25th International Conference on Mobile Human-Computer Interaction (MobileHCI '23)</journal>
<publisher>ACM</publisher>
<keywords>rixen2023loop</keywords>
<file_url>t3://file?uid=481676</file_url>
<authors>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Glöckler</sn>
</person>
<person>
<fn>Anna</fn>
<sn>Schlothauer</sn>
</person>
<person>
<fn>Marius-Lukas</fn>
<sn>Ziegenbein</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Come Fly With Me - Investigating the Effects of Path Visualizations in Automated Urban Air Mobility</title>
<year>2023</year>
<month>6</month>
<DOI>10.1145/3596249</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT); Joint First Authors</journal>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=478333</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Fassbender</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Scalability in External Communication of Automated Vehicles: Evaluation and Recommendations</title>
<year>2023</year>
<month>6</month>
<DOI>10.1145/3596248</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=478332</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Addressing Passenger-Vehicle Conflicts: Challenges and Research Directions</title>
<year>2023</year>
<month>4</month>
<day>23</day>
<journal>CHI 2023 Workshop - AutomationXP23: Intervening, Teaming, Delegating - Creating Engaging Automation Experiences</journal>
<web_url2>t3://file?uid=479188</web_url2>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>rixen2023dissonance</citeid>
<title>May I Still Define Myself? Exploring How Dissonance in Displaying Personal Information Through Head-Mounted Augmented Reality Can Affect Personal Information Sovereignty</title>
<abstract>Head-mounted Augmented Reality enables individuals to overlay digital information onto the physical world, consequently influencing how they assess and react to augmented social situations. While prior work has shown that augmenting social situations with faithful personal information can benefit a conversation, honest mistakes or an attempt to deceive might lead to a dissonance between augmentation and verbally disclosed information. In this work, we take the first steps towards understanding the happenings in case of information dissonance by conducting a preliminary within-subject online video study (N=30), investigating how it affects users, perception of the interlocutor, and if augmentation or interlocutor would act as the more trusted instance. We found that only 26.7% trusted the interlocutor’s verbally uttered information, while a majority believed the AR device (46.7%) or were undecided (26.7%). We discuss this split in trust and argue for the importance of and factors for a follow-up study on this topic.</abstract>
<year>2023</year>
<month>4</month>
<day>19</day>
<reviewed>1</reviewed>
<DOI>10.1145/3544549.3585821</DOI>
<publisher>Association for Computing Machinery</publisher>
<address>New York, NY, USA</address>
<event_name>CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems</event_name>
<event_place>Hamburg</event_place>
<keywords>Human-centered computing
Human computer interaction (HCI)
Empirical studies in HCI</keywords>
<web_url>https://dl.acm.org/doi/10.1145/3544549.3585821</web_url>
<web_url2>https://dl.acm.org/action/downloadSupplement?doi=10.1145%2F3544549.3585821&file=3544549.3585821-supplemental-materials.zip</web_url2>
<web_url_date>2023-11-30</web_url_date>
<file_url>t3://file?uid=484541</file_url>
<authors>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Funk</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Investigating the Effects of Individual Spatial Abilities on Virtual Reality Object Manipulation</title>
<status>1</status>
<year>2023</year>
<month>4</month>
<DOI>10.1145/3544548.3581004</DOI>
<journal>In Proc. of CHI 2023 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://youtu.be/5nitNnpFocM</web_url2>
<file_url>t3://file?uid=476332</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Montag</sn>
</person>
<person>
<fn>Andrea</fn>
<sn>Vogt</sn>
</person>
<person>
<fn>Nico</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Tina</fn>
<sn>Seufert</sn>
</person>
<person>
<fn>Steffi</fn>
<sn>Zander</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Up, Up and Away - Investigating Information Needs for Helicopter Pilots in Future Urban Air Mobility</title>
<status>1</status>
<year>2023</year>
<month>4</month>
<DOI>10.1145/3544549.3585643</DOI>
<journal>In Extended Abstracts of CHI 2023 (SIGCHI Conference on Human Factors in Computing Systems), Joint First Authors</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url>https://www.youtube.com/watch?v=17dXbcwqU5c</web_url>
<file_url>t3://file?uid=476549</file_url>
<authors>
<person>
<fn>Luca-Maxim</fn>
<sn>Meinhardt</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Fassbender</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies</title>
<status>1</status>
<year>2023</year>
<month>1</month>
<DOI>10.1145/3544548.3580760</DOI>
<journal>In Proc. of CHI 2023 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://autovis-demo.onrender.com/</web_url2>
<file_url>t3://file?uid=476230</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Häusele</sn>
</person>
<person>
<fn>Thilo</fn>
<sn>Segschneider</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Sauter2023BehindScreens</citeid>
<title>Behind the Screens: Exploring Eye Movement Visualization to Optimize Online Teaching and Learning</title>
<abstract>The effective delivery of e-learning depends on the continuous monitoring and management of student attention. While instructors in traditional classroom settings can easily assess crowd attention through gaze cues, these cues are largely unavailable in online learning environments. To address this challenge and highlight the significance of our study, we collected eye movement data from twenty students and developed four visualization methods: (a) a
heat map, (b) an ellipse map, (c) two moving bars, and (d) a vertical bar, which were overlaid on 13 instructional videos. Our results revealed unexpected preferences among the instructors. Contrary to expectations, they did not prefer the established heat map and vertical bar for live online instruction. Instead, they chose the less intrusive ellipse visualization. Nevertheless, the heat map remained the preferred choice for retrospective analysis due to its more detailed information. Importantly, all visualizations were found to be useful and to help restore emotional connections in online learning. In conclusion, our innovative visualizations of crowd attention show considerable potential for a wide range of applications, extending beyond e-learning to all online presentations and retrospective analyses. The significant results of our study underscore the critical role these visualizations will play in enhancing both the effectiveness and emotional connectedness of future e-learning experiences, thereby facilitating the educational landscape.</abstract>
<status>1</status>
<year>2023</year>
<DOI>10.1145/3603555.3603560</DOI>
<journal>Proceedings of Mensch und Computer 2023</journal>
<tags>Sauter2023BehindScreens</tags>
<file_url>t3://file?uid=480120</file_url>
<authors>
<person>
<fn>Marian</fn>
<sn>Sauter</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Bao</fn>
<sn>Xin Lin</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Anke</fn>
<sn>Huckauf</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Exploring Gesture and Gaze Proxies to Communicate Instructor’s Nonverbal Cues in Lecture Videos</title>
<abstract>Teaching via lecture video has become the defacto standard for remote education, but videos make it difficult to interpret instructors' nonverbal referencing to the content.
This is problematic, as nonverbal cues are essential for students to follow and understand a lecture.
As remedy, we explored different proxies representing instructors' pointing gestures and gaze to provide students a point of reference in a lecture video: no proxy, gesture proxy, gaze proxy, alternating proxy, and concurrent proxies.
In an online study with 100 students, we evaluated the proxies' effects on mental effort, cognitive load, learning performance, and user experience.
Our results show that the proxies had no significant effect on learning-directed aspects and that the gesture and alternating proxy achieved the highest pragmatic quality.
Furthermore, we found that alternating between proxies is a promising approach providing students with information about instructors' pointing and gaze position in a lecture video.</abstract>
<year>2023</year>
<DOI>10.1145/3544549.3585842</DOI>
<journal>In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23)</journal>
<address>New York, NY, USA</address>
<keywords>Gaze
Gesture
Education
Lecture video
Eye-tracking</keywords>
<web_url2>https://dl.acm.org/doi/10.1145/3544549.3585842</web_url2>
<file_url>t3://file?uid=476912</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Anke</fn>
<sn>Huckauf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Tiles to Move: Investigating Tile-Based Locomotion for Virtual Reality</title>
<abstract>Tile-based locomotion (TBL) is a popular locomotion technique for computer, console, and board games.
However, despite its simplicity and unconventional movement, the transfer of TBL to virtual reality (VR)
as a game platform remains unexplored. To fill this gap, we introduce TBL for VR on the example of two
techniques: a controller and a feet-based one. In a first user study, we evaluated the usability and acceptance
of the techniques compared to teleportation and touchpad locomotion. In a second exploratory user study, we
evaluated the user experience of both TBL techniques in a maze and a museum scenario. The findings show that
both techniques provide enjoyment and acceptable usability by creating either a relaxing (controller-based) or
a physically active (feet-based) solution. Finally, our results highlight that TBL techniques work particularly
well for small, constrained spaces that allow users to focus on exploring details in the nearby environment
(important for games) in contrast to large open spaces that require faster locomotion, like teleportation.</abstract>
<status>1</status>
<year>2023</year>
<DOI>10.1145/3611060</DOI>
<journal>Proc. ACM Human Computer Interaction 7, CHI PLAY, Article 414 (November 2023)</journal>
<edition>7</edition>
<web_url>https://doi.org/10.1145/3611060</web_url>
<file_url>t3://file?uid=480810</file_url>
<authors>
<person>
<fn>Jana</fn>
<sn>Funke</sn>
</person>
<person>
<fn>Anja</fn>
<sn>Schikorr</sn>
</person>
<person>
<fn>Sukran</fn>
<sn>Karaosmanoglu</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Frank</fn>
<sn>Steinicke</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>(Eco-)Logical to Compare? - Utilizing Peer Comparison to Encourage Ecological Driving in Manual and Automated Driving</title>
<year>2022</year>
<month>9</month>
<isbn>978-1-4503-9415-4</isbn>
<DOI>10.1145/3543174.3545256</DOI>
<journal>Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=459240</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Walter Italgo</fn>
<sn>Pellegrino</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Systematic Evaluation of Solutions for the Final 100m Challenge of Highly Automated Vehicles</title>
<status>1</status>
<year>2022</year>
<month>9</month>
<DOI>10.1145/3546713</DOI>
<journal>Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=459244</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Bastian</fn>
<sn>Wankmüller</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>AR4CAD: Creation and Exploration of a Taxonomy of Augmented Reality Visualization for Connected Automated Driving</title>
<year>2022</year>
<month>9</month>
<DOI>10.1145/3546712</DOI>
<journal>Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22), Joint First Authors</journal>
<publisher>ACM</publisher>
<keywords>AR4CAD</keywords>
<file_url>t3://file?uid=459245</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Müller</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Gülsemin</fn>
<sn>Dogru</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Feedback Strategies for Crowded Intersections in Automated Traffic – A Desirable Future?</title>
<year>2022</year>
<month>9</month>
<isbn>978-1-4503-9415-4</isbn>
<DOI>10.1145/3543174.3545255</DOI>
<journal>Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=459241</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Britten</sn>
</person>
<person>
<fn>Simon</fn>
<sn>Demharter</sn>
</person>
<person>
<fn>Tolga</fn>
<sn>Hisir</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Introducing VAMPIRE – Using Kinaesthetic Feedback in Virtual Reality for Automated Driving Experiments</title>
<year>2022</year>
<month>9</month>
<isbn>978-1-4503-9415-4</isbn>
<DOI>10.1145/3543174.3545252</DOI>
<journal>Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22), Joint First Authors</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=459249</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Baumann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Investigating the Effects of External Communication and Automation Behavior on Manual Drivers at Intersections</title>
<status>1</status>
<year>2022</year>
<month>9</month>
<DOI>10.1145/3546711</DOI>
<journal>Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=459243</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Tim</fn>
<sn>Fabian</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SpARklingPaper</citeid>
<title>SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen</title>
<status>1</status>
<year>2022</year>
<month>9</month>
<reviewed>1</reviewed>
<DOI>10.1145/3550337</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<volume>6</volume>
<publisher>ACM</publisher>
<chapter>113</chapter>
<series>3</series>
<pages>1-29</pages>
<web_url2>https://youtu.be/iYGCJtrQIZY</web_url2>
<file_url>t3://file?uid=459599</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Jessica</fn>
<sn>Janek</sn>
</person>
<person>
<fn>Josef</fn>
<sn>Lang</sn>
</person>
<person>
<fn>Dietmar</fn>
<sn>Puschmann</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Towards Implicit Interaction in Highly Automated Vehicles - A Systematic Literature Review</title>
<status>1</status>
<year>2022</year>
<month>9</month>
<DOI>10.1145/3546726</DOI>
<journal>Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=459248</file_url>
<authors>
<person>
<fn>Annika</fn>
<sn>Stampf</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>tochi_21_hirzle</citeid>
<title>Understanding, Addressing, and Analysing Digital Eye Strain in Virtual Reality Head-Mounted Displays</title>
<status>1</status>
<year>2022</year>
<month>8</month>
<DOI>10.1145/3492802</DOI>
<journal>ACM Transactions on Computer-Human Interaction (TOCHI)</journal>
<volume>29</volume>
<publisher>ACM</publisher>
<series>4</series>
<pages>1-80</pages>
<web_url2>https://youtu.be/ns2HwQ2p_hM</web_url2>
<file_url>t3://file?uid=456150</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Fischbach</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Karlbauer</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Design Space for Human Sensor and Actuator Focused In-Vehicle Interaction Based on a Systematic Literature Review</title>
<year>2022</year>
<month>6</month>
<DOI>10.1145/3534617</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<edition>2</edition>
<volume>6</volume>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>1-51</pages>
<web_url2>https://in-vehicle-interaction-design-space.onrender.com/</web_url2>
<file_url>t3://file?uid=457401</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>hirzle_AttentionOfManyObservers_2022</citeid>
<title>Attention of Many Observers Visualized by Eye Movements</title>
<abstract>Interacting with a group of people requires directing the attention of the whole group and thus requires feedback about the crowd’s attention. In face-to-face interactions, head and eye movements serve as indicators of crowd attention. However, when interacting online, such indicators are not available. To substitute this information, gaze visualizations were adapted for a crowd scenario. We developed, implemented, and evaluated four types of visualizations of crowd attention in an online study with 72 participants using lecture videos enriched with the audience’s gazes. All participants reported increased connectedness to the audience, especially for visualizations depicting the whole distribution of gaze including spatial information. Visualizations avoiding spatial overlay by depicting only the variability were regarded as less helpful, for real-time as well as for retrospective analyses of lectures. Improving our visualizations of crowd attention has potential for a broad variety of applications in all kinds of social interaction and communication in groups.</abstract>
<year>2022</year>
<month>6</month>
<DOI>10.1145/3517031.3529235</DOI>
<institution>Ulm University</institution>
<journal>ETRA '22: 2022 Symposium on Eye Tracking Research and Applications</journal>
<tags>hirzle_AttentionOfManyObservers_2022</tags>
<web_url2>https://www.uni-ulm.de/in/mi/hci/projects/attention-of-many-observers-visualized-by-eye-movements/</web_url2>
<file_url>t3://file?uid=463792</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Marian</fn>
<sn>Sauter</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Hummel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Anke</fn>
<sn>Huckauf</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>sauter_CanEyeMovement_2022</citeid>
<title>Can Eye Movement Synchronicity Predict Test Performance With Unreliably-Sampled Data in an Online Learning Context?</title>
<abstract>Webcam-based eye tracking promises easy and quick data collection without the need for specific or additional eye-tracking hardware. This makes it especially attractive for educational research, in particular for modern formats such as MOOCs. However, in order to fulfill its promise, webcam-based eye tracking has to overcome several challenges, most importantly varying spatial and temporal resolutions. A further challenge, faced especially by the educational domain, is that individual students are typically of interest, in contrast to average values. In this paper, we explore whether an attention measure based on the eye movement synchronicity of a group of students can be applied to unreliably-sampled data. In doing so, we aim to reproduce earlier work that showed that, on average, eye movement synchronicity can predict performance in a comprehension quiz. We were not able to reproduce the findings with unreliably-sampled data, which highlights the challenges that lie ahead for webcam-based eye tracking in practice.</abstract>
<year>2022</year>
<month>6</month>
<DOI>10.1145/3517031.3529239</DOI>
<institution>Ulm University</institution>
<journal>ETRA '22: 2022 Symposium on Eye Tracking Research and Applications</journal>
<file_url>t3://file?uid=463880</file_url>
<authors>
<person>
<fn>Marian</fn>
<sn>Sauter</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Hummel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Anke</fn>
<sn>Huckauf</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Accessibility-Related Publication Distribution in HCI Based on a Meta-Analysis</title>
<year>2022</year>
<month>5</month>
<journal>In Extended Abstracts of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=454504</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Taras</fn>
<sn>Kränzle</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Challenges of Explainability, Cooperation, and External Communication of Automated Vehicles</title>
<year>2022</year>
<month>5</month>
<journal>CHI 2022 Workshop - Engaging with Automation: Understanding and Designing for Operation, Appropriation, and Behaviour Change</journal>
<file_url>t3://file?uid=454506</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Consent in the Age of AR: Investigating The Comfort With Displaying Personal Information in Augmented Reality</title>
<status>1</status>
<year>2022</year>
<month>5</month>
<DOI>10.1145/3491102.3502140</DOI>
<journal>In Proc. of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://www.youtube.com/watch?v=kOfnshoI9XY</web_url2>
<file_url>t3://file?uid=454581</file_url>
<authors>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Pedestrian Behavior, Time Pressure, and Repeated Exposure on Crossing Decisions in Front of Automated Vehicles Equipped with External Communication</title>
<status>1</status>
<year>2022</year>
<month>5</month>
<DOI>10.1145/3491102.3517571</DOI>
<journal>In Proc. of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://www.youtube.com/watch?v=L7xv6sGweVY</web_url2>
<file_url>t3://file?uid=454505</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Elvedin</fn>
<sn>Bajrovic</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Keep it Short: A Comparison of Voice Assistants' Response Behavior</title>
<year>2022</year>
<month>5</month>
<DOI>10.1145/3491102.3517684</DOI>
<journal>In Proc. of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url2>https://www.youtube.com/watch?v=6tOWyvrFdVo</web_url2>
<file_url>t3://file?uid=455762</file_url>
<authors>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Matt</fn>
<sn>Jones</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>ShapeFindAR: Exploring In-Situ Spatial Search for Physical Artifact Retrieval using Mixed Reality</title>
<status>1</status>
<year>2022</year>
<month>5</month>
<reviewed>1</reviewed>
<DOI>10.1145/3491102.3517682</DOI>
<journal>In Proc. of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url>https://arxiv.org/abs/2203.17211</web_url>
<web_url2>https://www.youtube.com/watch?v=rc2JNFkAHx0</web_url2>
<file_url>t3://file?uid=454835</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>CollaborativeLearning</citeid>
<title>Towards Collaborative Learning in Virtual Reality: A Comparison of Co-Located Symmetric and Asymmetric Pair-Learning</title>
<status>1</status>
<year>2022</year>
<month>5</month>
<reviewed>1</reviewed>
<DOI>10.1145/3491102.3517641</DOI>
<journal>In Proc. of CHI 2022 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/_m32zfsjWcY</web_url2>
<file_url>t3://file?uid=453912</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Patrick</fn>
<sn>Albus</sn>
</person>
<person>
<fn>Simon</fn>
<sn>der Kinderen</sn>
</person>
<person>
<fn>Maximilian</fn>
<sn>Milo</sn>
</person>
<person>
<fn>Thilo</fn>
<sn>Segschneider</sn>
</person>
<person>
<fn>Linda</fn>
<sn>Chanzab</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Tina</fn>
<sn>Seufert</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Scene Detection, Scene Prediction, and Maneuver Planning Visualizations on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles</title>
<year>2022</year>
<month>4</month>
<DOI>10.1145/3534609</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=458207</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Max</fn>
<sn>Rädler</sn>
</person>
<person>
<fn>Jonas</fn>
<sn>Glimmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Requirements for the Interaction With Highly Automated Construction Site Delivery Trucks</title>
<year>2022</year>
<month>2</month>
<day>18</day>
<issn>2673-2726</issn>
<DOI>10.3389/fhumd.2022.794890</DOI>
<journal>Frontiers in Human Dynamics, section Digital Impacts</journal>
<publisher>Frontiers</publisher>
<address>Lausanne, Switzerland</address>
<web_url2>https://www.frontiersin.org/article/10.3389/fhumd.2022.794890</web_url2>
<file_url>https://oparu.uni-ulm.de/xmlui/bitstream/handle/123456789/41966/Colley_2022.pdf</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Stefanos</fn>
<sn>Mytilineos</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>stemasov2022_ephemeral-fabrication</citeid>
<title>Ephemeral Fabrication: Exploring a Ubiquitous Fabrication Scenario of Low-Effort, In-Situ Creation of Short-Lived Physical Artifacts</title>
<status>1</status>
<year>2022</year>
<month>2</month>
<day>13</day>
<reviewed>1</reviewed>
<DOI>10.1145/3490149.3501331</DOI>
<booktitle>16th Annual Conference on Tangible, Embedded and Embodied Interaction (TEI 2022)</booktitle>
<publisher>ACM</publisher>
<web_url>https://arxiv.org/abs/2112.09019</web_url>
<web_url2>https://youtu.be/31TkNNP5NRc</web_url2>
<file_url>t3://file?uid=449060</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Botner</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Mode Distinction, User Visibility, and Vehicle Appearance on Mode Confusion When Interacting With Highly Automated Vehicles</title>
<year>2022</year>
<DOI>10.1016/j.trf.2022.06.020</DOI>
<journal>Transportation Research Part F: Traffic Psychology and Behaviour</journal>
<file_url>t3://file?uid=458404</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Hummler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<title>Exploring the Social Context of Collaborative Driving</title>
<year>2022</year>
<web_url>http://arxiv.org/abs/2201.11028</web_url>
<web_url2>https://arxiv.org/abs/2201.11028</web_url2>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Pickl</sn>
</person>
<person>
<fn>Frank</fn>
<sn>Uhlig</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>User Gesticulation Inside an Autonomous Vehicle with External Communication can Cause Confusion in Pedestrians and a Lower Willingness to Cross</title>
<year>2022</year>
<issn>1369-8478</issn>
<DOI>10.1016/j.trf.2022.03.011</DOI>
<journal>Transportation Research Part F: Traffic Psychology and Behaviour</journal>
<publisher>Elsevier</publisher>
<pages>120-137</pages>
<web_url>https://www.sciencedirect.com/science/article/pii/S1369847822000523?dgcid=author</web_url>
<file_url>t3://file?uid=458209</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Bastian</fn>
<sn>Wankmüller</sn>
</person>
<person>
<fn>Tim</fn>
<sn>Mend</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Väth</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>SwiVR-Car-Seat: Exploring Vehicle Motion Effects on Interaction Quality in Virtual Reality Automated Driving Using a Motorized Swivel Seat</title>
<year>2021</year>
<month>12</month>
<DOI>10.1145/3494968</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<volume>5</volume>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>1-26</pages>
<web_url2>https://youtu.be/hvxtrgGAaUE</web_url2>
<file_url>t3://file?uid=447439</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Evaluating the Impact of Decals on Driver Stereotype Perception and Exploration of Personalization of Automated Vehicles via Digital Decals</title>
<status>1</status>
<year>2021</year>
<month>9</month>
<DOI>10.1145/3409118.3475132</DOI>
<journal>Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’21)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/jCXT337OIVE</web_url2>
<file_url>t3://file?uid=446427</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Mirjam</fn>
<sn>Lanzer</sn>
</person>
<person>
<fn>Jan Henry</fn>
<sn>Belz</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>How Should Automated Vehicles Communicate Critical Situations? A Comparative Analysis of Visualization Concepts</title>
<status>1</status>
<year>2021</year>
<month>9</month>
<DOI>10.1145/3478111</DOI>
<journal>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>t3://file?uid=446428</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Svenja</fn>
<sn>Krauß</sn>
</person>
<person>
<fn>Mirjam</fn>
<sn>Lanzer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Investigating the Effects of Feedback Communication of Autonomous Vehicles</title>
<status>1</status>
<year>2021</year>
<month>9</month>
<DOI>10.1145/3409118.3475133</DOI>
<journal>Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’21)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/P-dD9urwuGE</web_url2>
<file_url>t3://file?uid=446422</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Jan Henry</fn>
<sn>Belz</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>ORIAS: On-The-Fly Object Identification and Action Selection for Highly Automated Vehicles</title>
<status>1</status>
<year>2021</year>
<month>9</month>
<DOI>10.1145/3409118.3475134</DOI>
<journal>Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’21)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/F5DZWqwjIUo</web_url2>
<file_url>t3://file?uid=446423</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Woide</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>BeingStuck</citeid>
<title>To Be or Not to Be Stuck, or Is It a Continuum?: A Systematic Literature Review on the Concept of Being Stuck in Games</title>
<year>2021</year>
<month>9</month>
<reviewed>1</reviewed>
<DOI>10.1145/3474656</DOI>
<journal>Proc. ACM Hum.-Comput. Interact. 5, CHI PLAY, Article 229 (September 2021)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/BFDHEiZZLG8</web_url2>
<file_url>t3://file?uid=444390</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Fischbach</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>DiscreteRotationWolf</citeid>
<title>Augmenting Teleportation in Virtual Reality With Discrete Rotation Angles</title>
<abstract>Locomotion is one of the most essential interaction tasks in virtual reality (VR), with teleportation being widely accepted as the state-of-the-art locomotion technique at the time of this writing. A major drawback of teleportation is the accompanying physical rotation that is necessary to adjust the users' orientation either before or after teleportation. This is a limiting factor for tethered head-mounted displays (HMDs) and static body postures, and can induce additional simulator sickness for HMDs with three degrees of freedom (DOF) due to missing parallax cues. To avoid physical rotation, previous work proposed discrete rotation at fixed intervals (InPlace) as a controller-based technique with low simulator sickness, yet the impact of varying intervals on spatial disorientation, user presence, and performance remains to be explored.
An unevaluated technique found in commercial VR games is reorientation during the teleportation process (TeleTurn), which prevents physical rotation but potentially increases interaction time due to its continuous orientation selection. In an exploratory user study, where participants were free to apply both techniques, we evaluated the impact of the rotation parameters of either technique on user performance and preference. Our results indicate that discrete InPlace rotation introduced no significant spatial disorientation, while user presence scores were increased. Discrete TeleTurn and teleportation without rotation were ranked higher and achieved a higher presence score than continuous TeleTurn, which is the current state of the art found in VR games.
Based on the observation that participants avoided TeleTurn rotation when discrete InPlace rotation was available, we distilled guidelines for designing teleportation without physical rotation.</abstract>
<year>2021</year>
<month>6</month>
<day>08</day>
<howpublished>Arxiv.org</howpublished>
<web_url>http://arxiv.org/abs/2106.04257</web_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Laura</fn>
<sn>Bottner</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>chi21_ssq</citeid>
<title>A Critical Assessment of the Use of SSQ as a Measure of General Discomfort in VR Head-Mounted Displays</title>
<year>2021</year>
<month>5</month>
<DOI>10.1145/3411764.3445361</DOI>
<journal>In Proc. of CHI 2021 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url>https://youtu.be/4UkAeAtENKo</web_url>
<file_url>t3://file?uid=437948</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Maurice</fn>
<sn>Cordts</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Taxonomy of Vulnerable Road Users for HCI Based On A Systematic Literature Review</title>
<year>2021</year>
<month>5</month>
<DOI>10.1145/3411764.3445480</DOI>
<journal>In Proc. of CHI 2021 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url>https://youtu.be/3aWUYZUr6xI</web_url>
<file_url>t3://file?uid=435432</file_url>
<note>Holländer and Colley are Joint First Authors</note>
<authors>
<person>
<fn>Kai</fn>
<sn>Holländer</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Butz</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles</title>
<year>2021</year>
<month>5</month>
<DOI>10.1145/3411764.3445351</DOI>
<journal>In Proc. of CHI 2021 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url>https://youtu.be/xaTmurTw9TY</web_url>
<file_url>t3://file?uid=435434</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Benjamin</fn>
<sn>Eder</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Exploring Augmented Visual Alterations in Interpersonal Communication</title>
<year>2021</year>
<month>5</month>
<DOI>10.1145/3411764.3445597</DOI>
<journal>In Proc. of CHI 2021 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://www.youtube.com/watch?v=Mhlem-U439Q</web_url2>
<file_url>t3://file?uid=435433</file_url>
<authors>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Yannick</fn>
<sn>Etzel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Feels Like Team Spirit: Biometric and Strategic Interdependence in Asymmetric Multiplayer VR Games</title>
<year>2021</year>
<month>5</month>
<reviewed>1</reviewed>
<DOI>10.1145/3411764.3445492</DOI>
<journal>In Proc. of CHI 2021 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/YMkVznNuxxE</web_url2>
<file_url>t3://file?uid=444330</file_url>
<authors>
<person>
<fn>Shyukryan</fn>
<sn>Karaosmanoglu</sn>
</person>
<person>
<fn>Katja</fn>
<sn>Rogers</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Lennart E.</fn>
<sn>Nacke</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>WorkshopCHI2021Drey</citeid>
<title>Questionnaires and Qualitative Feedback Methods to Measure User Experience in Mixed Reality</title>
<year>2021</year>
<month>5</month>
<booktitle>CHI 2021 Workshop - Evaluating User Experiences in Mixed Reality</booktitle>
<web_url>https://arxiv.org/abs/2104.06221</web_url>
<web_url2>https://youtu.be/wRPfIhUyFl4</web_url2>
<file_url>https://arxiv.org/abs/2104.06221</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Increasing Pedestrian Safety Using External Communication of Autonomous Vehicles for Signalling Hazards</title>
<status>1</status>
<year>2021</year>
<DOI>10.1145/3447526.3472024</DOI>
<journal>Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/cBzJ0s9R_f0</web_url2>
<file_url>t3://file?uid=446424</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Surong</fn>
<sn>Li</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Investigating the Design of Information Presentation in Take-Over Requests in Automated Vehicles</title>
<status>1</status>
<year>2021</year>
<DOI>10.1145/3447526.3472025</DOI>
<journal>Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/bj3rU8Nho3U</web_url2>
<file_url>t3://file?uid=446425</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Lukas</fn>
<sn>Gruler</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Woide</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Resync: Towards Transferring Somnolent Passengers to Consciousness</title>
<status>1</status>
<year>2021</year>
<DOI>10.1145/3447527.3474847</DOI>
<journal>23rd International Conference on Mobile Human-Computer Interaction Adjunct Proceedings</journal>
<web_url2>https://youtu.be/1yH9XFIHz34</web_url2>
<file_url>t3://file?uid=464061</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Sabrina</fn>
<sn>Böhm</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Lahmann</sn>
</person>
<person>
<fn>Luca</fn>
<sn>Porta</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>stemasovUbiquitousFabrication2021</citeid>
<title>The Road to Ubiquitous Personal Fabrication: Modeling-free Instead of Increasingly Simple</title>
<abstract>The tools for personal digital fabrication (DF) are on the verge of reaching mass adoption beyond technology enthusiasts, empowering consumers to fabricate personalized artifacts. We argue that to achieve similar outreach and impact as personal computing, personal fabrication research may have to venture beyond ever-simpler interfaces for creation, toward lowest-effort workflows for remixing. We surveyed novice-friendly DF workflows from the perspective of HCI. Through this survey, we found two distinct approaches to this challenge: 1) simplifying expert modeling tools (AutoCAD → Tinkercad) and 2) enriching tools not involving primitive-based modeling with powerful customization (e.g., Thingiverse). Drawing a parallel to content creation domains such as photography, we argue that the bulk of content is created via remixing (2). In this article, we argue that to be able to include the majority of the population in DF, research should embrace omission of workflow steps, shifting toward automation, remixing, and templates, instead of modeling from the ground up.</abstract>
<status>1</status>
<year>2021</year>
<reviewed>1</reviewed>
<issn>1558-2590</issn>
<DOI>10.1109/MPRV.2020.3029650</DOI>
<booktitle>Special Issue on Pervasive Manufacturing: Making Anything, Anywhere</booktitle>
<journal>IEEE Pervasive Computing, Special Issue on Pervasive Manufacturing: Making Anything, Anywhere</journal>
<edition>20</edition>
<volume>1</volume>
<publisher>IEEE</publisher>
<pages>1-9</pages>
<web_url>https://arxiv.org/abs/2101.02467</web_url>
<web_url2>https://youtu.be/W9sWo4lM40s</web_url2>
<file_url>t3://file?uid=435019</file_url>
<note>accepted for publication</note>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>poster</bibtype>
<citeid>graphical_markers_wolf</citeid>
<title>Exploring the Performance of Graphically Designed AR Markers</title>
<abstract>The design of graphical augmented reality (AR) markers requires a compromise between aesthetic appearance and tracking reliability. To investigate the topic, we created a virtual reality (VR) pipeline to evaluate marker performance, and validated it against real-world performance for a set of graphical AR markers. We report that, with the well-known Vuforia framework and typical smartphone hardware, well-designed 20×20 cm markers can be tracked at distances of up to 68 cm. We note that the number of feature points is particularly important to a marker's angular performance.</abstract>
<year>2020</year>
<month>11</month>
<day>23</day>
<reviewed>1</reviewed>
<DOI>10.1145/3428361.3432076</DOI>
<booktitle>19th International Conference on Mobile and Ubiquitous Multimedia</booktitle>
<publisher>ACM</publisher>
<series>MUM 2020</series>
<pages>317–319</pages>
<file_url>t3://file?uid=432458</file_url>
<authors>
<person>
<fn>Ashley</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Klaus</fn>
<sn>Kammerer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>Jansen_ShARe_2020</citeid>
<title>ShARe: Enabling Co-Located Asymmetric Multi-User Interaction for Augmented Reality Head-Mounted Displays</title>
<abstract>Head-Mounted Displays (HMDs) are the dominant form of enabling Virtual Reality (VR) and Augmented Reality (AR) for personal use. One of the biggest challenges of HMDs is the exclusion of people in the vicinity, such as friends or family. While recent research on asymmetric interaction for VR HMDs has contributed to solving this problem in the VR domain, AR HMDs come with similar but also different problems, such as conflicting information in visualization through the HMD and projection. In this work, we propose ShARe, a modified AR HMD combined with a projector that can display augmented content onto planar surfaces to include outside users (non-HMD users). To combat the challenge of conflicting visualization between augmented and projected content, ShARe visually aligns the content presented through the AR HMD with the projected content using an internal calibration procedure and a servo motor. Using marker tracking, non-HMD users are able to interact with the projected content using touch and gestures. To further explore the arising design space, we implemented three types of applications (collaborative game, competitive game, and external visualization). ShARe is a proof-of-concept system that showcases how AR HMDs can facilitate interaction with outside users to combat exclusion and instead foster rich, enjoyable social interactions.</abstract>
<status>1</status>
<year>2020</year>
<month>10</month>
<day>20</day>
<reviewed>1</reviewed>
<DOI>10.1145/3379337.3415843</DOI>
<journal>In Proc. of UIST '20 (ACM Symposium on User Interface Software and Technology)</journal>
<publisher>ACM</publisher>
<event_name>ACM Symposium on User Interface Software and Technology (UIST)</event_name>
<event_place>Minneapolis, Minnesota, USA</event_place>
<web_url2>https://youtu.be/6wuBt2Nv18w</web_url2>
<file_url>t3://file?uid=430150</file_url>
<authors>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Fischbach</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>"They Like to Hear My Voice": Exploring Usage Behavior in Speech-Based Mobile Instant Messaging</title>
<status>1</status>
<year>2020</year>
<month>10</month>
<DOI>10.1145/3379503.3403561</DOI>
<journal>In Proc. of MobileHCI 2020 (International Conference on Human-Computer Interaction with Mobile Devices and Services)</journal>
<publisher>ACM</publisher>
<event_name>International Conference on Human-Computer Interaction with Mobile Devices and Services (Mobile HCI)</event_name>
<file_url>t3://file?uid=429435</file_url>
<authors>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>VoiceMessage++: Exploring the Concept of Augmented Voice Messages</title>
<status>1</status>
<year>2020</year>
<month>10</month>
<DOI>10.1145/3379503.3403560</DOI>
<journal>In Proc. of MobileHCI 2020 (International Conference on Human-Computer Interaction with Mobile Devices and Services)</journal>
<publisher>ACM</publisher>
<event_name>International Conference on Human-Computer Interaction with Mobile Devices and Services (Mobile HCI)</event_name>
<file_url>t3://file?uid=434664</file_url>
<authors>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>A Design Space for External Communication of Autonomous Vehicles</title>
<year>2020</year>
<month>9</month>
<DOI>10.1145/3409120.3410646</DOI>
<journal>Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://youtu.be/127RxzS6iyA</web_url2>
<file_url>t3://file?uid=427230</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Effect of Visualization of Pedestrian Intention Recognition on Trust and Cognitive Load</title>
<year>2020</year>
<month>9</month>
<DOI>10.1145/3409120.3410648</DOI>
<journal>Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://youtu.be/aGWgtVbY1Lc</web_url2>
<file_url>t3://file?uid=435445</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Bräuner</sn>
</person>
<person>
<fn>Mirjam</fn>
<sn>Lanzer</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Baumann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Evaluating Highly Automated Trucks as Signaling Lights</title>
<year>2020</year>
<month>9</month>
<DOI>10.1145/3409120.3410647</DOI>
<journal>Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20)</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url2>https://youtu.be/SNYoO-jUwl4</web_url2>
<file_url>t3://file?uid=427232</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Stefanos</fn>
<sn>Mytilineos</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Using large-scale augmented floor surfaces for industrial applications and evaluation on perceived sizes</title>
<abstract>Large high-resolution displays (LHRDs) provide an enabling technology to achieve immersive, isometrically registered, virtual environments. It has been shown that LHRDs allow better size judgments, higher collaboration performance, and shorter task completion times. This paper presents novel insights into human size perception using large-scale floor displays, in particular in-depth evaluations of size judgment accuracy, precision, and task completion time. These investigations have been performed in the context of six novel applications in the domain of automotive production planning. In our studies, we used a 54 m² LED floor and a standard tablet visualizing relatively scaled and true-to-scale 2D content, which users had to estimate using different aids. The study involved 22 participants and three different conditions. Results indicate that true-to-scale floor visualizations reduce the mean absolute percentage error of spatial estimations. In all three conditions, we did not find the typical overestimation or underestimation of size judgments.</abstract>
<year>2020</year>
<month>8</month>
<day>11</day>
<DOI>10.1007/s00779-020-01433-z</DOI>
<journal>Personal and Ubiquitous Computing</journal>
<publisher>Springer</publisher>
<file_url>t3://file?uid=434666</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Eva</fn>
<sn>Lampen</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Zachmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>haasIAMR2020</citeid>
<title>Interactive Auditory Mediated Reality: Towards User-defined Personal Soundscapes</title>
<year>2020</year>
<month>7</month>
<day>6</day>
<reviewed>1</reviewed>
<DOI>10.1145/3357236.3395493</DOI>
<journal>In Proc. of DIS 2020 (ACM Conference on Designing Interactive Systems)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=434665</file_url>
<authors>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>etra20_survey</citeid>
<title>A Survey of Digital Eye Strain in Gaze-Based Interactive Systems</title>
<year>2020</year>
<month>6</month>
<isbn>9781450371339</isbn>
<DOI>10.1145/3379155.3391313</DOI>
<booktitle>ACM Symposium on Eye Tracking Research and Applications</booktitle>
<journal>ETRA '20 Full Papers: ACM Symposium on Eye Tracking Research and Applications</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=424288</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Maurice</fn>
<sn>Cordts</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>WorkshopCHI2020Drey</citeid>
<title>Discussing the Risks of Adaptive Virtual Environments for User Autonomy</title>
<year>2020</year>
<month>4</month>
<booktitle>CHI 2020 Workshop - Exploring Potentially Abusive Ethical, Social and Political Implications of Mixed Reality Research in HCI</booktitle>
<file_url>t3://file?uid=433711</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>JumpVR</citeid>
<title>JumpVR: Jump-Based Locomotion Augmentation for Virtual Reality</title>
<year>2020</year>
<month>4</month>
<reviewed>1</reviewed>
<DOI>10.1145/3313831.3376243</DOI>
<journal>In Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/cWBxh3I5lHg</web_url2>
<file_url>t3://file?uid=422939</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Katja</fn>
<sn>Rogers</sn>
</person>
<person>
<fn>Christoph</fn>
<sn>Kunder</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>MixMatch2020</citeid>
<title>Mix&amp;Match: Towards Omitting Modelling through In-Situ Alteration and Remixing of Model Repository Artifacts in Mixed Reality</title>
<status>1</status>
<year>2020</year>
<month>4</month>
<reviewed>1</reviewed>
<DOI>10.1145/3313831.3376839</DOI>
<journal>In Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url>https://arxiv.org/abs/2003.09169</web_url>
<web_url2>https://youtu.be/B5EnkIk9ZFY</web_url2>
<file_url>t3://file?uid=423723</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Towards a Design Space for External Communication of Autonomous Vehicles</title>
<status>1</status>
<year>2020</year>
<month>4</month>
<DOI>10.1145/3334480.3382844</DOI>
<journal>In Extended Abstracts of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=435449</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Towards Inclusive External Communication of Autonomous Vehicles for Pedestrians with Vision Impairments</title>
<year>2020</year>
<month>4</month>
<DOI>10.1145/3313831.3376472</DOI>
<journal>In Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<keywords>honmention2020</keywords>
<web_url2>https://youtu.be/1L7zTJ86PE8</web_url2>
<file_url>t3://file?uid=423724</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Ali</fn>
<sn>Askari</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Towards Progress Assessment for Adaptive Hints in Educational Virtual Reality Games</title>
<status>1</status>
<year>2020</year>
<month>4</month>
<DOI>10.1145/3334480.3382789</DOI>
<journal>In Extended Abstracts of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/3uW-NBEatTg</web_url2>
<file_url>t3://file?uid=435010</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Jansen</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Fischbach</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Heisenberg</citeid>
<title>Understanding the Heisenberg Effect of Spatial Interaction: A Selection Induced Error for Spatially Tracked Input Devices</title>
<year>2020</year>
<month>4</month>
<reviewed>1</reviewed>
<DOI>10.1145/3313831.3376876</DOI>
<journal>In Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/HwDPTof3BUo</web_url2>
<file_url>t3://file?uid=422938</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Marco</fn>
<sn>Combosch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Unveiling the Lack of Scalability in Research on External Communication of Autonomous Vehicles</title>
<status>1</status>
<year>2020</year>
<month>4</month>
<DOI>10.1145/3334480.3382865</DOI>
<journal>In Extended Abstracts of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=435448</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>VRSketchIn</citeid>
<title>VRSketchIn: Exploring the Design Space of Pen and Tablet Interaction for 3D Sketching in Virtual Reality</title>
<status>1</status>
<year>2020</year>
<month>4</month>
<reviewed>1</reviewed>
<DOI>10.1145/3313831.3376628</DOI>
<journal>In Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/XhMiRWOS8PA</web_url2>
<file_url>t3://file?uid=422349</file_url>
<authors>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Karlbauer</sn>
</person>
<person>
<fn>Maximilian</fn>
<sn>Milo</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inbook</bibtype>
<title>Combining heterogeneous digital human simulations: presenting a novel co-simulation approach for incorporating different character animation technologies</title>
<year>2020</year>
<month>1</month>
<day>22</day>
<DOI>10.1007/s00371-020-01792-x</DOI>
<booktitle>The Visual Computer</booktitle>
<journal>The Visual Computer</journal>
<publisher>Springer</publisher>
<web_url>https://link.springer.com/article/10.1007/s00371-020-01792-x</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2020/2020_gaisbauer_the_visual_computer_Co_Simulation_preprint.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Eva</fn>
<sn>Lampen</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Telewalk: Towards Free and Endless Walking in Room-Scale Virtual Reality</title>
<year>2020</year>
<DOI>10.1145/3313831.3376821</DOI>
<journal>Proc. of CHI 2020 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/qyWLkedJtjw</web_url2>
<file_url>t3://file?uid=423725</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Dreja</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>ColleyM2020</citeid>
<title>Towards Reducing Energy Waste through Usage of External Communication of Autonomous Vehicles</title>
<year>2020</year>
<booktitle>CHI 2020 Workshop - Should I Stay or Should I Go? Automated Vehicles in the Age of Climate Change</booktitle>
<file_url>t3://file?uid=423728</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>WalchW2020</citeid>
<title>Turn Drivers Into Users and Keep Them Out-Of-The-Loop to Save Energy</title>
<year>2020</year>
<booktitle>CHI 2020 Workshop - Should I Stay or Should I Go? Automated Vehicles in the Age of Climate Change</booktitle>
<file_url>t3://file?uid=423731</file_url>
<authors>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>cARe_Demo</citeid>
<title>A Demonstration of cARe: An Augmented Reality Support System for Geriatric Inpatients with Mild Cognitive Impairment</title>
<year>2019</year>
<month>11</month>
<DOI>10.1145/3365610.3368472</DOI>
<journal>In Adj. Proc. of MUM19 (18th International Conference on Mobile and Ubiquitous Multimedia)</journal>
<publisher>ACM</publisher>
<web_url2>https://www.uni-ulm.de/index.php?id=100473</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/a58-wolf.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Besserer</sn>
</person>
<person>
<fn>Karolina</fn>
<sn>Sejunaite</sn>
</person>
<person>
<fn>Anja</fn>
<sn>Schuler</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Riepe</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>cARe_MUM</citeid>
<title>cARe: An Augmented Reality Support System for Geriatric Inpatients with Mild Cognitive Impairment</title>
<year>2019</year>
<month>11</month>
<DOI>10.1145/3365610.3365612</DOI>
<journal>In Proc. of MUM19 (18th International Conference on Mobile and Ubiquitous Multimedia)</journal>
<publisher>ACM</publisher>
<web_url2>https://www.uni-ulm.de/index.php?id=100473</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/a2-wolf.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Besserer</sn>
</person>
<person>
<fn>Karolina</fn>
<sn>Sejunaite</sn>
</person>
<person>
<fn>Anja</fn>
<sn>Schuler</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Riepe</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Rietzler_VirtualMuscleForce</citeid>
<title>Virtual Muscle Force: Communicating Kinesthetic Forces Through Pseudo-Haptic Feedback and Muscle Input</title>
<year>2019</year>
<month>10</month>
<day>20</day>
<DOI>10.1145/3332165.3347871</DOI>
<journal>In Proc. of UIST 2019 (ACM Symposium on User Interface Software and Technology)</journal>
<publisher>ACM</publisher>
<web_url2>https://youtu.be/4pfR75jP3oE</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/uist19-rietzler_virtualMuscleForce.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Dreja</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>EEG</citeid>
<title>Low-Cost Real-Time Mental Load Adaptation for Augmented Reality Instructions - A Feasibility Study</title>
<year>2019</year>
<month>10</month>
<DOI>10.1109/ISMAR-Adjunct.2019.00015</DOI>
<journal>In Adj. Proc. of ISMAR 2019 (2019 IEEE International Symposium on Mixed and Augmented Reality)</journal>
<publisher>IEEE</publisher>
<file_url>t3://file?uid=414589</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>MIG2019</citeid>
<title>Natural Posture Blending Using Deep Neural Networks</title>
<status>1</status>
<year>2019</year>
<month>10</month>
<DOI>10.1145/3359566.3360052</DOI>
<journal>In Proc. of MIG 2019 (ACM Siggraph Conference on Motion, Interaction and Games)</journal>
<publisher>ACM</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/a2-gaisbauer.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Jannes</fn>
<sn>Lehwald</sn>
</person>
<person>
<fn>Janis</fn>
<sn>Sprenger</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>For a Better (Simulated) World: Considerations for VR in External Communication Research</title>
<year>2019</year>
<month>9</month>
<DOI>10.1145/3349263.3351523</DOI>
<journal>In Adj. Proc. of AutomotiveUI ’19 (11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)</journal>
<publisher>ACM</publisher>
<file_url>t3://file?uid=435447</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Including People with Impairments from the Start: External Communication of Autonomous Vehicles</title>
<year>2019</year>
<month>9</month>
<DOI>10.1145/3349263.3351521</DOI>
<journal>In Adj. Proc. of AutomotiveUI ’19 (11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)</journal>
<publisher>ACM</publisher>
<keywords>Self-driving vehicles, autonomous vehicles, impaired pedestrians, external communication, intention communication, interface design, inclusive design research, accessibility</keywords>
<file_url>t3://file?uid=414341</file_url>
<authors>
<person>
<fn>Mark</fn>
<sn>Colley</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>HockBKRB2019</citeid>
<title>Towards Opt-Out Permission Policies to Maximize the Use of Automated Driving</title>
<year>2019</year>
<month>9</month>
<DOI>10.1145/3342197.3344521</DOI>
<journal>In Proc. of AutomotiveUI 2019 (ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications)</journal>
<event_name>11th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications</event_name>
<event_place>Utrecht, Netherlands</event_place>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/2019_hock_automotiveUI.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Franziska</fn>
<sn>Babel</sn>
</person>
<person>
<fn>Johannes</fn>
<sn>Kraus</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Baumann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>FaceOnIsmar</citeid>
<title>Face/On: Multi-Modal Haptic Feedback for Head-Mounted Displays in Virtual Reality</title>
<year>2019</year>
<month>8</month>
<issn>1077-2626</issn>
<DOI>10.1109/TVCG.2019.2932215</DOI>
<journal>IEEE Transactions on Visualization and Computer Graphics (TVCG)</journal>
<event_name>ISMAR 2019</event_name>
<event_place>Beijing, China</event_place>
<web_url2>https://www.uni-ulm.de/index.php?id=98825</web_url2>
<file_url>t3://file?uid=439057</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Leo</fn>
<sn>Hnatek</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>otto_2019_VR</citeid>
<title>A Virtual Reality Assembly Assessment Benchmark for Measuring VR Performance &amp; Limitations</title>
<year>2019</year>
<month>6</month>
<journal>In Proc. of CIRP CMS 2019 (52nd CIRP Conference on Manufacturing Systems)</journal>
<web_url>https://doi.org/10.1016/j.procir.2019.03.195</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/2019_otto_VR2A.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Eva</fn>
<sn>Lampen</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Mareike</fn>
<sn>Langohr</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Zachmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>otto_2019_EAWS</citeid>
<title>Applicability Evaluation of Kinect for EAWS Ergonomic Assessments</title>
<year>2019</year>
<month>6</month>
<journal>In Proc. of CIRP CMS 2019 (52nd CIRP Conference on Manufacturing Systems)</journal>
<publisher>Elsevier</publisher>
<web_url>https://doi.org/10.1016/j.procir.2019.03.194</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/2019_otto_EAWS.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Eva</fn>
<sn>Lampen</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Auris</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>otto_2019_perdis</citeid>
<title>Evaluation on Perceived Sizes Using Large-Scale Augmented Floor Visualization Devices</title>
<year>2019</year>
<month>6</month>
<journal>In Proc. of PerDis 2019 (8th ACM International Symposium on Pervasive Displays)</journal>
<publisher>ACM</publisher>
<web_url>https://doi.org/10.1145/3321335.3324951</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/2019_otto_PERDIS.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Eva</fn>
<sn>Lampen</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Mareike</fn>
<sn>Langohr</sn>
</person>
<person>
<fn>Gerald</fn>
<sn>Masan</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>chi19_designspaceforgazeinteraction</citeid>
<title>A Design Space for Gaze Interaction on Head-Mounted Displays</title>
<year>2019</year>
<month>5</month>
<DOI>10.1145/3290605.3300855</DOI>
<journal>In Proceedings of CHI 2019 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<publisher>ACM</publisher>
<web_url>https://www.uni-ulm.de/?gazedesignspace</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/hirzle/Publications/A_Design_Space_for_Gaze_Interaction_on_Head-Mounted_Displays_CHI_19.pdf</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>chi19_hirzle_positionpaper</citeid>
<title>On the Importance of Visual (Digital) Wellbeing for HMDs</title>
<abstract>Most digital devices are screen-based devices, and as such our eyes are very much in demand when consuming digital content. This is especially important for augmented and virtual reality (AR/VR) head-mounted displays (HMDs) that are entering the consumer market, bringing digital displays even closer to the eyes. The influence of looking at digital screens for the majority of our waking time already manifests itself in an increased occurrence of the computer vision syndrome (CVS). In this position paper, we therefore propose to design content for HMDs explicitly around the unique properties and abilities of the human eye and the visual system to avoid visual discomfort or even possible impairments. Here we focus on concepts for implicitly integrating eye health features as visual digital wellbeing features into content design for HMDs.</abstract>
<year>2019</year>
<month>5</month>
<journal>In Proc. of CHI 2019 Workshop on Designing for Digital Wellbeing</journal>
<web_url>https://digitalwellbeingworkshop.wordpress.com</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/hirzle/Publications/On_the_Importance_of_Visual__Digital__Wellbeing_for_HMDs_CHI_19_Workshop_on_Digital_Wellbeing.pdf</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>grapp_2019</citeid>
<title>Proposing a Co-simulation Model for Coupling Heterogeneous Character Animation Systems</title>
<year>2019</year>
<month>2</month>
<DOI>10.5220/0007356400650076</DOI>
<journal>In Proc. of GRAPP 2019 (14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/Gaisbauer_GRAPP_2019_18_CR.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Jannes</fn>
<sn>Lehwald</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Julia</fn>
<sn>Sues</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<title>Playing Guardian Angel: Using a Gamified Approach to Overcome the Overconfidence Bias in Driving</title>
<status>1</status>
<year>2019</year>
<reviewed>1</reviewed>
<DOI>10.1145/3365610.3365614</DOI>
<journal>In Proc. of MUM 2019 (18th International Conference on Mobile and Ubiquitous Multimedia)</journal>
<publisher>ACM</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2019/a12-maurer.pdf</file_url>
<authors>
<person>
<fn>Steffen</fn>
<sn>Maurer</sn>
</person>
<person>
<fn>Lara</fn>
<sn>Scatturin</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>cgf_2018</citeid>
<title>A Probabilistic Steering Parameter Model for Deterministic Motion Planning Algorithms</title>
<year>2018</year>
<month>11</month>
<day>26</day>
<DOI>10.1111/cgf.13591</DOI>
<journal>Computer Graphics Forum</journal>
<publisher>Wiley</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/agethen.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>CantYouHearMe</citeid>
<title>Can’t You Hear Me? Investigating Personal Soundscape Curation</title>
<year>2018</year>
<month>11</month>
<DOI>10.1145/3282894.3282897</DOI>
<journal>In Proc. of MUM 2018 (17th International Conference on Mobile and Ubiquitous Multimedia)</journal>
<publisher>ACM</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Can_t_you_hear_me__MUM_2018.pdf</file_url>
<authors>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>mig_18_tap</citeid>
<title>Counterbalancing Virtual Reality Induced Temporal Disparities of Human Locomotion for the Manufacturing Industry</title>
<year>2018</year>
<month>11</month>
<DOI>10.1145/3274247.3274517</DOI>
<journal>In Proc. of MIG 2018 (ACM SIGGRAPH Conference on Motion, Interaction and Games)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/MIG18_paper_31.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Max</fn>
<sn>Link</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Thies</fn>
<sn>Pfeiffer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>mig_18_motion_planning</citeid>
<title>Towards Realistic Walk Path Simulation of Single Subjects: Presenting a Probabilistic Motion Planning Algorithm</title>
<year>2018</year>
<month>11</month>
<DOI>10.1145/3274247.3274504</DOI>
<journal>In Proc. of MIG 2018 (ACM SIGGRAPH Conference on Motion, Interaction and Games)</journal>
<publisher>ACM</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/MIG18_paper_14.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Manns</sn>
</person>
<person>
<fn>Max</fn>
<sn>Link</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Stemasov:AuditoryReality2018</citeid>
<title>Augmenting Human Hearing Through Interactive Auditory Mediated Reality</title>
<abstract>To filter and shut out an increasingly loud environment, many resort to the use of personal audio technology. They drown out unwanted sounds by wearing headphones. This uniform interaction with all surrounding sounds can have a negative impact on social relations and situational awareness. Leveraging mediation through smarter headphones, users gain more agency over their sense of hearing: for instance, by being able to selectively alter the volume and other features of specific sounds, without losing the ability to add media. In this work, we propose the vision of interactive auditory mediated reality (AMR). To understand users' attitudes and requirements, we conducted a week-long event sampling study (n = 12), in which users recorded and rated sources (n = 225) that they would like to mute, amplify, or turn down. The results indicate that besides muting, a distinct, "quiet-but-audible" volume exists. It caters to two requirements at the same time: aesthetics/comfort and information acquisition.</abstract>
<year>2018</year>
<month>10</month>
<day>15</day>
<DOI>10.1145/3266037.3266104</DOI>
<journal>In Adj. Proc. of UIST '18 (ACM Symposium on User Interface Software and Technology)</journal>
<event_place>Berlin</event_place>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/AugmentedHearing_Poster_UIST2018.pdf</file_url>
<authors>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Dobbelstein:2018:Movelet</citeid>
<title>Movelet: A Self-Actuated Movable Bracelet for Positional Haptic Feedback on the User's Forearm</title>
<year>2018</year>
<month>10</month>
<day>8</day>
<DOI>10.1145/3267242.3267249</DOI>
<journal>In Proc. of ISWC 2018 (2018 ACM International Symposium on Wearable Computers)</journal>
<web_url>https://youtu.be/HSHAU5eqtZA</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/movelet.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Besserer</sn>
</person>
<person>
<fn>Irina</fn>
<sn>Stenske</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Dreja:DemoVRSpinning2018</citeid>
<title>A Demonstration of VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR</title>
<year>2018</year>
<month>10</month>
<DOI>10.1145/3266037.3271645</DOI>
<journal>In Adj. Proc. of UIST '18 (ACM Symposium on User Interface Software and Technology)</journal>
<web_url2>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/vrspinning/</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Demo_SpinVR_2_.pdf</file_url>
<authors>
<person>
<fn>Thomas</fn>
<sn>Dreja</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>cARe</citeid>
<title>cARe: An Augmented Reality Support System for Dementia Patients</title>
<year>2018</year>
<month>10</month>
<DOI>10.1145/3266037.3266095</DOI>
<journal>In Adj. Proc. of UIST '18 (ACM Symposium on User Interface Software and Technology)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/cARePosterAbstract.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Besserer</sn>
</person>
<person>
<fn>Karolina</fn>
<sn>Sejunaite</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Riepe</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>FaceOn</citeid>
<title>Face/On: Actuating the Facial Contact Area of a Head-Mounted Display for Increased Immersion</title>
<year>2018</year>
<month>10</month>
<DOI>10.1145/3266037.3271631</DOI>
<journal>In Adj. Proc. of UIST '18 (ACM Symposium on User Interface Software and Technology)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/FaceOnNew.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Leo</fn>
<sn>Hnatek</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>RethinkingRDW</citeid>
<title>Rethinking Redirected Walking: On the Use of Curvature Gains Beyond Perceptual Limitations and Revisiting Bending Gains</title>
<year>2018</year>
<month>10</month>
<journal>In Proc. of ISMAR 2018 (IEEE International Symposium on Mixed and Augmented Reality)</journal>
<web_url>https://doi.org/10.1109/ISMAR.2018.00041</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/rietzler/RedirectedWalking.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Deubzer</sn>
</person>
<person>
<fn>Eike</fn>
<sn>Langbehn</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Snapband</citeid>
<title>SnapBand: A Flexible Multi-Location Touch Input Band</title>
<year>2018</year>
<month>10</month>
<DOI>10.1145/3267242.3267248</DOI>
<journal>In Proc. of ISWC 2018 (2018 ACM International Symposium on Wearable Computers)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/snapband.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Arnold</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>pietron2018_studydesign</citeid>
<title>Study Design Template for Identifying Usability Issues in Graphical Modeling Tools</title>
<year>2018</year>
<month>10</month>
<journal>2nd Workshop on Tools for Model Driven Engineering (MDETools'18) at MODELS'18</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/mdetools18.pdf</file_url>
<authors>
<person>
<fn>Jakob</fn>
<sn>Pietron</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Raschke</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Stegmaier</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Tichy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Hirzle:SymbioticHMS2018</citeid>
<title>Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction</title>
<abstract>Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.</abstract>
<year>2018</year>
<month>10</month>
<DOI>10.1145/3266037.3266119</DOI>
<journal>In Adj. Proc. of UIST '18 (ACM Symposium on User Interface Software and Technology)</journal>
<keywords>3D gaze; eye-based interaction; human-machine symbiosis</keywords>
<web_url>https://www.uni-ulm.de/?hm_depthsensor</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/3DGazeAbstractUIST2018_both.pdf</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bulling</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>MaurerKKEGR</citeid>
<title>Designing a Guardian Angel: Giving an Automated Vehicle the Possibility to Override its Driver</title>
<year>2018</year>
<month>9</month>
<DOI>10.1145/3239060.3239078</DOI>
<journal>In Proc. of AutomotiveUI ’18 (10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Maurer-1073.pdf</file_url>
<authors>
<person>
<fn>Steffen</fn>
<sn>Maurer</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Kuhnert</sn>
</person>
<person>
<fn>Issam</fn>
<sn>Kraiem</sn>
</person>
<person>
<fn>Rainer</fn>
<sn>Erbach</sn>
</person>
<person>
<fn>Petra</fn>
<sn>Grimm</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>HockKBWRB2018</citeid>
<title>How to Design Valid Simulator Studies for Investigating User Experience in Automated Driving – Review and Hands-On Considerations</title>
<abstract>Simulator studies have been conducted in the automotive domain since the 1960s. Recently, automated driving studies have become more popular as real-world automated cars start to emerge, but at this time not all levels of automation can be realized. Since a simulation does not entail all details of real driving, creating a realistic simulation experience - both on a psychological and a physical level - poses recurring challenges. These include, among others: sample acquisition, simulator sickness, simulator training, interface design, take-over requests, and secondary tasks in automated driving simulator studies.
In this paper, we review existing literature and summarize important lessons from simulations in the domain of driving automation to provide considerations for studies investigating driver behavior in the age of highly automated driving.</abstract>
<year>2018</year>
<month>9</month>
<DOI>10.1145/3239060.3239066</DOI>
<journal>In Proc. of AutomotiveUI ’18 (10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)</journal>
<event_name>Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’18)</event_name>
<event_place>Toronto - Canada</event_place>
<keywords>Automated driving; driving simulator; secondary task; simulator sickness; interface design; user studies</keywords>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/AutoUI2018_TowardsAutomatedDriving.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Johannes</fn>
<sn>Kraus</sn>
</person>
<person>
<fn>Franziska</fn>
<sn>Babel</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Baumann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Pointing</citeid>
<title>Towards Accurate Cursorless Pointing: The Effects of Ocular Dominance and Handedness</title>
<year>2018</year>
<month>8</month>
<DOI>10.1007/s00779-017-1100-7</DOI>
<journal>Personal and Ubiquitous Computing</journal>
<volume>22</volume>
<publisher>Springer</publisher>
<pages>633–646</pages>
<number>4</number>
<web_url>http://link.springer.com/article/10.1007/s00779-017-1100-7</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/plaumann/PUC_2017.pdf</file_url>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Weing</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Müller</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>tap_2018</citeid>
<title>Behavior Analysis of Human Locomotion in Real World and Virtual Reality for Manufacturing Industry</title>
<year>2018</year>
<month>7</month>
<day>3</day>
<journal>ACM Transactions on Applied Perception (TAP)</journal>
<volume>15</volume>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Behavior_Analysis_of_Human_Locomotion_in_the_Real_World_and_Virtual.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Viswa</fn>
<sn>Subramanian Sekar</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Thies</fn>
<sn>Pfeiffer</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Rietzler:2018:VRSpinning</citeid>
<title>VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR</title>
<year>2018</year>
<month>6</month>
<DOI>10.1145/3196709.3196755</DOI>
<journal>In Proc. of DIS 2018 (ACM Conference on Designing Interactive Systems)</journal>
<web_url>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/vrspinning/</web_url>
<web_url2>https://youtu.be/KzrtOPbr4t4</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/SpinVR_Small.compressed.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Dreja</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>motion_reuse_2018</citeid>
<title>A Motion Reuse Framework for Accelerated Simulation of Manual Assembly Processes</title>
<year>2018</year>
<month>5</month>
<DOI>10.1016/j.procir.2018.03.282</DOI>
<journal>Proc. of 51st CIRP Conference on Manufacturing Systems (CMS)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/A_Motion_Reuse_Framework_for_Accelerated_Simulation_of_Manual_Assembly_Processes.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Jannes</fn>
<sn>Lehwald</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>cms_interactive_2018</citeid>
<title>Interactive Simulation for Walk Path Planning within the Automotive Industry</title>
<year>2018</year>
<month>5</month>
<DOI>10.1016/j.procir.2018.03.223</DOI>
<journal>Proc. of 51st CIRP Conference on Manufacturing Systems (CMS)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Interactive_Simulation_for_Walk_Path_Planning_within_the_Automotive.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>mosim_cms_2018</citeid>
<title>Presenting a Modular Framework for a Holistic Simulation of Manual Assembly Tasks</title>
<year>2018</year>
<month>5</month>
<DOI>10.1016/j.procir.2018.03.281</DOI>
<journal>Proc. of 51st CIRP Conference on Manufacturing Systems (CMS)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/Presenting_a_Modular_Framework_for_a_Holistic_Simulation_of_Manual.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Bär</sn>
</person>
<person>
<fn>Julia</fn>
<sn>Sues</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>poster</bibtype>
<citeid>eg18pa</citeid>
<title>A Probabilistic Motion Planning Algorithm for Realistic Walk Path Simulation</title>
<year>2018</year>
<month>4</month>
<DOI>10.2312/egp.20181009</DOI>
<organization>Proc. of Eurographics (Poster)</organization>
<journal>Proc. of Eurographics</journal>
<file_url>t3://file?uid=434677</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Neher</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Manns</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>BreakingTheTracking</citeid>
<title>Breaking the Tracking: Enabling Weight Perception using Perceivable Tracking Offsets</title>
<year>2018</year>
<month>4</month>
<DOI>10.1145/3173574.3173702</DOI>
<journal>In Proc. of CHI 2018 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url2>https://youtu.be/IqFZy6wg_sc</web_url2>
<file_url>file:354716</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>PseudoKinestheticFeedback</citeid>
<title>Conveying the Perception of Kinesthetic Feedback in Virtual Reality using State-of-the-Art Hardware</title>
<year>2018</year>
<month>4</month>
<DOI>10.1145/3173574.3174034</DOI>
<journal>In Proc. of CHI 2018 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url2>https://youtu.be/IMiHyTDCE4o</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/rietzler/ConveyingThePerception.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>FaceTouchPaper</citeid>
<title>FaceDisplay: Towards Asymmetric Multi-User Interaction for Nomadic Virtual Reality</title>
<year>2018</year>
<month>4</month>
<DOI>10.1145/3173574.3173628</DOI>
<journal>In Proc. of CHI 2018 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url>https://youtu.be/idcNiVseXic</web_url>
<web_url2>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/facedisplay/</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/gugenheimer/FaceDisplay_small.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Harpreet</fn>
<sn>Sareen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>poster</bibtype>
<citeid>eg18_gais_mmu</citeid>
<title>Introducing a Modular Concept for Exchanging Character Animation Approaches</title>
<year>2018</year>
<month>4</month>
<DOI>10.2312/egp.20181011</DOI>
<organization>Proc. of Eurographics (Poster)</organization>
<journal>Proc. of Eurographics</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/007-008.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Thomas</fn>
<sn>Bär</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>poster</bibtype>
<citeid>eg18_gais_deep</citeid>
<title>Presenting a Deep Motion Blending Approach for Simulating Natural Reach Motions</title>
<year>2018</year>
<month>4</month>
<DOI>10.2312/egp.20181010</DOI>
<organization>Proc. of Eurographics (Poster)</organization>
<journal>Proc. of Eurographics</journal>
<file_url>t3://file?uid=434678</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Froehlich</sn>
</person>
<person>
<fn>Jannes</fn>
<sn>Lehwald</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>poster</bibtype>
<citeid>Hirzle:2018:WatchVR</citeid>
<title>WatchVR: Exploring the Usage of a Smartwatch for Interaction in Mobile Virtual Reality</title>
<abstract>Mobile virtual reality (VR) head-mounted displays (HMDs) are steadily becoming part of people’s everyday life. Most current interaction approaches rely either on additional hardware (e.g. the Daydream Controller) or offer only a limited interaction concept (e.g. Google Cardboard). We explore a solution in which a conventional smartwatch, a device users already carry with them, enables both short interactions and longer, complex interactions with mobile VR. To explore the possibilities of a smartwatch for interaction, we conducted a user study in which we compared two variables with regard to user performance: interaction method (touchscreen vs. inertial sensors) and wearing method (hand-held vs. wrist-worn). We found that selection time and error rate were lowest when holding the smartwatch in one hand and using its inertial sensors for interaction (hand-held).</abstract>
<year>2018</year>
<month>4</month>
<DOI>10.1145/3170427.3188629</DOI>
<organization>In Proceedings of CHI EA '18 (CHI '18 Extended Abstracts on Human Factors in Computing Systems)</organization>
<keywords>3D pointing; smartwatch; nomadic virtual reality; mobile virtual reality</keywords>
<file_url>t3://file?uid=435518</file_url>
<authors>
<person>
<fn>Teresa</fn>
<sn>Hirzle</sn>
</person>
<person>
<fn>Jan Ole</fn>
<sn>Rixen</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>FaceDisplayIEEEVR</citeid>
<title>A Demonstration of FaceDisplay: Asymmetric Multi-User Interaction for Mobile VR</title>
<abstract>Mobile VR HMDs enable users to experience virtual reality content in a variety of nomadic scenarios, excluding all the people in their surroundings (Non-HMD Users) and reducing them to mere bystanders. This leads to a scenario where the HMD User experiences a sense of isolation and the Non-HMD Users a sense of exclusion. To counter these phenomena, we present FaceDisplay, a modified VR HMD consisting of three touch-sensitive displays and a depth camera attached to its back. This allows Non-HMD Users to see inside the immersed user's virtual world and enables them to interact via touch and gestures. We built a VR HMD prototype consisting of three additional screens and present interaction techniques and an example application that leverage the FaceDisplay design space.</abstract>
<year>2018</year>
<month>3</month>
<DOI>10.1109/VR.2018.8446319</DOI>
<journal>In Adj. Proc. of IEEE VR 2018 (IEEE Conference on Virtual Reality and 3D User Interfaces)</journal>
<pages>753-754</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/IEEEVR_DemoFaceDisplay.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Harpreet</fn>
<sn>Sareen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>8446551</citeid>
<title>A Demonstration of ShareVR: Co-Located Experiences for Virtual Reality Between HMD and Non-HMD Users</title>
<year>2018</year>
<month>3</month>
<DOI>10.1109/VR.2018.8446551</DOI>
<journal>In Adj. Proc. of IEEE VR 2018 (2018 IEEE Conference on Virtual Reality and 3D User Interfaces)</journal>
<keywords>DemoShareVR; Virtual reality; Visualization; Games; Human-centered computing; Visualization design and evaluation methods</keywords>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/IEEEVR_DemoShareVR.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SlowMoDemo</citeid>
<title>Demo of the Matrix Has You: Realizing Slow Motion in Full-Body Virtual Reality</title>
<year>2018</year>
<month>3</month>
<DOI>10.1109/VR.2018.8446136</DOI>
<journal>In Adj. Proc. of IEEE VR 2018 (2018 IEEE Conference on Virtual Reality and 3D User Interfaces)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/rietzler/document.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Julia</fn>
<sn>Brich</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>plaumann;TremorCorrection</citeid>
<title>Improving Input Accuracy on Smartphones for Persons who are Affected by Tremor using Motion Sensors</title>
<year>2017</year>
<month>12</month>
<DOI>10.1145/3161169</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</journal>
<volume>1</volume>
<publisher>ACM</publisher>
<pages>30</pages>
<number>4</number>
<number2>156</number2>
<web_url2>https://www.uni-ulm.de/en/in/mi/mi-forschung/uulm-hci/projects/circularselection10/</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/plaumann/revisedResubmitMinorRevisions_comp.pdf</file_url>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Milos</fn>
<sn>Babic</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Witali</fn>
<sn>Hepting</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Stooß</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Rietzler;SlowMotion</citeid>
<title>The Matrix Has You: Realizing Slow Motion in Full-Body Virtual Reality</title>
<year>2017</year>
<month>11</month>
<day>8</day>
<DOI>10.1145/3139131.3139145</DOI>
<journal>In Proc. of VRST 2017 (23rd ACM Symposium on Virtual Reality Software and Technology)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/geiselhart/a2-reitzler.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>inscent</citeid>
<title>inScent: a Wearable Olfactory Display as an Amplification for Mobile Notifications</title>
<year>2017</year>
<month>9</month>
<DOI>10.1145/3123021.3123035</DOI>
<booktitle>The International Symposium on Wearable Computers (ISWC 2017)</booktitle>
<journal>Proc. of ISWC 2017 (2017 ACM International Symposium on Wearable Computers)</journal>
<event_name>The International Symposium on Wearable Computers (ISWC 2017)</event_name>
<event_place>Maui</event_place>
<keywords>InScent</keywords>
<web_url>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/inscent-a-wearable-olfactory-display-as-an-amplification-of-mobile-notifications/</web_url>
<web_url2>https://youtu.be/pzugi0AHaJs</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/inScent_a_Wearable_Olfactory_Display.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Steffen</fn>
<sn>Herrdum</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>smartwatchstudy</citeid>
<title>The Effects of Mobility, Encumbrance, and (Non-)Dominant Hand on Interaction with Smartwatches</title>
<year>2017</year>
<month>9</month>
<DOI>10.1145/3123021.3123033</DOI>
<journal>Proc. of ISWC 2017 (2017 ACM International Symposium on Wearable Computers)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/The_Effects_of_Encumbrance_Mobility_and_Hand_On_Smartwatch_Interaction.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>icme_ipa</citeid>
<title>Iterative Path Adaption (IPA): Predictive Trajectory-Estimation Using Static Pathfinding Algorithms</title>
<year>2017</year>
<month>7</month>
<DOI>10.1016/j.procir.2017.12.170</DOI>
<journal>Proc. of 11th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME)</journal>
<file_url>https://www.sciencedirect.com/science/article/pii/S2212827117311125/pdf?md5=4b491e089592ad6b24e8e5d6a1df4a8d&amp;pid=1-s2.0-S2212827117311125-main.pdf</file_url>
<authors>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Ruediger</fn>
<sn>Lunde</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>icme_interact</citeid>
<title>Towards Realistic Walk Path Simulation in Automotive Assembly Lines: A Probabilistic Approach</title>
<year>2017</year>
<month>7</month>
<journal>Proc. of 11th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/mgi18.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Froehlich</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Manns</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>pocketthumb</citeid>
<title>PocketThumb: a Wearable Dual-Sided Touch Interface for Cursor-based Control of Smart-Eyewear</title>
<year>2017</year>
<month>6</month>
<DOI>10.1145/3090055</DOI>
<journal>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)</journal>
<web_url>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/pocketthumb-a-wearable-dual-sided-touch-interface-for-cursor-based-control-of-smart-eyewear/</web_url>
<web_url2>https://youtu.be/Ep0GUToErJg</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/pocketThumb.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>2017_CHI_Frommel_CanTouchThis</citeid>
<title>CanTouchThis: Examining the Effect of Physical Contact in a Mobile Multiplayer Game</title>
<year>2017</year>
<month>5</month>
<DOI>10.1145/3027063.3053087</DOI>
<booktitle>In Proceedings of CHI EA '17 (CHI '17 Extended Abstracts on Human Factors in Computing Systems)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Weber/2017-CHI-Frommel-CanTouchThis.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>David</fn>
<sn>Klein</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>HockBGR2017</citeid>
<title>CarVR: Enabling In-Car Virtual Reality Entertainment</title>
<year>2017</year>
<month>5</month>
<DOI>10.1145/3025453.3025665</DOI>
<booktitle>In Proc. of CHI 2017 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<web_url>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/carvr-enabling-in-car-virtual-reality-entertainment/</web_url>
<web_url2>https://youtu.be/oVJVr88a_D8</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2018/paper2106-compressed.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Benedikter</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>shareVR</citeid>
<title>ShareVR: Enabling Co-Located Experiences for Virtual Reality between HMD and Non-HMD Users</title>
<year>2017</year>
<month>5</month>
<DOI>10.1145/3025453.3025683</DOI>
<journal>In Proc. of CHI 2017 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<keywords>shareVR2017</keywords>
<web_url>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/sharevr-enabling-co-located-experiences-for-virtual-reality-between-hmd-and-non-hmd-users/</web_url>
<web_url2>https://youtu.be/Uc5fkTFMHr4</web_url2>
<file_url>t3://file?uid=355655</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Frommel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>VaiR</citeid>
<title>VaiR: Simulating 3D Airflows in Virtual Reality</title>
<year>2017</year>
<month>5</month>
<DOI>10.1145/3025453.3026009</DOI>
<journal>In Proc. of CHI 2017 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url>https://youtu.be/aqRJX0D7EBk</web_url>
<file_url>t3://file?uid=354717</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Taras</fn>
<sn>Kränzle</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Erath</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>Stahl</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>BodySign</citeid>
<title>The Impact of Assistive Technology on Communication Quality Between Deaf and Hearing Individuals</title>
<year>2017</year>
<month>3</month>
<DOI>10.1145/2998181.2998203</DOI>
<journal>In Proc. of CSCW 2017 (20th ACM Conference on Computer Supported Cooperative Work &amp; Social Computing)</journal>
<web_url>https://youtu.be/dKtg_YBgwdY</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/gugenheimer/bodySign_small.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Patrizia</fn>
<sn>Di Campli San Vito</sn>
</person>
<person>
<fn>Saskia</fn>
<sn>Duck</sn>
</person>
<person>
<fn>Melanie</fn>
<sn>Rabus</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>inscentdemo</citeid>
<title>Demonstration of inScent: a Wearable Olfactory Display as an Amplification for Mobile Notifications</title>
<year>2017</year>
<DOI>10.1145/3123024.3123185</DOI>
<journal>Proc. of UbiComp/ISWC 2017 Adjunct (Demo), ACM, 4 pages</journal>
<web_url>https://www.youtube.com/watch?v=pzugi0AHaJs</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/david_merged.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Steffen</fn>
<sn>Herrdum</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>demopocketthumb</citeid>
<title>Demonstration of PocketThumb: a Wearable Dual-Sided Touch Interface for Cursor-based Control of Smart-Eyewear</title>
<year>2017</year>
<DOI>10.1145/3123024.3123185</DOI>
<journal>Proc. of UbiComp/ISWC 2017 Adjunct (Demo), ACM, 4 pages</journal>
<web_url>https://youtu.be/Ep0GUToErJg</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/pocketthumb_demo.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>facedisplayea</citeid>
<title>FaceDisplay: Enabling Multi-User Interaction for Mobile Virtual Reality</title>
<year>2017</year>
<DOI>10.1145/3027063.3052962</DOI>
<journal>Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</journal>
<web_url>https://www.youtube.com/watch?v=S5jWuwx1cP4</web_url>
<web_url2>https://www.uni-ulm.de/in/mi/mi-forschung/uulm-hci/projects/facedisplay/</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/gugenheimer/ea369-gugenheimer.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Evgeny</fn>
<sn>Stemasov</sn>
</person>
<person>
<fn>Harpreet</fn>
<sn>Sareen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inbook</bibtype>
<citeid>CompanionBookB5</citeid>
<title>Interaction with Adaptive and Ubiquitous User Interfaces</title>
<year>2017</year>
<isbn>978-3-319-43665-4</isbn>
<DOI>10.1007/978-3-319-43665-4_11</DOI>
<booktitle>Companion Technology - A Paradigm Shift in Human-Technology Interaction</booktitle>
<publisher>Springer</publisher>
<chapter>11</chapter>
<editor>Biundo, Susanne, and Andreas Wendemuth</editor>
<pages>209-229</pages>
<web_url2>https://www.uni-ulm.de/index.php?id=92261</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/CompanionBookB5Chapter.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>OttoJVRB2017</citeid>
<title>Presenting a Holistic Framework for Scalable, Marker-less Motion Capturing: Skeletal Tracking Performance Analysis, Sensor Fusion Algorithms and Usage in Automotive Industry</title>
<year>2017</year>
<DOI>10.20385/1860-2037/13.2016.3</DOI>
<journal>Journal of Virtual Reality and Broadcasting</journal>
<volume>13</volume>
<number>3</number>
<web_url>http://www.jvrb.org/past-issues/13.2016/4481</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/1320163.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>ZihslerHWDSSR2016</citeid>
<title>Carvatar: Increasing Trust in Highly-Automated Driving Through Social Cues</title>
<year>2016</year>
<month>10</month>
<DOI>10.1145/3004323.3004354</DOI>
<journal>Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '16 Adjunct)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/hock/Publications/carvatar_zihsler_109.pdf</file_url>
<authors>
<person>
<fn>Jens</fn>
<sn>Zihsler</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Kirill</fn>
<sn>Dzuba</sn>
</person>
<person>
<fn>Denis</fn>
<sn>Schwager</sn>
</person>
<person>
<fn>Patrick</fn>
<sn>Szauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>ftUIST</citeid>
<title>FaceTouch: Enabling Touch Interaction in Display Fixed UIs for Mobile Virtual Reality</title>
<year>2016</year>
<month>10</month>
<DOI>10.1145/2984511.2984576</DOI>
<journal>In Proceedings of UIST 2016 (ACM Symposium on User Interface Software and Technology)</journal>
<web_url>https://www.uni-ulm.de/en/in/mi/mi-forschung/research-group-rukzio/projects/facetouch-touch-interaction-for-mobile-virtual-reality/</web_url>
<web_url2>https://youtu.be/MHbN9lseHYE</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/p49c.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>GyroVR</citeid>
<title>GyroVR: Simulating Inertia in Virtual Reality using Head Worn Flywheels</title>
<year>2016</year>
<month>10</month>
<DOI>10.1145/2984511.2984535</DOI>
<journal>In Proceedings of UIST 2016 (ACM Symposium on User Interface Software and Technology)</journal>
<web_url>https://youtu.be/RPWsaIYYI6g</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85211</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/GyroVRPaper.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Eythor</fn>
<sn>Eiriksson</sn>
</person>
<person>
<fn>Pattie</fn>
<sn>Maes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SwiVRDemo</citeid>
<title>A Demonstration of SwiVRChair: A Motorized Swivel Chair to Nudge Users’ Orientation for 360 Degree Storytelling in Virtual Reality</title>
<year>2016</year>
<month>9</month>
<DOI>10.1145/2968219.2971363</DOI>
<journal>In Adj. Proc. (Demo) of Ubicomp 2016 (ACM International Joint Conference on Pervasive and Ubiquitous Computing)</journal>
<web_url>https://www.uni-ulm.de/index.php?id=85224</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/SwiVRDemoUbiComp.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Krebs</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>CircularSelection</citeid>
<title>Circular Selection: Optimizing List Selection for Smartwatches</title>
<year>2016</year>
<month>9</month>
<reviewed>1</reviewed>
<DOI>10.1145/2971763.2971766</DOI>
<journal>In Proc. of ISWC 2016 (International Symposium on Wearable Computers)</journal>
<extern>1</extern>
<web_url>http://www.uni-ulm.de/en/in/mi/mi-forschung/research-group-rukzio/projects/circularselection/</web_url>
<web_url2>https://youtu.be/z71Xebwu2oo</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiter/plaumann/p128-plaumann.pdf</file_url>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Müller</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>EyeVR2016</citeid>
<title>EyeVR - Low Cost VR Eye Interaction</title>
<year>2016</year>
<month>9</month>
<DOI>10.1145/2968219.2971384</DOI>
<journal>In Adj. Proc. (Demo) of Ubicomp 2016 (ACM International Joint Conference on Pervasive and Ubiquitous Computing)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/EyeVRUbicompDemo2016.pdf</file_url>
<authors>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>AgethenCARV2016</citeid>
<title>Presenting a Novel Motion Capture-based Approach for Walk Path Segmentation and Drift Analysis in Manual Assembly</title>
<year>2016</year>
<month>9</month>
<DOI>10.1016/j.procir.2016.07.048</DOI>
<journal>In Proc. of 6th Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV)</journal>
<number>52</number>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/Motion_Capture-based_Walk_Path_Segmentation_and_Drift_Analysis.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Gaisbauer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>TremorDemo</citeid>
<title>Towards Improving Touchscreen Input Speed and Accuracy on Smartphones for Tremor Affected Persons</title>
<year>2016</year>
<month>9</month>
<DOI>10.1145/2968219.2971396</DOI>
<journal>In Adj. Proc. (Demo) of Ubicomp 2016 (ACM International Joint Conference on Pervasive and Ubiquitous Computing)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/TremoUbicompDemo2016.pdf</file_url>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Milos</fn>
<sn>Babic</sn>
</person>
<person>
<fn>Tobias</fn>
<sn>Drey</sn>
</person>
<person>
<fn>Witali</fn>
<sn>Hepting</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Stooß</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>AgethenOMR2016</citeid>
<title>Using Marker-less Motion Capture Systems for Walk Path Analysis in Paced Assembly Flow Lines</title>
<year>2016</year>
<month>7</month>
<DOI>10.1016/j.procir.2016.04.125</DOI>
<journal>In Proc. of 6th Conference on Learning Factories (CLF)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/Marker-less_Motion_Capture_Systems_for_Walk_Path_Analysis.pdf</file_url>
<authors>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Stefan</fn>
<sn>Mengel</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>RietzlerFusionKit2016</citeid>
<title>FusionKit: A Generic Toolkit for Skeleton, Marker and Rigid-Body Tracking</title>
<year>2016</year>
<month>6</month>
<reviewed>1</reviewed>
<DOI>10.1145/2933242.2933263</DOI>
<journal>In Proc. of EICS 2016 (8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems)</journal>
<extern>1</extern>
<web_url>https://youtu.be/ZQoDrKShPfk</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85220</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/FusionKit.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Rietzler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Janek</fn>
<sn>Thomas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>MultiUserGestures</citeid>
<title>Who Has the Force? Solving Conflicts for Multi User Mid-Air Gestures for TVs</title>
<year>2016</year>
<month>6</month>
<DOI>10.1145/2932206.2932208</DOI>
<journal>In Proc. of TVX 2016 (ACM International Conference on Interactive Experiences for Television and Online Video)</journal>
<web_url>https://www.uni-ulm.de/index.php?id=85223</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/p25-plaumann_comp.pdf</file_url>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>David</fn>
<sn>Lehr</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>CATS2016</citeid>
<title>Dual Reality for Production Verification Workshops: A Comprehensive Set of Virtual Methods</title>
<year>2016</year>
<month>5</month>
<DOI>10.1016/j.procir.2016.02.140</DOI>
<journal>In Proc. of 6th CIRP Conference on Assembly Technologies and Systems (CATS)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/DualRealityOtto.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Prieur</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>FaceTouchDemo</citeid>
<title>FaceTouch: Touch Interaction for Mobile Virtual Reality</title>
<year>2016</year>
<month>5</month>
<DOI>10.1145/2851581.2890242</DOI>
<journal>In Adj. Proc. (Demo) of CHI 2016 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url>https://www.uni-ulm.de/en/in/mi/mi-forschung/research-group-rukzio/projects/facetouch-touch-interaction-for-mobile-virtual-reality/</web_url>
<web_url2>https://youtu.be/tvAjOvXB56c</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/FaceTouchGugenheimer.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SwiVRchair</citeid>
<title>SwiVRChair: A Motorized Swivel Chair to Nudge Users' Orientation for 360 Degree Storytelling in Virtual Reality</title>
<year>2016</year>
<month>5</month>
<DOI>10.1145/2858036.2858040</DOI>
<journal>In Proc. of CHI 2016 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<web_url>https://youtu.be/qRSqbCHiMS4</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85224</web_url2>
<file_url>http://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/SwiVRChair.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Krebs</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>WatchNavigation</citeid>
<title>Unconstrained Pedestrian Navigation based on Vibro-tactile Feedback around the Wristband of a Smartwatch</title>
<year>2016</year>
<month>5</month>
<DOI>10.1145/2851581.2892292</DOI>
<journal>In Adj. Proc. (Poster) of CHI 2016 (SIGCHI Conference on Human Factors in Computing Systems)</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/UnconstrainedDobbelstein.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Henzler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SmarTVision</citeid>
<title>How Companion-Technology can Enhance a Multi-Screen Television Experience: A Test Bed for Adaptive Multimodal Interaction in Domestic Environments</title>
<year>2016</year>
<month>2</month>
<language>English</language>
<DOI>10.1007/s13218-015-0395-7</DOI>
<journal>KI - Künstliche Intelligenz</journal>
<publisher>Springer</publisher>
<address>Berlin Heidelberg</address>
<web_url>https://youtu.be/srODhHgU3LA</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85227</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/rukzio/publications/GugenheimerCompanion2015.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Frank</fn>
<sn>Honold</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Felix</fn>
<sn>Schüssel</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>moteye</citeid>
<title>Reducing In-Vehicle Interaction Complexity: Gaze-Based Mapping of a Rotary Knob to Multiple Interfaces</title>
<year>2016</year>
<DOI>10.1145/3012709.3016064</DOI>
<booktitle>In Adj. Proc. (Poster) of MUM 2016 (International Conference on Mobile and Ubiquitous Multimedia)</booktitle>
<event_name>International Conference on Mobile and Ubiquitous Multimedia</event_name>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2016/Reducing_In-Vehicle_Interaction_Complexity_-_Gaze-Based_Mapping_of_a_Rotary_Knob_to_Multiple_Interfaces.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Marcel</fn>
<sn>Walch</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Köll</sn>
</person>
<person>
<fn>Ömer</fn>
<sn>Şahin</sn>
</person>
<person>
<fn>Tamino</fn>
<sn>Hartmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>UbiBeamInteract</citeid>
<title>UbiBeam: Exploring the Interaction Space for Home Deployed Projector-Camera Systems</title>
<year>2015</year>
<month>10</month>
<day>14</day>
<DOI>10.1007/978-3-319-22698-9_23</DOI>
<journal>In Proc. of Interact 2015 (IFIP TC15 Conference on Human-Computer Interaction)</journal>
<web_url>https://youtu.be/t-2ddmX5s2M</web_url>
<web_url2>https://www.uni-ulm.de/en/in/mi/mi-forschung/research-group-rukzio/projects/ubibeam-an-interactive-projector-camera-system-for-domestic-deployment/</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2015/UbiBeam.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Knierim</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>BetterThanYouThink</citeid>
<title>Better than you Think: Head Gestures for Mid Air Input</title>
<year>2015</year>
<month>9</month>
<reviewed>1</reviewed>
<DOI>10.1007/978-3-319-22698-9_36</DOI>
<journal>In Proc. of Interact 2015 (IFIP TC15 Conference on Human-Computer Interaction)</journal>
<publisher>Springer</publisher>
<extern>1</extern>
<event_name>INTERACT 2015</event_name>
<web_url>http://www.uni-ulm.de/en/in/mi/mi-forschung/research-group-rukzio/projects/assist/</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2015/InteractPaper.pdf</file_url>
<in_library>1</in_library>
<authors>
<person>
<fn>Katrin</fn>
<sn>Plaumann</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Ehlers</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Yuras</sn>
</person>
<person>
<fn>Anke</fn>
<sn>Huckauf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>GeiselhartCMS2015</citeid>
<title>On the use of Multi-Depth-Camera based Motion Tracking Systems in Production Planning Environments</title>
<year>2015</year>
<month>6</month>
<DOI>10.1016/j.procir.2015.12.088</DOI>
<booktitle>Proc. of 48th CIRP Conference on Manufacturing Systems - CIRP CMS 2015, 6 pages</booktitle>
<journal>In Proc. of CIRP CMS 2015 (48th CIRP Conference on Manufacturing Systems)</journal>
<publisher>Elsevier</publisher>
<event_name>48th CIRP Conference on Manufacturing Systems - CIRP CMS 2015</event_name>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/geiselhart/cirp_cms15_geiselhart.pdf</file_url>
<authors>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>dobbelstein_CHI2015_belt</citeid>
<title>Belt: An Unobtrusive Touch Input Device for Head-worn Displays</title>
<year>2015</year>
<month>4</month>
<DOI>10.1145/2702123.2702450</DOI>
<booktitle>Proc. of CHI 2015 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<web_url>https://www.uni-ulm.de/index.php?id=85243</web_url>
<web_url2>https://youtu.be/o0a46fhmBS8</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/dobbelstein/Belt_-_An_Unobtrusive_Touch_Input_Device_for_Head-worn_Displays.pdf</file_url>
<authors>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Hock</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>winkler_CHI2015_glassunlock</citeid>
<title>Glass Unlock: Enhancing Security of Smartphone Unlocking through Leveraging a Private Near-eye Display</title>
<year>2015</year>
<month>4</month>
<DOI>10.1145/2702123.2702316</DOI>
<booktitle>Proc. of CHI 2015 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<web_url>https://www.uni-ulm.de/index.php?id=85251</web_url>
<web_url2>https://www.youtube.com/watch?v=LqfbVckVUNs</web_url2>
<file_url>http://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2015/Glass_unlock_winkler.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
<person>
<fn>Gabriel</fn>
<sn>Haas</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Speidel</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>ColorSnakes</citeid>
<title>ColorSnakes: Using Colored Decoys to Secure Authentication in Sensitive Contexts</title>
<year>2015</year>
<DOI>10.1145/2785830.2785834</DOI>
<journal>In Proc. of MobileHCI 2015 (17th International Conference on  Human-Computer Interaction with Mobile Devices and Services)</journal>
<web_url>https://youtu.be/hz1MYrhhj1Y</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85228</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/ColorSnakes.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
<person>
<fn>Hayato</fn>
<sn>Hess</sn>
</person>
<person>
<fn>Stefan</fn>
<sn>Karg</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>OctiCamPoster</citeid>
<title>OctiCam: An immersive and mobile video communication device for parents and children</title>
<year>2015</year>
<language>English</language>
<DOI>10.18725/OPARU-3252</DOI>
<journal>Proc. of ISCT 2015 (1st International Symposium on Companion-Technology)</journal>
<web_url>http://vts.uni-ulm.de/doc.asp?id=9771</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=85226</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/OctiCamISCT15.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Wolf</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>RukzioGAO2015</citeid>
<title>Towards ubiquitous tracking: Presenting a scalable, markerless tracking approach using multiple depth cameras</title>
<year>2015</year>
<booktitle>In Proc. of EuroVR 2015 (European Association for Virtual Reality and Augmented Reality), Best Industrial Paper award</booktitle>
<event_name>EuroVR 2015</event_name>
<event_place>Milano</event_place>
<file_url>t3://file?uid=433776</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Philipp</fn>
<sn>Agethen</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Geiselhart</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>OttoPR2014</citeid>
<title>Using Scalable, Interactive Floor Projection for Production Planning Scenario</title>
<year>2014</year>
<month>11</month>
<day>16</day>
<DOI>10.1145/2669485.2669547</DOI>
<booktitle>In Adj. Proc. (Poster) of ITS 2014 (Ninth ACM International Conference on Interactive Tabletops and Surfaces)</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/ProductionPlanning_ITS14.pdf</file_url>
<authors>
<person>
<fn>Michael</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Prieur</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>SeifertBWSSHMM2014</citeid>
<title>Hover Pad: Interacting with Autonomous and Self-Actuated Displays in Space</title>
<year>2014</year>
<month>10</month>
<day>5</day>
<DOI>10.1145/2642918.2647385</DOI>
<booktitle>Proceedings of ACM Symposium on User Interface Software and Technology (UIST)</booktitle>
<web_url>https://www.youtube.com/watch?v=qAS6EC7cvU8</web_url>
<web_url2>https://www.uni-ulm.de/index.php?id=56454</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/Seifert-et-al.-HoverPad.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Boring</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Schwab</sn>
</person>
<person>
<fn>Steffen</fn>
<sn>Herrdum</sn>
</person>
<person>
<fn>Fabian</fn>
<sn>Maier</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Mayer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>winkler2014_Springer_Tischbasierte Projektion</citeid>
<title>Projizierte tischbasierte Benutzungsschnittstellen</title>
<year>2014</year>
<month>5</month>
<day>28</day>
<DOI>10.1007/s00287-014-0803-7</DOI>
<journal>Informatik-Spektrum</journal>
<pages>1-5</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/Winkler_Rukzio_2014_Projizierte_tischbasierte_Benutzungsschnittstellen.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Winkler:2014CHIa</citeid>
<title>Pervasive Information through Constant Personal Projection: The Ambient Mobile Pervasive Display (AMP-D)</title>
<year>2014</year>
<month>4</month>
<DOI>10.1145/2556288.2557365</DOI>
<booktitle>Proc. of CHI 2014 (SIGCHI Conference on Human Factors in Computing Systems), ACM, 10 pages, Honorable Mention Award</booktitle>
<web_url>t3://page?uid=85269</web_url>
<web_url2>http://www.youtube.com/watch?v=ahG06CERqAI</web_url2>
<file_url>t3://file?uid=151372</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Winkler:2014CHIb</citeid>
<title>SurfacePhone: A Mobile Projection Device for Single- and Multiuser Everywhere Tabletop Interaction</title>
<year>2014</year>
<month>4</month>
<DOI>10.1145/2556288.2557075</DOI>
<booktitle>Proc. of CHI 2014 (SIGCHI Conference on Human Factors in Computing Systems), ACM, 10 pages</booktitle>
<event_name>CHI 2014</event_name>
<web_url>https://www.uni-ulm.de/index.php?id=85270</web_url>
<web_url2>http://www.youtube.com/watch?v=DKofzCI7Yfw</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/winkler672-surfacephone.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Loechtefeld</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Antonio</fn>
<sn>Krueger</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Schaub:2014</citeid>
<title>Broken Display = Broken Interface? The Impact of Display Damage on Smartphone Interaction</title>
<year>2014</year>
<DOI>10.1145/2556288.2557067</DOI>
<booktitle>In Proc. of CHI 2014 (SIGCHI Conference on Human Factors in Computing Systems)</booktitle>
<web_url>https://youtu.be/KmNuaavfyG8</web_url>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/Schaub-et-al.-BrokenInterfaces-CHI14.pdf</file_url>
<authors>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Frank</fn>
<sn>Honold</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Mueller</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Rogers:2014:PFP:2669485.2669514</citeid>
<title>P.I.A.N.O.: Faster Piano Learning with Interactive Projection</title>
<year>2014</year>
<DOI>10.1145/2669485.2669514</DOI>
<booktitle>Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces</booktitle>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<series>ITS '14</series>
<pages>149--158</pages>
<web_url>https://www.youtube.com/watch?v=4ohSTRXjBNI</web_url>
<web_url2>http://doi.acm.org/10.1145/2669485.2669514</web_url2>
<file_url>t3://file?uid=98978</file_url>
<authors>
<person>
<fn>Katja</fn>
<sn>Rogers</sn>
</person>
<person>
<fn>Amrei</fn>
<sn>Röhlig</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Weing</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Bastian</fn>
<sn>Könings</sn>
</person>
<person>
<fn>Melina</fn>
<sn>Klepsch</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Tina</fn>
<sn>Seufert</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Gugenheimer:2014:UIP:2669485.2669537</citeid>
<title>UbiBeam: An Interactive Projector-Camera System for Domestic Deployment</title>
<year>2014</year>
<DOI>10.1145/2669485.2669537</DOI>
<booktitle>Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces</booktitle>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<series>ITS '14</series>
<pages>305--310</pages>
<web_url>https://www.youtube.com/watch?v=t-2ddmX5s2M</web_url>
<web_url2>http://doi.acm.org/10.1145/2669485.2669537</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2014/UbiBeam_ITS14.pdf</file_url>
<authors>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Knierim</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>SimeoneSSHRG2013</citeid>
<title>A Cross-Device Drag-and-Drop Technique</title>
<year>2013</year>
<journal>International Conference on Mobile and Ubiquitous Multimedia - MUM '13</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<web_url>http://youtu.be/mWsp3XksNYs</web_url>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/Simeone_et_al_MUM2013.pdf</file_url>
<authors>
<person>
<fn>Adalberto L.</fn>
<sn>Simeone</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Dominik</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert2013e</citeid>
<title>Adding Vibrotactile Feedback to Large Interactive Surfaces</title>
<year>2013</year>
<booktitle>In Proc. of Interact 2013 (IFIP TC13 Conference on Human-Computer Interaction), Springer.</booktitle>
<web_url2>http://youtu.be/0DzTtBTeglQ</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013-Seifert-et-al_VibrotactileFeedback.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Packeiser</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Vaananen-Vainio-Mattila:2013:EIP:2468356.2479665</citeid>
<title>Experiencing interactivity in public spaces (EIPS).</title>
<year>2013</year>
<DOI>10.1145/2468356.2479665</DOI>
<booktitle>In Extended Abstracts of CHI '13 (ACM Annual Conference on Human Factors in Computing Systems), ACM, 4 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013_chi_kaisa.pdf.pdf</file_url>
<authors>
<person>
<fn>Kaisa</fn>
<sn>Väänänen-Vainio-Mattila</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Hakkila</sn>
</person>
<person>
<fn>Alvaro</fn>
<sn>Cassinelli</sn>
</person>
<person>
<fn>Jörg</fn>
<sn>Müller</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert2013c</citeid>
<title>Extending Mobile Interfaces with External Screens</title>
<year>2013</year>
<booktitle>In Proc. of Interact 2013 (IFIP TC13 Conference on Human-Computer Interaction), Springer, 8 pages.</booktitle>
<web_url2>http://youtu.be/dZaCNV64ltk</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013-Seifert-et-al_ExtendingMobileInterfaces.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Schneider</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Seifert:2013a</citeid>
<title>From the Private Into the Public: Privacy-Respecting Mobile Interaction Techniques for Sharing Data on Surfaces.</title>
<year>2013</year>
<language>English</language>
<DOI>10.1007/s00779-013-0667-x</DOI>
<journal>Personal and Ubiquitous Computing, Springer, 14 pages</journal>
<web_url>https://www.uni-ulm.de/index.php?id=47570</web_url>
<web_url2>http://youtu.be/nKI-3cAhgVs</web_url2>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013-Seifert-et-al_FromThePrivateIntoThePublic.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>David</fn>
<sn>Dobbelstein</sn>
</person>
<person>
<fn>Dominik</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>RaderHRS2013</citeid>
<title>MobiZone: Personalized Interaction with Multiple Items on Interactive Surfaces</title>
<year>2013</year>
<journal>International Conference on Mobile and Ubiquitous Multimedia - MUM '13</journal>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/Rader-at-al-MUM13.pdf</file_url>
<authors>
<person>
<fn>Markus</fn>
<sn>Rader</sn>
</person>
<person>
<fn>Clemens</fn>
<sn>Holzmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert2013d</citeid>
<title>MoCoShoP: Supporting Mobile and Collaborative Shopping and Planning of Interiors</title>
<year>2013</year>
<booktitle>In Proc. of Interact 2013 (IFIP TC13 Conference on Human-Computer Interaction), Springer, 8 pages.</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013-Seifert-et-al_MoCoShoP.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Dennis</fn>
<sn>Schneider</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Weing:2013:PEI:2494091.2494113</citeid>
<title>P.I.A.N.O.: Enhancing Instrument Learning via Interactive Projected Augmentation</title>
<year>2013</year>
<DOI>10.1145/2494091.2494113</DOI>
<booktitle>Proceedings of UbiComp '13 Adjunct (2013 ACM Conference on Pervasive and Ubiquitous Computing), ACM, 4 pages</booktitle>
<web_url>http://youtu.be/4ohSTRXjBNI</web_url>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/PIANO_UbiComp13.pdf</file_url>
<authors>
<person>
<fn>Matthias</fn>
<sn>Weing</sn>
</person>
<person>
<fn>Amrei</fn>
<sn>Röhlig</sn>
</person>
<person>
<fn>Katja</fn>
<sn>Rogers</sn>
</person>
<person>
<fn>Jan</fn>
<sn>Gugenheimer</sn>
</person>
<person>
<fn>Florian</fn>
<sn>Schaub</sn>
</person>
<person>
<fn>Bastian</fn>
<sn>Könings</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Weber</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>WinklerSRKR2013</citeid>
<title>Penbook: Bringing Pen+Paper Interaction to a Tablet Device to Facilitate Paper-Based Workflows in the Hospital Domain</title>
<year>2013</year>
<DOI>10.1145/2512349.2512797</DOI>
<journal>Proc. of ITS 2013 (ACM International Conference on Interactive Tabletops and Surfaces), ACM, 4 pages [Winner of Best Note Award]</journal>
<web_url>https://dl.acm.org/doi/abs/10.1145/2512349.2512797</web_url>
<web_url2>http://www.youtube.com/watch?v=4D207Er3H8Q</web_url2>
<file_url>t3://file?uid=223330</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Reinartz</sn>
</person>
<person>
<fn>Pascal</fn>
<sn>Krahmer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert2013b</citeid>
<title>PointerPhone: Using Mobile Phones for Direct Pointing Interactions with Remote Displays.</title>
<year>2013</year>
<booktitle>In Proc. of Interact 2013 (IFIP TC13 Conference on Human-Computer Interaction), Springer, 18 pages.</booktitle>
<web_url2>http://youtu.be/qp3pIklYLxo</web_url2>
<file_url>http://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2013/2013-Seifert-PointerPhone.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Bayer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>nowacka2013touchbugs</citeid>
<title>Touchbugs: actuated tangibles on multi-touch tables.</title>
<year>2013</year>
<DOI>10.1145/2470654.2470761</DOI>
<booktitle>In Proc. of CHI 2013 (SIGCHI Conference on Human Factors in Computing Systems), ACM, 4 pages.</booktitle>
<web_url>https://dl.acm.org/doi/abs/10.1145/2470654.2470761</web_url>
<web_url2>http://youtu.be/k4oz_ErsvqM</web_url2>
<file_url>t3://file?uid=240962</file_url>
<authors>
<person>
<fn>Diana</fn>
<sn>Nowacka</sn>
</person>
<person>
<fn>Karim</fn>
<sn>Ladha</sn>
</person>
<person>
<fn>Nils Y</fn>
<sn>Hammerla</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Jackson</sn>
</person>
<person>
<fn>Cassim</fn>
<sn>Ladha</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Patrick</fn>
<sn>Olivier</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Schmidt:2012</citeid>
<title>A cross-device interaction style for mobiles and surfaces</title>
<year>2012</year>
<DOI>10.1145/2317956.2318005</DOI>
<booktitle>In Proc. of DIS '12 (Designing Interactive Systems Conference), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/Z01Xh23X2mc</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012-Schmidt-D-A-Cross-Device-Interaction-Style-for-Mobiles-and-Surfaces.pdf</file_url>
<authors>
<person>
<fn>Dominik</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert:2012a</citeid>
<title>Don't queue up!: user attitudes towards mobile interactions with public terminals.</title>
<year>2012</year>
<DOI>10.1145/2406367.2406422</DOI>
<booktitle>In Proc. of MUM 2012 (International Conference on Mobile and Ubiquitous Multimedia), ACM, 4 pages.</booktitle>
<web_url2>http://youtu.be/zt-RVzQMsI4</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/MobileATM-Paper-CameraReady.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>Bial2012</citeid>
<title>Improving Cyclists Training with Tactile Feedback on Feet</title>
<year>2012</year>
<DOI>10.1007/978-3-642-32796-4_5</DOI>
<booktitle>In Proc. of Haid 2012 (Haptic and Audio Interaction Design), Springer, 10 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012/2012_bial_haid2012.pdf</file_url>
<authors>
<person>
<fn>Dominik</fn>
<sn>Bial</sn>
</person>
<person>
<fn>Thorsten</fn>
<sn>Appelmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>conference</bibtype>
<citeid>winkler_investigating_2012</citeid>
<title>Investigating mid-air pointing interaction for projector phones.</title>
<year>2012</year>
<DOI>10.1145/2396636.2396650</DOI>
<booktitle>Proc. of ITS 2012 (ACM International Conference on Interactive Tabletops and Surfaces), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/r51z70PRb0M</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012/2012_its_winkler.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Ken</fn>
<sn>Pfeuffer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Schneider:2012</citeid>
<title>MobIES: extending mobile interfaces using external screens.</title>
<year>2012</year>
<DOI>10.1145/2406367.2406438</DOI>
<booktitle>In Proc. of MUM 2012 (International Conference on Mobile and Ubiquitous Multimedia), ACM, 2 pages.</booktitle>
<web_url2>http://youtu.be/dZaCNV64ltk</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012/2012_schneider_mum2012.pdf</file_url>
<authors>
<person>
<fn>Dennis</fn>
<sn>Schneider</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Seifert:2012</citeid>
<title>MobiSurf: improving co-located collaboration through integrating mobile devices and interactive surfaces.</title>
<year>2012</year>
<DOI>10.1145/2396636.2396644</DOI>
<booktitle>In Proc. of ITS 2012 (ACM International Conference on Interactive tabletops and Surfaces), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/u-TAwIZXXwo</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/MobiSurf.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Adalberto</fn>
<sn>Simeone</sn>
</person>
<person>
<fn>Dominik</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Reinartz</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>henze_observational_2012</citeid>
<title>Observational and experimental investigation of typing behaviour using virtual keyboards for mobile devices.</title>
<year>2012</year>
<DOI>10.1145/2207676.2208658</DOI>
<booktitle>Proc. of CHI 2012 (ACM Annual Conference on Human Factors in Computing Systems), ACM, 10 pages.</booktitle>
<web_url>https://dl.acm.org/doi/abs/10.1145/2207676.2208658</web_url>
<web_url2>http://youtu.be/Til1bC23Pic</web_url2>
<file_url>t3://file?uid=128898</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Boll</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>Struse2012</citeid>
<title>PermissionWatcher: Creating User Awareness of Application Permissions in Mobile Systems.</title>
<year>2012</year>
<DOI>10.1007/978-3-642-34898-3_5</DOI>
<booktitle>In Proc. of AMI 2012 (International Joint Conference on Ambient Intelligence), Springer, 16 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012-Struse-PermissionWatcher.pdf</file_url>
<authors>
<person>
<fn>Eric</fn>
<sn>Struse</sn>
</person>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Sebastian</fn>
<sn>Uellenbeck</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Christopher</fn>
<sn>Wolf</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_personal_2012</citeid>
<title>Personal projectors for pervasive computing</title>
<year>2012</year>
<DOI>10.1109/MPRV.2011.17</DOI>
<journal>IEEE Pervasive Computing</journal>
<volume>11</volume>
<pages>30–37</pages>
<number>2</number>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/rukzio/publications/PersonalProjectorsForPervasiveComputing.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>Dachselt:2012:PPF:2090150.2090158</citeid>
<title>Pico projectors: firefly or bright future?</title>
<year>2012</year>
<DOI>10.1145/2090150.2090158</DOI>
<journal>ACM Interactions</journal>
<volume>19</volume>
<publisher>ACM</publisher>
<address>New York, NY, USA</address>
<pages>24–29</pages>
<number>2</number>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012/2012_dachselt_interactions.pdf</file_url>
<authors>
<person>
<fn>Raimund</fn>
<sn>Dachselt</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
<person>
<fn>Matt</fn>
<sn>Jones</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Löchtefeld</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rohs</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>winkler_wall_2012</citeid>
<title>Wall Play: a novel wall/floor interaction concept for mobile projected gaming.</title>
<year>2012</year>
<DOI>10.1145/2371664.2371687</DOI>
<booktitle>In Proc. of Mobile HCI 2012 (14th International Conference on Human-Computer Interaction with Mobile Devices and Services), ACM, 4 pages.</booktitle>
<file_url>http://mobivis.labs-exit.de/AcceptedPapers/mobivis2012_Hutflesz_et_al.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Patrick</fn>
<sn>Hutflesz</sn>
</person>
<person>
<fn>Clemens</fn>
<sn>Holzmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>henze_100000000_2011</citeid>
<title>100,000,000 Taps: Analysis and Improvement of Touch Performance in the Large.</title>
<year>2011</year>
<DOI>10.1145/2037373.2037395</DOI>
<booktitle>Proc. of Mobile HCI 2011 (International Conference on Human Computer Interaction with Mobile Devices and Services), Winner of Best Paper Award, ACM, 10 pages.</booktitle>
<keywords>henze_100000000_2011</keywords>
<web_url>https://dl.acm.org/doi/abs/10.1145/2037373.2037395</web_url>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_henze_mobilehci2011.pdf</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Boll</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>winkler_interactive_20110</citeid>
<title>Interactive phone call: synchronous remote collaboration and projected interactive surfaces.</title>
<year>2011</year>
<DOI>10.1145/2076354.2076367</DOI>
<booktitle>In Proc. of ITS 2011 (ACM International Conference on Interactive Tabletops and Surfaces), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/Qv6Wv8Nv6sI</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_winkler_its2011.pdf</file_url>
<authors>
<person>
<fn>Christian</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Reinartz</sn>
</person>
<person>
<fn>Diana</fn>
<sn>Nowacka</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>seifert_mobidev:_2011</citeid>
<title>Mobidev: a tool for creating apps on mobile phones.</title>
<year>2011</year>
<DOI>10.1145/2037373.2037392</DOI>
<booktitle>Proc. of Mobile HCI 2011 (International Conference on Human Computer Interaction with Mobile Devices and Services), ACM, 4 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2012-Seifert-MobiDev.pdf</file_url>
<authors>
<person>
<fn>Julian</fn>
<sn>Seifert</sn>
</person>
<person>
<fn>Bastian</fn>
<sn>Pfleging</sn>
</person>
<person>
<fn>Elba</fn>
<sn>del Carmen Valderrama Bahamóndez</sn>
</person>
<person>
<fn>Martin</fn>
<sn>Hermes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Dachselt:2011:MPP:1979482.1979588</citeid>
<title>Mobile and personal projection (MP2)</title>
<year>2011</year>
<DOI>10.1145/1979742.1979588</DOI>
<booktitle>In Extended Abstracts of CHI 2011 (SIGCHI Conference on Human Factors in Computing Systems), ACM, 3 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_dachselt_chi2011ea.pdf</file_url>
<authors>
<person>
<fn>Raimund</fn>
<sn>Dachselt</sn>
</person>
<person>
<fn>Matt</fn>
<sn>Jones</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Löchtefeld</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rohs</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Hardy:2011:MUI:2107596.2107598</citeid>
<title>MultiKit: a user interface toolkit for multi-tag applications.</title>
<year>2011</year>
<DOI>10.1145/2107596.2107598</DOI>
<booktitle>In Proc. of MUM 2011 (International Conference on Mobile and Ubiquitous Multimedia), ACM, 10 pages.</booktitle>
<keywords>Hardy:2011:MUI:2107596.2107598</keywords>
<web_url>https://dl.acm.org/doi/abs/10.1145/2107596.2107598</web_url>
<file_url>t3://file?uid=175962</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_mystate:_2011</citeid>
<title>MyState: sharing social and contextual information through touch interactions with tagged objects.</title>
<year>2011</year>
<DOI>10.1145/2037373.2037444</DOI>
<booktitle>Proc. of Mobile HCI 2011 (International Conference on Human Computer Interaction with Mobile Devices and Services), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/RJuGnhpIKBY</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_hardy_mobilehci.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>winkler_navibeam</citeid>
<title>NaviBeam: Indoor Assistance and Navigation for Shopping Malls through Projector Phones</title>
<year>2011</year>
<booktitle>In Proc. of Workshop on Mobile and Personal Projection (at CHI 2011).</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_winkler_mp2.pdf</file_url>
<authors>
<person>
<fn>C.</fn>
<sn>Winkler</sn>
</person>
<person>
<fn>M.</fn>
<sn>Broscheit</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rumelin_naviradar:_2011</citeid>
<title>NaviRadar: A Novel Tactile Information Display for Pedestrian Navigation.</title>
<year>2011</year>
<DOI>10.1145/2047196.2047234</DOI>
<booktitle>Proc. of UIST 2011 (Annual ACM Symposium on User Interface Software and Technology), ACM, 10 pages.</booktitle>
<web_url2>http://youtu.be/-wyRovwgzH0</web_url2>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_ruemelin_uist2011.pdf</file_url>
<authors>
<person>
<fn>Sonja</fn>
<sn>Rümelin</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>Hardy:2011:RWR:2107596.2107600</citeid>
<title>Real world responses to interactive gesture based public displays.</title>
<year>2011</year>
<DOI>10.1145/2107596.2107600</DOI>
<booktitle>Proc. of MUM 2011 (International Conference on Mobile and Ubiquitous Multimedia), ACM, 10 pages.</booktitle>
<file_url>/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2011/2011_hardy_mum2011.pdf</file_url>
<authors>
<person>
<fn>John</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Nigel</fn>
<sn>Davies</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>kawsar_explorative_2010</citeid>
<title>An explorative comparison of magic lens and personal projection for interacting with smart objects</title>
<year>2010</year>
<DOI>10.1145/1851600.1851627</DOI>
<booktitle>Proceedings of the 12th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/KawsarMobileHCI2010.pdf</file_url>
<authors>
<person>
<fn>Fahim</fn>
<sn>Kawsar</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Gerd</fn>
<sn>Kortuem</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>schildbach_investigating_2010</citeid>
<title>Investigating selection and reading performance on a mobile phone while walking</title>
<year>2010</year>
<DOI>10.1145/1851600.1851619</DOI>
<booktitle>Proceedings of the 12th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/SchildbachMobileHCI2010.pdf</file_url>
<authors>
<person>
<fn>Bastian</fn>
<sn>Schildbach</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_mobile_2010</citeid>
<title>Mobile interaction with static and dynamic <prt>NFC</prt>-based displays</title>
<year>2010</year>
<DOI>10.1145/1851600.1851623</DOI>
<booktitle>Proceedings of the 12th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/HardyMobileHCI2010.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_mystate:_2010</citeid>
<title>MyState: using NFC to share social and contextual information in a quick and personalized way</title>
<year>2010</year>
<DOI>10.1145/1864431.1864481</DOI>
<booktitle>Proceedings of the 12th <prt>ACM</prt> international conference adjunct papers on Ubiquitous computing-Adjunct</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/HardyUbicomp2010.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>schmidt_phonetouch:_2010</citeid>
<title>PhoneTouch: a technique for direct phone interaction on surfaces</title>
<year>2010</year>
<DOI>10.1145/1866029.1866034</DOI>
<booktitle>Proceedings of the 23rd annual <prt>ACM</prt> symposium on User interface software and technology</booktitle>
<pages>13–16</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/PhoneTouch.pdf</file_url>
<authors>
<person>
<fn>Dominik</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Fadi</fn>
<sn>Chehimi</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_projector_2010</citeid>
<title>Projector phone interactions: design space and survey</title>
<year>2010</year>
<booktitle>Workshop on coupled display visual interfaces at <prt>AVI</prt></booktitle>
<file_url>http://www.researchgate.net/publication/228370421_Projector_Phone_Interactions_Design_Space_and_Survey/file/32bfe5121ef910a965.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>chehimi_throw_2010</citeid>
<title>Throw your photos: an intuitive approach for sharing between mobile phones and interactive tables</title>
<year>2010</year>
<DOI>10.1145/1864431.1864479</DOI>
<booktitle>Proceedings of the 12th <prt>ACM</prt> international conference adjunct papers on Ubiquitous computing-Adjunct</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2010/ChehimiUbicomp2010.pdf</file_url>
<authors>
<person>
<fn>Fadi</fn>
<sn>Chehimi</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_design_2009</citeid>
<title>Design, implementation and evaluation of a novel public display for pedestrian navigation: the rotating compass</title>
<year>2009</year>
<DOI>10.1145/1518701.1518722</DOI>
<booktitle>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/RukzioCHI2009.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Müller</sn>
</person>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_exploring_2009</citeid>
<title>Exploring expressive nfc-based mobile phone interaction with large dynamic displays</title>
<year>2009</year>
<DOI>10.1109/NFC.2009.10</DOI>
<booktitle>First International Workshop on Near Field Communication (NFC'09)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/HardyNFC2009.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>greaves_exploring_2009</citeid>
<title>Exploring user reaction to personal projection when used in shared public places: A formative study</title>
<year>2009</year>
<file_url>http://comp.eprints.lancs.ac.uk/2264/1/cam3sn2009_greaves.pdf</file_url>
<authors>
<person>
<fn>Andrew</fn>
<sn>Greaves</sn>
</person>
<person>
<fn>Panu</fn>
<sn>Akerman</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Keith</fn>
<sn>Cheverst</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>zimmermann_mobile_2009</citeid>
<title>Mobile interaction with the Real World</title>
<year>2009</year>
<DOI>10.1145/1613858.1613980</DOI>
<booktitle>Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/ZimmermannMobileHCI2009.pdf</file_url>
<authors>
<person>
<fn>Andreas</fn>
<sn>Zimmermann</sn>
</person>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Xavier</fn>
<sn>Righetti</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>seewoonauth_nfc-based_2009</citeid>
<title>NFC-based Mobile Interactions with Direct-View Displays</title>
<year>2009</year>
<DOI>10.1007/978-3-642-03655-2_91</DOI>
<booktitle>Human-Computer Interaction–INTERACT 2009</booktitle>
<publisher>Springer</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/SeewonauthINTERACT2009.pdf</file_url>
<authors>
<person>
<fn>Khoovirajsingh</fn>
<sn>Seewoonauth</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>broll_perci:_2009</citeid>
<title>Perci: Pervasive service interaction with the internet of things</title>
<year>2009</year>
<DOI>10.1109/MIC.2009.120</DOI>
<journal><prt>IEEE</prt> Internet Computing</journal>
<volume>13</volume>
<pages>74–81</pages>
<number>6</number>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/BrollInternetComp2009.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>gellersen_supporting_2009</citeid>
<title>Supporting device discovery and spontaneous interaction with spatial references</title>
<year>2009</year>
<DOI>10.1007/s00779-008-0206-3</DOI>
<journal>Personal and Ubiquitous Computing</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/GellersenPersUbiComp2009.pdf</file_url>
<authors>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
<person>
<fn>Carl</fn>
<sn>Fischer</sn>
</person>
<person>
<fn>Dominique</fn>
<sn>Guinard</sn>
</person>
<person>
<fn>Roswitha</fn>
<sn>Gostner</sn>
</person>
<person>
<fn>Gerd</fn>
<sn>Kortuem</sn>
</person>
<person>
<fn>Christian</fn>
<sn>Kray</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Sara</fn>
<sn>Streng</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>seewoonauth_touch_2009</citeid>
<title>Touch &amp; connect and touch &amp; select: interacting with a computer by touching it with a mobile phone</title>
<year>2009</year>
<DOI>10.1145/1613858.1613905</DOI>
<booktitle>Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/SeewoonauthMobileHCI2009.pdf</file_url>
<authors>
<person>
<fn>Khoovirajsingh</fn>
<sn>Seewoonauth</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>lorenz_using_2009</citeid>
<title>Using handheld devices for mobile interaction with displays in home environments</title>
<year>2009</year>
<DOI>10.1145/1613858.1613882</DOI>
<booktitle>Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/LorenzMobileHCI2009.pdf</file_url>
<authors>
<person>
<fn>Andreas</fn>
<sn>Lorenz</sn>
</person>
<person>
<fn>Clara Fernandez</fn>
<sn>De Castro</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>greaves_view_2009</citeid>
<title>View &amp; share: supporting co-present viewing and sharing of media using personal projection</title>
<year>2009</year>
<DOI>10.1145/1613858.1613914</DOI>
<booktitle>Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2009/GreavesMobileHCI2009.pdf</file_url>
<authors>
<person>
<fn>Andrew</fn>
<sn>Greaves</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_automatic_2008</citeid>
<title>Automatic form filling on mobile devices</title>
<year>2008</year>
<DOI>10.1016/j.pmcj.2007.09.001</DOI>
<journal>Pervasive and Mobile Computing</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/RukzioPervMobileComp2008.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Chie</fn>
<sn>Noda</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Fatih</fn>
<sn>Coskun</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>paolucci_bringing_2008</citeid>
<title>Bringing semantic services to real-world objects</title>
<year>2008</year>
<DOI>10.4018/jswis.2008010103</DOI>
<journal>International Journal on Semantic Web and Information Systems (IJSWIS)</journal>
<volume>4</volume>
<number>1</number>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/PaolucciJSemWebIncSys2008.pdf</file_url>
<authors>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Matthew</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>broll_collect&amp;drop:_2008</citeid>
<title>Collect&amp;Drop: A technique for multi-tag interaction with real world objects and information</title>
<year>2008</year>
<DOI>10.1007/978-3-540-89617-3_12</DOI>
<booktitle>Ambient Intelligence</booktitle>
<publisher>Springer</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/BrollAmbIntelligence2008.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Haarländer</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>hardy_direct_2008</citeid>
<title>Direct Touch-based Mobile Interaction with Dynamic Displays.</title>
<year>2008</year>
<file_url>http://comp.eprints.lancs.ac.uk/1644/1/chi2008ws_hardy.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>greaves_evaluation_2008</citeid>
<title>Evaluation of picture browsing using a projector phone</title>
<year>2008</year>
<DOI>10.1145/1409240.1409286</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/GreavesMobileHCI2008.pdf</file_url>
<authors>
<person>
<fn>Andrew</fn>
<sn>Greaves</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>henze_mobile_2008</citeid>
<title>Mobile interaction with the real world</title>
<year>2008</year>
<DOI>10.1145/1409240.1409351</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HenzeMobileHCI2008_2.pdf</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rohs</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Zimmermann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>henze_physical-virtual_2008</citeid>
<title>Physical-virtual linkage with contextual bookmarks</title>
<year>2008</year>
<DOI>10.1145/1409240.1409335</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HenzeMobileHCI2008.pdf</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Lorenz</sn>
</person>
<person>
<fn>Xavier</fn>
<sn>Righetti</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Boll</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>greaves_picture_2008</citeid>
<title>Picture browsing and map interaction using a projector phone</title>
<year>2008</year>
<DOI>10.1145/1409240.1409336</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/GreavesMobileHCI2008_2.pdf</file_url>
<authors>
<person>
<fn>Andrew</fn>
<sn>Greaves</sn>
</person>
<person>
<fn>Alina</fn>
<sn>Hang</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hang_projector_2008</citeid>
<title>Projector phone: a study of using mobile phones with integrated projector for interaction with maps</title>
<year>2008</year>
<DOI>10.1145/1409240.1409263</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HangMobileHCI2008.pdf</file_url>
<authors>
<person>
<fn>Alina</fn>
<sn>Hang</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andrew</fn>
<sn>Greaves</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>henze_services_2008</citeid>
<title>Services surround you</title>
<year>2008</year>
<DOI>10.1007/s00371-008-0266-4</DOI>
<journal>The Visual Computer</journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HenzeVisualComp2008.pdf</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>René</fn>
<sn>Reiners</sn>
</person>
<person>
<fn>Xavier</fn>
<sn>Righetti</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Susanne</fn>
<sn>Boll</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_physical_2008</citeid>
<title>The physical mobile interaction framework (pmif)</title>
<year>2008</year>
<journal>Technical Report <prt>LMU-MI-2008-2</prt></journal>
<file_url>http://eprints.lancs.ac.uk/42386/1/lmureport2008.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Sergej</fn>
<sn>Wetzstein</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_touch_2008</citeid>
<title>Touch &amp; interact: touch-based interaction of mobile phones with displays</title>
<year>2008</year>
<DOI>10.1145/1409240.1409267</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<keywords>touchbasedmobile</keywords>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HardyMobileHCI2008.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>hardy_touch_2008-1</citeid>
<title>Touch &amp; Interact: touch-based interaction with a tourist application</title>
<year>2008</year>
<DOI>10.1145/1409240.1409337</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<keywords>touchtourist</keywords>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/HardyMobileHCI2008_2.pdf</file_url>
<authors>
<person>
<fn>Robert</fn>
<sn>Hardy</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>gostner_usage_2008</citeid>
<title>Usage of spatial information for selection of co-located devices</title>
<year>2008</year>
<DOI>10.1145/1409240.1409305</DOI>
<booktitle>Proceedings of the 10th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2008/GostnerMobileHCI2008.pdf</file_url>
<authors>
<person>
<fn>Roswitha</fn>
<sn>Gostner</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Hans</fn>
<sn>Gellersen</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>broll_authoring_2007</citeid>
<title>Authoring Support for Mobile Interaction with the Real World.</title>
<year>2007</year>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/BrollPervasive2007.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Björn</fn>
<sn>Wedi</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>broll_comparing_2007</citeid>
<title>Comparing Techniques for Mobile Interaction with Objects from the Real World.</title>
<year>2007</year>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/BrollPermid2007.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>henze_contextual_2007</citeid>
<title>Contextual bookmarks</title>
<year>2007</year>
<booktitle>MobileHCI 2007 workshop on Mobile Interaction with the Real World (MIRW 2007)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/HenzeMobileHCI2007.pdf</file_url>
<authors>
<person>
<fn>Niels</fn>
<sn>Henze</sn>
</person>
<person>
<fn>Mingyu</fn>
<sn>Lim</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Lorenz</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Mueller</sn>
</person>
<person>
<fn>Xavier</fn>
<sn>Righetti</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Nadia</fn>
<sn>Magnenat-Thalmann</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Zimmermann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>rukzio_mobile_2007</citeid>
<title>Mobile interaction with the real world: An evaluation and comparison of physical mobile interaction techniques</title>
<year>2007</year>
<booktitle>Ambient Intelligence</booktitle>
<publisher>Springer</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/RukzioAmbIntelli2007.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Karin</fn>
<sn>Leichtenstern</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>broll_mobile_2007</citeid>
<title>Mobile interaction with web services through associated real world objects</title>
<year>2007</year>
<DOI>10.1145/1377999.1378025</DOI>
<booktitle>Proceedings of the 9th international conference on Human computer interaction with mobile devices and services</booktitle>
<pages>319–321</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/mitarbeiterbereiche/wolf/awards/broll_mobile_2007.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Markus</fn>
<sn>Haarländer</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Kevin</fn>
<sn>Wiesner</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>falke_mobile_2007</citeid>
<title>Mobile services for near field communication</title>
<year>2007</year>
<journal>Ludwig-Maximilians-Universität (<prt>LMU)</prt>, Munich, Germany, Technical Report <prt>LMUMI-2007-1</prt></journal>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/FalkeNFC2007.pdf</file_url>
<authors>
<person>
<fn>Oliver</fn>
<sn>Falke</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Ulrich</fn>
<sn>Dietz</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>vetter_physical_2007</citeid>
<title>Physical mobile interaction with dynamic physical objects</title>
<year>2007</year>
<DOI>10.1145/1377999.1378030</DOI>
<booktitle>Proceedings of the 9th international conference on Human computer interaction with mobile devices and services</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/VetterMobileHCI2007.pdf</file_url>
<authors>
<person>
<fn>Johannes</fn>
<sn>Vetter</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>holleis_privacy_2007</citeid>
<title>Privacy and Curiosity in Mobile Interactions with Public Displays</title>
<year>2007</year>
<booktitle><prt>CHI</prt> 2007 workshop on Mobile Spatial Interaction</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/HolleisCHI2007.pdf</file_url>
<authors>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Friderike</fn>
<sn>Otto</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>broll_supporting_2007</citeid>
<title>Supporting mobile service usage through physical mobile interaction</title>
<year>2007</year>
<DOI>10.1109/PERCOM.2007.35</DOI>
<booktitle>Fifth Annual IEEE International Conference on Pervasive Computing and Communications (PerCom'07)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/BrollPervCom2007.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>broll_using_2007</citeid>
<title>Using Video Clips to Support Requirements Elicitation in Focus Groups: An Experience Report</title>
<year>2007</year>
<booktitle><prt>SE</prt> 2007 workshop on Multimedia Requirements Engineering</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2007/BrollRequEng2007.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Raphael</fn>
<sn>Wimmer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>incollection</bibtype>
<citeid>rukzio_experimental_2006</citeid>
<title>An experimental comparison of physical mobile interaction techniques: Touching, pointing and scanning</title>
<year>2006</year>
<DOI>10.1007/11853565_6</DOI>
<booktitle><prt>UbiComp</prt> 2006: Ubiquitous Computing</booktitle>
<publisher>Springer</publisher>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/RukzioUbiComp2006.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Karin</fn>
<sn>Leichtenstern</sn>
</person>
<person>
<fn>Vic</fn>
<sn>Callaghan</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Jeannette</fn>
<sn>Chin</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>leichtenstern_mobile_2006</citeid>
<title>Mobile interaction in smart environments</title>
<year>2006</year>
<journal>Computing</journal>
<volume>7</volume>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/LeichtensternPervCom2006.pdf</file_url>
<authors>
<person>
<fn>Karin</fn>
<sn>Leichtenstern</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Jeannette</fn>
<sn>Chin</sn>
</person>
<person>
<fn>Vic</fn>
<sn>Callaghan</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>siorpaes_mobile_2006</citeid>
<title>Mobile interaction with the internet of things</title>
<year>2006</year>
<booktitle>Poster at 4th International Conference on Pervasive Computing (Pervasive 2006), Dublin, Ireland</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/SiorpaesPERC2006.pdf</file_url>
<authors>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_mobile_2006</citeid>
<title>Mobile interaction with the real world</title>
<year>2006</year>
<booktitle><prt>ACM</prt> International Conference Proceeding Series</booktitle>
<volume>159</volume>
<pages>295–296</pages>
<file_url>http://www.comp.lancs.ac.uk/~rukzio/mobilehci2009tutorials/Rukzio_MobileInteractionWithTheRealWorld.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Tim</fn>
<sn>Finin</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Wisner</sn>
</person>
<person>
<fn>Terry</fn>
<sn>Payne</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>schmidt_mobile_2006</citeid>
<title>Mobile phones as tool to increase communication and location awareness of users</title>
<year>2006</year>
<DOI>10.1145/1292331.1292355</DOI>
<booktitle>Proceedings of the 3rd international conference on Mobile technology, applications &amp; systems</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/SchmidtMobility2006.pdf</file_url>
<authors>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Richard</fn>
<sn>Atterer</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_mobile_2006-1</citeid>
<title>Mobile service interaction with the web of things</title>
<year>2006</year>
<booktitle>13th International Conference on Telecommunications (ICT 2006), Funchal, Madeira Island, Portugal</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/RukzioICT2006.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Hendrik H.</fn>
<sn>Berndt</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>phdthesis</bibtype>
<citeid>rukzio_physical_2006</citeid>
<title>Physical mobile interactions: Mobile devices as pervasive mediators for interactions with the real world</title>
<year>2006</year>
<school>University of Munich</school>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/RukzioDissertation.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>broll_supporting_2006</citeid>
<title>Supporting Mobile Service Interaction through Semantic Service Description Annotation and Automatic Interface Generation</title>
<year>2006</year>
<booktitle><prt>ISWC</prt> 2006 workshop on Semantic Desktop and Social Semantic Collaboration (Semdesk 2006)</booktitle>
<file_url>http://eprints.lancs.ac.uk/42334/1/semdesk2006.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>broll_supporting_2006-1</citeid>
<title>Supporting service interaction in the real world</title>
<year>2006</year>
<booktitle>Workshop <prt>PERMID</prt></booktitle>
<file_url>http://www.researchgate.net/publication/228364452_Supporting_Service_Interaction_in_the_Real_World/file/32bfe50ec4f2cb65ab.pdf</file_url>
<authors>
<person>
<fn>Gregor</fn>
<sn>Broll</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Massimo</fn>
<sn>Paolucci</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Matthias</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>bartolomeo_simplicity_2006</citeid>
<title>The Simplicity Device: Your Personal Mobile Representative</title>
<year>2006</year>
<booktitle>Workshop on Pervasive Mobile Interaction Devices (<prt>PERMID</prt> 2006), Dublin, Ireland</booktitle>
<file_url>http://www.comp.lancs.ac.uk/~rukzio/publications/permid2006_Bartolomeo.pdf</file_url>
<authors>
<person>
<fn>Giovanni</fn>
<sn>Bartolomeo</sn>
</person>
<person>
<fn>Francesca</fn>
<sn>Martire</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Stefano</fn>
<sn>Salsano</sn>
</person>
<person>
<fn>Nicola</fn>
<sn>Blefari Melazzi</sn>
</person>
<person>
<fn>Chie</fn>
<sn>Noda</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>schmidt_utilizing_2006</citeid>
<title>Utilizing mobile phones as ambient information displays</title>
<year>2006</year>
<DOI>10.1145/1125451.1125692</DOI>
<booktitle>CHI'06 Extended Abstracts on Human Factors in Computing Systems</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/SchmidtCHI2006.pdf</file_url>
<authors>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Jonna</fn>
<sn>Häkkilä</sn>
</person>
<person>
<fn>Richard</fn>
<sn>Atterer</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Paul</fn>
<sn>Holleis</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_visualization_2006</citeid>
<title>Visualization of uncertainty in context aware mobile applications</title>
<year>2006</year>
<DOI>10.1145/1152215.1152267</DOI>
<booktitle>Proceedings of the 8th conference on Human-computer interaction with mobile devices and services</booktitle>
<pages>247–250</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2006/RukzioCHI2006.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Chie</fn>
<sn>Noda</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_framework_2005</citeid>
<title>A Framework for Mobile Interactions with the Physical World</title>
<year>2005</year>
<journal>Proceedings of Wireless Personal Multimedia Communication (<prt>WPMC'05)</prt></journal>
<file_url>https://www.medien.ifi.lmu.de/fileadmin/mimuc/rukzio/rukzio_wpmc2005.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Sergej</fn>
<sn>Wetzstein</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>leichtenstern_analysis_2005</citeid>
<title>Analysis of built-in mobile phone sensors for supporting interactions with the real world</title>
<year>2005</year>
<booktitle>Pervasive Mobile Interaction Devices (<prt>PERMID</prt> 2005): Mobile Devices as Pervasive User Interfaces and Interaction Devices, workshop at Pervasive 2005</booktitle>
<file_url>http://www.medien.ifi.lmu.de/permid2005/pdf/KarinLeichtenstern_Permid2005.pdf</file_url>
<authors>
<person>
<fn>Karin</fn>
<sn>Leichtenstern</sn>
</person>
<person>
<fn>Alexander</fn>
<sn>De Luca</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_development_2005</citeid>
<title>Development of interactive applications for mobile devices</title>
<year>2005</year>
<booktitle><prt>ACM</prt> International Conference Proceeding Series</booktitle>
<volume>111</volume>
<pages>365–366</pages>
<file_url>http://www.medien.ifi.lmu.de/fileadmin/mimuc/rukzio/tutorial_mobilehci2005.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Michael</fn>
<sn>Rohs</sn>
</person>
<person>
<fn>Daniel</fn>
<sn>Wagner</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_policy_2005</citeid>
<title>Policy based adaptive services for mobile commerce</title>
<year>2005</year>
<DOI>10.1109/WMCS.2005.18</DOI>
<booktitle>The Second <prt>IEEE</prt> International Workshop on Mobile Commerce and Services (<prt>WMCS'05</prt>)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2005/RukzioWMCS2005.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Sven</fn>
<sn>Siorpaes</sn>
</person>
<person>
<fn>Oliver</fn>
<sn>Falke</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_physical_2005</citeid>
<title>The physical user interface profile (PUIP): Modelling mobile interactions with the real world</title>
<year>2005</year>
<DOI>10.1145/1122935.1122954</DOI>
<booktitle>Proceedings of the 4th international workshop on Task models and diagrams</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2005/RukzioTAMODIA2005.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Andreas</fn>
<sn>Pleuss</sn>
</person>
<person>
<fn>Lucia</fn>
<sn>Terrenghi</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_rotating_2005</citeid>
<title>The rotating compass: a novel interaction technique for mobile navigation</title>
<year>2005</year>
<DOI>10.1145/1056808.1057016</DOI>
<booktitle><prt>CHI'05</prt> Extended Abstracts on Human Factors in Computing Systems</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2005/RukzioCHI2005.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Antonio</fn>
<sn>Krüger</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>brugnoli_user_2005</citeid>
<title>User expectations for simple mobile ubiquitous computing environments</title>
<year>2005</year>
<DOI>10.1109/WMCS.2005.27</DOI>
<booktitle>The Second <prt>IEEE</prt> International Workshop on Mobile Commerce and Services (<prt>WMCS'05</prt>)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2005/BrugnoliWMCS2005.pdf</file_url>
<authors>
<person>
<fn>Maria Cristina</fn>
<sn>Brugnoli</sn>
</person>
<person>
<fn>John</fn>
<sn>Hamard</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_analysis_2004</citeid>
<title>An analysis of the usage of mobile phones for personalized interactions with ubiquitous public displays</title>
<year>2004</year>
<booktitle>Workshop on Ubiquitous Display Environments, in conjunction with <prt>UbiComp</prt></booktitle>
<file_url>http://www.medien.ifi.lmu.de/pubdb/publications/pub/rukzio2004ubidisplay/rukzio2004ubidisplay.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_context_2004</citeid>
<title>Context for Simplicity: A Basis for Context-aware Systems Based on the <prt>3GPP</prt> Generic User Profile</title>
<year>2004</year>
<booktitle>International Conference on Computational Intelligence (<prt>ICCI</prt> 2004)</booktitle>
<pages>466–469</pages>
<file_url>http://www.comp.lancs.ac.uk/~rukzio/publications/icci2004_Rukzio.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>George N.</fn>
<sn>Prezerakos</sn>
</person>
<person>
<fn>Giovanni</fn>
<sn>Cortese</sn>
</person>
<person>
<fn>Eleftherios</fn>
<sn>Koutsoloukas</sn>
</person>
<person>
<fn>Sofia</fn>
<sn>Kapellaki</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>rukzio_physical_2004</citeid>
<title>Physical posters as gateways to context-aware services for mobile devices</title>
<year>2004</year>
<DOI>10.1109/MCSA.2004.20</DOI>
<booktitle>Sixth IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2004)</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2004/RukzioWMCSA2004.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_privacy-enhanced_2004</citeid>
<title>Privacy-enhanced intelligent automatic form filling for context-aware services on mobile devices</title>
<year>2004</year>
<journal>Artificial Intelligence in Mobile Systems 2004 (AIMS 2004)</journal>
<pages>84</pages>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2004/RukzioAIMS2004.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
<person>
<fn>Albrecht</fn>
<sn>Schmidt</sn>
</person>
<person>
<fn>Heinrich</fn>
<sn>Hussmann</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>book</bibtype>
<citeid>rukzio_generic_2003</citeid>
<title>A generic extension mechanism for X3D to define, implement and integrate new first-class nodes, components, and profiles</title>
<year>2003</year>
<publisher><prt>PhD</prt> Thesis, Dresden University of Technology</publisher>
<file_url>http://www.comp.lancs.ac.uk/~rukzio/publications/X3D2003_rukzio.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>inproceedings</bibtype>
<citeid>dachselt_behavior3d:_2003</citeid>
<title>Behavior3D: an XML-based framework for 3D graphics behavior</title>
<year>2003</year>
<DOI>10.1145/636593.636609</DOI>
<booktitle>Proceedings of the eighth international conference on <prt>3D</prt> Web technology</booktitle>
<file_url>https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/Prof_Rukzio/2003/DachseltWeb3D2003.pdf</file_url>
<authors>
<person>
<fn>Raimund</fn>
<sn>Dachselt</sn>
</person>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_realisierung_2002</citeid>
<title>Realisierung von Interaktionen und Verhalten in dokumentbestimmten, komponentenbasierten <prt>3D-Applikationen</prt></title>
<year>2002</year>
<file_url>http://www.comp.lancs.ac.uk/~rukzio/publications/Diplom2002_rukzio.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
<reference>
<bibtype>article</bibtype>
<citeid>rukzio_formate_2001</citeid>
<title>Formate, Technologien und Architekturkonzepte für 3D-Web-Applikationen</title>
<year>2001</year>
<journal>Belegarbeit (student research report), Technische Universität Dresden</journal>
<file_url>http://www.medien.ifi.lmu.de/fileadmin/mimuc/rukzio/BelegarbeitEnricoRukzio.pdf</file_url>
<authors>
<person>
<fn>Enrico</fn>
<sn>Rukzio</sn>
</person>
</authors>
</reference>
</bib>
