Papers

Session 1: Avatar - Appearance

Monday, March 23, 11:00 AM - 12:30 PM,
Track 1 (Great Room 1)

Effects of Locomotion Style and Body Visibility of a Telepresence Avatar

Youjin Choi (KAIST), Jeongmi Lee (KAIST), Sung-Hee Lee (KAIST)

Conference

Abstract: “Telepresence avatars enable users in different environments to interact with each other. In order to increase the effectiveness of these interactions, however, the movements of avatars must be adjusted to account for differences between user environments. Several locomotion styles can be used to achieve the required speed change. This paper investigates how different avatar locomotion styles (speed, stride, and glide), body visibility levels (full body and head-to-knee), and views (front and side) influence human perceptions of the naturalness of motion, similarity to the user’s locomotion, and the degree to which the user’s intention is preserved.”

Manipulating Puppets in VR

Michael Nitsche (Georgia Institute of Technology, USA), Pierce McBride (Georgia Institute of Technology)

Conference

Abstract: “Archiving Performative Objects aimed at applying and conserving puppetry as a creative practice in VR. It included 3D scanning and interaction design to capture puppets and their varying control schemes from the archives of the Center for Puppetry Arts. This paper reports on their design and implementation in a VR puppetry setup. It focuses on the evaluation study (n=18) comparing the interaction of non-expert vs. expert puppeteers. The data initially show few differences, but a more detailed discussion indicates differing qualitative assessments of puppetry that support its value for VR. Results suggest successful creative activation, especially among experts.”

The Self-Avatar Follower Effect in Virtual Reality

Mar Gonzalez-Franco (Microsoft Research), Brian Cohn (Microsoft Research), Eyal Ofek (Microsoft Research), Dalila Burin (Tohoku University), Antonella Maselli (Microsoft Research)

Conference

Abstract: “When embodying a virtual avatar in immersive VR applications, users typically are, and feel, in control of the avatar’s movements. However, there are situations in which the technology could flip this relationship, so that an embodied avatar affects the user’s motor behavior without the user noticing it. This has been shown in action-retargeting applications and motor-contagion experiments. Here we discuss the self-avatar follower effect. We review previous evidence and present new experimental results showing how, whenever the virtual body does not overlap with the physical body, users tend to unconsciously follow their avatar, filling the gap if the system allows for it.”

Effects of volumetric capture avatars on social presence in immersive virtual environments

SungIk Cho (Korea University, South Korea), Seung-wook Kim (Korea University, South Korea), JongMin Lee (Korea University, South Korea), JeongHyeon Ahn (Korea University, South Korea), JungHyun Han (Korea University, South Korea)

Conference

Abstract: “Recent advances in 3D reconstruction and tracking technologies have made it possible to volumetrically capture the human body and its performance in real time. In the field of human-computer interaction, however, no user studies have been reported with such volumetric capture avatars. This paper investigates how volumetric capture avatars affect users’ sense of social presence in immersive virtual environments. Two experiments were conducted in which the volumetric capture avatar of an actor is compared with the actor captured in 2D video and with another 3D avatar obtained by pre-scanning the actor. The results show that emerging volumetric capture techniques can be an attractive tool for many XR applications.”

Modeling Data-Driven Dominance Traits for Virtual Characters using Gait Analysis

Tanmay Randhavane (University of North Carolina), Aniket Bera (University of North Carolina), Emily Kubin (University of Tilburg), Kurt Gray (University of North Carolina), Dinesh Manocha (University of Maryland)

Journal

Abstract: “We present a data-driven algorithm for generating gaits of virtual characters with varying dominance traits. Our formulation utilizes a user study to establish a data-driven dominance mapping between gaits and dominance labels. We use our dominance mapping to generate walking gaits for virtual characters that exhibit a variety of dominance traits while interacting with the user. Furthermore, we extract gait features based on known criteria in the visual perception and psychology literature that can be used to identify the dominance level of any walking gait. We validate our mapping and the perceived dominance traits in a second user study in an immersive virtual environment. Our gait dominance classification algorithm can classify the dominance traits of gaits with ~73% accuracy. We also present an application of our approach that simulates interpersonal relationships between virtual characters. To the best of our knowledge, ours is the first practical approach to classifying gait dominance and generating dominance traits in virtual characters.”

Session 2: 3DUI - Selection/Exploration

Monday, March 23, 11:00 AM - 12:30 PM,
Track 2 (Great Room 2)

Investigating Bubble Mechanism for Ray-Casting to Improve 3D Target Acquisition in Virtual Reality

Yiqin Lu (Department of Computer Science and Technology, Tsinghua University; Key Laboratory of Pervasive Computing, Ministry of Education, China), Chun Yu (Department of Computer Science and Technology, Tsinghua University; Key Laboratory of Pervasive Computing, Ministry of Education, China), Yuanchun Shi (Department of Computer Science and Technology, Tsinghua University; Key Laboratory of Pervasive Computing, Ministry of Education, China)

Conference

Abstract: “We investigate a bubble mechanism for ray-casting that dynamically resizes the selection range of the ray for 3D target acquisition in virtual reality. The bubble mechanism identifies the target nearest to the ray, so users do not have to shoot accurately through the target. We design the selection criterion and the visual feedback of the bubble, and conduct two experiments to evaluate ray-casting techniques with the bubble mechanism in both simple and complicated 3D target acquisition tasks. Results show the bubble mechanism significantly improves ray-casting in both performance and preference, and that the Bubble Ray technique with an angular distance definition is competitive with other target acquisition techniques.”
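
For readers who want the gist of the mechanism, here is a minimal Python sketch (our own illustration, not the authors’ implementation): the target with the smallest angular distance to the ray wins, so the ray need not intersect the target geometry.

import numpy as np

def angular_distance(ray_origin, ray_dir, target):
    """Angle (radians) between the ray direction and the origin-to-target vector."""
    to_target = target - ray_origin
    cos_a = np.dot(ray_dir, to_target) / (np.linalg.norm(ray_dir) * np.linalg.norm(to_target))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def bubble_select(ray_origin, ray_dir, targets):
    """Return the index of the target nearest to the ray by angular distance."""
    return int(np.argmin([angular_distance(ray_origin, ray_dir, t) for t in targets]))

# Example: the ray points roughly at the second target, so it is selected
# even though the ray does not pass exactly through it.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.1, 1.0])
targets = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 0.3, 3.0])]
print(bubble_select(origin, direction, targets))  # -> 1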

Improving Obstacle Awareness to Enhance Interaction in Virtual Reality

Ivan Valentini (University of Genoa), Giorgio Ballestin (University of Genoa), Chiara Bassano (University of Genoa), Fabio Solari (University of Genoa), Manuela Chessa (University of Genoa)

Conference

Abstract: “Immersive VR is experienced through HMDs while the user is physically present in a real, cluttered environment. We present a method to create a virtual scenario composed of virtual objects having the same spatial occupancy as the corresponding real ones. The real scene is scanned to detect the position and bounding box of objects and obstacles, then virtual elements of similar size are added. Two different structure detection and clustering techniques are described and compared, also considering users’ sense of presence with respect to a standard technique. The method allows us to maintain awareness of the real environment while keeping a high level of immersion and sense of presence, and to achieve augmented virtuality interaction.”

Slicing-Volume: Hybrid 3D/2D Multi-target Selection Technique for Dense Virtual Environments

Roberto A. Montano-Murillo (University of Sussex, Brighton, United Kingdom), Cuong Nguyen (Adobe Research, San Francisco, California, United States), Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States), Sriram Subramanian (University of Sussex, Brighton, United Kingdom), Stephen DiVerdi (Adobe Research, San Francisco, California, United States), Diego Martinez-Plasencia (University of Sussex, Brighton, United Kingdom)

Conference

Abstract: “3D selection in dense VR environments is challenging due to occlusion and imprecise mid-air input modalities. In this paper, we propose “Slicing-Volume”, a hybrid selection technique that enables simultaneous 3D interaction in mid-air, and a 2D pen-and-tablet metaphor in VR. Our technique consists of a 3D volume that encloses target objects in mid-air, which are then projected to a 2D tablet view for precise selection on a tangible physical surface. We evaluated our approach in highly occluded selection tasks and showed that our hybrid technique significantly improved accuracy of selection compared to mid-air selection only, thanks to the added haptic feedback given by the physical tablet surface.”

ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props

Victor Mercado (INRIA Rennes), Maud Marchal (INRIA Rennes), Anatole Lécuyer (INRIA Rennes)

Journal

Abstract: “In this paper, we propose an approach towards an infinite surface haptic display. Our approach, named ENcountered-Type ROtating Prop Approach (ENTROPiA), is based on a cylindrical spinning prop attached to a robot’s end-effector serving as an encountered-type haptic display (ETHD). This type of haptic display permits unconstrained, free-hand contact with a surface that a robotic device positions for the user to encounter. In our approach, the sensation of touching a virtual surface is produced by an interaction technique that couples the sliding movement of the prop under the user’s finger with their tracked hand location, establishing a path to be explored. This approach enables large motions for rendering larger surfaces, permits rendering multi-textured haptic feedback, and extends the ETHD approach by introducing large motions and sliding/friction sensations. As part of our contribution, a proof of concept was designed to illustrate our approach. A user study was conducted to assess the perception of our approach, showing strong performance in rendering the sensation of touching a large flat surface. Our approach could be used to render large haptic surfaces in applications such as rapid prototyping for automobile design.”

Session 3: Gaze and Attention

Monday, March 23, 11:00 AM - 12:30 PM,
Track 3 (Studio 1)

DGaze: CNN-Based Gaze Prediction in Dynamic Scenes

Zhiming Hu (Peking University, China), Sheng Li (Peking University, China), Congyi Zhang (The University of Hong Kong, China), Kangrui Yi (Peking University, China), Guoping Wang (Peking University, China), Dinesh Manocha (University of Maryland, USA)

Journal

Abstract: “We conduct novel analyses of users’ gaze behaviors in dynamic virtual scenes and, based on our analyses, we present a novel CNN-based model called DGaze for gaze prediction in HMD-based applications. We first collect 43 users’ eye tracking data in 5 dynamic scenes under free-viewing conditions. Next, we perform statistical analysis of our data and observe that dynamic object positions, head rotation velocities, and salient regions are correlated with users’ gaze positions. Based on our analysis, we present a CNN-based model (DGaze) that combines object position sequence, head velocity sequence, and saliency features to predict users’ gaze positions. Our model can be applied to predict not only real-time gaze positions but also gaze positions in the near future, and achieves better performance than the prior method. In terms of real-time prediction, DGaze achieves a 22.0% improvement over the prior method in dynamic scenes and an improvement of 9.5% in static scenes, using angular distance as the evaluation metric. We also propose a variant of our model called DGaze_ET that can predict future gaze positions with higher precision by incorporating accurate past gaze data gathered with an eye tracker. We further analyze our CNN architecture and verify the effectiveness of each component in our model. We apply DGaze to gaze-contingent rendering and a game, and also present the evaluation results from a user study.”
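
The angular-distance evaluation metric mentioned in the abstract is easy to state; here is a minimal Python sketch of it (our own illustration of the metric, not the DGaze model itself).

import numpy as np

def gaze_angular_error(pred_dir, true_dir):
    """Angle in degrees between predicted and ground-truth gaze directions."""
    pred = pred_dir / np.linalg.norm(pred_dir)
    true = true_dir / np.linalg.norm(true_dir)
    return np.degrees(np.arccos(np.clip(np.dot(pred, true), -1.0, 1.0)))

# A prediction tilted slightly upward from the true gaze direction.
print(gaze_angular_error(np.array([0.0, 0.05, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~2.86 degrees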

Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos

Anastasia Schmitz (University College London, United Kingdom), Andrew MacQuarrie (University College London, United Kingdom), Simon Julier (University College London, United Kingdom), Nicola Binetti (University College London, United Kingdom), Anthony Steed (University College London, United Kingdom)

Conference

Abstract: “This mixed-methods study evaluated the effectiveness of using central arrows and peripheral flickers to guide attention in panoramic videos. 25 adults wore a head-mounted display with an eye tracker and were guided to 14 targets in two videos. No significant differences emerged with regard to the number of followed cues, time taken to reach and observe targets, memory, or user engagement. However, participants’ gaze travelled a significantly greater distance toward targets within the first 500 ms after flicker onsets compared to arrow onsets. Nevertheless, most users preferred the arrow, perceiving it as more rewarding. Traditional attention paradigms may not be entirely applicable to panoramic videos.”

Exploring the impact of 360° movie cuts in users’ attention

Carlos Marañes (Universidad de Zaragoza), Diego Gutierrez (Universidad de Zaragoza), Ana Serrano (Universidad de Zaragoza)

Conference

Abstract: “The production of virtual reality (VR) cinematic content is still in an early exploratory phase. A key element in film editing is the use of cutting techniques, in order to transition seamlessly from one sequence to another. A fundamental aspect of these techniques is the control over the camera. However, in VR, users can freely explore the 360° around them, which potentially leads to very different experiences. We perform a systematic analysis of users’ viewing behavior across cut boundaries while watching 360° videos, and we derive insights that could inform creators about the impact of cuts in the audience’s behavior.”

A Comparison of Visual Attention Guiding Approaches for 360° Image-Based VR Tours

Jan Oliver Wallgrün (ChoroPhronesis, Department of Geography, The Pennsylvania State University), Mahda M. Bagher (ChoroPhronesis, Department of Geography, The Pennsylvania State University), Pejman Sajjadi (ChoroPhronesis, Department of Geography, The Pennsylvania State University), Alexander Klippel (ChoroPhronesis, Department of Geography, The Pennsylvania State University)

Conference

Abstract: “Mechanisms for guiding a user’s visual attention to a particular point of interest play a crucial role in areas such as collaborative VR and AR, cinematic VR, and tour experiences in xR-based education. We report on a study in which we compared three different visual guiding mechanisms (arrow, butterfly guide, radar) in the context of 360° image-based educational VR tour applications of real-world sites. A fourth condition with no guidance tool available was added as a baseline. We investigate the question: How do the different approaches compare in terms of target-finding performance and participants’ assessments of the experience?”

SalBiNet360: Saliency Prediction on 360° Images with Local-Global Bifurcated Deep Network

Dongwen Chen (South China University of Technology, China), Chunmei Qing (South China University of Technology, China), Xiangmin Xu (South China University of Technology, China), Huansheng Zhu (South China University of Technology, China)

Conference

Abstract: “Predicting human visual attention on 360° images is valuable and essential to understand user behaviour. In this paper, we propose a local-global bifurcated deep network for saliency prediction on 360° images (SalBiNet360). In the global deep sub-network, multiple multi-scale contextual modules and a multi-level decoder are utilized to integrate the features. In the local deep sub-network, only one multi-scale contextual module and a single-level decoder are utilized to reduce the redundancy of local saliency maps. Finally, fused saliency maps are generated by linear combination of the global and local saliency maps. Experiments illustrate the effectiveness of the proposed framework.”
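
The final fusion step described in the abstract is a simple linear combination; here is a minimal Python sketch, with the weight alpha as our own placeholder (the paper’s actual combination weights are not given here).

import numpy as np

def fuse_saliency(global_map, local_map, alpha=0.5):
    """Linearly combine global and local saliency maps and renormalize to [0, 1]."""
    fused = alpha * global_map + (1.0 - alpha) * local_map
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

g = np.random.rand(64, 128)  # stand-in for the global sub-network output
l = np.random.rand(64, 128)  # stand-in for the local sub-network output
print(fuse_saliency(g, l).shape)  # (64, 128)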

Session 4: Avatar - Perception

Monday, March 23, 2:00 PM - 3:30 PM,
Track 1 (Great Room 1)

Detection Thresholds for Vertical Gains in VR and Drone-based Telepresence Systems

Keigo Matsumoto (The University of Tokyo), Eike Langbehn (University of Hamburg), Takuji Narumi (The University of Tokyo), Frank Steinicke (University of Hamburg)

Conference

Abstract: “We explored vertical gains, a novel redirection technique that enables us to purposefully manipulate the mapping of the user’s physical vertical movements to movements in the virtual space and the remote space. This approach allows natural and more active physical control of a real drone. To demonstrate the usability of vertical gains, we implemented a telepresence drone and vertical redirection techniques for stretching and crouching actions using common VR devices. We conducted two user studies to investigate the effective manipulation ranges and their usability: one study using a virtual environment, and one using a camera stream from a telepresence drone.”
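
A vertical gain reduces to a one-line mapping; here is a minimal Python sketch under our own assumptions (the reference height and gain values are illustrative, not the paper’s).

def apply_vertical_gain(physical_y, reference_y, gain):
    """Map physical vertical displacement to virtual/remote displacement via a gain."""
    return reference_y + gain * (physical_y - reference_y)

# With a gain of 2.0, crouching 0.3 m below a 1.7 m reference height
# lowers the virtual viewpoint by 0.6 m.
print(apply_vertical_gain(1.4, 1.7, 2.0))  # -> 1.1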

The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars

Brendan John (University of Florida), Sophie Jörg (Clemson University), Sanjeev Koppal (University of Florida), Eakta Jain (University of Florida)

Journal

Abstract: “The gaze behavior of virtual avatars is critical to social presence and perceived eye contact during social interactions in Virtual Reality. Virtual Reality headsets are being designed with integrated eye tracking to enable compelling virtual social interactions. This paper shows that the near-infrared cameras used in eye tracking capture eye images that contain iris patterns of the user. Because iris patterns are a gold standard biometric, the current technology places the user’s biometric identity at risk. Our first contribution is an optical-defocus-based hardware solution to remove the iris biometric from the stream of eye tracking images. We characterize the performance of this solution with different internal parameters. Our second contribution is a psychophysical experiment with a same-different task that investigates the sensitivity of users to a virtual avatar’s eye movements when this solution is applied. By deriving detection threshold values, our findings provide a range of defocus parameters where the change in eye movements would go unnoticed in a conversational setting. Our third contribution is a perceptual study to determine the impact of defocus parameters on the perceived eye contact, attentiveness, naturalness, and truthfulness of the avatar. Thus, if a user wishes to protect their iris biometric, our approach provides a solution that balances biometric protection while preventing their conversation partner from perceiving a difference in the user’s virtual avatar. This work is the first to develop secure eye tracking configurations for VR/AR/XR applications and motivates future work in the area.”

An Optical Design for Avatar-User Co-axial Viewpoint Telepresence

Kei Tsuchiya (The University of Electro-Communications), Naoya Koizumi (The University of Electro-Communications, JST PRESTO)

Conference

Abstract: “We propose a system that takes the avatar from VR space to real space with mid-air imaging technology. In this system, the micro-mirror array plates (MMAPs) display the mid-air image and optically transfer the camera viewpoint to capture users from the mid-air image position. We evaluated the image capturing performance and revealed an optical specification of MMAPs. It was confirmed that the face detection works correctly on the captured video by adjusting the ISO sensitivity of the camera. Furthermore, we designed an application for telepresence called Levitar, which uses a dual camera to output the captured video to the HMD and controls the camera gaze direction.”

SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning

Andrew Best (University of North Carolina at Chapel Hill, USA), Sahil Narang (University of North Carolina at Chapel Hill, USA), Dinesh Manocha (University of Maryland, USA)

Conference

Abstract: “We present a novel approach for generating plausible verbal interactions between virtual human-like agents and user avatars in shared, interactive virtual environments. Sense-Plan-Ask, or SPA, extends prior work in propositional planning and natural language processing to enable agents to plan with uncertain information, and leverage question and answer dialogue with other agents and avatars to obtain the needed information and complete their goals. Agents ask and respond to questions. We demonstrate quantitative results on a set of simulated benchmarks and detail the results of a preliminary user-study conducted to evaluate the plausibility of the virtual interactions generated by SPA.”

Comparing the Quality of Highly Realistic Digital Humans in 3DoF and 6DoF: A Volumetric Video Case Study

Shishir Subramanyam (Centrum Wiskunde & Informatica), Jie Li (Centrum Wiskunde & Informatica), Irene Viola (Centrum Wiskunde & Informatica), Pablo Cesar (Centrum Wiskunde & Informatica)

Conference

Abstract: “Point clouds have emerged as a popular format for real-time 3D reconstructions such as reconstructing humans for social virtual reality. In this study, we evaluate the effect of compression distortions on the visual quality of point cloud digital humans. We compare the performance of the upcoming point cloud compression standard against an anchor codec. The test is conducted in two VR viewing conditions enabling 3 and 6 degrees of freedom. To the best of our knowledge, this is the first work performing user quality evaluation of dynamic point clouds in VR. Results highlight how perceived visual quality is affected by the tested content, and how current data sets might not be sufficient to comprehensively evaluate compression solutions.”

Session 5: 3DUI - Navigation - Redirected Walking

Monday, March 23, 2:00 PM - 3:30 PM,
Track 2 (Great Room 2)

A Steering Algorithm for Redirected Walking Using Reinforcement Learning

Ryan R. Strauss (Davidson College), Raghuram Ramanujan (Davidson College), Andrew Becker (Bank of America), Tabitha C. Peck (Davidson College)

Journal

Abstract: “Redirected Walking (RDW) steering algorithms have traditionally relied on human-engineered logic. However, recent advances in reinforcement learning (RL) have produced systems that surpass human performance on a variety of control tasks. This paper investigates the potential of using RL to develop a novel reactive steering algorithm for RDW. Our approach uses RL to train a deep neural network that directly prescribes the rotation, translation, and curvature gains to transform a virtual environment given a user’s position and orientation in the tracked space. We compare our learned algorithm to steer-to-center using simulated and real paths. We found that our algorithm outperforms steer-to-center on simulated paths, and found no significant difference in distance traveled on real paths. We demonstrate that when modeled as a continuous control problem, RDW is a suitable domain for RL, and moving forward, our general framework provides a promising path towards an optimal RDW steering algorithm.”
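
The three gains the network prescribes act on a user step roughly as follows; here is a minimal Python sketch with fixed gains (our own illustration — the paper learns the gains with RL rather than fixing them).

import numpy as np

def redirect_step(virtual_pos, virtual_heading, step_len, head_turn,
                  translation_gain, rotation_gain, curvature_gain):
    """Advance the virtual user one step with translation, rotation, and curvature gains applied."""
    # Rotation gain scales physical head turns; curvature gain injects a
    # steady rotation proportional to distance walked.
    virtual_heading += rotation_gain * head_turn + curvature_gain * step_len
    # Translation gain scales the distance covered in the virtual world.
    virtual_pos = virtual_pos + translation_gain * step_len * np.array(
        [np.cos(virtual_heading), np.sin(virtual_heading)])
    return virtual_pos, virtual_heading

pos, heading = np.zeros(2), 0.0
for _ in range(10):  # walk 10 physically straight steps of 0.5 m
    pos, heading = redirect_step(pos, heading, 0.5, 0.0, 1.0, 1.0, 0.15)
print(pos, heading)  # the virtual path curves although the physical path is straight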

Feature Guided Path Redirection for VR Navigation

Antong Cao (State Key Laboratory of Virtual Reality Technology and Systems; Beihang University; Peng Cheng Laboratory, China), Lili Wang (State Key Laboratory of Virtual Reality Technology and Systems; Beihang University; Peng Cheng Laboratory, China), Yi Liu (State Key Laboratory of Virtual Reality Technology and Systems; Beihang University; Peng Cheng Laboratory, China), Voicu Popescu (Purdue University)

Conference

Abstract: “In this paper we propose a feature-guided path redirection method that finds and takes into account the visual features of 3D virtual scenes. A collection of view-independent and view-dependent visual features of the VE are extracted and stored in a visual feature map. The navigation path is deformed to fit in the confines of the available physical space through a mass-spring system optimization, according to distortion sensitive factors derived from the visual feature map. A novel detail preserving rendering algorithm is employed to preserve the original visual detail as the user navigates the VE on the redirected path.”

Dynamic Artificial Potential Fields for Multi-User Redirected Walking

Tianyang Dong (Zhejiang University of Technology, China), Xianwei Chen (Zhejiang University of Technology, China), Yifan Song (Zhejiang University of Technology, China), Wenyuan Ying (Zhejiang University of Technology, China), Jing Fan (Zhejiang University of Technology, China)

Conference

Abstract: “In order to solve the collision problem caused by multiple users sharing the same physical space, this work presents a new strategy for multi-user redirected walking using dynamic artificial potential fields. It generates repulsion to “push” users away from obstacles and other users, and uses gravity to “attract” users toward an open, unobstructed space. Users therefore receive repulsive forces not only from walls, but also from other users and their predicted future states. Data from human-subject experiments show that our method can reduce potential single-user resets by about 20%.”
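
The potential-field idea can be sketched compactly; here is a minimal Python illustration of summed inverse-square repulsion (our own simplification, omitting the paper’s future-state prediction and attraction term).

import numpy as np

def apf_steering(user_pos, repulsors, falloff=1.0):
    """Sum inverse-square repulsive forces from wall points, obstacles, and other users."""
    force = np.zeros(2)
    for r in repulsors:
        offset = user_pos - r
        dist = np.linalg.norm(offset)
        if dist > 1e-6:
            force += falloff * offset / dist**3  # unit direction scaled by 1/dist^2
    return force

user = np.array([1.0, 1.0])
others = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]  # a wall point and another user
print(apf_steering(user, others))  # pushes the user up and to the right, away from both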

Optimal Planning for Redirected Walking Based on Reinforcement Learning in Multi-user Environment with Irregularly Shaped Physical Space

Dong-Yong Lee (Yonsei University), Yong-Hun Cho (Yonsei University), Dae-Hong Min (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Abstract: “We propose a new predictive RDW algorithm, “Multiuser-Steer-to-Optimal-Target (MS2OT)”, that extends the existing S2OT method to environments with multiple users and various types of tracking space. MS2OT considers pre-reset actions and uses more steering targets than S2OT, with an improved reward function. The users’ locations and tracking-space information are encoded as an image that serves as the state of the reinforcement learning model, which uses Q-Learning. MS2OT reduces the total number of resets compared to conventional RDW algorithms such as S2C and APF-RDW in a multi-user environment. Experimental results show MS2OT can process up to 32 users in real time.”

Shaking Hands in Virtual Space: Recovery in Redirected Walking for Direct Interaction between Two Users

Dae-Hong Min (Yonsei University), Dong-Yong Lee (Yonsei University), Yong-Hun Cho (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Abstract: “For two users to meet each other in a virtual environment to realize realistic direct interaction, they must simultaneously meet each other in physical space. However, if the RDW algorithm is applied to each user independently, the relative positions and orientations of the two users can be different in the virtual and physical spaces. We present a recovery algorithm adjusting relative position and orientation such that they become the same in the two spaces. Once the recovered state is reached, the two users can go forward to meet each other and directly interact in the virtual and physical spaces simultaneously.”
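
The condition the recovery algorithm must establish can be stated in a few lines; here is a minimal Python sketch (our own illustration, not the authors’ algorithm) that checks whether the users’ relative position and orientation agree in the physical and virtual spaces.

import numpy as np

def relative_state(pos_a, heading_a, pos_b):
    """Position of user B in user A's local frame (A's heading defines the x-axis)."""
    c, s = np.cos(-heading_a), np.sin(-heading_a)
    return np.array([[c, -s], [s, c]]) @ (pos_b - pos_a)

def recovered(phys_a, phys_head_a, phys_b, virt_a, virt_head_a, virt_b, tol=0.05):
    """True when the physical and virtual relative states match within tolerance."""
    return np.linalg.norm(relative_state(phys_a, phys_head_a, phys_b)
                          - relative_state(virt_a, virt_head_a, virt_b)) < tol

print(recovered(np.zeros(2), 0.0, np.array([2.0, 0.0]),
                np.zeros(2), 0.0, np.array([2.0, 0.0])))  # True: users can walk to meet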

Session 6: AR: Tools and Displays

Monday, March 23, 2:00 PM - 3:30 PM,
Track 3 (Studio 1)

Alpaca: AR Graphics Extensions for Web Applications

Tanner Hobson (University of Tennessee, Knoxville), Jeremiah Duncan (EECS Department, University of Tennessee, Knoxville), Mohammad Raji (EECS Department, University of Tennessee, Knoxville), Aidong Lu (University of North Carolina at Charlotte, Charlotte, North Carolina, United States), Jian Huang (EECS Department, University of Tennessee, Knoxville, Tennessee, United States)

Conference

Abstract: “In this work, we propose a framework to simplify the creation of Augmented Reality (AR) extensions for web applications, without modifying the original web applications. AR extensions developed using Alpaca appear as web-browser extensions, and automatically bridge the Document Object Model (DOM) of the web with the SceneGraph model of AR. To transform the web application into a multi-device, mixed-space web application, we designed a restrictive and minimized interface for cross-device event handling. We demonstrate our approach to developing mixed-space applications using three examples. With our extension, the creation and control of augmented reality devices becomes transparent, as if they were natively part of the browser.”

Touch the Wall: Comparison of Virtual and Augmented Reality with Conventional 2D Screen Eye-Hand Coordination Training Systems

Anil Ufuk Batmaz (Simon Fraser University, Canada), Aunnoy K Mutasim (Simon Fraser University, Canada), Morteza Malekmakan (Simon Fraser University, Canada), Elham Sadr (Simon Fraser University, Canada), Wolfgang Stuerzlinger (Simon Fraser University, Canada)

Conference

Abstract: “We designed an eye-hand coordination reaction test to investigate user performance in Virtual Reality (VR), Augmented Reality (AR), and on a 2D touchscreen. The VR and AR conditions comprised mid-air and passive haptic feedback variants. Results showed that, compared to AR, participants were faster and made fewer errors in 2D and VR. However, participants’ throughput was significantly higher on the 2D touchscreen. There was no significant difference between the two feedback conditions. The results show the importance of assessing precision and accuracy, and suggest that AR headsets are not yet ready to be used for reaction-time training systems.”

Enlightening Patients with Augmented Reality

Andreas Jakl (St. Poelten University of Applied Sciences, Austria), Anna-Maria Lienhart (St. Poelten University of Applied Sciences, Austria), Clemens Baumann (St. Poelten University of Applied Sciences, Austria), Arian Jalaeefar (St. Poelten University of Applied Sciences, Austria), Alexander Schlager (St. Poelten University of Applied Sciences, Austria), Lucas Schoeffer (St. Poelten University of Applied Sciences, Austria), Franziska Bruckner (St. Poelten University of Applied Sciences, Austria)

Conference

Abstract: “Enlightening Patients with Augmented Reality (EPAR) developed an augmented reality prototype helping patients with strabismus to better understand the processes of examinations and eye surgeries. By means of interactive storytelling, three target groups based on user personas were able to adjust the level of information according to their interests. We performed a two-phase evaluation with 24 test subjects, resulting in a final system usability score of 80.0. For interaction prompts concerning virtual 3D content, visual highlights were considered to be sufficient. On the whole, participants thought that an AR system as a complementary tool could enhance patient education in this field.”

Exploring Visual Techniques for Boundary Awareness During Interaction in Augmented Reality Head-Mounted Displays

Wenge Xu (Xi’an Jiaotong-Liverpool University), Hai-Ning Liang (Xi’an Jiaotong-Liverpool University), Yuzheng Chen (Xi’an Jiaotong-Liverpool University), Xiang Li (Xi’an Jiaotong-Liverpool University), Kangyou Yu (Xi’an Jiaotong-Liverpool University)

Conference

Abstract: “Mid-air hand interaction is common in AR systems. Because AR HMDs have a limited interaction tracking area, it is easy for users to move their hand(s) outside this tracked area during interaction, especially in dynamic tasks. This research explores visual techniques for boundary awareness, focusing on translation tasks. We first identified the challenges users face during interaction without any boundary awareness information. From the findings, we then proposed four visual methods and evaluated them against the baseline condition without boundary awareness. Results show that the visual methods help with dynamic mid-air hand interactions and that their effectiveness and applicability are user-dependent.”

How About the Mentor? Effective Workspace Visualization in AR Telementoring

Chengyuan Lin (Purdue University), Edgar Rojas-Muñoz (Purdue University), Maria Eugenia Cabrera (Purdue University), Natalia Sanchez-Tamayo (Purdue University), Daniel Andersen (Purdue University), Voicu Popescu (Purdue University), Juan Antonio Barragan Noguera (Purdue University), Ben Zarzaur (Indiana University School of Medicine), Pat Murphy (Indiana University School of Medicine), Kathryn Anderson (Indiana University School of Medicine), Thomas Douglas (Naval Medical Center Portsmouth), Clare Griffis (Naval Medical Center Portsmouth), Juan Wachs (Purdue University)

Conference

Abstract: “In AR telementoring, the camera built into the mentee headset can convey the workspace to the remote mentor. However, as the mentee moves their head, the visualization changes frequently and abruptly. This paper presents a method for high-level stabilization of a mentee first-person video to provide effective workspace visualization to the mentor. The visualization is stable, complete, up-to-date, continuous, distortion-free, and rendered from the mentee’s typical viewpoint. The method had significant advantages over unstabilized visualization for number matching tasks. The stabilization also showed good results in the context of surgical telementoring in austere settings.”

Session 7: Haptics

Monday, March 23, 4:00 PM - 5:30 PM,
Track 2 (Great Room 2)

A Tangible Spherical Proxy for Object Manipulation in Augmented Reality

David Englmeier (LMU Munich, Germany), Julia Dörner (LMU Munich, Germany), Andreas Butz (LMU Munich, Germany), Tobias Höllerer (University of California, Santa Barbara, United States)

Conference

Abstract: “We explore how a familiarly shaped object can serve as a physical proxy for object manipulation in AR. Using the example of a tangible, handheld sphere, we demonstrate how virtual objects can be selected, transformed, and released. We present a buttonless interaction technique suited to the characteristics of the sphere. In a user study (N = 30), we compare our approach with three controller-based methods. As a use case, we focused on an alignment task that had to be completed in mid-air as well as on a flat surface. Results show that our concept has advantages over two of the controller-based methods regarding task completion time and user ratings. Our findings inform research on integrating tangible interaction into AR experiences.”

Pseudo-Haptic Display of Mass and Mass Distribution During Object Rotation in Virtual Reality

Run Yu (Virginia Tech, USA), Doug Bowman (Virginia Tech, USA)

Journal

Abstract: “We propose and evaluate novel pseudo-haptic techniques to display mass and mass distribution for proxy-based object manipulation in virtual reality. These techniques are specifically designed to generate haptic effects during the object’s rotation. They rely on manipulating the mapping between visual cues of motion and kinesthetic cues of force to generate a sense of heaviness, which alters the perception of the object’s mass-related properties without changing the physical proxy. First, we present a technique to display an object’s mass by scaling its rotational motion relative to its mass. A psychophysical experiment demonstrates that this technique effectively generates correct perceptions of relative mass between two virtual objects. We then present two pseudo-haptic techniques designed to display an object’s mass distribution. One of them relies on manipulating the pivot point of rotation, while the other adjusts rotational motion based on the real-time dynamics of the moving object. An empirical study shows that both techniques can influence perception of mass distribution, with the second technique being significantly more effective.”
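
The first technique reduces to scaling the control-display ratio of rotation by mass; here is a minimal Python sketch with an illustrative reference mass (our own parameterization, not the authors’ implementation).

def pseudo_haptic_rotation(proxy_rotation_deg, mass, reference_mass=1.0):
    """Scale the proxy's rotation by a control-display ratio derived from mass."""
    cd_ratio = reference_mass / mass  # heavier objects rotate less per proxy degree
    return cd_ratio * proxy_rotation_deg

# Rotating the physical proxy 90 degrees turns a 2 kg virtual object only 45 degrees,
# so the heavier object visibly "lags" and feels heavier.
print(pseudo_haptic_rotation(90.0, 2.0))  # -> 45.0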

Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environment

Víctor Mercado (Univ Rennes, INSA, Inria, CNRS, IRISA), Maud Marchal (Univ Rennes, INSA, Inria, CNRS, IRISA), Anatole Lécuyer (Univ Rennes, Inria, CNRS, IRISA)

Conference

Abstract: “In this paper, we present novel interaction techniques (ITs) dedicated to Encountered-Type Haptic Displays (ETHDs). The techniques aim to address issues commonly encountered with these devices. We propose a design space based on parameters defining the interactive process between the user and the ETHD. Five techniques based on the design space were conceived. A use-case scenario was designed to test these techniques on the task of coloring a wide surface. A user study was conducted to assess the performance of each IT. Results were in favor of techniques based on manual surface displacement, absolute position selection, and intermittent contact interaction.”

Implementation and Evaluation of Touch-based Interaction Using Electrovibration Haptic Feedback in Virtual Environments

Lu Zhao (Beijing Institute of Technology, China), Yue Liu (Beijing Institute of Technology; AICFVE of Beijing Film Academy, China), Dejiang Ye (Beijing Institute of Technology, China), Zhuoluo Ma (Beijing Institute of Technology, China), Weitao Song (Beijing Institute of Technology, China)

Conference

Abstract: “We explore a new VR interaction method based on electrovibration technology. The key idea is to incorporate a set of manipulation gestures and three types of electrovibration in the VR interaction to help users acquire different kinds of tactile perception in the virtual manipulation. We present an evaluation in which we compare user performance, first in a Fitts’ law task to evaluate the different electrovibration types, and then in a virtual office application to assess the interactive user interface. Our results show that the precision of interactions is significantly improved with the electrovibration haptic feedback. Our work highlights the potential of electrovibration touchscreen-based interaction in virtual environments.”

ThermAirGlove: A Pneumatic Glove for Thermal Perception and Material Identification in Virtual Reality

Shaoyu Cai (City University of Hong Kong, Hong Kong, China), Pingchuan Ke (City University of Hong Kong, Hong Kong, China), Takuji Narumi (The University of Tokyo, Tokyo, Japan), Kening Zhu (City University of Hong Kong, Hong Kong, China)

Conference

Abstract: “We present ThermAirGlove, a pneumatic glove which provides thermal feedback for users, to support the haptic experience of different temperatures and materials in VR. The system consists of a glove with five inflatable airbags, two temperature chambers, and the closed-loop pneumatic thermal control system. Our technical experiments showed that the system could generate the thermal cues of different materials. The user-perception experiments showed that our system could provide five levels of thermal sensation and support users’ material identification among foam, glass, and copper. The user studies on VR experience showed that using TAGlove could significantly improve users’ experience of presence.”

Session 8: Embodiment 1

Tuesday, March 24, 9:00 AM - 10:30 AM,
Track 1 (Great Room 1)

Mind the Gap: The Underrepresentation of Female Participants and Authors in Virtual Reality Research

Tabitha C. Peck (Davidson College), Laura E. Sockol (Davidson College), Sarah M. Hancock (Davidson College)

Journal

Abstract: “A common goal of human-subject experiments in virtual reality (VR) research is evaluating VR hardware and software for use by the general public. A core principle of human-subject research is that the sample included in a given study should be representative of the target population; otherwise, the conclusions drawn from the findings may be biased and may not generalize to the population of interest. In order to assess whether characteristics of participants in VR research are representative of the general public, we investigated participant demographic characteristics from human-subject experiments in the Proceedings of the IEEE Virtual Reality Conferences from 2015-2019. We also assessed the representation of female authors. In the 325 relevant papers, which presented 365 human-participant experiments, we found evidence of significant underrepresentation of women as both participants and authors. To investigate whether this underrepresentation may bias researchers’ findings, we then conducted a meta-analysis and meta-regression to assess whether demographic characteristics of study participants were associated with a common outcome evaluated in VR research: the change in simulator sickness following head-mounted display VR exposure. As expected, participants in VR studies using HMDs experienced small but significant increases in simulator sickness. However, across the included studies, the change in simulator sickness was systematically associated with the proportion of female participants. We discuss the negative implications of conducting experiments on non-representative samples and provide methodological recommendations for mitigating bias in future VR research.”

The Impact of a Self-Avatar, Hand Collocation, and Hand Proximity on Embodiment and Stroop Interference

Tabitha C. Peck (Davidson College), Altan Tutar (Davidson College)

Journal

Abstract: “Understanding the effects of hand proximity to objects and tasks is critical for interactions with hand-held and near-hand objects. Even though self-avatars have been shown to be beneficial for various tasks in virtual environments, little research has investigated the effect of avatar hand proximity on working memory. This paper presents a between-participants user study investigating the effects of self-avatars and physical hand proximity on a common working memory task, the Stroop interference task. Results show that participants felt embodied when a self-avatar was in the scene, and that the subjective level of embodiment decreased when a participant’s hands were not collocated with the avatar’s hands. Furthermore, a participant’s physical hand placement was significantly related to Stroop interference: proximal hands produced a significant increase in accuracy compared to non-proximal hands. Surprisingly, Stroop interference was not mediated by the existence of a self-avatar or level of embodiment.”

Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View

Rebecca Fribourg (Inria, Rennes, France), Ferran Argelaguet (Inria, Rennes, France), Anatole Lécuyer (Inria, Rennes, France), Ludovic Hoyet (Inria, Rennes, France)

Journal

Abstract: “In Virtual Reality, a number of studies have been conducted to assess the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This paper aims to better understand the inter-relations among these three factors by conducting a subjective matching experiment. In the presented experiment (n=40), participants had to match a given “optimal” SoE avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a “minimal” SoE configuration (minimal avatar, no control, third-person point of view), by iteratively increasing the level of each factor. The choices of the participants provide insights into their preferences and perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks, with the goal of covering a wide range of actions an avatar can perform in a VE. The paper also describes a baseline experiment (n=20) which was used to define the number and order of the different levels for each factor, prior to the subjective matching experiment (e.g. different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show, first, that point of view and control levels were consistently increased by users before appearance levels when it comes to enhancing the SoE. Second, several configurations were identified with a SoE equivalent to that felt in the optimal configuration, but these vary between tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations which lead to a fulfilling SoE in a VE.”

Effect of Avatar Appearance on Detection Thresholds for Remapped Hand Movements

Nami Ogawa (University of Tokyo), Takuji Narumi (University of Tokyo), Michitaka Hirose (University of Tokyo)

Journal

Abstract: “Hand interaction techniques in virtual reality often exploit visual dominance over proprioception to remap physical hand movements onto different virtual movements. However, when the offset between virtual and physical hands increases, the remapped virtual hand movements are hardly self-attributed, and the users become aware of the remapping. Interestingly, the sense of self-attribution of a body is called the sense of body ownership (SoBO) in the field of psychology, and the more realistic the avatar, the stronger the SoBO. Hence, we hypothesized that realistic avatars (i.e., human hands) can foster self-attribution of the remapped movements better than abstract avatars (i.e., spherical pointers), thus making the remapping less noticeable. In this paper, we present an experiment in which participants repeatedly executed reaching movements with their right hand while different amounts of horizontal shift were applied. We measured the remapping detection thresholds for each combination of shift direction (left or right) and avatar appearance (realistic or abstract). The results show that realistic avatars increased the detection threshold (i.e., lowered sensitivity) by 31.3% compared to abstract avatars when the leftward shift was applied (i.e., when the hand moved in the direction away from the body midline). In addition, the proprioceptive drift (i.e., the displacement of self-localization toward an avatar) was larger with realistic avatars for leftward shifts, indicating that visual information was given greater preference during visuo-proprioceptive integration with realistic avatars. Our findings quantifiably show that realistic avatars can make remapping less noticeable for larger mismatches between virtual and physical movements and can potentially improve a wide variety of hand-remapping techniques without changing the mapping itself.”
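
The remapping itself is a simple lateral offset; here is a minimal Python sketch of the shifted virtual hand (our own illustration, with an arbitrary example shift).

import numpy as np

def remapped_hand(physical_pos, shift_m):
    """Offset the virtual hand laterally (x-axis) from the physical hand."""
    return physical_pos + np.array([shift_m, 0.0, 0.0])

# The detection threshold is the largest such shift users still self-attribute;
# per the abstract, it is ~31.3% higher for leftward shifts with a realistic avatar,
# so a 5 cm leftward shift unnoticed with a realistic hand may be detected
# with an abstract pointer.
print(remapped_hand(np.array([0.0, 1.2, 0.4]), -0.05))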

Session 9: 3DUI- Manipulation

Tuesday, March 24, 9:00 AM - 10:30 AM,
Track 2 (Great Room 2)

On Motor Performance in Virtual 3D Object Manipulation

Alexander Kulik (Virtual Reality and Visualization Research, Bauhaus-Universität Weimar), André Kunert (Virtual Reality and Visualization Research, Bauhaus-Universität Weimar), Bernd Froehlich (Virtual Reality and Visualization Research, Bauhaus-Universität Weimar)

Journal

Abstract: “Fitts’s law facilitates approximate comparisons of target acquisition performance across a variety of settings. Conceptually, the index of difficulty of 3D object manipulation with six degrees of freedom can also be computed, which allows the comparison of results from different studies. Prior experiments, however, often revealed much worse performance than one would reasonably expect on this basis. We argue that this discrepancy stems from confounding variables and show how Fitts’s law and related research methods can be applied to isolate and identify relevant factors of motor performance in 3D manipulation tasks. The results of a formal user study (N=21) demonstrate competitive performance in compliance with Fitts’s model and provide empirical evidence that simultaneous 3D rotation and translation can be beneficial.”
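
For reference, here is a minimal Python sketch of the standard Shannon formulation of the Fitts’s-law quantities the abstract builds on (not the authors’ exact six-degree-of-freedom extension).

import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s for a single acquisition."""
    return index_of_difficulty(distance, width) / movement_time_s

print(index_of_difficulty(0.4, 0.05))  # ~3.17 bits for a 40 cm reach to a 5 cm target
print(throughput(0.4, 0.05, 1.2))      # ~2.64 bits/s if the movement takes 1.2 s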

Transfer of Coordination Skill to the Unpracticed Hand in Immersive Environments

Shan Xiao (College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China), Xupeng Ye (College of Information Science and Technology, Jinan University, Guangzhou, China), Yaqiu Guo (College of Information Science and Technology, Jinan University, Guangzhou, China), Boyu Gao (College of Information Science and Technology, Jinan University, Guangzhou, China), Jinyi Long (College of Information Science and Technology, Jinan University, Guangzhou, China)

Conference

Abstract: “We conducted a well-designed study that systematically investigated the effects of visualizations on bimanual interaction training. The results indicate that performing and seeing a bimanual task, or performing a unimanual task and seeing a bimanual action, are better than performing and seeing a unimanual task or not performing a task and seeing a bimanual action. Another contribution is that the second experiment provides results indicating that higher-fidelity hand representations positively affect performance in the unimanual task and bimanual visualization.”

Precise and realistic grasping and manipulation in Virtual Reality without force feedback

Thibauld Delrieu (CEA-LIST)

Conference

Abstract: “The main contribution of this paper is to enhance an existing method which couples a virtual kinematic hand with a visual hand tracking system. Here we implement grasping assistance based on virtual springs between the virtual hands and the virtual object. The assistance is triggered based on an analysis of usual grasping criteria, to determine whether a grasp is feasible or not. The proposed method has been validated in a supervised experiment which showed that our assistance improves speed and accuracy for a “pick and place” task involving an exhaustive object set, sized for precision grasp. Moreover, users’ feedback shows a clear preference for the present approach in terms of naturalness and efficiency.”

A Comparative Analysis of 3D User Interaction: How to Move Virtual Objects in Mixed Reality

Hyo Kang (University of Florida), Jung-hye Shin (University of Wisconsin-Madison), Kevin Ponto (University of Wisconsin-Madison)

Conference

Abstract: “This study explores three hand-interaction techniques: gaze-and-pinch, touch-and-grab, and worlds-in-miniature (WIM) interaction. Overall, a comparative analysis reveals that the WIM provided better usability and task performance than the other studied techniques. We also conducted in-depth interviews and analyzed participants’ hand gestures. Gesture analysis reveals that the shape of a piece of furniture, as well as its perceived features such as weight, largely determined the participant’s instinctive form of hand interaction. Based on these findings, we present design suggestions that can aid 3D interaction designers in developing natural hand interaction for mixed reality.”

Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality

Di (Laura) Chen (University of Toronto, Canada), Ravin Balakrishnan (University of Toronto, Canada), Tovi Grossman (University of Toronto, Canada)

Conference

Abstract: “Manipulating virtual objects using bare hands has been an attractive interaction paradigm in VR and AR. However, one limitation of freehand input lies in the ambiguous resulting effect of the interaction, as the same gesture performed on a virtual object could invoke different operations. We present an experimental analysis of a set of disambiguation techniques in VR, comparing three input modalities (head gaze, speech, and foot tap) paired with three different timings to resolve ambiguity (before, during, and after an interaction). The results indicate that using head gaze for disambiguation during an interaction with the object achieved the best performance.”

Session 10: Crowds & Perception

Tuesday, March 24, 9:00 AM - 10:30 AM,
Track 3 (Studio 1)

Effects of Interacting with a Crowd of Emotional Virtual Humans on Users’ Affective and Non-Verbal Behaviors

Matias Volonte (Clemson University, USA), Yu Chun Hsu (National Chiao Tung University, Taiwan), Kuan-yu Liu (National Chiao Tung University, Taiwan), Joseph P. Mazer (Clemson University, USA), Sai-Keung Wong (National Chiao Tung University, Taiwan), Sabarish V. Babu (Clemson University, USA)

Conference

Abstract: “We examined the effects on users during interaction with a virtual human crowd in an immersive virtual reality environment. We developed an agent-based crowd model with rich properties including eye gaze, facial expression, body motion, and verbal and non-verbal behaviors. The scenario was a virtual market in which the users needed to gather specific items. In a between-subjects design, users interacted with a virtual human crowd that showed opposite valenced emotional expressions. There are four conditions in the between-subjects design, including different combinations of emotional expressive characters performing verbal and non-verbal behaviors. We reported our findings with an in-depth analysis.”

The Effects of Virtual Audience Size on Social Anxiety during Public Speaking

Fariba Mostajeran (Universität Hamburg, Germany), Melik Berk Balci (University Medical Center Hamburg-Eppendorf, Germany), Frank Steinicke (Universität Hamburg, Germany), Simone Kühn (University Medical Center Hamburg-Eppendorf, Germany), Jürgen Gallinat (University Medical Center Hamburg-Eppendorf, Germany)

Conference

Abstract: “We present an adaptation of the Trier Social Stress Test (TSST) to investigate the effects of different numbers of virtual humans (VHs) on perceived social anxiety (SA). Moreover, we compare the results with an in vivo TSST with a real audience of three people. 24 participants took part in this experiment. Physiological arousal could be observed, with VR inducing SA, though less than the in vivo TSST. Subjective measures also showed a high state of anxiety experienced during the experiment. An effect of virtual audience size could be observed only in heart rate (HR): a virtual audience of 3 VHs induced the highest HR responses, significantly different from audiences of size 6 and 15.”

Analyzing Pedestrian Behavior in Augmented Reality - Proof of Concept

Philipp Maruhn (Technical University of Munich, Germany), André Dietrich (Technical University of Munich, Germany), Lorenz Prasch (Technical University of Munich, Germany), Sonja Schneider (Technical University of Munich, Germany)

Conference

Abstract: “This paper presents a novel approach for an augmented reality pedestrian simulator. With this simulator, the participant experiences virtual vehicles, augmented onto a real scenario, allowing for safe and controlled testing in a realistic setting. In a between-subjects design, 13 participants experienced a gap acceptance scenario with virtual vehicles, while 30 participants experienced the same scenario with real vehicles in the same environment. Results indicate similar, but also offset, behavior in both conditions. Still, it was shown that augmented reality is a promising tool for pedestrian research, though it also has limitations depending on the use case.”

Eye-Gaze Activity in Crowds: Impact of Virtual Reality and Density

Florian Berton (Inria Rennes France), Ludovic Hoyet (Inria Rennes France), Anne-Hélène Olivier (M2S Lab, University Rennes 2), Julien Bruneau (Inria Rennes France), Olivier Le Meur (Univ Rennes, Inria, CNRS, Irisa, France), Julien Pettré (Inria Rennes)

Conference

Abstract: “This paper investigates the interaction neighborhood, i.e., the set of people that influence our motion while walking in a crowd. We designed a Virtual Reality (VR) study that exploits movement and eye-gaze data, and their relation to the interaction neighborhood. The study was divided into two experiments. The first evaluated the bias induced by VR on eye-gaze movement while walking in a busy street. The second explored the influence of crowd density on eye-gaze movements. Our results showed that increased density does not affect eye-gaze fixation frequency but induces a refocus of eye gaze in the direction of locomotion.”

Determining Peripersonal Space Boundaries and Their Plasticity in Relation to Object and Agent Characteristics in an Immersive Virtual Environment

Lauren Buck (Vanderbilt University, USA), Sohee Park (Vanderbilt University, USA), Bobby Bodenheimer (Vanderbilt University, USA)

Conference

Abstract: “In this work we examine the extent of peripersonal space (PPS), or functional reaching distance, in immersive virtual reality. Naturally, PPS boundaries in the real world are modulated by different contextual factors. We completed two studies using multisensory stimuli to determine PPS boundaries, and investigated whether PPS in an immersive virtual environment behaves consistently with real world findings and could be altered by object and virtual agent interactions. We found that boundaries were consistent with those in the real world and were responsive to object and agent interactions. These findings have potential implications for the design of virtual environments.”

Session 11: Collaboration

Tuesday, March 24, 1:30 PM - 3:00 PM,
Track 1 (Great Room 1)

Augmented Virtual Teleportation for High-Fidelity Telecollaboration

Taehyun Rhee (Computational Media Innovation Centre, Victoria University of Wellington), Stephen Thompson (Computational Media Innovation Centre, Victoria University of Wellington), Daniel Medeiros (Computational Media Innovation Centre, Victoria University of Wellington), Rafael dos Anjos (Computational Media Innovation Centre, Victoria University of Wellington), Andrew Chalmers (Computational Media Innovation Centre)

Journal

Abstract: “Telecollaboration involves the teleportation of a remote collaborator to another real-world environment where their partner is located. The fidelity of the environment plays an important role in allowing corresponding spatial references in remote collaboration. We present a novel asymmetric platform, Augmented Virtual Teleportation (AVT), which provides high-fidelity telepresence of a remote VR user (VR-Traveler) into a real-world collaboration space to interact with a local AR user (AR-Host). AVT uses a 360° video camera (360-camera) that captures and live-streams the omni-directional scenes over a network. The remote VR-Traveler watching the video in a VR headset experiences live presence and co-presence in the real-world collaboration space. The VR-Traveler’s movements are captured and transmitted to a 3D avatar overlaid onto the 360-camera, which can be seen in the AR-Host’s display. The visual and audio cues for each collaborator are synchronized in the Mixed Reality Collaboration space (MRC-space), where they can interactively edit virtual objects and collaborate in the real environment using the real objects as a reference. High-fidelity, real-time rendering of virtual objects and seamless blending into the real scene allows for unique mixed reality use-case scenarios. Our working prototype has been tested with a user study to evaluate spatial presence, co-presence, and user satisfaction during telecollaboration. Possible applications of AVT are identified and proposed to guide future usage.”
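
The shared reference frame in a system like AVT hinges on converting between 3D viewing directions and pixels of the live equirectangular 360° stream. Below is a minimal sketch of that conversion, assuming a y-up camera frame and a standard equirectangular projection; the conventions and names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def dir_to_equirect(d, width, height):
    """Map a unit direction in the 360-camera frame to equirectangular pixels.

    Assumed convention: +z forward, +x right, +y up; longitude grows rightward.
    """
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)          # [-pi, pi]
    lat = np.arcsin(y)              # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v

def equirect_to_dir(u, v, width, height):
    """Inverse mapping: pixel coordinates back to a unit direction."""
    lon = (u / width - 0.5) * 2 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# Example: where does the VR-Traveler's current gaze land in the video frame?
gaze = np.array([0.2, 0.1, 0.97])
print(dir_to_equirect(gaze, 3840, 1920))
```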

A User Study on View-sharing Techniques for One-to-Many Mixed Reality Collaborations

Geonsun Lee (Korea University, South Korea), HyeongYeop Kang (Kyung Hee University, South Korea), JongMin Lee (Korea University, South Korea), JungHyun Han (Korea University, South Korea)

Conference

Abstract: “In a one-to-many mixed reality collaboration environment, where multiple local users wearing AR headsets are supervised by a remote expert wearing a VR HMD, we evaluated three view-sharing techniques: 2D video, 360 video, and 3D model augmented with 2D video. Their performance was compared in two different collaboration scenarios based on searching and assembling. In the first scenario, a local user performed both search and assembly. In the second scenario, two local users had dedicated roles, one for searching and the other for assembling. The experiment results showed that the 3D model augmented with 2D video was time-efficient, usable, less demanding, and the most preferred in one-to-many mixed reality collaborations.”

Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction

Mohammad Keshavarzi (University of California, Berkeley), Allen Y. Yang (University of California, Berkeley), Woojin Ko (University of California, Berkeley), Luisa Caldas (University of California, Berkeley)

Conference

Abstract: “Spatial computing experiences are physically constrained by the geometry and semantics of the local user environment. This limitation is exacerbated in remote multi-user interaction scenarios, where finding a common virtual ground physically accessible to all participants becomes challenging, particularly if they are not aware of the spatial surroundings of other users. In this paper, we introduce a framework that can locate an optimal mutual virtual space for a multi-user interaction setting where remote users’ rooms can have different layouts and sizes. The framework further recommends movement of surrounding furniture objects that expands the mutual space with minimal physical effort.”
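
The paper's optimization is richer (it weighs semantics and the effort of moving furniture), but the core subproblem of locating a mutual virtual space can be illustrated as intersecting per-room free-space grids and searching the intersection for a large walkable region. A minimal sketch, assuming rooms pre-aligned and rasterized to one grid; all names and sizes here are invented for illustration:

```python
import numpy as np

def mutual_free_space(room_grids):
    """Intersect per-room free-space masks (True = walkable cell).

    Assumes all rooms were rasterized at the same resolution and
    pre-aligned to a shared origin (grids cropped/padded to one shape).
    """
    return np.logical_and.reduce(room_grids)

def largest_free_rectangle(mask):
    """Brute-force search for the largest axis-aligned free rectangle.

    Fine for small grids; a real system would use a proper optimizer.
    """
    best = (0, None)  # (area, (row, col, height, width))
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            for h in range(1, rows - r + 1):
                for w in range(1, cols - c + 1):
                    if mask[r:r+h, c:c+w].all() and h * w > best[0]:
                        best = (h * w, (r, c, h, w))
    return best

room_a = np.ones((10, 10), bool); room_a[:, :3] = False   # furniture at one wall
room_b = np.ones((10, 10), bool); room_b[7:, :] = False   # blocked far rows
area, rect = largest_free_rectangle(mutual_free_space([room_a, room_b]))
print(area, rect)
```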

Design and Initial Evaluation of a VR based Immersive and Interactive Architectural Design Discussion System

Ting-Wei Hsu (National Chiao Tung University, Taiwan), Ming-Han Tsai (Feng Chia University, Taiwan), Sabarish V. Babu (Clemson University, United States), Pei-Hsien Hsu (National Chiao Tung University, Taiwan), Hsuan-Ming Chang (National Chiao Tung University, Taiwan), Wen-Chieh Lin (National Chiao Tung University, Taiwan), Jung-Hong Chuang (National Chiao Tung University, Taiwan)

Conference

Abstract: “In this paper, we developed a VR architectural design discussion system that supports members in discussing and modifying models during a design session. Members communicate via voice, object manipulation, and sketching. Several tools have been designed to enhance the sense of presence and the effectiveness of discussion. A rollback mechanism helps users quickly return to a previous state of the discussion to make changes or to start a new direction of discussion. We conducted a user study, and the feedback shows that the system is effective and useful for supporting architectural design discussion.”

Multi-Window 3D Interaction for Collaborative Virtual Reality

André Kunert (Bauhaus-Universität Weimar), Tim Weissker (Bauhaus-Universität Weimar), Bernd Fröhlich (Bauhaus-Universität Weimar), Alexander Kulik (Bauhaus-Universität Weimar)

Journal

Abstract: “We present a novel collaborative virtual reality system that offers multiple immersive 3D views of large 3D scenes. The physical setup consists of two synchronized multi-user 3D displays: a tabletop and a large vertical projection screen. These displays afford different presentations of the shared 3D scene. The wall display lends itself to egocentric exploration at 1:1 scale, while the tabletop affords an allocentric overview. Additionally, handheld 3D portals facilitate the personal exploration of the scene, the comparison of views, and the exchange with others. Our developments enable seamless 3D interaction across these independent 3D views. This requires the simultaneous representation of user input in the different viewing contexts. However, the resulting interactions cannot be executed independently; the application must coordinate them and resolve potential ambiguities to provide plausible effects. We analyze and document the challenges of seamless 3D interaction across multiple independent viewing windows, propose a high-level software design to realize the necessary functionality, and apply the design to a set of interaction tools. Our setup was tested in a formal user study, which revealed general advantages of collaborative 3D data exploration with multiple views in terms of user preference, comfort, and task performance.”

Session 12: 3DUI - Navigation - Interfaces and chair

Tuesday, March 24, 1:30 PM - 3:00 PM,
Track 2 (Great Room 2)

Above Surface Interaction for Multiscale Navigation in Mobile Virtual Reality

Tim Menzner (Coburg University of Applied Sciences and Arts), Travis Gesslein (Coburg University of Applied Sciences and Arts), Alexander Otte (Coburg University of Applied Sciences and Arts), Jens Grubert (Coburg University of Applied Sciences and Arts)

Conference

Abstract: “Virtual Reality (VR) enables the exploration of large information spaces. In physically constrained spaces such as airplanes or buses, controller-based or mid-air interaction in mobile VR can be challenging. Instead, the input space on and above touch-screen enabled devices such as smartphones or tablets could be employed for VR interaction in those spaces.

We compared an above-surface interaction technique with traditional 2D on-surface input for navigating large planar information spaces such as maps in a controlled user study (n = 20). Our proposed above-surface interaction technique results in significantly better performance and user preference compared to pinch-to-zoom and drag-to-pan when navigating planar information spaces.”

Real Walking in Place: HEX-CORE-PROTOTYPE Omnidirectional Treadmill

Ziyao Wang (School of Automation, Southeast University), Haikun Wei (School of Automation, Southeast University), KanJian Zhang (School of Automation, Southeast University), Liping Xie (School of Automation, Southeast University)

Conference

Abstract: “Locomotion is one of the most important problems in virtual reality: a real walking experience is key to immersive exploration of the virtual world. The omnidirectional treadmill is an effective way to provide a natural walking experience in room-scale VR. This paper proposes a novel omnidirectional treadmill named HEX-CORE-PROTOTYPE (HCP). The principle of synthesis and decomposition of velocity is applied to form an omnidirectional velocity field. Our system provides full freedom of movement and a real walking experience in place. Compared to the current best system, the HCP is only 40% as tall.”
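
The stated principle of velocity synthesis and decomposition can be sketched as follows: the treadmill surface must produce a velocity that cancels the walker's, and that compensating vector is decomposed onto the drive axes. This is an illustrative toy control step, not the HCP's actual control law; the axis vectors are assumptions:

```python
import numpy as np

def belt_command(user_velocity, axis_a=(1.0, 0.0), axis_b=(0.0, 1.0)):
    """Decompose the compensating surface velocity onto two drive axes.

    user_velocity: the walker's planar velocity (m/s) from tracking.
    axis_a/axis_b: unit vectors of the drive directions (assumed orthogonal
    here; a hexagonal mechanism would substitute its own axes).
    """
    compensation = -np.asarray(user_velocity, float)  # keep the user in place
    A = np.column_stack([axis_a, axis_b])
    # Solve A @ [speed_a, speed_b] = compensation for the per-axis speeds.
    speed_a, speed_b = np.linalg.solve(A, compensation)
    return speed_a, speed_b

# Walker heading 30 degrees off the first axis at 1.2 m/s:
v = 1.2 * np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
print(belt_command(v))
```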

VR Bridges: Simulating Smooth Uneven Surfaces in VR

Khrystyna Vasylevska (TU Wien, Austria), Bálint István Kovács (TU Wien, Austria), Hannes Kaufmann (TU Wien, Austria)

Conference

Abstract: “Walkable smooth uneven surfaces are inherent to reality but largely absent in VR. In this paper, we focus on human perception of the height and slant of simulated uneven surfaces under multi-sensory stimulation.

Our results suggest that the use of a curved prop creates a convincing illusion of an uneven surface significantly higher than the physical one, especially with multi-sensory stimulation. The use of a flat prop is less realistic and leads to substantial underestimation of height and slant. However, if the concave prop cannot be used, a flat prop might be used to simulate dents in the surface.”

Take a Look Around - The Impact of Decoupling Gaze and Travel-direction in Seated and Ground-based Virtual Reality Utilizing Torso-directed Steering

Daniel Zielasko (Human-Computer Interaction, University of Trier), Yuen C. Law (School of Computing, Costa Rica Institute of Technology), Benjamin Weyers (Human-Computer Interaction, University of Trier)

Conference

Abstract: “Leaning has repeatedly been shown to be a suitable virtual travel technique for seated users. The most commonly used steering direction is gaze/head-directed, which does not allow the environment to be inspected independently of the direction of movement. Switching to torso-directed steering allows the latter and does not detract from the natural character of the leaning metaphor. We empirically investigated the impact of this freedom in a ground-based scenario, complemented the conditions with a virtual-body-directed method, and crossed all of them with device-based control conditions.”

NaviBoard and NaviChair: Limited Translation Combined with Full Rotation for Efficient Virtual Locomotion

Thinh Nguyen-Vo (Simon Fraser University), Bernhard Riecke (Simon Fraser University), Wolfgang Stuerzlinger (Simon Fraser University), Duc Minh Pham (Simon Fraser University), Ernst Kruijff (Bonn-Rhein-Sieg University of Applied Sciences)

Journal

Abstract: “Walking has always been considered the gold standard for navigation in Virtual Reality research. Though full rotation is no longer a technical challenge, physical translation is still restricted by limited tracked areas. While rotational information has been shown to be important, the benefit of the translational component is still unclear, with mixed results in previous work. To address this gap, we conducted a mixed-method experiment to compare four levels of translational cues and control: none (using the trackpad of the HTC Vive controller to translate), upper-body leaning (sitting on a “NaviChair”, leaning the upper-body to locomote), whole-body leaning/stepping (standing on a platform called NaviBoard, leaning the whole body or stepping one foot off the center to navigate), and full translation (physically walking). Results showed that translational cues and control had significant effects on various measures including task performance, task load, and simulator sickness. While participants performed significantly worse when they used a controller with no embodied translational cues, there was no significant difference between the NaviChair, NaviBoard, and actual walking. These results suggest that translational body-based motion cues and control from a low-cost leaning/stepping interface might provide enough sensory information for supporting spatial updating, spatial awareness, and efficient locomotion in VR, although future work will need to investigate how these results might or might not generalize to other tasks and scenarios.”

Session 13: Visual comfort

Tuesday, March 24, 1:30 PM - 3:00 PM,
Track 3 (Studio 1)

Recurrent Enhancement of Visual Comfort for Casual Stereoscopic Photography

Yuzhen Niu (College of Mathematics and Computer Science, Fuzhou University, China), Qingyang Zheng (College of Mathematics and Computer Science, Fuzhou University, China), Wenxi Liu (College of Mathematics and Computer Science, Fuzhou University, China), Wenzhong Guo (College of Mathematics and Computer Science, Fuzhou University, China)

Conference

Abstract: “In this paper, we are interested in casual stereoscopic photography, which allows ordinary users to create a stereoscopic photo captured by a hand-held monocular camera. To handle the geometric constraints and disparity adjustment for captured image pairs, we present a coarse-to-fine framework. In the coarse stage, we propose a unified reinforcement-learning-based method, in which the produced stereo image is iteratively adjusted and evaluated in terms of visual comfort. To further enhance the visual comfort of the stereoscopic image produced in the coarse stage, we introduce another independent recurrent network to fine-tune its disparity range.”

Visualization and evaluation of ergonomic visual field parameters in first person virtual environments

Tobias Günther (Technische Universität Dresden, Germany), Inga-Lisa Hilgers (Technische Universität Dresden, Germany), Rainer Groh (Technische Universität Dresden, Germany), Martin Schmauder (Technische Universität Dresden, Germany)

Conference

Abstract: “Especially in the field of mechanical engineering, market pressure on small and medium-sized enterprises (SMEs) is increasing because of faster development cycles and more complex designs. Nevertheless, standards and ergonomic safety regulations must be observed. In recent years, various applications have been presented to help users understand and comply with these often inconvenient requirements. However, existing tools are time-consuming, aimed primarily at ergonomics experts, and often overwhelm engineers from SMEs. We present an immersive concept that allows inexperienced users to quickly assess the ergonomic parameters of the visual field, represented by easy-to-understand visualizations. The solution is compared with standard market and scientific approaches.”

Virtual Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality

Zubin Choudhary (University of Central Florida, Orlando, Florida, United States), Kangsoo Kim (University of Central Florida, Orlando, Florida, United States), Ryan Schubert (Synthetic Reality Lab, University of Central Florida, Orlando, Florida, United States), Gerd Bruder (SREAL, University of Central Florida, Orlando, Florida, United States), Greg Welch (SREAL, University of Central Florida, Orlando, Florida, United States)

Conference

Abstract: “In social virtual reality (VR), the “Big Head” technique is a common example of leveraging more of the display’s visual space to convey facial social cues through a slightly increased head scale. In this paper, we present a human-subject study to understand the impact of an increased or decreased head scale in social VR on participants’ ability to perceive facial expressions as well as their sense of comfort and feeling of “uncanniness.” We explored two head-scaling methods and compared them with respect to perceptual thresholds and user preferences at different distances. We discuss implications and guidelines for practical applications that aim to leverage VR-enhanced social cues.”
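
One plausible head-scaling method of the kind the paper compares (an assumption for illustration, not necessarily one of its two methods) holds the head's visual angle constant beyond a reference distance:

```python
import math

def head_scale(distance, ref_distance=2.0, max_scale=3.0):
    """Scale an avatar head so its visual angle matches the reference distance.

    Beyond ref_distance, a head scaled by s = d / d_ref subtends the same
    angle as an unscaled head at d_ref; clamped to avoid absurd sizes.
    All parameter values are illustrative assumptions.
    """
    if distance <= ref_distance:
        return 1.0
    return min(distance / ref_distance, max_scale)

def visual_angle(head_diameter, distance):
    """Visual angle (degrees) subtended by a sphere-approximated head."""
    return math.degrees(2 * math.atan2(head_diameter / 2, distance))

d = 6.0
s = head_scale(d)
# The scaled head at 6 m subtends the same angle as an unscaled head at 2 m.
print(s, visual_angle(0.25 * s, d), visual_angle(0.25, 2.0))
```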

Effects of Dark Mode Graphics on Visual Acuity and Fatigue with Virtual Reality Head-Mounted Displays

Austin Erickson (University of Central Florida), Kangsoo Kim (University of Central Florida), Gerd Bruder (University of Central Florida), Gregory F. Welch (University of Central Florida)

Conference

Abstract: “In this paper, we present a human-subject study investigating the effects of color mode and ambient lighting on visual acuity and fatigue with VR HMDs. We compare two color schemes, characterized by light letters on a dark background (dark mode) or dark letters on a light background (light mode), and show that the dark background in dark mode provides a significant advantage in terms of reduced visual fatigue and increased visual acuity in dim virtual environments on current HMDs. Based on our results, we discuss guidelines for user interfaces and applications.”

Exploring the Differences of Visual Discomfort Caused by Long-term Immersion between Virtual Environments and Physical Environments

Jie Guo (MRAD of Beijing Institute of Technology, China), Dongdong Weng (MRAD of Beijing Institute of Technology, China), Hui Fang (MRAD of Beijing Institute of Technology, China), Zhenliang Zhang (MRAD of Beijing Institute of Technology, China), Jiamin Ping (MRAD of Beijing Institute of Technology, China), Yue Liu (MRAD of Beijing Institute of Technology, China), Yongtian Wang (MRAD of Beijing Institute of Technology, China)

Conference

Abstract: “To investigate visual discomfort caused by long-term immersion in virtual environments (VEs), we conducted a comparative study evaluating users’ visual discomfort over an eight-hour working rhythm and compared the differences between VEs and physical environments. The results show that VEs affect visual fatigue the most compared to physical environments. The results also show that pupil size is negatively related to subjective visual fatigue, and that long-term display-based work only influences participants’ maximum accommodation response. This work supplements the necessary but insufficiently researched field of visual fatigue during long-term immersion in VEs.”

Session 14: Perception & manipulation

Tuesday, March 24, 3:30 PM - 5:00 PM,
Track 2 (Great Room 2)

Detection of Scaled Hand Interactions in Virtual Reality: The Effects of Motion Direction and Task Complexity

Shaghayegh Esmaeili (University of Florida, United States of America), Brett Benda (University of Florida, United States of America), Eric D. Ragan (University of Florida, United States of America)

Conference

Abstract: “In VR, while the most straightforward use of tracked hand motion maintains a one-to-one mapping between the physical and virtual world, some cases might benefit from changing this mapping through scaled or redirected interactions. It is important to know the extent to which remapping techniques can be applied to scaled interactions without users detecting the difference. We extend prior research on redirected hand techniques by investigating user perception of scaled hand movements and estimating detection thresholds for different types of hand motion. We conducted two experiments with a two-alternative forced-choice (2AFC) design to estimate the detection thresholds of remapped interaction.”
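
For context, the standard analysis of such a 2AFC design fits a psychometric function to the per-gain response proportions and reads thresholds off the fitted curve. A minimal sketch with hypothetical data; the gains, proportions, and starting values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic psychometric function: P('motion judged scaled') vs. gain."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: hand-motion gains and proportion of 'scaled' answers.
gains    = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])
p_scaled = np.array([0.05, 0.18, 0.35, 0.52, 0.68, 0.83, 0.97])

(x0, k), _ = curve_fit(psychometric, gains, p_scaled, p0=[1.0, 5.0])

def threshold(p, x0, k):
    """Invert the logistic to find the gain at which P = p."""
    return x0 - np.log(1.0 / p - 1.0) / k

print("point of subjective equality:", x0)
print("lower/upper detection thresholds:",
      threshold(0.25, x0, k), threshold(0.75, x0, k))
```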

The Impact of Multi-sensory Stimuli on Confidence Levels for Perceptual-cognitive Tasks in VR

Sungchul Jung (University of Canterbury, NZ), Andrew L Wood (University of Canterbury, NZ), Simon Hoermann (University of Canterbury, NZ), Pramuditha L Abhayawardhana (University of Canterbury, NZ), Robert W Lindeman (University of Canterbury, NZ)

Conference

Abstract: “We investigate the effects of multi-sensory stimuli, namely visuals, audio, two types of tactile feedback (floor vibration and wind), and smell, on confidence levels in a location-matching task that requires a combination of perceptual and cognitive work inside a virtual environment. We measured the level of presence when participants visited virtual places with different combinations of sensory feedback. Our results show that our multi-sensory VR system was superior to a typical VR system (vision and audio) in terms of the sense of presence and user preference. However, the subjective confidence levels were higher in the typical VR system.”

Data-Driven Spatio-Temporal Analysis via Multi-Modal Zeitgebers and Cognitive Load in VR

Haodong Liao (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Ning Xie (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Huiyuan Li (School of Life Science and Technology, University of Electronic Science and Technology of China, China), Yuhang Li (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Jianping Su (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Feng Jiang (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Weipeng Huang (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China), Heng Tao Shen (Center for Future Media, School of Computer Science and Engineering, University of Electronic Science and Technology of China, China)

Conference

Abstract: “In this paper, we divide external zeitgebers into visual and auditory zeitgebers. We then combine these zeitgebers with attention-oriented cognitive load to investigate their effects on temporal estimation and presence, particularly in immersive virtual environments (IVEs). We propose a data-driven method to build a multi-modal predictive equation for time estimation and presence. We also design a complex application and validate the predictive equation. Our feature-based model can guide VR application design in terms of users’ subjective judgment of time length and presence, as well as achieve a better VR user experience.”

Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality

Sebastian Oberdörfer (University of Würzburg, Germany), David Heidrich (German Aerospace Center (DLR), Germany), Marc Erich Latoschik (University of Würzburg, Germany)

Conference

Abstract: “Impaired decision making results in the inability to differentiate between advantageous and disadvantageous options. We investigated if and how immersion impacts decision making using a VR-based realization of the Iowa Gambling Task (IGT). Subjects are challenged to draw cards from four different decks, of which two are advantageous. The selections made serve as a measure of a participant’s decision making during the task. We compared the effects of immersion on decision making between a low-immersion desktop-3D-based IGT realization and a highly immersive VR version. Our results revealed significantly more disadvantageous decisions when completing the VR version.”

Examining Whether Secondary Effects of Temperature-Associated Virtual Stimuli Influence Subjective Perception of Duration

Austin Erickson (University of Central Florida), Gerd Bruder (University of Central Florida), Pamela J. Wisniewski (University of Central Florida), Gregory F. Welch (University of Central Florida)

Conference

Abstract: “We present a user study evaluating the relationship between virtual stimuli presented on an AR-HMD and the perception of duration and temperature. In particular, we investigate two independent variables: the apparent temperature of the stimulus, which could be hot or cold, and the location of the stimulus, which could be in direct contact with the user, in indirect contact with the user, or both simultaneously. We investigate how these variables affect users’ perception of duration and of body and environment temperature by having participants make time estimations while observing the virtual stimulus and answering subjective questions regarding their body and environment temperatures.”

Session 15: Embodiment 2

Wednesday, March 25, 9:00 AM - 10:30 AM,
Track 1 (Great Room 1)

Engaging Participants in Selection Studies in Virtual Reality

Difeng Yu (The University of Melbourne, Australia), Qiushi Zhou (The University of Melbourne, Australia), Benjamin Tag (The University of Melbourne, Australia), Tilman Dingler (The University of Melbourne, Australia), Eduardo Velloso (The University of Melbourne, Australia), Jorge Goncalves (The University of Melbourne, Australia)

Conference

Abstract: “Selection studies are prevalent and indispensable for VR research. However, due to the tedious and repetitive nature of many such experiments, participants can become disengaged during the study, which is likely to impact the results and conclusions. In this work, we investigate participant disengagement in VR selection experiments and how this issue affects the outcomes. Moreover, we evaluate the usefulness of four engagement strategies to keep participants engaged during VR selection studies and investigate how they impact user performance. Based on our findings, we distill several design recommendations that can be useful for future VR selection studies or user tests in other domains that employ similar repetitive features.”

Effects of virtual hand representation on interaction and embodiment in HMD-based virtual environments using controllers

Christos Lougiakis (National and Kapodistrian University of Athens & ATHENA Research Centre, Greece), Akrivi Katifori (National and Kapodistrian University of Athens & ATHENA Research Centre, Greece), Maria Roussou (National and Kapodistrian University of Athens, Greece), Ioannis-Panagiotis Ioannidis (ATHENA Research Centre, Greece)

Conference

Abstract: “Extending the work of Argelaguet et al. in 2016, we explore the effects of virtual hand representations, in this case using controllers. We designed an experiment where users perform the task of moving a cube on a table with and without obstacles (Brick Wall, Barbed Wire, Electric Current), interacting inside an immersive virtual environment using three representations: Sphere, Controller, Hand. Results show that no significant differences were identified in the sense of agency, but the users’ performance with the Sphere was significantly worse and, in the case of the positioning task, the Controller outperformed the others. Additionally, the Hand generated the strongest sense of ownership, and it was the favorite representation.”

Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification

Mar Gonzalez-Franco (Microsoft Research), Anthony Steed (Microsoft Research), Steve Hoogendyk (Microsoft Research), Eyal Ofek (Microsoft Research)

Journal

Abstract: “Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting for our body: the avatar moves as we move, and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency in our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error-prone. However, limited animation is easily achieved based on speaking. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one’s own face, with three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement synchronous lip motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face-morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification on avatars can be increased through pre-baked animations even when these are not photorealistic and do not look like the participant.”

Comparative Evaluation of Viewing and Self-Representation on Passability Affordances to a Realistic Sliding Doorway in Real and Immersive Virtual Environments

Ayush Bhargava (Key Lime Interactive), Hannah Solini (Department of Psychology, Clemson University), Kathryn Lucaites (Department of Psychology, Clemson University), Jeffrey Bertrand (Clemson University Center for Workforce Development), Andrew Robb (School of Computing, Clemson University), Christopher Pagano (Department of Psychology, Clemson University), Sabarish Babu (School of Computing, Clemson University)

Conference

Abstract: “Virtual Reality simulations require users to make spontaneous affordance judgments, such as stepping over obstacles or passing through gaps, which are affected by our self-representation in the virtual world. As self-avatars become popular, it is important to explore how various affordance judgments are affected by their presence. In this work, we investigate the effects of body-scaled self-avatars on passability judgments for a sliding doorway in VR and compare them to the real world. The results suggest that passability judgments are more conservative in VR. However, the presence of a self-avatar does not significantly affect passability judgments made in VR.”

Reducing Task Load with an Embodied Intelligent Virtual Assistant for Improved Performance in Collaborative Decision Making

Kangsoo Kim (University of Central Florida), Celso de Melo (US Army Research Laboratory), Nahal Norouzi (University of Central Florida), Gerd Bruder (University of Central Florida), Gregory Welch (University of Central Florida)

Conference

Abstract: “In this paper, we investigate the effects of Intelligent Virtual Assistant (IVA) embodiment on collaborative decision making. In our study, participants performed a desert survival task in three conditions: (1) performing the task alone, (2) working with a disembodied voice assistant, and (3) working with an embodied assistant. Our results show that both assistant conditions led to higher performance than performing the task alone, but interestingly the reported task load with the embodied assistant was significantly lower than with the disembodied assistant. We discuss the findings with implications for effective and efficient collaboration with IVAs, emphasizing the increased social presence/richness of the embodied assistant.”

Session 16: Applications - Training and simulation

Wednesday, March 25, 9:00 AM - 10:30 AM,
Track 2 (Great Room 2)

Automatic Synthesis of Virtual Wheelchair Training Scenarios

Wanwan Li (George Mason University), Javier Talavera (George Mason University), Amilcar Gomez Samayoa (George Mason University), Jyh-Ming Lien (George Mason University), Lap-Fai Yu (George Mason University)

Conference

Abstract: “In this paper, we propose an optimization-based approach for automatically generating virtual scenarios for wheelchair training in virtual reality. Our approach automatically generates a realistic furniture layout for a scene as well as a training path that the user needs to go through by controlling a simulated wheelchair. The training properties of the path, namely, its desired length, the extent of rotation, and narrowness, are optimized so as to deliver the desired training effects. We conducted an evaluation to validate the efficacy of the proposed approach. Users showed improvement in wheelchair control skills in terms of proficiency and precision after the training.”

Real-time VR Simulation of Laparoscopic Cholecystectomy based on Parallel Position-based Dynamics in GPU

Junjun Pan (State Key Laboratory of Virtual Reality Technology and Systems, Beihang University; Peng Cheng Laboratory), Leiyu Zhang (State Key Laboratory of Virtual Reality Technology and Systems, Beihang University), Peng Yu (State Key Laboratory of Virtual Reality Technology and Systems, Beihang University; Peng Cheng Laboratory), Yang Shen (Faculty of Education, Beijing Normal University), Haipeng Wang (Beijing Aerospace General Hospital), Aimin Hao (Beihang University; Peng Cheng Laboratory), Hong Qin (Department of Computer Science, Stony Brook University)

Conference

Abstract: “VR-based medical training has greatly changed how surgeons learn: it can simulate surgery in its visual, auditory, and tactile aspects. This paper presents a VR simulation framework for cholecystectomy based on position-based dynamics (PBD). To accelerate the deformation of organs, PBD constraints are solved in parallel using a graph coloring algorithm. A bio-thermal conduction model and a hybrid multi-model connection method are also presented to improve the realism of fat-tissue electrocautery. Finally, the simulator was evaluated by a number of digestive surgeons, who agreed that the system can greatly help improve surgical skills.”

A Physics-based Virtual Reality Simulation Framework for Neonatal Endotracheal Intubation

Xiao Xiao (George Washington University), Shang Zhao (George Washington University), Yan Meng (George Washington University), Lamia Soghier (Children’s National Health Systems), Xiaoke Zhang (George Washington University), James Hahn (George Washington University)

Conference

Abstract: “Neonatal endotracheal intubation (ETI) is a complex procedure. Low intubation success rates for pediatric residents indicate the current training regimen is inadequate for achieving positive patient outcomes. In this paper, we propose a fully interactive physics-based virtual reality (VR) simulation framework for neonatal ETI that converts the training of this medical procedure to a completely immersive virtual environment where both visual and physical realism are achieved. The validation study results from a group of neonatologists are presented, demonstrating that VR is a promising platform to train medical professionals effectively for this procedure.”

Analysing usability and presence of a virtual reality operating room (VOR) simulator during laparoscopic surgery training

Meng Li (Delft University of Technology, Netherlands; Xi’an Jiaotong University, China), Sandeep Ganni (Delft University of Technology, Netherlands; GSL Medical College, India), Jeroen Ponten (Catharina Hospital, Netherlands), Armagan Albayrak (Delft University of Technology, Netherlands), Anne-Francoise Rutkowski (Tilburg University, Netherlands), Jack Jakimowicz (Delft University of Technology, Netherlands; Catharina Hospital, Netherlands)

Conference

Abstract: “Immersive Virtual Reality (VR) laparoscopy simulation is emerging to enhance the attractiveness and realism of surgical procedural training. This study analyses the usability and presence of a Virtual Operating Room (VOR) setup via user evaluation and sets out the key elements for immersive surgical procedural training. Thirty-seven surgical professionals performed a simulated cholecystectomy and then assessed the system using questionnaires and interviews. The VOR showed potential to become a useful tool for providing immersive training during laparoscopic procedure simulation. Future development of the user interface, VOR environment, team interaction, and personalization should further improve the system.”

VR Disability Simulation Reduces Implicit Bias Towards Persons with Disabilities

Tanvir Chowdhury (University of Texas), Sharif Mohammad Shahnewaz Ferdous (University of Texas), John Quarles (University of Texas)

Journal

Abstract: “This paper investigates how experiencing Virtual Reality (VR) Disability Simulation (DS) affects information recall and participants’ implicit association towards people with disabilities (PwD). Implicit attitudes are our actions or judgments towards various concepts or stereotypes (e.g., race) which we may or may not be aware of. Previous research has shown that experiencing ownership over a dark-skinned body reduces implicit racial bias. We hypothesized that a DS with a tracked Head Mounted Display (HMD) and a wheelchair interface would have a significantly larger effect on participants’ information recall and their implicit association towards PwD than a desktop monitor and gamepad. We conducted a 2x2 between-subjects experiment in which participants experienced a VR DS that teaches them facts about Multiple Sclerosis (MS) with factors of display (HMD, a desktop monitor) and interface (gamepad, wheelchair). Participants took two Implicit Association Tests (IAT) before and after experiencing the DS. Our study results show that the participants in an immersive HMD condition performed better than the participants in the non-immersive Desktop condition in their information recall task. Moreover, a tracked HMD and a wheelchair interface had significantly larger effects on participants’ implicit association towards PwD than a desktop monitor and a gamepad.”

Session 17: Visual Displays - Devices 1

Wednesday, March 25, 9:00 AM - 10:30 AM,
Track 3 (Studio 1)

Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display

Brooke Krajancich (Stanford University, USA), Nitish Padmanaban (Stanford University, USA), Gordon Wetzstein (Stanford University, USA)

Journal

Abstract: “Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners – an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.”
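
To make the multiplicative factorization concrete, the sketch below approximates a grayscale target layer as a sum of time-multiplexed frames, each a binary DMD mask times one LED intensity, using naive alternating updates. It is only a toy stand-in for the paper's mixed binary/continuous factorization algorithm; frame counts and iteration budgets are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(target, n_frames=8, iters=50):
    """Approximate target ~ sum_k led[k] * mask[k], with mask[k] binary.

    Alternates a least-squares update of the LED intensities with a greedy
    per-pixel re-thresholding of each binary mask. Illustrative only.
    """
    h, w = target.shape
    masks = rng.random((n_frames, h, w)) > 0.5
    for _ in range(iters):
        # LED update: least squares over the flattened binary masks.
        M = masks.reshape(n_frames, -1).T.astype(float)
        leds, *_ = np.linalg.lstsq(M, target.ravel(), rcond=None)
        leds = np.clip(leds, 0, None)  # physical LEDs cannot go negative
        # Mask update: turn a pixel on only if it reduces the residual.
        for k in range(n_frames):
            others = (leds[:, None, None] * masks).sum(0) - leds[k] * masks[k]
            residual = target - others
            masks[k] = np.abs(residual - leds[k]) < np.abs(residual)
    return masks, leds

target = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))  # gray ramp
masks, leds = factorize(target)
approx = (leds[:, None, None] * masks).sum(0)
print("RMSE:", np.sqrt(((approx - target) ** 2).mean()))
```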

ThinVR: Heterogeneous Microlens Arrays for Compact, 180 degree FOV VR Near-Eye Displays

Joshua Ratcliff (Intel Labs, USA), Alexey Supikov (Intel Labs, USA), Santiago Alfaro (Intel Labs, USA), Ronald Azuma (Intel Labs, USA)

Journal

Abstract: “Today’s Virtual Reality (VR) displays are dramatically better than the head-worn displays offered 30 years ago, but today’s displays remain nearly as bulky as their predecessors in the 1980s. Also, almost all consumer VR displays today provide 90-110 degrees field of view (FOV), which is much smaller than that of the human visual system, which extends beyond 180 degrees horizontally. In this paper, we propose ThinVR as a new approach to simultaneously address the bulk and limited FOV of head-worn VR displays. ThinVR enables a head-worn VR display to provide 180 degrees horizontal FOV in a thin, compact form factor. Our approach is to replace traditional large optics with a curved microlens array of custom-designed heterogeneous lenslets and place these in front of a curved display. We found that heterogeneous optics were crucial to make this approach work, since over a wide FOV, many lenslets are viewed off the central axis. We developed a custom optimizer for designing heterogeneous lenslets to ensure a sufficient eyebox while reducing distortions. The contribution includes an analysis of the design space for curved microlens arrays, implementation of physical prototypes, and an assessment of the image quality, eyebox, FOV, reduction in volume, and pupil swim distortion. To our knowledge, this is the first work to demonstrate and analyze the potential for curved, heterogeneous microlens arrays to enable compact, wide-FOV head-worn VR displays.”

IlluminatedFocus: Vision Augmentation using Spatial Defocusing via Focal Sweep Eyeglasses and High-Speed Projector

Tatsuyuki Ueda (Osaka University), Daisuke Iwai (Osaka University), Takefumi Hiraki (Osaka University), Kosuke Sato (Osaka University)

Journal

Abstract: “Aiming at realizing novel vision augmentation experiences, this paper proposes the IlluminatedFocus technique, which spatially defocuses real-world appearances regardless of the distance from the user’s eyes to observed real objects. With the proposed technique, part of a real object in an image appears blurred, while the fine details of another part at the same distance remain visible. We apply Electrically Focus-Tunable Lenses (ETL) as eyeglasses and a synchronized high-speed projector as illumination for the real scene. We periodically modulate the focal lengths of the glasses (focal sweep) at more than 60 Hz so that the wearer cannot perceive the modulation. A part of the scene that should appear focused is illuminated by the projector when it is in focus of the user’s eyes, while a part that should appear blurred is illuminated when it is out of focus. As the basis of our spatial focus control, we build mathematical models to predict the range of distance from the ETL within which real objects become blurred on the retina of the user. Based on the blur range, we discuss a design guideline for effective illumination timing and focal sweep range. We also model the apparent size of a real scene altered by the focal length modulation, which leads to an undesirable visible seam between focused and blurred areas. We solve this problem by gradually blending the two areas. Finally, we demonstrate the feasibility of our proposal by implementing various vision augmentation applications.”
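
A back-of-envelope version of the blur model: treat the eye as a thin lens whose focus distance is set by the ETL state, compute the retinal blur circle for an object at a given distance, and fire the projector for a region only while its blur stays below a visibility threshold. All constants here are illustrative assumptions, not the paper's calibrated model:

```python
def blur_circle_diameter(d_object, d_focus, pupil_mm=4.0, eye_focal_mm=17.0):
    """Retinal blur circle (mm) for a thin-lens eye model.

    d_object, d_focus are in meters; pupil and focal length are assumptions.
    """
    do, df = d_object * 1000.0, d_focus * 1000.0  # convert to mm
    f = eye_focal_mm
    # Image distances via the thin lens equation 1/f = 1/do + 1/di.
    di_obj = 1.0 / (1.0 / f - 1.0 / do)
    di_foc = 1.0 / (1.0 / f - 1.0 / df)
    # Similar triangles: blur grows with the relative defocus of image planes.
    return pupil_mm * abs(di_obj - di_foc) / di_obj

def illuminate_now(d_object, d_focus, threshold_mm=0.03):
    """Fire the projector for this region only if it would appear sharp."""
    return blur_circle_diameter(d_object, d_focus) < threshold_mm

# An object at 1 m appears sharp only near the matching focal sweep phase.
for d_focus in (0.5, 1.0, 2.0):
    print(d_focus, illuminate_now(d_object=1.0, d_focus=d_focus))
```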

Toward Standardized Classification of Foveated Displays

Josef Spjut (NVIDIA Corporation), Ben Boudaoud (NVIDIA Corporation), Jonghyun Kim (NVIDIA Corporation), Trey Greer (NVIDIA Corporation), Rachel Albert (NVIDIA Corporation), Michael Stengel (NVIDIA Corporation), Kaan Akşit (NVIDIA Corporation), David Luebke (NVIDIA Corporation)

Journal

Abstract: “Emergent in the field of head-mounted display design is a desire to leverage the limitations of the human visual system to reduce the computation, communication, and display workload in power- and form-factor-constrained systems. Fundamental to this reduced workload is the ability to match display resolution to the acuity of the human visual system, along with a resulting need to follow the gaze of the eye as it moves, a process referred to as foveation. A display that moves its content along with the eye may be called a Foveated Display, though this term is also commonly used to describe displays with non-uniform resolution that attempt to mimic human visual acuity. We therefore recommend a definition for the term Foveated Display that accepts both of these interpretations. Furthermore, we include a simplified model for human visual Acuity Distribution Functions (ADFs) at various levels of visual acuity across wide fields of view, and propose comparing this ADF with the Resolution Distribution Function of a foveated display to evaluate its resolution at a particular gaze direction. We also provide a taxonomy to allow the field to meaningfully compare and contrast various aspects of foveated displays in a display- and optical-technology-agnostic manner.”
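
A simplified ADF of the kind the paper proposes can be modeled with minimum angular resolution (MAR) growing roughly linearly with eccentricity, then compared pointwise against a display's resolution distribution. The constants below are common textbook approximations, not the paper's values, and the two-region display is invented for illustration:

```python
import numpy as np

def acuity_cpd(ecc_deg, mar0_arcmin=1.0, slope=0.3):
    """Simplified ADF: cycles/degree resolvable by the eye at an eccentricity.

    Assumes MAR(e) = mar0 + slope * e (arcmin); acuity = 60 / (2 * MAR) cpd.
    """
    mar = mar0_arcmin + slope * np.asarray(ecc_deg, float)
    return 60.0 / (2.0 * mar)

def display_rdf(ecc_deg, foveal_cpd=30.0, falloff_deg=20.0):
    """Toy resolution distribution of a two-region foveated display."""
    e = np.asarray(ecc_deg, float)
    return np.where(e < falloff_deg, foveal_cpd, foveal_cpd / 10.0)

for e in range(0, 61, 10):
    need, have = acuity_cpd(e), display_rdf(e)
    print(f"{e:2d} deg: eye resolves {need:5.2f} cpd, display offers "
          f"{have:5.2f} cpd -> {'ok' if have >= need else 'undersampled'}")
```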

Computational Phase-Modulated Eyeglasses

Yuta Itoh (Tokyo Institute of Technology), Tobias Langlotz (University of Otago), Stefanie Zollmann (University of Otago), Daisuke Iwai (Osaka University), Kiyoshi Kiyokawa (NAIST), Toshiyuki Amano (Wakayama University)

Journal

Abstract: “We present computational phase-modulated eyeglasses, a see-through optical system that modulates the user’s view using phase-only spatial light modulators (PSLMs). A PSLM is a programmable reflective device that can selectively retard, or delay, incoming light rays. As a result, a PSLM works as a computational dynamic lens. We demonstrate our computational phase-modulated eyeglasses with either a single PSLM or dual PSLMs and show that the concept can realize various optical operations including focus correction, bi-focus, image shift, and field-of-view manipulation, namely optical zoom. Compared to other programmable optics, computational phase-modulated eyeglasses have an advantage in versatility. We also present prototypical focus-loop applications in which the lens is dynamically optimized based on the distances of objects observed by a scene camera. We further discuss the implementation and applications, as well as limitations of the current prototypes and remaining issues to be addressed in future research.”

Session 18: Perception & collaboration

Wednesday, March 25, 11:00 AM - 12:30 PM,
Track 1 (Great Room 1)

Asymmetric Effects of the Ebbinghaus Illusion on Depth Judgments

Hunter Finney (High Fidelity Virtual Environments Lab (Hi5 Lab), Computer & Information Science, University of Mississippi), J. Adam Jones (High Fidelity Virtual Environments Lab (Hi5 Lab), Computer & Information Science, University of Mississippi)

Conference

Abstract: “The Ebbinghaus illusion affects the perceived size of a disc enclosed by an annulus of either larger or smaller discs. Though many studies have found consistent effects of the illusion on size perception, there have been mixed results when studying its effect on action-based tasks. We present a study utilizing a virtual environment to examine the illusion’s effect on reaching in depth. We found that size judgments were symmetrically affected by common Ebbinghaus configurations, but distance judgments were asymmetrically affected: large-annulus configurations had no effect on distance judgments, while small-annulus configurations resulted in underestimation of distances.”

Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments

Austin Erickson (University of Central Florida), Nahal Norouzi (University of Central Florida), Kangsoo Kim (University of Central Florida), Joseph J. LaViola Jr. (University of Central Florida), Gerd Bruder (University of Central Florida), Gregory F. Welch (University of Central Florida)

Journal

Abstract: “Augmented reality (AR) setups have the capability of facilitating collaboration for collocated and remote users by augmenting and sharing their virtual points of interest in each user’s physical space. With gaze being an important communication cue during human interaction, augmenting the physical space with each user’s focus of attention through different visualizations such as ray, frustum, and cursor has been studied in the past to enhance the quality of interaction. Understanding each user’s focus of attention is susceptible to error since it has to rely on both the user’s gaze and depth information of the target to compute the endpoint of the user’s gaze. Such information is computed by eye trackers and depth cameras respectively, which introduces two sources of errors into the shared gaze experience. Depending on the amount of error and type of visualization, the augmented gaze can negatively mislead a user’s attention during their collaboration instead of enhancing the interaction. In this paper, we present a human-subjects study to understand the effects of eye tracking errors, depth camera accuracy errors, and gaze visualization on users’ performance and subjective experience during a collaborative task with a virtual human partner, where users were asked to identify a target within a dynamic crowd. We simulate seven different levels of eye tracking error as a horizontal offset to the intended gaze point and seven different levels of depth accuracy errors that make the gaze point appear in front of or behind the intended gaze point. In addition, we examine four different visualization styles for shared gaze information, including an extended ray that passes through the target and extends to a fixed length, a truncated ray that halts upon reaching the target gaze point, a cursor visualization that appears at the target gaze point, as well as a combination of both cursor and truncated ray display modes.”
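
The two simulated error sources reduce to simple ray geometry: a horizontal angular offset perturbs the gaze direction (eye-tracker error), and a signed offset slides the endpoint along the perturbed ray (depth-camera error). A minimal sketch, with coordinate conventions assumed rather than taken from the study's code:

```python
import numpy as np

def simulated_gaze_point(eye_pos, target_pos, yaw_error_deg, depth_error_m):
    """Perturb a ground-truth gaze endpoint with the two simulated errors.

    yaw_error_deg: horizontal angular offset (eye-tracker error).
    depth_error_m: signed offset along the ray (depth-camera error);
    negative places the displayed point in front of the intended target.
    """
    ray = target_pos - eye_pos
    dist = np.linalg.norm(ray)
    ray = ray / dist
    # Rotate the ray about the world-up (y) axis by the angular error.
    a = np.radians(yaw_error_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    ray = rot @ ray
    return eye_pos + ray * (dist + depth_error_m)

eye = np.array([0.0, 1.6, 0.0])      # virtual human's eye position
target = np.array([0.5, 1.5, 3.0])   # intended gaze target in the crowd
print(simulated_gaze_point(eye, target, yaw_error_deg=2.0, depth_error_m=-0.3))
```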

Live Semantic 3D Perception for Immersive Augmented Reality

Lei Han (HKUST, Hong Kong, China), Tian Zheng (Tsinghua University, China), Yinheng Zhu (Tsinghua University, China), Lan Xu (HKUST, Hong Kong, China), Lu Fang (Tsinghua University, China)

Journal

Abstract: “Semantic understanding of 3D environments is critical for both unmanned systems and the human-involved virtual/augmented reality (VR/AR) immersive experience. Spatially-sparse convolution, taking advantage of the intrinsic sparsity of 3D point cloud data, makes high-resolution 3D convolutional neural networks tractable with state-of-the-art results on 3D semantic segmentation problems. However, the exhaustive computation limits the practical usage of semantic 3D perception for VR/AR applications on portable devices. In this paper, we identify that the efficiency bottleneck lies in the unorganized memory access of the sparse convolution steps, i.e., the points are stored independently based on a predefined dictionary, which is inefficient due to the limited memory bandwidth of parallel computing devices (GPUs). With the insight that points are continuous as 2D surfaces in 3D space, a chunk-based sparse convolution scheme is proposed to reuse neighboring points within each spatially organized chunk. An efficient multi-layer adaptive fusion module is further proposed to exploit the spatial consistency cue of 3D data and further reduce the computational burden. Quantitative experiments on public datasets demonstrate that our approach works 11× faster than previous approaches with competitive accuracy. By implementing both semantic and geometric 3D reconstruction simultaneously on a portable tablet device, we demonstrate a foundation platform for immersive AR applications.”
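
The chunking insight can be illustrated on the CPU: bucketing points by spatial chunk turns scattered dictionary lookups into mostly chunk-local accesses, which is the memory pattern a GPU kernel can exploit. A toy sketch, far simpler than the paper's GPU scheme; chunk and radius values are arbitrary:

```python
import numpy as np
from collections import defaultdict

def build_chunks(points, chunk_size=0.5):
    """Bucket 3D points into spatial chunks keyed by integer chunk coords."""
    chunks = defaultdict(list)
    keys = np.floor(points / chunk_size).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        chunks[key].append(idx)
    return chunks

def neighbors(points, chunks, query_idx, radius=0.2, chunk_size=0.5):
    """Gather neighbors by scanning only the 27 chunks around the query.

    Indices within one chunk are contiguous in its bucket, so a GPU kernel
    can stream them coherently instead of chasing a hash table per point.
    """
    p = points[query_idx]
    base = np.floor(p / chunk_size).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for idx in chunks.get((base[0]+dx, base[1]+dy, base[2]+dz), ()):
                    if idx != query_idx and np.linalg.norm(points[idx] - p) < radius:
                        out.append(idx)
    return out

pts = np.random.default_rng(1).random((1000, 3)) * 2.0
chunks = build_chunks(pts)
print(len(neighbors(pts, chunks, 0)))
```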

Dyadic Acquisition of Survey Knowledge in a Shared Virtual Environment

Lauren Buck (Vanderbilt University, USA), Timothy McNamara (Vanderbilt University, USA), Bobby Bodenheimer (Vanderbilt University, USA)

Conference

Abstract: “Navigation and wayfinding are often accomplished collectively, yet most studies of this behavior, and of the acquisition of spatial knowledge more generally, focus on the individual. In this paper we extend the investigation of these topics to dyads. In particular, we focus on how well straight-line distances and directions between objects (survey knowledge) were learned by individuals and by dyads. Our experiment was carried out in a shared virtual environment, and we report on the technical issues of conducting such a collaborative experiment, such as the choice of locomotion mode and the provision of full-body self-avatars. Our findings indicate that dyads outperform individuals in their acquisition of survey knowledge.”

Design and Evaluation of Interactive Small Multiples Data Visualisation in Immersive Spaces

Jiazhou Liu (Monash University, Australia), Arnaud Prouzeau (Monash University, Australia), Barrett Ens (Monash University, Australia), Tim Dwyer (Monash University, Australia)

Conference

Abstract: “We explore the adaptation of traditional 2D small-multiples visualisation to 3D immersive spaces. We use a “shelves” metaphor and consider a design space across several layout and interaction dimensions. We demonstrate a prototype system and perform two user studies comparing the effect of the shelf-curvature dimension on users’ ability to perform comparison and trend-analysis tasks. Our results suggest that, with fewer multiples, a flat layout performs better; with an increasing number of multiples, this performance difference diminishes. However, a semi-circular layout is preferred by users over full-circular and flat layouts.”

Session 19: 3DUI - Navigation - Flying/teleportation

Wednesday, March 25, 11:00 AM - 12:30 PM,
Track 2 (Great Room 2)

Teleporting through Virtual Environments: Effects of Path Scale and Environment Scale on Spatial Updating

Jonathan Kelly (Iowa State University, USA), Alec Ostrander (Iowa State University, USA), Alex Lim (Iowa State University, USA), Lucia Cherep (Iowa State University, USA), Stephen Gilbert (Iowa State University, USA)

Journal

Abstract: “Virtual reality systems typically allow users to physically walk and turn, but virtual environments (VEs) often exceed the available walking space. Teleporting has become a common user interface, whereby the user aims a laser pointer to indicate the desired location, and sometimes orientation, in the VE before being transported without self-motion cues. This study evaluated the influence of rotational self-motion cues on spatial updating performance when teleporting, and whether the importance of rotational cues varies across movement scale and environment scale. Participants performed a triangle completion task by teleporting along two outbound path legs before pointing to the unmarked path origin. Rotational self-motion reduced overall errors across all levels of movement scale and environment scale, though it also introduced a slight bias toward under-rotation. The importance of rotational self-motion was exaggerated when navigating large triangles and when the surrounding environment was large. Navigating a large triangle within a small VE brought participants closer to surrounding landmarks and boundaries, which led to greater reliance on piloting (landmark-based navigation) and therefore reduced - but did not eliminate - the impact of rotational self-motion cues. These results indicate that rotational self-motion cues are important when teleporting, and that navigation can be improved by enabling piloting.”
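
For readers unfamiliar with triangle completion, the dependent measure is typically the angular error between the pointed direction and the true direction back to the path origin. A minimal sketch of that computation; the example path and pointed direction are invented:

```python
import numpy as np

def signed_angle_deg(v_from, v_to):
    """Signed 2D angle (degrees) rotating v_from onto v_to; positive is CCW."""
    ang = np.arctan2(v_to[1], v_to[0]) - np.arctan2(v_from[1], v_from[0])
    return np.degrees((ang + np.pi) % (2 * np.pi) - np.pi)

def pointing_error(origin, final_pos, pointed_dir):
    """Angular error of the homing judgment made at the final position."""
    correct = np.asarray(origin, float) - np.asarray(final_pos, float)
    return signed_angle_deg(correct, np.asarray(pointed_dir, float))

# Two teleported legs: 4 m forward, then a turn and a 3 m second leg,
# ending at (2.6, 5.5); the participant then points back toward the origin.
print(pointing_error(origin=(0.0, 0.0), final_pos=(2.6, 5.5),
                     pointed_dir=(-0.8, -0.6)))
```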

Getting There Together: Group Navigation in Distributed Virtual Environments

Tim Weissker (Bauhaus-Universität Weimar, Germany), Pauline Bimberg (Bauhaus-Universität Weimar, Germany), Bernd Froehlich (Bauhaus-Universität Weimar, Germany)

Journal

Abstract: “We analyzed the design space of group navigation tasks in distributed virtual environments and present a framework consisting of techniques to form groups, distribute responsibilities, navigate together, and eventually split up again. To improve joint navigation, our work focused on an extension of the Multi-Ray Jumping technique that allows adjusting the spatial formation of two distributed users as part of the target specification process. The results of a quantitative user study showed that these adjustments lead to significant improvements in joint two-user travel, which is evidenced by more efficient travel sequences and lower task loads imposed on the navigator and the passenger. In a qualitative expert review involving all four stages of group navigation, we confirmed the effective and efficient use of our technique in a more realistic use-case scenario and concluded that remote collaboration benefits from fluent transitions between individual and group navigation.”
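
The formation-adjustment idea can be sketched as placing both users around the specified jump target while preserving a side-by-side formation facing the chosen direction. This is an illustrative reconstruction under assumed conventions, not the authors' implementation:

```python
import numpy as np

def group_jump_targets(target, facing_deg, spacing=1.0):
    """Place navigator and passenger side by side at a jump target.

    target: 2D point on the ground plane; facing_deg: desired view direction;
    spacing: shoulder-to-shoulder distance. All values are assumptions.
    """
    a = np.radians(facing_deg)
    forward = np.array([np.sin(a), np.cos(a)])
    right = np.array([forward[1], -forward[0]])
    navigator = np.asarray(target, float) - right * spacing / 2
    passenger = np.asarray(target, float) + right * spacing / 2
    return navigator, passenger, forward

# Both users arrive 0.5 m to either side of the target, facing east.
nav, pas, fwd = group_jump_targets(target=(10.0, 5.0), facing_deg=90.0)
print(nav, pas, fwd)
```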

The Space Bender: Supporting Natural Walking via Overt Manipulation of the Virtual Environment

Adalberto L. Simeone (KU Leuven, Belgium), Niels Christian Nilsson (Aalborg University Copenhagen, Denmark), André Zenner (DFKI, Germany), Marco Speicher (DHfPG, Germany), Florian Daiber (DFKI, Germany)

Conference

Abstract: “The Space Bender is a natural walking technique for room-scale VR. It builds on the idea of overtly manipulating the Virtual Environment by “bending” the geometry whenever the user comes in proximity of a physical boundary. We compared the Space Bender to two other similarly situated techniques: Stop and Reset and Teleportation, in a task requiring participants to traverse a 100 m path. Results show that the Space Bender was significantly faster than Stop and Reset, and preferred to the Teleportation technique, highlighting the potential of overt manipulation to facilitate natural walking.”

Exploring Effects of Screen-Fixed and World-Fixed Annotation on Navigation in Virtual Reality

James Dominic (Clemson University), Andrew Robb (Clemson University)

Conference

Abstract: “In this paper, we compare screen-fixed annotations that remained fixed in the user’s field of view, and world-fixed annotations that are linked to specific locations in the world. We also considered three different levels of navigation information: destination markers, maps visualizing the space, and path markers showing the optimal route to the destination. We ran a within-subjects study in which the participants navigated a virtual environment while completing a secondary task. Our results suggest that world-fixed annotations are not inherently better than screen-fixed annotations; instead, it is important to consider both the type of annotation and what information it displays.”

Magic Carpet: Interaction Fidelity for Flying in VR

Daniel Medeiros (Victoria University of Wellington), António Sousa (University of Lisbon), Alberto Raposo (Pontifical Catholic University of Rio de Janeiro), Joaquim Jorge (University of Lisbon)

Journal

Abstract: “Locomotion in virtual environments is currently a difficult and unnatural task to perform. Researchers tend to devise ground-floor-based metaphors to constrain the degrees of freedom (DoFs) during motion. These restrictions enable interactions that accurately emulate human gait to provide high interaction fidelity. However, flying allows users to reach specific locations in a virtual scene more expeditiously. Our experience suggests that high-interaction-fidelity techniques may also improve the flying experience, although flying is not innate to humans and requires simultaneously controlling additional DoFs. We contribute the Magic Carpet, an approach to flying that combines a floor proxy with a full-body representation to avoid imbalance and cybersickness issues. This design space allows us to address direction indication and speed control as two separate phases of travel, thereby enabling techniques with higher interaction fidelity. To validate our design space, we conducted two complementary studies, one for each travel phase. In this paper, we present the results of both studies within the Magic Carpet design space. To this end, we applied both objective and subjective measurements to determine the best set of techniques inside our design space. Our results show that this approach enables high-interaction-fidelity techniques while improving user experience.”

Session 20: Visualisation

Wednesday, March 25, 11:00 AM - 12:30 PM,
Track 3 (Studio 1)

Graphical Perception for Immersive Analytics

Matt Whitlock (University of Colorado - Boulder), Stephen Smart (University of Colorado - Boulder), Danielle Albers Szafir (University of Colorado - Boulder)

Conference

Abstract: “Immersive Analytics (IA) uses immersive virtual and augmented reality displays for data visualization and visual analytics. Empirical studies of data visualization interpretation typically focus on data analysis in traditional desktop environments rather than immersive environments. This study explores how people interpret data visualizations across different display types with five visual channels: color, size, height, orientation, and depth. We found that stereo viewing resolves some of the challenges of visualizations in 3D space and that while AR displays encourage increased navigation, they decrease performance with color-based visualizations. Our results provide guidelines on how to tailor visualizations to different displays.”

Immersive Process Model Exploration in Virtual Reality

André Zenner (German Research Center for Artificial Intelligence (DFKI), Germany), Akhmajon Makhsadov (German Research Center for Artificial Intelligence (DFKI), Germany), Sören Klingner (German Research Center for Artificial Intelligence (DFKI), Germany), David Liebemann (German Research Center for Artificial Intelligence (DFKI), Germany), Antonio Krüger (German Research Center for Artificial Intelligence (DFKI), Germany)

Journal

Abstract: “In many professional domains, relevant processes are documented as abstract process models, such as event-driven process chains (EPCs). EPCs are traditionally visualized as 2D graphs and their size varies with the complexity of the process. While process modeling experts are used to interpreting complex 2D EPCs, in certain scenarios, such as professional training or education, novice users inexperienced in interpreting 2D EPC data also face the challenge of learning and understanding complex process models. To communicate process knowledge in an effective yet motivating and interesting way, we propose a novel virtual reality (VR) interface for non-expert users. Our proposed system turns the exploration of arbitrarily complex EPCs into an interactive and multi-sensory VR experience. It automatically generates a virtual 3D environment from a process model and lets users explore processes through a combination of natural walking and teleportation. Our immersive interface leverages basic gamification in the form of a logical walkthrough mode to motivate users to interact with the virtual process. The generated user experience is entirely novel in the field of immersive data exploration and supported by a combination of visual, auditory, vibrotactile and passive haptic feedback. In a user study with N = 27 novice users, we evaluate the effect of our proposed system on process model understandability and user experience, while comparing it to a traditional 2D interface on a tablet device. The results indicate a tradeoff between efficiency and user interest as assessed by the UEQ novelty subscale, while no significant decrease in model understanding performance was found using the proposed VR interface. Our investigation highlights the potential of multi-sensory VR for less time-critical professional application domains, such as employee training, communication, education, and related scenarios focusing on user interest.”

Real and Virtual Environment Mismatching Induces Arousal and Alters Movement Behavior

Christos Mousas (Purdue University), Dominic Kao (Purdue University), Alexandros Koilias (University of the Aegean), Banafsheh Rekabdar (Southern Illinois University)

Conference

Abstract: “This paper investigates whether a mismatch between a real and a virtual environment can affect participants’ arousal and movement behavior. One baseline and four mismatch conditions examining different mismatch types were tested. Electrodermal activity and the walking motion of participants were captured to assess potential alterations in their arousal and movement behavior. Results indicated significant differences in the electrodermal activity and movement behavior of participants, especially when walking in a virtual environment that is mismatched both in appearance and in physical constraints. Evidence was also found correlating electrodermal activity with movement behavior.”

Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter

Jan-Philipp Stauffert (University of Würzburg, Germany), Florian Niebling (University of Würzburg, Germany), Marc Erich Latoschik (University of Würzburg, Germany)

Conference

Abstract: “Latency in Virtual Reality (VR) applications can have numerous detrimental effects. Latency is usually reported as a mean value, taken during specific intervals of sample runs with the target system, often detached in significant aspects from the final target scenario. This paper introduces an apparatus that can determine per-frame motion-to-photon (MTP) latency, capturing dynamic MTP latency in addition to the commonly reported mean values. In contrast to previous approaches, the system does not rely on the HMD being fixed to an external apparatus, can be used to assess any simulation setup, and can be extended to continuously measure latency at run-time.”
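
The paper’s contribution is the measurement apparatus itself; as background, a generic way to turn paired motion and photosensor recordings into a latency estimate is to find the lag that maximizes their cross-correlation. The sketch below illustrates that step on synthetic signals; the sampling rate and the 20 ms delay are invented for the demonstration and are not from the paper.

```python
import numpy as np

def latency_ms(motion, photo, fs):
    """Estimate motion-to-photon latency as the lag (ms) that maximizes
    the cross-correlation between a motion signal and the display's
    measured brightness response, both sampled at fs Hz."""
    motion = (motion - motion.mean()) / motion.std()
    photo = (photo - photo.mean()) / photo.std()
    corr = np.correlate(photo, motion, mode="full")
    lags = np.arange(-len(motion) + 1, len(photo))
    return 1000.0 * lags[np.argmax(corr)] / fs

# Synthetic check: a sine "motion" signal and a copy delayed by 20 ms.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
motion = np.sin(2 * np.pi * 3 * t)
photo = np.roll(motion, 20)           # 20 samples = 20 ms delay
print(latency_ms(motion, photo, fs))  # ~20.0
```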

Automated Geometric Registration for Multi-Projector Displays on Arbitrary 3D Shapes Using Uncalibrated Devices

Mahdi Abbaspour Tehrani (University of California), M. Gopi (University of California), Aditi Majumder (University of California)

Journal

Abstract: “In this paper we present a completely automated and scalable multi-projector registration system that supports multiple completely uncalibrated projectors and cameras on arbitrarily shaped surfaces. Our method estimates the parameters of multiple uncalibrated tiled or superimposed projectors and of the observing cameras, recovers the shape of the illuminated 3D geometry, and geometrically registers the projectors on it. This is achieved without using any fiducials, even if part of the surface is visible to only one camera. The method uses a completely automatic approach for cross-correlation and cross-validation of the device parameters and the surface geometry, resulting in an accurate registration on the arbitrary unknown geometry that does not require an accurate prior calibration of each of the uncalibrated devices using physical patterns or fiducials. Estimating projector parameters allows quick recalibration of the system in the face of projector movements, by re-estimating only the parameters of the moved projector and not the entire system. Thus, our work can enable easy deployment of spatially augmented reality environments of different sizes (from small tabletop objects to large immersive environments), different shapes (inside-looking-out or outside-looking-in), and different configurations (tiled or superimposed) using the same proposed method.”

Session 21: Comfort and Workload

Wednesday, March 25, 2:00 PM - 3:30 PM,
Track 1 (Great Room 1)

The Effect of a Foveated Field-of-View Restrictor on VR Sickness

Isayas Berhe Adhanom (University of Nevada, Reno), Nathan Griffin (University of Nevada, Reno), Paul MacNeilage (University of Nevada, Reno), Eelke Folmer (University of Nevada, Reno)

Conference

Abstract: “Reducing the user’s field-of-view (FOV) during VR locomotion is an effective strategy to minimize visual-vestibular conflict and VR sickness. Current FOV restrictor implementations render a restrictor centered in the user’s FOV without taking the user’s eye gaze into account. This can lead to users looking at the restrictor while still being exposed to peripheral optical flow, which negates the effectiveness of the FOV restrictor. We compared a foveated FOV restrictor, which moves with the user’s eye gaze, to a fixed FOV restrictor; although no significant difference in VR sickness was detected, users’ eye gaze was more dispersed.”
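
A minimal sketch of the difference between the two restrictors: the vignette mask is the same, and only its center differs (screen midpoint versus tracked gaze point). Resolutions, radii, and the gaze sample below are made-up values.

```python
import numpy as np

def restrictor_mask(width, height, center, inner_r, outer_r):
    """Per-pixel visibility for a circular FOV restrictor: fully visible
    inside inner_r, fully black outside outer_r, smooth falloff between.
    For a foveated restrictor, `center` is the tracked gaze point; a
    conventional restrictor would pass the screen center instead."""
    ys, xs = np.mgrid[0:height, 0:width]
    d = np.hypot(xs - center[0], ys - center[1])
    return np.clip((outer_r - d) / (outer_r - inner_r), 0.0, 1.0)

# Foveated: follow the gaze sample; fixed: use the screen midpoint.
gaze = (820, 410)
foveated = restrictor_mask(1440, 1600, gaze, inner_r=300, outer_r=450)
fixed = restrictor_mask(1440, 1600, (720, 800), inner_r=300, outer_r=450)
```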

Detecting System Errors in Virtual Reality Using EEG Signal and Virtual Reality States

Hakim Si-Mohammed (Inria, Univ. Rennes, CNRS, IRISA), Catarina Lopes-Dias (Institute of Neural Engineering, Graz University of Technology), Maria Duarte (Faculty of Science, University of Lisbon), Ferran Argelaguet (Inria, Univ. Rennes, CNRS, IRISA), Camille Jeunet (CLLE, Université de Toulouse), Géry Casiez (CRIStAL, Univ. Lille, Inria), Gernot Müller-Putz (Institute of Neural Engineering, Graz University of Technology), Anatole Lécuyer (Inria, Univ. Rennes, CNRS, IRISA), Reinhold Scherer (SCEEE, University of Essex)

Conference

Abstract: “When experiencing errors, a specific brain pattern, the error-related potential (ErrP), can be observed in users’ EEG. This paper investigates the presence of ErrPs when Virtual Reality users face three types of errors: tracking errors, feedback errors, and background anomalies. Our experiment with 15 participants exposed to the three types of errors while performing a pick-and-place task in VR showed that only tracking errors generated discernible ErrPs. Moreover, the classification accuracy of these ErrPs was around 85%. This constitutes a first step towards the automatic detection of ErrPs in VR, paving the way towards self-corrective VR/AR applications.”

Introducing Mental Workload Assessment for the Design of Virtual Reality Training Scenarios

Tiffany Luong (IRT b<>com, Cesson-Sevigne, France ; Univ Rennes, Inria, CNRS, IRISA, Rennes, France), Ferran Argelaguet (Univ Rennes, Inria, CNRS, IRISA, Rennes, France), Nicolas Martin (IRT b<>com, Cesson-Sevigne, France), Anatole Lécuyer (Univ Rennes, Inria, CNRS, IRISA, Rennes, France)

Conference

Abstract: “In this paper, we propose to consider mental workload (MWL) for the design of complex training scenarios involving multiple parallel tasks in VR. The approach is based on the assessment of the MWL elicited by each potential task configuration in the training application to generate scenarios able to modulate the user’s MWL over time. It is illustrated by a VR flight training simulator based on the Multi-Attribute Task Battery II. A first user study (N=38) was conducted to assess the MWL. This assessment was then used to generate 3 training scenarios in order to induce different levels of MWL over time. A second user study (N=14) confirmed that the proposed approach was able to induce the expected MWL over time for each training scenario.”

Comparative Evaluation of the Effects of Motion Control on Cybersickness in Immersive Virtual Environments

Roshan Venkatakrishnan (Clemson University, USA), Rohith Venkatakrishnan (Clemson University, USA), Ayush Bhargava (Key Lime Interactive, USA), Kathryn Lucaites (Clemson University, USA), Hannah Solini (Clemson University, USA), Matias Volonte (Clemson University, USA), Andrew Robb (Clemson University, USA), Wen-Chieh Lin (National Chiao Tung University, Taiwan), Yun-Xuan Lin (National Chiao Tung University, Taiwan), Sabarish V. Babu (Clemson University, USA)

Conference

Abstract: “The falling cost of consumer-grade Virtual Reality (VR) has made the technology increasingly accessible to users around the world. However, cybersickness remains one of the biggest hurdles to the widespread adoption of VR, making it increasingly important to explore the factors that influence its onset. Towards this cause, we examined how the presence of control affects cybersickness in immersive virtual environments (IVEs). Results from our experiments indicate that simply providing control does not necessarily alleviate sickness and can even increase it. This points to the importance of the fidelity of the control metaphor’s feedback response in alleviating cybersickness.”

A Structural Equation Modeling Approach to Understand the Relationship between Control, Cybersickness and Presence in Virtual Reality

Rohith Venkatakrishnan (Clemson University, USA), Roshan Venkatakrishnan (Clemson University, USA), Reza Anaraky (Clemson University, USA), Matias Volonte (Clemson University, USA), Bart Knijnenburg (Clemson University), Sabarish V. Babu (Clemson University)

Conference

Abstract: “The commercialization of Virtual Reality (VR) devices is making the technology increasingly accessible to users around the world. Despite VR’s recent success, it has yet to become widely adopted and achieve its ultimate goal of convincingly simulating real-life experiences. In this work, we leverage structural equation modeling to build a framework that explains the relationships between virtual motion control, workload, cybersickness, simulation duration, perceived time, and presence. Our structural model helps explain why motion control could be an important factor to consider in addressing VR’s challenges and realizing its ultimate aim of simulating reality.”

Session 22: Novel Interfaces and Displays

Wednesday, March 25, 2:00 PM - 3:30 PM,
Track 2 (Great Room 2)

HiPad: Text entry for Head-Mounted Displays Using Circular Touchpad

Haiyan Jiang (Beijing Institute of Technology), Dongdong Weng (Beijing Institute of Technology; AICFVE of Beijing Film Academy)

Conference

Abstract: “Text entry in virtual reality (VR) is now a common activity and a challenging problem. In this paper, we introduce HiPad, which leverages a circular touchpad with a circular virtual keyboard to support one-handed text entry in mobile head-mounted displays (HMDs). The technique inputs text through a common hand-held controller with a circular touchpad and disambiguates words based on the sequence of keys pressed by the user. The study results show that novices can achieve 13.57 words per minute (WPM) with the VE layout and 11.60 WPM with the TP layout using the 6-key HiPad.”
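
The disambiguation step is in the same family as T9-style predictive text: each key covers several letters, and a pressed-key sequence is resolved against a frequency-ranked lexicon. The sketch below uses a hypothetical 6-key grouping and a toy lexicon; the paper’s circular VE/TP layouts assign letters differently.

```python
from collections import defaultdict

# Hypothetical 6-key grouping of the alphabet (the paper's circular
# layouts differ); the disambiguation itself is the T9-style idea:
# rank dictionary words matching the pressed-key sequence by frequency.
KEYS = {0: "abcd", 1: "efgh", 2: "ijkl", 3: "mnop", 4: "qrst", 5: "uvwxyz"}
LETTER_TO_KEY = {c: k for k, letters in KEYS.items() for c in letters}

def build_index(lexicon):
    """Map each word's key sequence to its (word, frequency) candidates."""
    index = defaultdict(list)
    for word, freq in lexicon:
        seq = tuple(LETTER_TO_KEY[c] for c in word)
        index[seq].append((word, freq))
    return index

def candidates(index, key_seq):
    """Words matching the key sequence, most frequent first."""
    return [w for w, _ in sorted(index.get(tuple(key_seq), []),
                                 key=lambda wf: -wf[1])]

# "gfikm" is a nonsense stand-in that collides with "hello"'s keys.
index = build_index([("hello", 120), ("gfikm", 1), ("world", 95)])
print(candidates(index, [1, 1, 2, 2, 3]))  # ['hello', 'gfikm']
```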

Peering Under the Hull: Enhanced Decision Making via an Augmented Environment

Matthew Timmerman (United States Navy), Amela Sadagic (Naval Postgraduate School), Cynthia Irvine (Naval Postgraduate School)

Conference

Abstract: “Daily management of complex computer networks onboard Navy ships typically includes multiple sessions during which a team presents a set of information and discusses issues relevant to their decision making. As an alternative to a set of two-dimensional blueprints that are inherently hard to understand, we designed and implemented an augmented reality (AR) system that allowed a small team to visualize a 3D model of the ship and its computer networks. The results of this empirical study offer early insights into the benefits and challenges of AR approaches in the decision making of small teams in high stakes real-world scenarios.”

Virtual environment with smell using wearable olfactory display and computational fluid dynamics simulation

Takamichi Nakamoto (Tokyo Institute of Technology, Japan), Tatsuya Hirasawa (Tokyo Institute of Technology, Japan), Yukiko Hanyu (Tokyo Institute of Technology, Japan)

Conference

Abstract: “We have developed a virtual olfactory environment in which a user searches for an odor source. The environment was prepared using computational fluid dynamics calculations. Moreover, we developed a wearable olfactory display made up of multiple micro dispensers and a SAW device. The wearable olfactory display was attached beneath a head-mounted display to present smells quickly. We built a virtual environment of a two-story building with four rooms on each floor. Users searched for the source of a smoke smell, simulating an early-stage fire. Half of the users could reach the correct source locations.”

Reading on 3D Surfaces in Virtual Environments

Chunxue Wei (The University of Melbourne), Difeng Yu (The University of Melbourne), Tilman Dingler (The University of Melbourne)

Conference

Abstract: “While text tends to lead a rather static life on paper and screens, virtual reality (VR) allows readers to interact with it in novel ways, since the reading surface is no longer confined to a 2D plane. We conducted two user studies in which we investigated text rendered on different surface shapes in VR, including planes, spheres, and cylinders, and assessed their effects on legibility and the overall reading experience. Our studies reveal the impact of warp angles and view box widths on reading comfort, speed, and distraction, and conclude with insights on rendering text on 3D objects in VR.”

ReViVD: Exploration and Filtering of Trajectories in an Immersive Environment using 3D Shapes

François Homps (Ecole Centrale de Lyon), Yohan Beugin (Ecole Centrale de Lyon), Romain Vuillemot (Ecole Centrale de Lyon)

Conference

Abstract: “We present ReViVD, a tool for exploring and filtering large trajectory-based datasets using virtual reality. ReViVD’s novelty lies in using simple 3D shapes—such as cuboids, spheres and cylinders—as queries for users to select and filter groups of trajectories. Building on this simple paradigm, more complex queries can be created by combining previously made selection groups through a system of user-created Boolean operations. We demonstrate the use of ReViVD in different application domains, from GPS position tracking to simulated data (e.g., turbulent particle flows and traffic simulation). Our results show the ease of use and expressiveness of the 3D geometric shapes in a broad range of exploratory tasks.”
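
The core selection primitive is simple: a trajectory is selected if any of its samples falls inside a query shape, and selection groups combine with Boolean operators. A minimal sketch with sphere queries and synthetic random-walk trajectories (shapes, radii, and data are all made up):

```python
import numpy as np

def passes_through_sphere(traj, center, radius):
    """True if any sample of an (N, 3) trajectory lies inside the sphere."""
    return bool((np.linalg.norm(traj - center, axis=1) < radius).any())

def select(trajectories, center, radius):
    """Indices of trajectories intersecting the query sphere."""
    return {i for i, t in enumerate(trajectories)
            if passes_through_sphere(t, np.asarray(center, float), radius)}

# Two selection groups combined with Boolean operators, mirroring the
# user-created combinations described in the abstract.
rng = np.random.default_rng(0)
trajs = [rng.normal(size=(100, 3)).cumsum(axis=0) for _ in range(50)]
a = select(trajs, center=(0, 0, 0), radius=5.0)
b = select(trajs, center=(10, 0, 0), radius=5.0)
print(len(a & b), len(a | b), len(a - b))  # AND, OR, AND-NOT
```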

Session 23: Evaluation Methods

Wednesday, March 25, 2:00 PM - 3:30 PM,
Track 3 (Studio 1)

Design and Evaluation of a VR Training Simulation for Pump Maintenance Based on a Use Case at Grundfos

Frederik Winther (Aarhus University, Denmark), Linoj Ravindran (Aarhus University, Denmark), Kasper Paabøl Svendsen (Aarhus University, Denmark), Tiare Feuchtner (Aarhus University, Denmark)

Conference

Abstract: “Encouraged by technological advancements, more and more companies are considering VR for training their workforce, creating a need to understand the potentials and limitations of VR training and to establish best practices. In pursuit of this, we developed a VR training simulation for a use case at Grundfos involving a sequential maintenance task. We evaluated this simulation in a user study with 36 participants, comparing it to two traditional forms of training (pairwise training and video training). The results of our evaluation support the conclusion that VR training is effective in teaching the procedure of a maintenance task. However, traditional approaches with hands-on experience still lead to significantly better outcomes.”

Animals in Virtual Environments

Hemal Naik (Max Planck Institute of Animal Behavior, Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Technische Universitaet Muenchen), Renaud Bastien (Max Planck Institute of Animal Behavior, Centre for the Advanced Study of Collective Behaviour, University of Konstanz), Nassir Navab (Technische Universitaet Muenchen), Iain Couzin (Max Planck Institute of Animal Behavior, Centre for the Advanced Study of Collective Behaviour, University of Konstanz)

Journal

Abstract: “The core idea in an XR (VR/MR/AR) application is to digitally stimulate one or more sensory organs (e.g. visual, auditory, and olfactory) of the user in an interactive way to achieve an immersive experience. Since the early 2000s, biologists have been using Virtual Environments (VE) to investigate the mechanisms of behavior in non-human animals including insects, fish, and mammals. VEs have become reliable tools for studying vision, cognition, and sensory-motor control in animals. In turn, the knowledge gained from studying such behaviors can be harnessed by researchers designing biologically inspired robots, smart sensors, and multi-agent artificial intelligence. VE for animals is becoming a widely used application of XR technology, but such applications have not previously been reported in the technical literature related to XR. Biologists and computer scientists can benefit greatly from deepening interdisciplinary research in this emerging field, and together we can develop new methods for conducting fundamental research in behavioral sciences and engineering. To support our argument, we present this review, which provides an overview of animal behavior experiments conducted in virtual environments.”

Evaluating Virtual Reality Experiences Through Participant Choices

Maria Murcia-López (Facebook, UK), Tara Collingwoode-Williams (Goldsmiths University of London, UK), William Steptoe (Facebook, UK), Raz Schwartz (Facebook, UK), Timothy J. Loving (Facebook, USA), Mel Slater (University of Barcelona, Spain)

Conference

Abstract: “When building virtual reality applications, teams must choose between different configurations of the experience. We extend a framework for assessing how these factors contribute to the quality of participants’ experiences in an example evaluation. We consider four factors related to avatar expressiveness. Participants had the opportunity to spend a budget to modify the factors to improve their quality of experience. A Markov matrix and the probabilities of a factor being present at a given level in participants’ final configurations were calculated. We present this work as an extended contribution to the evaluation of people’s responses to immersive virtual environments.”

Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality

Kunal Gupta (The University of Auckland, New Zealand), Ryo Hajika (The University of Auckland, New Zealand), Yun Suen Pai (The University of Auckland, New Zealand), Andreas Duenser (Data61, CSIRO, Australia), Martin Lochner (Data61, CSIRO, Australia), Mark Billinghurst (The University of Auckland, New Zealand)

Conference

Abstract: “We report on a novel methodology to investigate users’ trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data, subjective data, and behavioral measures of trust. Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these metrics, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.”

Toward Virtual Reality-based Evaluation of Robot Navigation among People

Fabien Grzeskowiak (INRIA Rennes, France), Marie Babel (INRIA Rennes, France), Julien Bruneau (INRIA Rennes, France), Julien Pettré (INRIA Rennes, France)

Conference

Abstract: “This paper explores the use of VR to study human-robot interactions during navigation tasks. In our case, not only is human perception involved, but also that of the robot, which must be simulated so that it can perceive the VR world. The contribution of this paper is twofold. First, it provides a technical solution for performing human-robot interactions in navigation tasks through VR. We then assess a simple interaction task, replicated in real and in virtual conditions, to obtain a first estimate of the importance of the biases introduced by the use of VR.”

Session 24: Visual Displays - Devices 2

Thursday, March 26, 9:00 AM - 10:30 AM,
Track 1 (Great Room 1)

TEllipsoid: Ellipsoidal Display for Videoconference System Transmitting Accurate Gaze Direction

Taro Ichii (Tokyo Institute of Technology, Japan), Hironori Mitake (Tokyo Institute of Technology, Japan), Shoichi Hasegawa (Tokyo Institute of Technology, Japan)

Conference

Abstract: “We propose “TEllipsoid”, an ellipsoidal display for videoconference systems that realizes not only accurate eye-gaze transmission but also practicality in conferences, namely convenience and fidelity to the identity of the displayed face. The display consists of an ellipsoidal screen, a small projector, and a convex mirror; the bottom-mounted projector projects the facial image of a remote participant onto the screen via the convex mirror. The facial image is made from photos shot from 360 degrees around the participant. Gaze representation is implemented by projecting a 3D model of the eyeballs onto a virtual ellipsoidal screen.”

Physically-inspired Deep Light Estimation from a Homogeneous-Material Object for Mixed Reality Lighting

Jinwoo Park (KAIST, Republic of Korea), Hunmin Park (KAIST, Republic of Korea), Sung-eui Yoon (KAIST, Republic of Korea), Woontack Woo (KAIST, Republic of Korea)

Journal

Abstract: “In mixed reality (MR), augmenting virtual objects consistently with real-world illumination is one of the key factors that provide a realistic and immersive user experience. For this purpose, we propose a novel deep learning-based method to estimate high dynamic range (HDR) illumination from a single RGB image of a reference object. To obtain the illumination of a current scene, previous approaches inserted a special camera in that scene, which may interfere with the user’s immersion, or analyzed reflected radiances from a passive light probe with a specific type of material or a known shape. The proposed method does not require any additional gadgets or strong prior cues, and aims to predict illumination from a single image of an observed object with a wide range of homogeneous materials and shapes. To effectively solve this ill-posed inverse rendering problem, three sequential deep neural networks are employed based on a physically-inspired design. These networks perform end-to-end regression to gradually decrease dependency on the material and shape. To cover various conditions, the proposed networks are trained on a large synthetic dataset generated by physically-based rendering. Finally, the reconstructed HDR illumination enables realistic image-based lighting of virtual objects in MR. Experimental results demonstrate the effectiveness of this approach compared against state-of-the-art methods. The paper also suggests some interesting MR applications in indoor and outdoor scenes.”

CasualStereo: Casual Capture of Stereo Panoramas with Spherical Structure-from-Motion

Lewis Baker (University of Otago, New Zealand), Steven Mills (University of Otago, New Zealand), Stefanie Zollmann (University of Otago, New Zealand), Jonathan Ventura (California Polytechnic State University, USA)

Conference

Abstract: “Hand-held capture of stereo panoramas involves spinning the camera in a roughly circular path to acquire a dense set of views of the scene. However, most existing structure-from-motion pipelines fail when trying to reconstruct such trajectories, due to the small baseline between frames. We evaluate spherical structure-from-motion for reconstructing handheld stereo panoramas. The spherical constraint introduces a strong regularization on the structure-from-motion process, making it well-suited to the use case of stereo panorama capture with a handheld camera. We demonstrate the effectiveness of spherical structure-from-motion for casual capture of high-resolution stereo panoramas and validate our results with a user study.”

Measuring System Visual Latency through Cognitive Latency on Video See-Through AR devices

Robert Gruen (Microsoft Research), Eyal Ofek (Microsoft Research), Anthony Steed (Microsoft Research, University College London), Ran Gal (Microsoft Research), Mike Sinclair (Microsoft Research), Mar Gonzalez-Franco (Microsoft Research)

Conference

Abstract: “Measuring visual latency in VR and AR devices has become increasingly complicated, as many of the components influence one another in multiple loops and ultimately affect human cognitive and sensory perception. In this paper we present a new method based on the idea that human performance on a rapid motor task remains constant, so any added delay corresponds to the system latency. We ask users to perform a task inside video see-through devices to compare latency. We also measure the latency of the systems using hardware instrumentation for benchmarking. Results show that measurement through human cognitive performance can be reliable and comparable to hardware measurement.”

LiveDeep: Online Viewport Prediction for Live Virtual Reality Streaming Using Lifelong Deep Learning

Xianglong Feng (Rutgers University, USA), Yao Liu (SUNY Binghamton, USA), Sheng Wei (Rutgers University, USA)

Conference

Abstract: “This paper presents a novel viewport prediction approach for live VR streaming to reduce bandwidth consumption. We propose a VR streaming-specific lifelong deep learning approach, namely LiveDeep, to create an online viewport prediction model and conduct real-time inference. LiveDeep involves (1) an alternating online data collection, labeling, training, and inference schedule to accommodate sparse training data; and (2) a mixture of hybrid neural network models to accommodate the inaccuracy caused by a single model. We evaluate LiveDeep using a public VR user head movement dataset involving 48 users and 14 VR videos.”
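
A rough sketch of the alternating schedule described in the abstract: for each video segment, infer the likely viewport tiles, stream them, then train on the labels the user’s actual head movement just produced. The per-tile running-average “model” below is a stand-in for LiveDeep’s neural networks; tile counts and the learning rate are invented.

```python
import numpy as np

class RunningTileModel:
    """Stand-in predictor: a running estimate of per-tile view probability."""
    def __init__(self, n_tiles, lr=0.3):
        self.p = np.full(n_tiles, 1.0 / n_tiles)
        self.lr = lr

    def train(self, viewed_tiles):                 # online update step
        target = np.bincount(viewed_tiles, minlength=len(self.p))
        target = target / max(target.sum(), 1)
        self.p += self.lr * (target - self.p)

    def predict(self, k):                          # k most likely tiles
        return np.argsort(self.p)[::-1][:k]

def stream(segments, n_tiles=16, k=4):
    """Alternate inference (pick tiles to fetch) and training (on the
    viewport labels the finished segment produced)."""
    model = RunningTileModel(n_tiles)
    for viewed in segments:
        yield model.predict(k)                     # infer for this segment
        model.train(np.asarray(viewed))            # then train on its labels

segments = [[0, 1, 1, 2], [5, 5, 6], [5, 6, 6, 7]]
for fetched in stream(segments):
    print(fetched)
```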

Session 25: Tracking

Thursday, March 26, 9:00 AM - 10:30 AM,
Track 2 (Great Room 2)

Weakly Supervised Adversarial Learning for 3D Human Pose Estimation from Point Clouds

Zihao Zhang (Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Lei Hu (Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Xiaoming Deng (Beijing Key Laboratory of Human Computer Interactions, Institute of Software, Chinese Academy of Sciences), Shihong Xia (Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences)

Journal

Abstract: “In this work, we study the point-cloud-based 3D human pose estimation problem. Previous methods treat the input either as 2D depth maps or as 3D point clouds. However, directly using convolutional neural networks on 2D depth maps may lose 3D spatial information, while processing raw 3D point clouds is time-consuming. To solve this problem, instead of relying solely on 3D point clouds or 2D depth maps, we approach 3D human pose estimation by combining 2D pose regression methods with 3D deep learning methods. Given the estimated 2D pose, we use a hierarchical PointNet to perform the 3D pose regression. It is relatively difficult to collect enough 3D labeled data for training a robust model. Therefore, we train the regression network in a weakly supervised adversarial learning manner using both fully labeled and weakly labeled data. Thanks to adopting both 2D and 3D information, our method can precisely and efficiently estimate 3D human pose from a single depth map/point cloud. Experiments on the ITOP and Human3.6M datasets show that our method outperforms state-of-the-art methods.”

3D Hand Tracking in the Presence of Excessive Motion Blur

Gabyong Park (KAIST, Republic of Korea), Antonis Argyros (University of Crete and FORTH, Greece), Juyoung Lee (KAIST, Republic of Korea), Woontack Woo (KAIST, Republic of Korea)

Journal

Abstract: “We present a sensor-fusion method that exploits a depth camera and a gyroscope to track the articulation of a hand in the presence of excessive motion blur. In case of slow and smooth hand motions, the existing methods estimate the hand pose fairly accurately and robustly, despite challenges due to the high dimensionality of the problem, self-occlusions, uniform appearance of hand parts, etc. However, the accuracy of hand pose estimation drops considerably for fast-moving hands because the depth image is severely distorted due to motion blur. Moreover, when hands move fast, the actual hand pose is far from the one estimated in the previous frame, therefore the assumption of temporal continuity on which tracking methods rely is not valid. In this paper, we track fast-moving hands with the combination of a gyroscope and a depth camera. As a first step, we calibrate a depth camera and a gyroscope attached to a hand so as to identify their time and pose offsets. Following that, we fuse the rotation information of the calibrated gyroscope with model-based hierarchical particle filter tracking. A series of quantitative and qualitative experiments demonstrate that the proposed method performs more accurately and robustly in the presence of motion blur, when compared to state-of-the-art algorithms, especially in the case of very fast hand rotations.”
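
The fusion idea can be sketched as the prediction step of a particle filter over hand orientation: each particle is advanced by the gyroscope’s measured rotation plus rotational noise, which keeps the particle cloud near the true pose even when the depth image is blurred. The noise level and time step below are hypothetical, and the observation weighting against the depth image is omitted.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def propagate(particles, gyro_rate, dt, noise_deg=2.0, rng=None):
    """Advance each orientation particle by the gyroscope's measured
    angular velocity (rad/s, hand frame) plus small rotational noise.
    `particles` is a list of scipy Rotation objects."""
    rng = rng or np.random.default_rng()
    delta = R.from_rotvec(np.asarray(gyro_rate) * dt)
    out = []
    for p in particles:
        jitter = R.from_rotvec(rng.normal(0, np.radians(noise_deg), 3))
        out.append(p * delta * jitter)   # gyro-driven motion model
    return out

# One prediction step at 90 Hz with a fast rotation about the y axis.
particles = propagate([R.identity()] * 200, gyro_rate=[0.0, 3.0, 0.0], dt=1 / 90)
```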

FibAR: Embedding Optical Fibers in 3D Printed Objects for Active Markers in Dynamic Projection Mapping

Daiki Tone (Osaka University), Daisuke Iwai (Osaka University), Shinsaku Hiura (University of Hyogo), Kosuke Sato (Osaka University)

Journal

Abstract: “This paper presents a novel active marker for dynamic projection mapping (PM) that emits a temporal blinking pattern of infrared (IR) light representing its ID. We used a multi-material three dimensional (3D) printer to fabricate a projection object with optical fibers that can guide IR light from LEDs attached on the bottom of the object. The aperture of an optical fiber is typically very small; thus, it is unnoticeable to human observers under projection and can be placed on a strongly curved part of a projection surface. In addition, the working range of our system can be larger than previous marker-based methods as the blinking patterns can theoretically be recognized by a camera placed at a wide range of distances from markers. We propose an automatic marker placement algorithm to spread multiple active markers over the surface of a projection object such that its pose can be robustly estimated using captured images from arbitrary directions. We also propose an optimization framework for determining the routes of the optical fibers in such a way that collisions of the fibers can be avoided while minimizing the loss of light intensity in the fibers. Through experiments conducted using three fabricated objects containing strongly curved surfaces, we confirmed that the proposed method can achieve accurate dynamic PMs in a significantly wide working range.”

SPLAT: Spherical Localization and Tracking in Large Spaces

Lewis Baker (University of Otago, New Zealand), Jonathan Ventura (California Polytechnic State University, United States of America), Stefanie Zollmann (University of Otago, New Zealand), Steven Mills (University of Otago, New Zealand), Tobias Langlotz (University of Otago, New Zealand)

Conference

Abstract: “In Augmented Reality (AR) interfaces, it is essential to track camera motion in order to overlay graphics in the view of the user. However, in many outdoor scenarios the user maintains a static position performing mostly rotations, while Simultaneous Localization and Mapping (SLAM) methods typically require significant translational motion. In this paper, we present a SLAM method that combines spherical Structure-from-Motion and robust 3D tracking. We show that our method can track more reliably than ORB_SLAM2 in large spaces. We discuss this issue in the context of implementing an AR interface for live events in stadiums, and other outdoor environments.”

Deep Soft Procrustes for Markerless Volumetric Sensor Alignment

Vladimiros Sterzentsenko (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece), Alexandros Doumanoglou (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece), Spyridon Thermos (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece), Nikolaos Zioulis (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece), Dimitrios Zarpalas (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece), Petros Daras (Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece)

Conference

Abstract: “In this work, we improve markerless data-driven correspondence estimation to achieve more robust and flexible multi-sensor spatial alignment. In particular, we incorporate geometric constraints in an end-to-end manner into a typical segmentation-based model and bridge the intermediate dense classification task with the targeted pose estimation task. This is accomplished by a soft, differentiable Procrustes analysis that regularizes the segmentation and achieves higher extrinsic calibration performance in expanded sensor placement configurations, while being unrestricted by the number of sensors of the volumetric capture system.”
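
The soft Procrustes layer builds on the classical closed-form orthogonal Procrustes (Kabsch) solution. A weighted version of that solution, where the weights could come from a network’s per-point correspondence confidences, looks roughly like this (a sketch, not the paper’s exact differentiable formulation):

```python
import numpy as np

def weighted_procrustes(src, dst, w):
    """Closed-form rigid alignment (rotation Rot, translation t) minimizing
    the weighted error sum_i w_i * ||Rot @ src_i + t - dst_i||^2 (Kabsch).
    Soft correspondence weights w make the fit tolerant of per-point
    confidence, as in segmentation-driven calibration."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    cov = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    Rot = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - Rot @ mu_s
    return Rot, t

# Self-check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
Rot, t = weighted_procrustes(src, dst, np.ones(50))
print(np.allclose(Rot, Rz), np.allclose(t, [1, 2, 3]))  # True True
```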

Session 26: Visual Rendering

Thursday, March 26, 9:00 AM - 10:30 AM,
Track 3 (Studio 1)

Eye-dominance-guided Foveated Rendering

Xiaoxu Meng (University of Maryland, College Park), Ruofei Du (Google LLC), Amitabh Varshney (University of Maryland, College Park)

Journal

Abstract: “Optimizing rendering performance is critical for a wide variety of virtual reality (VR) applications. Foveated rendering is emerging as an indispensable technique for reconciling interactive frame rates with ever-higher head-mounted display resolutions. Here, we present a simple yet effective technique for further reducing the cost of foveated rendering by leveraging ocular dominance – the tendency of the human visual system to prefer scene perception from one eye over the other. Our new approach, eye-dominance-guided foveated rendering (EFR), renders the scene at a lower foveation level (higher detail) for the dominant eye than the non-dominant eye. Compared with traditional foveated rendering, EFR can be expected to provide superior rendering performance while preserving the same level of perceived visual quality.”
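
The mechanism reduces to one decision per eye: render the non-dominant eye with a smaller full-detail (foveal) region than the dominant eye. A minimal sketch; the radius and shrink ratio are made-up values, not the paper’s calibrated settings.

```python
def foveation_radius(eye, dominant_eye, base_radius_deg=18.0, ratio=0.7):
    """Per-eye foveation: the dominant eye keeps the full high-detail
    radius, while the non-dominant eye's is shrunk, saving shading work
    with little perceived loss (radius and ratio are hypothetical)."""
    if eye == dominant_eye:
        return base_radius_deg
    return base_radius_deg * ratio

for eye in ("left", "right"):
    print(eye, foveation_radius(eye, dominant_eye="right"))
```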

Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing

Niko Wißmann (TH Köln, Germany), Martin Mišiak (TH Köln, Germany), Arnulph Fuhrmann (TH Köln, Germany), Marc Erich Latoschik (University Würzburg, Germany)

Conference

Abstract: “This paper presents a hybrid rendering system that combines classic rasterization and real-time ray-tracing to accelerate stereoscopic rendering. The system reprojects the pre-rendered left half of the stereo image pair into the right perspective using a forward grid warping technique and identifies resulting reprojection errors, which are then efficiently resolved by adaptive real-time ray-tracing. The system achieves a significant performance gain, has a negligible quality impact, and is suitable even for higher rendering resolutions.”
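
A rough sketch of the reprojection half of such a pipeline: forward-warp left-eye pixels into the right view using per-pixel depth, keep the nearest surface where several land on one pixel, and flag unfilled pixels (disocclusions) as the regions handed to the ray tracer. The paper’s grid warping is simplified to a per-pixel scatter here; fx and baseline are hypothetical camera parameters.

```python
import numpy as np

def reproject_left_to_right(left, depth, fx, baseline):
    """Warp a left-eye image to the right eye via disparity = fx*B/Z.
    Returns the warped image and a hole mask marking disocclusions
    that selective ray tracing would resolve."""
    h, w = depth.shape
    disparity = np.round(fx * baseline / depth).astype(int)
    right = np.zeros_like(left)
    zbuf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = xs - disparity                      # right-eye column of each pixel
    valid = (xr >= 0) & (xr < w)
    for y, x, xn in zip(ys[valid], xs[valid], xr[valid]):
        if depth[y, x] < zbuf[y, xn]:        # nearest surface wins
            zbuf[y, xn] = depth[y, x]
            right[y, xn] = left[y, x]
    holes = np.isinf(zbuf)                   # nothing landed here: trace rays
    return right, holes

# Toy scene: a near vertical strip in front of a far background.
depth = np.full((4, 6), 2.0)
depth[:, 3] = 1.0
left = np.arange(24, dtype=float).reshape(4, 6)
right, holes = reproject_left_to_right(left, depth, fx=50.0, baseline=0.06)
print(holes.astype(int))
```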

Angular Dependence of the Spatial Resolution in Virtual Reality Displays

Ryan Beams (Food and Drug Administration, USA), Brendan Collins (Food and Drug Administration, USA), Andrea S. Kim (Food and Drug Administration, USA), Aldo Badano (Food and Drug Administration, USA)

Conference

Abstract: “We compare two methods for characterizing the angular dependence of the spatial resolution in virtual reality head-mounted displays (HMDs) by measuring the line spread response (LSR) across the field of view (FOV) of the device.”

Multiple-scale Simulation Method for Liquid with Trapped Air under Particle-based Framework

Sinuo Liu (University of Science and Technology Beijing, China), Ben Wang (University of Science and Technology Beijing, China), Xiaojuan Ban (University of Science and Technology Beijing, China)

Conference

Abstract: “In this paper, we propose a multi-scale simulation method under a particle-based framework to achieve realistic and efficient simulation of liquid with trapped air. A unified generation rule is proposed according to the kinetic energy and the velocity difference between fluid particles. Two velocity-based dynamic models are then established for different sizes of air material. The Brownian motion of small-scale air material is achieved by a Schilk random function. The interaction and air transfer between large-scale air materials are achieved by an inverse diffusion equation and a new high-order kernel function. Experimental results show that the proposed method can improve the fidelity and richness of the fluid simulation.”

Where to display? How Interface Position Affects Comfort and Task Switching Time on Glanceable Interfaces

Samat Imamov (Virginia Tech, USA), Daniel Monzel (Virginia Tech, USA), Wallace Lages (Virginia Tech, USA)

Conference

Abstract: “A critical decision when designing glanceable information displays is where to place the content. However, no study has systematically evaluated world-locked content position while considering both cognitive and physiological constraints. We designed a scenario that mimics context switching between a real-world task and an information display. Our results show that discomfort and context-switching time increase as the information is displayed farther from the task position. We also found participants preferred content at medium distances, although they were also faster with content at far distances.”

Session 27: Audio

Thursday, March 26, 11:00 AM - 12:30 PM,
Track 1 (Great Room 1)

Superhuman Hearing - Virtual Prototyping of Artificial Hearing: A Case Study on Interactions and Acoustic Beamforming

Michele Geronazzo (Aalborg University, Denmark), Luis S. Vieira (Khora VR, Denmark), Niels Christian Nilsson (Aalborg University, Denmark), Jesper Udesen (GN Audio A/S), Stefania Serafin (Aalborg University, Denmark)

Journal

Abstract: “Directivity and gain in microphone array systems for hearing aids or hearable devices allow users to acoustically enhance the information of a source of interest. This source is usually positioned directly in front. This feature is called acoustic beamforming. The current study aimed to improve users’ interactions with beamforming via a virtual prototyping approach in immersive virtual environments (VEs). Eighteen participants took part in experimental sessions composed of a calibration procedure and a selective auditory attention voice-pairing task. Eight concurrent speakers were placed in an anechoic environment in two virtual reality (VR) scenarios. The scenarios were a purely virtual scenario and a realistic 360-degree audio-visual recording. Participants were asked to find an individual optimal parameterization for three different virtual beamformers: (i) head-guided, (ii) eye gaze-guided, and (iii) a novel interaction technique called dual beamformer, where head-guided is combined with an additional hand-guided beamformer. None of the participants were able to complete the task without a virtual beamformer (i.e., in normal hearing condition) due to the high complexity introduced by the design. However, participants were able to correctly pair all speakers using all three proposed interaction metaphors. Providing superhuman hearing abilities in the form of an acoustic beamformer guided by head movements resulted in statistically significant improvements in terms of pairing time, suggesting the task-relevance of interacting with multiple points of interest.”

Scene-Aware Audio Rendering via Deep Acoustic Analysis

Zhenyu Tang (University of Maryland), Nicholas J. Bryan (Adobe Research), Dingzeyu Li (Adobe Research), Timothy R. Langlois (Adobe Research), Dinesh Manocha (University of Maryland)

Journal

Abstract: “We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to generate similar sounding sources with virtual models. Given the captured audio and an approximate geometric model of a real-world room, we present a novel learning-based method to estimate its acoustic material properties. Our approach is based on deep neural networks that estimate the reverberation time and equalization of the room from recorded audio. These estimates are used to compute material properties related to room reverberation using a novel material optimization objective. We use the estimated acoustic material characteristics for audio rendering using interactive geometric sound propagation and highlight the performance on many real-world scenarios. We also perform a user study to evaluate the perceptual similarity between the recorded sounds and our rendered audio.”
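
For context, the classical quantity such a network is trained to estimate, reverberation time, can be computed from a measured impulse response by Schroeder backward integration. A sketch with a synthetic exponentially decaying response whose ground-truth RT60 is 0.5 s (the synthetic signal and T20-style fit range are illustrative choices, not the paper’s pipeline):

```python
import numpy as np

def rt60_schroeder(ir, fs):
    """Estimate RT60 from an impulse response via Schroeder backward
    integration: fit the -5 to -25 dB decay segment and extrapolate
    the slope to -60 dB (a T20-based estimate)."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]            # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    t = np.arange(len(ir)) / fs
    seg = (edc_db <= -5) & (edc_db >= -25)
    slope, _ = np.polyfit(t[seg], edc_db[seg], 1)   # dB per second
    return -60.0 / slope

# Synthetic exponential decay: 60 dB of energy decay over 0.5 s.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
env = np.exp(-3 * np.log(10) * t / 0.5)
ir = env * np.random.default_rng(1).normal(size=t.size)
print(rt60_schroeder(ir, fs))   # approximately 0.5
```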

EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People

Mohammadreza Mirzaei (Vienna University of Technology, Austria), Peter Kán (Vienna University of Technology, Austria), Hannes Kaufmann (Vienna University of Technology, Austria)

Journal

Abstract: “Virtual Reality (VR) has great potential to improve the skills of Deaf and Hard-of-Hearing (DHH) people. Most VR applications and devices are designed for persons without hearing problems, so DHH persons face many limitations when using VR. Adding special features to a VR environment, such as subtitles or haptic devices, helps them. Previously, it was necessary to design a special VR environment for DHH persons. We introduce and evaluate a new prototype called “EarVR” that can be mounted on any desktop or mobile VR head-mounted display (HMD). EarVR analyzes 3D sounds in a VR environment and locates the direction of the sound source closest to the user. It notifies the user of the sound direction using two vibro-motors placed on the user’s ears. EarVR helps DHH persons complete sound-based VR tasks in any VR application with 3D audio and a mute option for background music. Therefore, DHH persons can use all VR applications with 3D audio, not only those designed for them. Our user study shows that DHH participants were able to complete a simple VR task significantly faster with EarVR than without it, with completion times very close to those of participants without hearing problems. It also shows that DHH participants were able to finish a complex VR task with EarVR, while without it, they could not finish the task even once. Finally, our qualitative and quantitative evaluation indicates that DHH participants preferred EarVR and that it encouraged them to use VR technology more.”
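
The core mapping is from the nearest source’s azimuth in head space to two motor intensities. A minimal sketch assuming a y-up, +z-forward coordinate frame; the linear intensity curve and the example sources are made up, not EarVR’s actual drive signals.

```python
import numpy as np

def ear_vibration(sources, head_pos, head_forward):
    """Drive two vibro-motors from the azimuth of the nearest sound
    source relative to the head. Returns (left, right) in [0, 1]."""
    head_forward = head_forward / np.linalg.norm(head_forward)
    rel = np.asarray(sources, float) - head_pos
    nearest = rel[np.argmin(np.linalg.norm(rel, axis=1))]
    # Signed azimuth in the horizontal (x-z) plane: positive = on the right.
    az = np.arctan2(nearest[0] * head_forward[2] - nearest[2] * head_forward[0],
                    nearest[0] * head_forward[0] + nearest[2] * head_forward[2])
    right = np.clip(0.5 + az / np.pi, 0.0, 1.0)
    return 1.0 - right, right

# Nearest source is front-right, so the right motor vibrates harder.
left, right = ear_vibration([[2, 0, 1], [-4, 0, 3]],
                            np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(round(left, 2), round(right, 2))
```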

Outdoor Sound Propagation Based on Adaptive FDTD-PE

Shiguang Liu (Tianjin University), Jin Liu (Tianjin University)

Conference

Abstract: “We propose an adaptive FDTD-PE method to simulate sound propagation in 3D scenes, taking into account atmospheric inhomogeneity and the ground effect. In the simulation, the ground is modeled as a porous medium with a certain thickness, and the scene is decomposed into a number of 2D planes. Furthermore, a novel encoding method processes sound pressure data through function fitting. Finally, an efficient sound rendering method based on this encoded representation performs auralization in the frequency domain. Experiments indicate that our method can realistically simulate outdoor sound propagation at considerably higher speed and with lower storage requirements.”

Session 28: Applications: Safety, Education, Architecture, Traffic Control

Thursday, March 26, 11:00 AM - 12:30 PM,
Track 1 (Great Room 1)

Exploring Eye Gaze Visualization Techniques for Identifying Distracted Students in Educational VR

Yitoshee Rahman (University of Louisiana at Lafayette, Lafayette, Louisiana, United States), Sarker Monojit Asish (University of Louisiana at Lafayette, Lafayette, Louisiana, United States), Nicholas P. Fisher (University of Louisiana at Lafayette, Lafayette, Louisiana, United States), Ethan Charles Bruce (University of Louisiana at Lafayette, Lafayette, Louisiana, United States), Arun K. Kulshreshth (University of Louisiana at Lafayette, Lafayette, Louisiana, United States), Christoph W. Borst (University of Louisiana at Lafayette, Lafayette, Louisiana, United States)

Conference

Abstract: “Virtual Reality (VR) headsets with embedded eye trackers could be used in VR-based education in which a live teacher guides a group of students. The eye tracking could enable better insights into students’ activities and behavior patterns. For real-time insight, a teacher’s VR environment can display student eye gaze. These visualizations would help identify students who are confused/distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher’s view, and we present a user study to compare these techniques.”

Presence, Mixed Reality, and Risk-Taking Behavior: A Study in Safety Interventions

Sogand Hasanzadeh (Virginia Tech, United States), Nicholas Polys (Virginia Tech, United States), Jesus M. de la Garza (Clemson University, United States)

Journal

Abstract: “Immersive environments have been successfully applied to a broad range of safety training in high-risk domains. However, very little research has used these systems to evaluate the risk-taking behavior of construction workers. In this study, we investigated the feasibility and usefulness of providing passive haptics in a mixed-reality environment to capture the risk-taking behavior of workers, identify at-risk workers, and propose injury-prevention interventions to counteract excessive risk-taking and risk-compensatory behavior. Within a mixed-reality environment in a CAVE-like display system, our subjects installed shingles on a (physical) sloped roof of a (virtual) two-story residential building on a morning in a suburban area. Through this controlled, within-subject experimental design, we exposed each subject to three experimental conditions by manipulating the level of safety intervention. Workers’ subjective reports, physiological signals, psychophysical responses, and reactionary behaviors were then considered as promising measures of Presence. The results showed that our mixed-reality environment was a suitable platform for triggering behavioral changes under different experimental conditions and for evaluating the risk perception and risk-taking behavior of workers in a risk-free setting. These results demonstrated the value of immersive technology to investigate natural human factors.”

Health and Safety of VR Use by Children in an Educational Use Case

Robert Rauschenberger (Phoenix User Research Center, Exponent, Inc.), Brandon Barakat (Phoenix User Research Center, Exponent, Inc.)

Conference

Abstract: “The present study examined potential health and safety effects of short-term virtual reality (VR) use by children in an educational use case scenario. Thirty child participants (ages 10-12 years) used VR for 30 minutes daily across five consecutive days. A variety of optometric, psychophysical, and self-reported comfort measures were collected. There was no empirical evidence that short-term use of VR in an educational use setting by children ages 10 to 12 years resulted in any adverse visual, spatial representational, or balance aftereffects, or that it caused undue nausea, oculomotor discomfort, or disorientation.”

Design and Evaluation of a Tool to Support Air Traffic Control with 2D and 3D Visualizations

Gernot Rottermanner (St. Poelten University of Applied Sciences, St. Poelten, Austria), Victor Adriel de Jesus Oliveira (St. Poelten University of Applied Sciences, St. Poelten, Austria), Patrik Lechner (St. Poelten University of Applied Sciences, St. Poelten, Austria), Philipp Graf (St. Poelten University of Applied Sciences, St. Poelten, Austria), Mylene Kreiger (St. Poelten University of Applied Sciences, St. Poelten, Austria), Markus Wagner (St. Poelten University of Applied Sciences, St. Poelten, Austria), Michael Iber (St. Poelten University of Applied Sciences, St. Poelten, Austria), Carl-Herbert Rokitansky (University of Salzburg, Computer Sciences Institute, Aerospace Research, Salzburg, Austria), Kurt Eschbacher (University of Salzburg, Computer Sciences Institute, Aerospace Research, Salzburg, Austria), Volker Grantz (Frequentis AG, Vienna, Austria), Volker Settgast (Fraunhofer Austria Research GmbH, Vienna, Austria), Peter Judmaier (St. Poelten University of Applied Sciences, St. Poelten, Austria)

Conference

Abstract: “Air traffic control officers (ATCOs) are specialized workers responsible for monitoring and guiding airplanes in their assigned airspace. The task is highly visual and mainly supported by 2D visualizations. In this paper, we designed and assessed an application for visualizing air traffic in both orthographic (2D) and perspective (3D) views. A user study was then performed to compare these two types of representation in terms of situation awareness, workload, performance, and user acceptance. Results show that the 3D view yielded both higher situation awareness and lower workload than the 2D view condition. However, this performance does not match the ATCOs’ opinion of the 3D representation.”

Learning in the Field: Comparison of Desktop, Immersive Virtual Reality, and Actual Field Trips for Place-Based STEM Education

Jiayan Zhao (The Pennsylvania State University, USA), Peter LaFemina (The Pennsylvania State University, USA), Julia Carr (The Pennsylvania State University, USA), Pejman Sajjadi (The Pennsylvania State University, USA), Jan Oliver Wallgrün (The Pennsylvania State University, USA), Alexander Klippel (The Pennsylvania State University, USA)

Conference

Abstract: “With immersive virtual reality (iVR) entering the mainstream, virtual field trips (VFTs) are increasingly being considered as an effective form of learning in STEM disciplines such as geosciences. However, little research has investigated the implications of VFTs in place-based STEM education. We report on a study that divided an introductory geoscience course into three groups with the first two groups experiencing a VFT either on desktop or in iVR, while the third group went on an actual field trip. Our findings demonstrate positive learning effects of VFTs and provide evidence that geology VFTs need not be limited to iVR setups.”

Session 29: AR - Perception

Thursday, March 26, 11:00 AM - 12:30 PM,
Track 3 (Studio 1)

ARCHIE: A User-Focused Framework for Testing Augmented Reality Applications in the Wild

Sarah Lehman (Temple University), Haibin Ling (Stony Brook University), Chiu Tan (Temple University)

Conference

Abstract: “We present ARCHIE, a framework for testing augmented reality applications in the wild. ARCHIE collects user feedback and system state data in situ to help developers identify and debug issues important to testers. It also supports testing of multiple application versions (“profiles”) in a single evaluation, prioritizing those versions which the tester finds more appealing. We implemented four test case applications and used them to examine ARCHIE’s performance overhead and context-switching cost. We demonstrate that ARCHIE introduces no significant overhead for AR applications, and at most 2% processing overhead when switching among large groups of testable profiles.”

The Plausibility Paradox For Scaled-Down Users In Virtual Environments

Matti Pouke (University of Oulu, Finland), Katherine J. Mimnaugh (University of Oulu, Finland), Timo Ojala (University of Oulu, Finland), Steven M. LaValle (University of Oulu, Finland)

Conference

Abstract: “This paper identifies a new phenomenon: when users interact with simulated objects in a virtual environment in which they are much smaller than usual, there is a mismatch between the physics users expect and the physics that is actually correct at that scale. We investigated perceived realism in a virtual reality experience in which the user has been scaled down by a factor of ten. Forty-four subjects performed an interaction task with objects under two physics simulation conditions. In one condition, the objects behaved accurately according to physics that would be correct at that reduced scale. In the other condition, the objects behaved as if no scaling had occurred. We found that a significant majority of users preferred the latter condition.”
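
The mismatch can be illustrated with a back-of-the-envelope fall-time calculation. The sketch below is our illustration of the general scaling argument, not code from the study; the 0.9 m drop height and the exact form of the “familiar” condition are example assumptions.

```python
import math

g = 9.81   # m/s^2
s = 0.1    # user scaled down to 1/10 size
h = 0.9    # drop height in metres: about 5 body-heights for a
           # user whose scaled height is ~0.18 m

# Physically correct at the reduced scale: gravity is unchanged, so
# the object falls in real-world time and covers many body-heights
# per second, which looks very fast to the tiny observer.
t_true = math.sqrt(2 * h / g)             # ~0.43 s

# "As if no scaling occurred": dynamics are slowed so objects cover
# body-heights per second at the familiar rate, which amounts to
# scaling effective gravity by the scale factor.
t_familiar = math.sqrt(2 * h / (g * s))   # ~1.35 s

print(f"correct at 1:10 scale: {t_true:.2f} s")
print(f"familiar-looking:      {t_familiar:.2f} s")
```

The roughly 3.2x difference in fall time (a factor of sqrt(1/s)) is one way to see why the physically correct condition can read as implausibly fast to a scaled-down observer, consistent with most users preferring the unscaled behavior.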

The Role of Viewing Distance and Feedback on Affordance Judgments in Augmented Reality

Holly Gagnon (University of Utah, USA), Dun Na (Vanderbilt University, USA), Keith Heiner (University of Utah, USA), Jeanine Stefanucci (University of Utah, USA), Sarah Creem-Regehr (University of Utah, USA), Bobby Bodenheimer (Vanderbilt University, USA)

Conference

Abstract: “The effectiveness of Augmented Reality (AR) increases when viewers perceive that they can act on virtual objects as if they are real. We examined the effects of viewing distance (correlated with the virtual features seen in the field of view) and verbal feedback on observers’ judgments of passing through an AR aperture using the Microsoft HoloLens. Passing-through judgments were closer to actual shoulder width when the aperture was viewed at a near distance compared to a farther viewing point. Verbal feedback reduced error over trials at the farther distance. The results have implications for ways to improve the accuracy of affordance judgments in AR.”

Glanceable AR: Evaluating Information Access Methods for Head-Worn Augmented Reality

Feiyu Lu (Virginia Tech, United States), Shakiba Davari (Virginia Tech, United States), Lee Lisle (Virginia Tech, United States), Yuan Li (Virginia Tech, United States), Doug Bowman (Virginia Tech, United States)

Conference

Abstract: “Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. In this research, we propose Glanceable AR, an interaction paradigm for accessing information in AR HWDs. In Glanceable AR, secondary information resides at the periphery of vision to stay unobtrusive and can be accessed with a quick glance whenever needed. We propose two novel hands-free interfaces under this paradigm, using head rotation or eye-tracked gaze to access information. We evaluated them in two dual-task scenarios along with a baseline HUD technique.”
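
As a generic illustration of the glance mechanic (not the paper’s implementation), the sketch below reveals a peripheral panel only while head or gaze yaw has rotated past a trigger angle, fading it out otherwise. The threshold and fade rate are invented example values.

```python
from dataclasses import dataclass

@dataclass
class GlanceDetector:
    """Hypothetical glance mechanic: a panel parked in the periphery
    becomes visible only while the user turns far enough toward it."""
    trigger_deg: float = 25.0   # yaw needed before the panel reveals
    fade_rate: float = 6.0      # opacity change per second
    opacity: float = 0.0        # current panel opacity in [0, 1]

    def update(self, yaw_deg: float, dt: float) -> float:
        # While glancing toward the panel, fade in; otherwise fade out
        # so the primary-task view stays unobstructed.
        target = 1.0 if yaw_deg >= self.trigger_deg else 0.0
        step = self.fade_rate * dt
        delta = step if target > self.opacity else -step
        self.opacity = min(max(self.opacity + delta, 0.0), 1.0)
        return self.opacity
```

Calling `update(head_yaw, dt)` once per frame drives the head-rotation variant; feeding it eye-tracked gaze yaw instead gives the gaze-based variant with the same logic.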

Influence of Perspective on Dynamic Tasks in Virtual Reality

Naval Bhandari (University of Bath, UK), Eamonn O’Neill (University of Bath, UK)

Conference

Abstract: “Users are increasingly able to move around and perform tasks in virtual environments (VEs). Such movements and tasks are typically represented in a VE using either a first-person perspective (1PP) or a third-person perspective (3PP). In Virtual Reality (VR), 1PP is almost universally used. 3PP can be represented as either egocentric or allocentric. However, there is little empirical evidence about which view may be better suited to dynamic tasks in particular. This paper compares the use of 1PP, egocentric 3PP and allocentric 3PP for dynamic tasks in VR. Our results indicate that 1PP provides the best spatial perception and performance across several dynamic tasks. This advantage is less pronounced as the task becomes more dynamic.”
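
The three perspectives compared in the study can be summarized as three camera rigs. The sketch below is a generic illustration of that distinction, not the study’s apparatus; the eye height and follow offset are arbitrary example values.

```python
import numpy as np

def yaw_matrix(yaw):
    # Rotation about the vertical (y) axis.
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def camera_position(mode, avatar_pos, avatar_yaw):
    """Example camera placement per perspective mode: 1PP sits at the
    avatar's eyes; egocentric 3PP hangs behind the avatar and rotates
    with its heading; allocentric 3PP keeps a fixed, world-aligned
    offset regardless of heading."""
    eyes = avatar_pos + np.array([0.0, 1.7, 0.0])   # example eye height
    behind = np.array([0.0, 0.5, -2.5])             # example follow offset
    if mode == "1PP":
        return eyes
    if mode == "3PP_egocentric":
        return eyes + yaw_matrix(avatar_yaw) @ behind
    if mode == "3PP_allocentric":
        return eyes + behind
    raise ValueError(f"unknown mode: {mode}")

# e.g. camera_position("3PP_egocentric", np.zeros(3), np.pi / 2)
```

The egocentric rig swings around as the avatar turns, preserving a body-relative frame, while the allocentric rig holds a world-fixed viewpoint; the difference between these reference frames is what the study probes for dynamic tasks.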