IEEE VR

March 22nd - 26th

TVCG Papers

TVCG Invited Papers

Conference Papers


Teleporting through virtual environments: Effects of path scale and environment scale on spatial updating

Jonathan Kelly (Iowa State University), Alec Ostrander (Iowa State University), Alex Lim (Iowa State University), Lucia Cherep (Iowa State University), Stephen B. Gilbert (Iowa State University)

TVCG

Abstract: “Virtual reality systems typically allow users to physically walk and turn, but virtual environments (VEs) often exceed the available walking space. Teleporting has become a common user interface, whereby the user aims a laser pointer to indicate the desired location, and sometimes orientation, in the VE before being transported without self-motion cues. This study evaluated the influence of rotational self-motion cues on spatial updating performance when teleporting, and whether the importance of rotational cues varies across movement scale and environment scale. Participants performed a triangle completion task by teleporting along two outbound path legs before pointing to the unmarked path origin. Rotational self-motion reduced overall errors across all levels of movement scale and environment scale, though it also introduced a slight bias toward under-rotation. The importance of rotational self-motion was exaggerated when navigating large triangles and when the surrounding environment was large. Navigating a large triangle within a small VE brought participants closer to surrounding landmarks and boundaries, which led to greater reliance on piloting (landmark-based navigation) and therefore reduced - but did not eliminate - the impact of rotational self-motion cues. These results indicate that rotational self-motion cues are important when teleporting, and that navigation can be improved by enabling piloting.”

Weakly Supervised Adversarial Learning for 3D Human Pose Estimation from Point Clouds

Zihao Zhang (Institute of Computing Technology, Chinese Academy of Sciences), Lei Hu (Institute of Computing Technology, Chinese Academy of Sciences), Xiaoming Deng (Institute of Software, Chinese Academy of Sciences), Shihong Xia (Institute of Computing Technology, Chinese Academy of Sciences)

TVCG

Abstract: “In this work, we study the point cloud-based 3D human pose estimation problem. Previous methods tried to solve this problem by treating the point clouds either as 2D depth maps or as 3D point clouds. However, directly using a convolutional neural network on 2D depth maps may cause the loss of 3D spatial information, while processing raw 3D point clouds is known to be time-consuming. To solve this problem, instead of solely relying on 3D point clouds or 2D depth maps, we find a way for 3D human pose estimation by combining both 2D pose regression methods and 3D deep learning methods. Given the estimated 2D pose, we use a hierarchical PointNet to perform the 3D pose regression. It is relatively difficult to collect enough 3D labeled data for training a robust model. Therefore, we train the regression network in a weakly supervised adversarial learning manner using both fully-labeled data and weakly-labeled data. Thanks to adopting both 2D and 3D information, our method can precisely and efficiently estimate 3D human pose from a single depth map/point cloud. Experiments on the ITOP and Human3.6M datasets show that our method outperforms state-of-the-art methods.”

Getting There Together: Group Navigation in Distributed Virtual Environments

Tim Weissker (Bauhaus-Universität Weimar), Pauline Bimberg (Bauhaus-Universität Weimar), Bernd Froehlich (Bauhaus-Universität Weimar)

TVCG

Abstract: “We analyzed the design space of group navigation tasks in distributed virtual environments and present a framework consisting of techniques to form groups, distribute responsibilities, navigate together, and eventually split up again. To improve joint navigation, our work focused on an extension of the Multi-Ray Jumping technique that allows adjusting the spatial formation of two distributed users as part of the target specification process. The results of a quantitative user study showed that these adjustments lead to significant improvements in joint two-user travel, which is evidenced by more efficient travel sequences and lower task loads imposed on the navigator and the passenger. In a qualitative expert review involving all four stages of group navigation, we confirmed the effective and efficient use of our technique in a more realistic use-case scenario and concluded that remote collaboration benefits from fluent transitions between individual and group navigation.”

Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display

Brooke Krajancich (Stanford University), Nitish Padmanaban (Stanford University), Gordon Wetzstein (Stanford University)

TVCG

Abstract: “Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners – an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.”
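
A toy numpy sketch of the multiplicative, time-multiplexed idea behind this approach (a deliberately simplified image-formation model, not the paper’s factorization algorithm): each mirror either passes the real scene or reflects the frame’s LED color, the eye averages the N frames, and binary patterns and LED colors are updated alternately.

    import numpy as np

    def factorize(target, real, n_frames=4, iters=20, rng=np.random.default_rng(0)):
        """target, real: (H, W, 3) images in 0..1. Returns binary patterns,
        LED colors, and the reconstruction their average produces."""
        H, W, _ = target.shape
        P = rng.integers(0, 2, size=(n_frames, H, W, 1)).astype(float)
        C = rng.random((n_frames, 1, 1, 3))
        for _ in range(iters):
            for n in range(n_frames):
                # Residual this frame should explain, given the other frames.
                others = sum(P[m] * C[m] + (1 - P[m]) * real
                             for m in range(n_frames) if m != n)
                resid = n_frames * target - others
                # Flip each mirror toward whichever option matches the residual.
                err_on = ((C[n] - resid) ** 2).sum(-1, keepdims=True)
                err_off = ((real - resid) ** 2).sum(-1, keepdims=True)
                P[n] = (err_on < err_off).astype(float)
                # Re-fit this frame's LED color to the pixels it turned on.
                mask = P[n][..., 0].astype(bool)
                if mask.any():
                    C[n] = resid[mask].mean(0).clip(0, 1).reshape(1, 1, 3)
        recon = sum(P[n] * C[n] + (1 - P[n]) * real
                    for n in range(n_frames)) / n_frames
        return P, C, recon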

The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars

Brendan John (University of Florida), Sanjeev Koppal (University of Florida), Sophie Joerg (Clemson University), Eakta Jain (University of Florida)

TVCG

Abstract: “The gaze behavior of virtual avatars is critical to social presence and perceived eye contact during social interactions in Virtual Reality. Virtual Reality headsets are being designed with integrated eye tracking to enable compelling virtual social interactions. This paper shows that the near infra-red cameras used in eye tracking capture eye images that contain iris patterns of the user. Because iris patterns are a gold standard biometric, the current technology places the user’s biometric identity at risk. Our first contribution is an optical defocus based hardware solution to remove the iris biometric from the stream of eye tracking images. We characterize the performance of this solution with different internal parameters. Our second contribution is a psychophysical experiment with a same-different task that investigates the sensitivity of users to a virtual avatar’s eye movements when this solution is applied. By deriving detection threshold values, our findings provide a range of defocus parameters where the change in eye movements would go unnoticed in a conversational setting. Our third contribution is a perceptual study to determine the impact of defocus parameters on the perceived eye contact, attentiveness, naturalness, and truthfulness of the avatar. Thus, if a user wishes to protect their iris biometric, our approach provides a solution that balances biometric protection while preventing their conversation partner from perceiving a difference in the user’s virtual avatar. This work is the first to develop secure eye tracking configurations for VR/AR/XR applications and motivates future work in the area.”

3D Hand Tracking in the Presence of Excessive Motion Blur

Gabyong Park (KAIST), Antonis Argyros (FORTH), Juyoung Lee (KAIST), Woontack Woo (KAIST)

TVCG

Abstract: “We present a sensor-fusion method that exploits a depth camera and a gyroscope to track the articulation of a hand in the presence of excessive motion blur. In case of slow and smooth hand motions, the existing methods estimate the hand pose fairly accurately and robustly, despite challenges due to the high dimensionality of the problem, self-occlusions, uniform appearance of hand parts, etc. However, the accuracy of hand pose estimation drops considerably for fast-moving hands because the depth image is severely distorted due to motion blur. Moreover, when hands move fast, the actual hand pose is far from the one estimated in the previous frame; therefore, the assumption of temporal continuity on which tracking methods rely is not valid. In this paper, we track fast-moving hands with the combination of a gyroscope and a depth camera. As a first step, we calibrate a depth camera and a gyroscope attached to a hand so as to identify their time and pose offsets. Following that, we fuse the rotation information of the calibrated gyroscope with model-based hierarchical particle filter tracking. A series of quantitative and qualitative experiments demonstrate that the proposed method performs more accurately and robustly in the presence of motion blur, when compared to state-of-the-art algorithms, especially in the case of very fast hand rotations.”
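
A minimal sketch of the fusion idea under assumed interfaces (scipy rotations, one orientation hypothesis per particle; not the authors’ implementation): the gyroscope’s measured angular velocity drives the particle filter’s prediction step, so hypotheses follow fast rotations instead of relying on temporal continuity alone.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def predict_particles(orientations, gyro_rate, dt, noise_deg=3.0,
                          rng=np.random.default_rng(0)):
        """orientations: list of scipy Rotations (one per particle);
        gyro_rate: measured angular velocity in rad/s (3-vector)."""
        delta = R.from_rotvec(gyro_rate * dt)        # rotation implied by the gyro
        noise = R.from_rotvec(np.deg2rad(noise_deg) *
                              rng.standard_normal((len(orientations), 3)))
        # Each hypothesis follows the measured rotation, plus process noise.
        return [delta * q * n for q, n in zip(orientations, noise)]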

DGaze: CNN-Based Gaze Prediction in Dynamic Scenes

Zhiming Hu (Peking University), Sheng Li (Peking University), Congyi Zhang (The University of Hong Kong), Kangrui Yi (Peking University), Guoping Wang (Peking University), Dinesh Manocha (University of Maryland)

TVCG

Abstract: “We conduct novel analyses of users’ gaze behaviors in dynamic virtual scenes and, based on our analyses, we present a novel CNN-based model called DGaze for gaze prediction in HMD-based applications. We first collect 43 users’ eye tracking data in 5 dynamic scenes under free-viewing conditions. Next, we perform statistical analysis of our data and observe that dynamic object positions, head rotation velocities, and salient regions are correlated with users’ gaze positions. Based on our analysis, we present a CNN-based model (DGaze) that combines object position sequence, head velocity sequence, and saliency features to predict users’ gaze positions. Our model can be applied to predict not only realtime gaze positions but also gaze positions in the near future and can achieve better performance than the prior method. In terms of realtime prediction, DGaze achieves a 22.0% improvement over the prior method in dynamic scenes and obtains an improvement of 9.5% in static scenes, based on using the angular distance as the evaluation metric. We also propose a variant of our model called DGaze_ET that can be used to predict future gaze positions with higher precision by combining accurate past gaze data gathered using an eye tracker. We further analyze our CNN architecture and verify the effectiveness of each component in our model. We apply DGaze to gaze-contingent rendering and a game, and also present the evaluation results from a user study.”
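
For reference, the angular distance used as the evaluation metric above, assuming gaze positions are expressed as unit direction vectors from the eye (a standard definition, not code from the paper):

    import numpy as np

    def angular_distance_deg(d1, d2):
        """Angle in degrees between two gaze directions."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        return np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))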

Superhuman Hearing - Virtual Prototyping of Artificial Hearing: a Case Study on Interactions and Acoustic Beamforming

Michele Geronazzo (Aalborg University), Luis Vieira (Khora VR), Niels Christian Nilsson (Aalborg University Copenhagen), Jesper Udesen (Jabra GN), Stefania Serafin (Aalborg University)

TVCG

Abstract: “Directivity and gain in microphone array systems for hearing aids or hearable devices allow users to acoustically enhance the information of a source of interest. This source is usually positioned directly in front. This feature is called acoustic beamforming. The current study aimed to improve users’ interactions with beamforming via a virtual prototyping approach in immersive virtual environments (VEs). Eighteen participants took part in experimental sessions composed of a calibration procedure and a selective auditory attention voice-pairing task. Eight concurrent speakers were placed in an anechoic environment in two virtual reality (VR) scenarios. The scenarios were a purely virtual scenario and a realistic 360 degrees audio-visual recording. Participants were asked to find an individual optimal parameterization for three different virtual beamformers: (i) head-guided, (ii) eye gaze-guided, and (iii) a novel interaction technique called dual beamformer, where head-guided is combined with an additional hand-guided beamformer. None of the participants were able to complete the task without a virtual beamformer (i.e., in the normal hearing condition) due to the high complexity introduced by the design. However, participants were able to correctly pair all speakers using all three proposed interaction metaphors. Providing superhuman hearing abilities in the form of an acoustic beamformer guided by head movements resulted in statistically significant improvements in terms of pairing time, suggesting the task-relevance of interacting with multiple points of interest.”
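
For context, a textbook delay-and-sum beamformer, the classic form of the acoustic beamforming discussed above (not the algorithm of any particular hearing device): microphone signals are phase-shifted so that sound arriving from the steering direction adds coherently.

    import numpy as np

    def delay_and_sum(signals, mic_positions, steer_dir, fs, c=343.0):
        """signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
        steer_dir: unit vector pointing toward the source; fs: sample rate."""
        delays = mic_positions @ steer_dir / c         # arrival lead per mic, s
        delays -= delays.min()
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        spectra = np.fft.rfft(signals, axis=1)
        # Align arrivals from the steering direction, then average the mics.
        phased = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
        return np.fft.irfft(phased.mean(axis=0), n)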

Understanding The Effects of Depth Information in Shared Gaze Augmented Reality Environments

Austin Erickson (University of Central Florida), Nahal Norouzi (University of Central Florida), Kangsoo Kim (University of Central Florida), Joseph LaViola (University of Central Florida), Gerd Bruder (University of Central Florida), Greg Welch (University of Central Florida)

TVCG

Abstract: “Augmented reality (AR) setups have the capability of facilitating collaboration for collocated and remote users by augmenting and sharing their virtual points of interest in each user’s physical space. With gaze being an important communication cue during human interaction, augmenting the physical space with each user’s focus of attention through different visualizations such as ray, frustum, and cursor has been studied in the past to enhance the quality of interaction. Understanding each user’s focus of attention is susceptible to error since it has to rely on both the user’s gaze and depth information of the target to compute the endpoint of the user’s gaze. Such information is computed by eye trackers and depth cameras respectively, which introduces two sources of errors into the shared gaze experience. Depending on the amount of error and type of visualization, the augmented gaze can negatively mislead a user’s attention during their collaboration instead of enhancing the interaction. In this paper, we present a human-subjects study to understand the effects of eye tracking errors, depth camera accuracy errors, and gaze visualization on users’ performance and subjective experience during a collaborative task with a virtual human partner, where users were asked to identify a target within a dynamic crowd. We simulate seven different levels of eye tracking error as a horizontal offset to the intended gaze point and seven different levels of depth accuracy errors that make the gaze point appear in front of or behind the intended gaze point. In addition, we examine four different visualization styles for shared gaze information, including an extended ray that passes through the target and extends to a fixed length, a truncated ray that halts upon reaching the target gaze point, a cursor visualization that appears at the target gaze point, as well as a combination of both cursor and truncated ray display modes.”
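
A small sketch of how the two simulated error sources could be injected (names and frame conventions are illustrative, not the study’s code): a rotation about the vertical axis models the horizontal eye-tracking offset, and a signed offset along the gaze ray models depth errors that place the point in front of or behind the target.

    import numpy as np

    def perturbed_gaze_point(eye_pos, gaze_dir, true_depth,
                             horiz_offset_deg=0.0, depth_error_m=0.0):
        """Returns the 3D endpoint of the (perturbed) shared gaze ray."""
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        a = np.radians(horiz_offset_deg)               # eye-tracker error
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],   # yaw about world up
                        [0.0, 1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        d = rot @ gaze_dir
        return eye_pos + d * (true_depth + depth_error_m)  # depth-camera error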

Mind the Gap: The Underrepresentation of Female Participants and Authors in Virtual Reality Research

Tabitha C. Peck (Davidson College), Laura E. Sockol (Davidson College), Sarah M Hancock (Davidson College)

TVCG

Abstract: “A common goal of human-subject experiments in virtual reality (VR) research is evaluating VR hardware and software for use by the general public. A core principle of human-subject research is that the sample included in a given study should be representative of the target population; otherwise, the conclusions drawn from the findings may be biased and may not generalize to the population of interest. In order to assess whether characteristics of participants in VR research are representative of the general public, we investigated participant demographic characteristics from human-subject experiments in the Proceedings of the IEEE Virtual Reality Conferences from 2015-2019. We also assessed the representation of female authors. In the 325 relevant papers, which presented 365 human-participant experiments, we found evidence of significant underrepresentation of women as both participants and authors. To investigate whether this underrepresentation may bias researchers’ findings, we then conducted a meta-analysis and meta-regression to assess whether demographic characteristics of study participants were associated with a common outcome evaluated in VR research: the change in simulator sickness following head-mounted display VR exposure. As expected, participants in VR studies using HMDs experienced small but significant increases in simulator sickness. However, across the included studies, the change in simulator sickness was systematically associated with the proportion of female participants. We discuss the negative implications of conducting experiments on non-representative samples and provide methodological recommendations for mitigating bias in future VR research.”

A Steering Algorithm for Redirected Walking Using Reinforcement Learning

Ryan R Strauss (Davidson College), Raghuram Ramanujan (Davidson College), Andrew Becker (Bank of America), Tabitha C. Peck (Davidson College)

TVCG

Abstract: “Redirected Walking (RDW) steering algorithms have traditionally relied on human-engineered logic. However, recent advances in reinforcement learning (RL) have produced systems that surpass human performance on a variety of control tasks. This paper investigates the potential of using RL to develop a novel reactive steering algorithm for RDW. Our approach uses RL to train a deep neural network that directly prescribes the rotation, translation, and curvature gains to transform a virtual environment given a user’s position and orientation in the tracked space. We compare our learned algorithm to steer-to-center using simulated and real paths. We found that our algorithm outperforms steer-to-center on simulated paths, and found no significant difference on distance traveled on real paths. We demonstrate that when modeled as a continuous control problem, RDW is a suitable domain for RL, and moving forward, our general framework provides a promising path towards an optimal RDW steering algorithm.”
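
For readers unfamiliar with RDW gains, this is how the three outputs of such a reactive policy are typically applied each frame (a generic sketch, not the authors’ network or simulation):

    import numpy as np

    def redirect_step(d_pos, d_yaw, redirection, g_t, g_r, g_c):
        """d_pos: real translation this frame (2D, meters); d_yaw: real head
        rotation (rad); redirection: accumulated injected rotation (rad).
        Returns the virtual-space step and the updated redirection."""
        dist = np.linalg.norm(d_pos)
        # Rotation and curvature gains inject extra virtual rotation.
        redirection += (g_r - 1.0) * d_yaw + g_c * dist
        c, s = np.cos(redirection), np.sin(redirection)
        rot = np.array([[c, -s], [s, c]])
        return g_t * (rot @ d_pos), redirection   # translation gain scales the step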

The Impact of a Self-Avatar, Hand Collocation, and Hand Proximity on Embodiment and Stroop Interference

Tabitha C. Peck (Davidson College), Altan Tutar (Davidson College)

TVCG

Abstract: “Understanding the effects of hand proximity to objects and tasks is critical for hand-held and near-hand objects. Even though self-avatars have been shown to be beneficial for various tasks in virtual environments, little research has investigated the effect of avatar hand proximity on working memory. This paper presents a between-participants user study investigating the effects of self-avatars and physical hand proximity on a common working memory task, the Stroop interference task. Results show that participants felt embodied when a self-avatar was in the scene, and that the subjective level of embodiment decreased when a participant’s hands were not collocated with the avatar’s hands. Furthermore, a participant’s physical hand placement was significantly related to Stroop interference: proximal hands produced a significant increase in accuracy compared to non-proximal hands. Surprisingly, Stroop interference was not mediated by the existence of a self-avatar or level of embodiment.”

Eye-dominance-guided Foveated Rendering

Xiaoxu Meng (University of Maryland College Park), Ruofei Du (Google), Amitabh Varshney (University of Maryland College Park)

TVCG

Abstract: “Optimizing rendering performance is critical for a wide variety of virtual reality (VR) applications. Foveated rendering is emerging as an indispensable technique for reconciling interactive frame rates with ever-higher head-mounted display resolutions. Here, we present a simple yet effective technique for further reducing the cost of foveated rendering by leveraging ocular dominance – the tendency of the human visual system to prefer scene perception from one eye over the other. Our new approach, eye-dominance-guided foveated rendering (EFR), renders the scene at a lower foveation level (higher detail) for the dominant eye than the non-dominant eye. Compared with traditional foveated rendering, EFR can be expected to provide superior rendering performance while preserving the same level of perceived visual quality.”
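
The core of EFR is a per-eye level-of-detail decision; a minimal sketch, with illustrative parameters rather than the paper’s calibrated values:

    def foveation_level(eye, dominant_eye, base_level):
        """Return the foveation level to render an eye with (higher = coarser)."""
        NON_DOMINANT_PENALTY = 1          # e.g. one level less detail
        if eye == dominant_eye:
            return base_level             # dominant eye keeps full detail
        return base_level + NON_DOMINANT_PENALTY

    # With a right-dominant user, the left eye is rendered one level coarser:
    assert foveation_level("left", "right", base_level=2) == 3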

ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays

Joshua Ratcliff (Intel Labs), Alexey Supikov (Intel Labs), Santiago Eloy Alfaro (Intel Labs), Ronald Azuma (Intel Labs)

TVCG

Abstract: “Today’s Virtual Reality (VR) displays are dramatically better than the head-worn displays offered 30 years ago, but today’s displays remain nearly as bulky as their predecessors in the 1980s. Also, almost all consumer VR displays today provide 90-110 degrees field of view (FOV), which is much smaller than the human visual system’s FOV, which extends beyond 180 degrees horizontally. In this paper, we propose ThinVR as a new approach to simultaneously address the bulk and limited FOV of head-worn VR displays. ThinVR enables a head-worn VR display to provide 180 degrees horizontal FOV in a thin, compact form factor. Our approach is to replace traditional large optics with a curved microlens array of custom-designed heterogeneous lenslets and place these in front of a curved display. We found that heterogeneous optics were crucial to make this approach work, since over a wide FOV, many lenslets are viewed off the central axis. We developed a custom optimizer for designing custom heterogeneous lenslets to ensure a sufficient eyebox while reducing distortions. The contribution includes an analysis of the design space for curved microlens arrays, implementation of physical prototypes, and an assessment of the image quality, eyebox, FOV, reduction in volume and pupil swim distortion. To our knowledge, this is the first work to demonstrate and analyze the potential for curved, heterogeneous microlens arrays to enable compact, wide FOV head-worn VR displays.”

Scene-Aware Audio Rendering via Deep Acoustic Analysis

Zhenyu Tang (University of Maryland), Nicholas J. Bryan (Adobe Research), Dingzeyu Li (Adobe Research), Timothy Richard Langlois (Adobe), Dinesh Manocha (University of Maryland)

TVCG

Abstract: “We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to generate similar sounding sources with virtual models. Given the captured audio and an approximate geometric model of a real-world room, we present a novel learning-based method to estimate its acoustic material properties. Our approach is based on deep neural networks that estimate the reverberation time and equalization of the room from recorded audio. These estimates are used to compute material properties related to room reverberation using a novel material optimization objective. We use the estimated acoustic material characteristics for audio rendering using interactive geometric sound propagation and highlight the performance on many real-world scenarios. We also perform a user study to evaluate the perceptual similarity between the recorded sounds and our rendered audio.”
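
The link between reverberation time and materials that the method builds on can be illustrated with Sabine’s formula, RT60 = 0.161 V / Σ S_i α_i. A minimal sketch that inverts it for a single uniform absorption coefficient (the paper’s material optimization is considerably more elaborate):

    def uniform_absorption(rt60_s, volume_m3, surface_areas_m2):
        """Solve Sabine's formula for one absorption coefficient shared by
        all surfaces, given an estimated RT60."""
        total_absorption = 0.161 * volume_m3 / rt60_s   # in sabins (m^2)
        return total_absorption / sum(surface_areas_m2)

    # e.g. a 5 x 4 x 3 m room (V = 60 m^3) with an estimated RT60 of 0.6 s:
    alpha = uniform_absorption(0.6, 60.0, [20.0, 20.0, 15.0, 15.0, 12.0, 12.0])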

Physically-inspired Deep Light Estimation from a Homogeneous-Material Object for Mixed Reality Lighting

Jinwoo Park (KAIST), Hunmin Park (KAIST), Sung-eui Yoon (KAIST), Woontack Woo (KAIST)

TVCG

Abstract: “In mixed reality (MR), augmenting virtual objects consistently with real-world illumination is one of the key factors that provide a realistic and immersive user experience. For this purpose, we propose a novel deep learning-based method to estimate high dynamic range (HDR) illumination from a single RGB image of a reference object. To obtain illumination of a current scene, previous approaches inserted a special camera in that scene, which may interfere with the user’s immersion, or they analyzed reflected radiances from a passive light probe with a specific type of material or a known shape. The proposed method does not require any additional gadgets or strong prior cues, and aims to predict illumination from a single image of an observed object with a wide range of homogeneous materials and shapes. To effectively solve this ill-posed inverse rendering problem, three sequential deep neural networks are employed based on a physically-inspired design. These networks perform end-to-end regression to gradually decrease dependency on the material and shape. To cover various conditions, the proposed networks are trained on a large synthetic dataset generated by physically-based rendering. Finally, the reconstructed HDR illumination enables realistic image-based lighting of virtual objects in MR. Experimental results demonstrate the effectiveness of this approach compared against state-of-the-art methods. The paper also suggests some interesting MR applications in indoor and outdoor scenes.”

Realtime Semantic 3D Perception for Immersive Augmented Reality

Lei Han (Hong Kong University of Science and Technology), Tian Zheng (Tsinghua University), Yinheng Zhu (Tsinghua University), Lan Xu (HKUST), Lu Fang (Tsinghua University)

TVCG

Abstract: “Semantic understanding of 3D environments is critical for both the unmanned system and the human involved virtual/augmented reality (VR/AR) immersive experience. Spatially-sparse convolution, taking advantage of the intrinsic sparsity of 3D point cloud data, makes high resolution 3D convolutional neural networks tractable with state-of-the-art results on 3D semantic segmentation problems. However, the exhaustive computation limits the practical usage of semantic 3D perception for VR/AR applications in portable devices. In this paper, we identify that the efficiency bottleneck lies in the unorganized memory access of the sparse convolution steps, i.e., the points are stored independently based on a predefined dictionary, which is inefficient due to the limited memory bandwidth of parallel computing devices (GPU). With the insight that points are continuous as 2D surfaces in 3D space, a chunk-based sparse convolution scheme is proposed to reuse neighboring points within each spatially organized chunk. An efficient multi-layer adaptive fusion module is further proposed for employing the spatial consistency cue of 3D data to further reduce the computational burden. Quantitative experiments on public datasets demonstrate that our approach works 11× faster than previous approaches with competitive accuracy. By implementing both semantic and geometric 3D reconstruction simultaneously on a portable tablet device, we demo a foundation platform for immersive AR applications.”
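
The memory-locality idea in spirit, as a small numpy sketch (chunk size and key layout are illustrative): bucket points into spatial chunks and reorder them so that each chunk is contiguous in memory before the convolution gathers its neighbors.

    import numpy as np

    def sort_points_into_chunks(points, chunk_size=0.5):
        """points: (N, 3). Returns the points reordered chunk-by-chunk,
        plus each point's linearized chunk id."""
        keys = np.floor(points / chunk_size).astype(np.int64)
        kmin = keys.min(axis=0)
        span = keys.max(axis=0) - kmin + 1
        # Linearize 3D chunk coordinates into one sortable key.
        flat = ((keys[:, 0] - kmin[0]) * span[1] + (keys[:, 1] - kmin[1])) \
               * span[2] + (keys[:, 2] - kmin[2])
        order = np.argsort(flat, kind="stable")
        return points[order], flat[order]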

FibAR: Embedding Optical Fibers in 3D Printed Objects for Active Markers in Dynamic Projection Mapping

Daiki Tone (Osaka University), Daisuke Iwai (Osaka University), Shinsaku Hiura (University of Hyogo), Kosuke Sato (Osaka University)

TVCG

Abstract: “This paper presents a novel active marker for dynamic projection mapping (PM) that emits a temporal blinking pattern of infrared (IR) light representing its ID. We used a multi-material three dimensional (3D) printer to fabricate a projection object with optical fibers that can guide IR light from LEDs attached on the bottom of the object. The aperture of an optical fiber is typically very small; thus, it is unnoticeable to human observers under projection and can be placed on a strongly curved part of a projection surface. In addition, the working range of our system can be larger than previous marker-based methods as the blinking patterns can theoretically be recognized by a camera placed at a wide range of distances from markers. We propose an automatic marker placement algorithm to spread multiple active markers over the surface of a projection object such that its pose can be robustly estimated using captured images from arbitrary directions. We also propose an optimization framework for determining the routes of the optical fibers in such a way that collisions of the fibers can be avoided while minimizing the loss of light intensity in the fibers. Through experiments conducted using three fabricated objects containing strongly curved surfaces, we confirmed that the proposed method can achieve accurate dynamic PMs in a significantly wide working range.”
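
The marker principle in miniature, with framing and synchronization assumed away: the LED blinks the marker’s ID as a bit pattern, and thresholding one fiber aperture’s brightness over consecutive camera frames recovers the ID.

    def encode_id(marker_id, n_bits=8):
        """Bit pattern the LED blinks, most significant bit first."""
        return [(marker_id >> i) & 1 for i in reversed(range(n_bits))]

    def decode_id(brightness, threshold=0.5):
        """Recover the ID from per-frame brightness of one fiber aperture."""
        out = 0
        for b in brightness:
            out = (out << 1) | (1 if b > threshold else 0)
        return out

    assert decode_id([0.9, 0.1, 0.8, 0.2, 0.1, 0.9, 0.9, 0.0]) == 0b10100110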

IlluminatedFocus: Vision Augmentation using Spatial Defocusing via Focal Sweep Eyeglasses and High-Speed Projector

Tatsuyuki Ueda (Osaka University), Daisuke Iwai (Osaka University), Takefumi Hiraki (Osaka University), Kosuke Sato (Osaka University)

TVCG

Abstract: “Aiming at realizing novel vision augmentation experiences, this paper proposes the IlluminatedFocus technique, which spatially defocuses real-world appearances regardless of the distance from the user’s eyes to observed real objects. With the proposed technique, a part of a real object in an image appears blurred, while the fine details of the other part at the same distance remain visible. We apply Electrically Focus-Tunable Lenses (ETL) as eyeglasses and a synchronized high-speed projector as illumination for a real scene. We periodically modulate the focal lengths of the glasses (focal sweep) at more than 60 Hz so that a wearer cannot perceive the modulation. A part of the scene to appear focused is illuminated by the projector when it is in focus of the user’s eyes, while another part to appear blurred is illuminated when it is out of the focus. As the basis of our spatial focus control, we build mathematical models to predict the range of distance from the ETL within which real objects become blurred on the retina of a user. Based on the blur range, we discuss a design guideline for effective illumination timing and focal sweep range. We also model the apparent size of a real scene altered by the focal length modulation. This leads to an undesirable visible seam between focused and blurred areas. We solve this unique problem by gradually blending the two areas. Finally, we demonstrate the feasibility of our proposal by implementing various vision augmentation applications.”
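
The kind of optical model the abstract refers to, in its textbook thin-lens form (the paper’s retinal-blur models are more detailed): the blur-disc diameter for an object at distance s while the focal sweep has the eye focused at distance s_f.

    def circle_of_confusion(s, s_f, focal_len, aperture):
        """All lengths in meters; returns the blur-disc diameter."""
        return aperture * focal_len * abs(s - s_f) / (s * (s_f - focal_len))

    # An object reads as 'blurred' once its blur disc exceeds a visibility
    # threshold; gating the projector so a region is lit only while it is
    # inside (or outside) the in-focus part of the sweep yields the effect.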

Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View

Rebecca Fribourg (Inria), Ferran Argelaguet Sanz (Inria), Anatole Lécuyer (Inria), Ludovic Hoyet (Inria)

TVCG

Abstract: “In Virtual Reality, a number of studies have been conducted to assess the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This paper aims to better understand the inter-relations among these three factors by conducting a subjective matching experiment. In the presented experiment (n=40), participants had to match a given “optimal” SoE avatar configuration (realistic avatar, full-body motion capture, first person point of view), starting from a “minimal” SoE configuration (minimal avatar, no control, third person point of view), by iteratively increasing the level of each factor. The choices of the participants provide insights about their preferences and perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks with the goal of covering a wide range of actions an avatar can do in a VE. The paper also describes a baseline experiment (n=20) which was used to define the number and order of the different levels for each factor, prior to the subjective matching experiment (e.g. different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show that point of view and control levels were consistently increased by users before appearance levels when it comes to enhancing the SoE. Second, several configurations were identified with an SoE equivalent to that felt in the optimal configuration, though these varied between the tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations which lead to a fulfilling SoE in a VE.”

Animals in Virtual Environments

Hemal Naik (Technical University of Munich), Renaud Bastien (Max Planck Institute of Animal Behavior), Nassir Navab (Technische Universität München), Iain Couzin (Max Planck Institute of Animal Behavior)

TVCG

Abstract: “The core idea in an XR (VR/MR/AR) application is to digitally stimulate one or more sensory organs (e.g. visual, auditory, and olfactory) of the user in an interactive way to achieve an immersive experience. Since the early 2000s, biologists have been using Virtual Environments (VE) to investigate the mechanisms of behavior in non-human animals including insects, fish, and mammals. VEs have become reliable tools for studying vision, cognition, and sensory-motor control in animals. In turn, the knowledge gained from studying such behaviors can be harnessed by researchers designing biologically inspired robots, smart sensors, and multi-agent artificial intelligence. VE for animals is becoming a widely used application of XR technology but such applications have not previously been reported in the technical literature related to XR. Biologists and computer scientists can benefit greatly from deepening interdisciplinary research in this emerging field and together we can develop new methods for conducting fundamental research in behavioral sciences and engineering. To support our argument, we present this review, which provides an overview of animal behavior experiments conducted in virtual environments.”

EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People

Mohammadreza Mirzaei (Vienna University of Technology), Peter Kán (Vienna University of Technology), Hannes Kaufmann (Vienna University of Technology)

TVCG

Abstract: “Virtual Reality (VR) has a great potential to improve skills of Deaf and Hard-of-Hearing (DHH) people. Most VR applications and devices are designed for persons without hearing problems. Therefore, DHH persons have many limitations when using VR. Adding special features in a VR environment, such as subtitles, or haptic devices will help them. Previously, it was necessary to design a special VR environment for DHH persons. We introduce and evaluate a new prototype called “EarVR” that can be mounted on any desktop or mobile VR Head-Mounted Display (HMD). EarVR analyzes 3D sounds in a VR environment and locates the direction of the sound source that is closest to a user. It notifies the user about the sound direction using two vibro-motors placed on the user’s ears. EarVR helps DHH persons to complete sound-based VR tasks in any VR application with 3D audio and a mute option for background music. Therefore, DHH persons can use all VR applications with 3D audio, not only those applications designed for them. Our user study shows that DHH participants were able to complete a simple VR task significantly faster with EarVR than without. The completion time of DHH participants was very close to participants without hearing problems. Also, it shows that DHH participants were able to finish a complex VR task with EarVR, while without it, they could not finish the task even once. Finally, our qualitative and quantitative evaluation among DHH participants indicates that they preferred to use EarVR and it encouraged them to use VR technology more.”
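
The routing logic in outline; the mapping below is a plausible guess for illustration, not the authors’ implementation: find the source nearest the head and split the vibration between the two ear motors according to its azimuth.

    import numpy as np

    def ear_vibration(source_positions, head_pos, head_right):
        """Returns (left, right) vibro-motor intensities in 0..1 for the
        sound source closest to the user's head."""
        nearest = min(source_positions, key=lambda p: np.linalg.norm(p - head_pos))
        to_src = nearest - head_pos
        to_src = to_src / np.linalg.norm(to_src)
        side = float(np.dot(to_src, head_right))   # -1 = hard left, +1 = hard right
        right = 0.5 * (side + 1.0)
        return 1.0 - right, right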

Pseudo-Haptic Display of Mass and Mass Distribution During Object Rotation in Virtual Reality

Run Yu (Virginia Tech), Doug Bowman (Virginia Tech)

TVCG

Abstract: “We propose and evaluate novel pseudo-haptic techniques to display mass and mass distribution for proxy-based object manipulation in virtual reality. These techniques are specifically designed to generate haptic effects during the object’s rotation. They rely on manipulating the mapping between visual cues of motion and kinesthetic cues of force to generate a sense of heaviness, which alters the perception of the object’s mass-related properties without changing the physical proxy. First we present a technique to display an object’s mass by scaling its rotational motion relative to its mass. A psycho-physical experiment demonstrates that this technique effectively generates correct perceptions of relative mass between two virtual objects. We then present two pseudo-haptic techniques designed to display an object’s mass distribution. One of them relies on manipulating the pivot point of rotation, while the other adjusts rotational motion based on the real-time dynamics of the moving object. An empirical study shows that both techniques can influence perception of mass distribution, with the second technique being significantly more effective.”
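
The first technique’s mapping in sketch form; the inverse-mass scaling here is an assumption for illustration, whereas the paper tunes the effect empirically: heavier objects rotate less for the same hand rotation, which reads as greater inertia.

    def displayed_rotation(hand_rotation_deg, mass_kg, reference_mass_kg=1.0):
        """Scale the virtual object's rotation relative to its mass."""
        gain = reference_mass_kg / mass_kg       # heavier => smaller gain
        return gain * hand_rotation_deg

    assert displayed_rotation(90.0, 2.0) == 45.0   # a 2 kg object lags the hand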

Immersive Process Model Exploration in Virtual Reality

André Zenner (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), Akhmajon Makhsadov (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), Sören Klingner (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), David Liebemann (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), Antonio Krüger (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus)

TVCG

Abstract: “In many professional domains, relevant processes are documented as abstract process models, such as event-driven process chains (EPCs). EPCs are traditionally visualized as 2D graphs and their size varies with the complexity of the process. While process modeling experts are used to interpreting complex 2D EPCs, in certain scenarios such as, for example, professional training or education, also novice users inexperienced in interpreting 2D EPC data are facing the challenge of learning and understanding complex process models. To communicate process knowledge in an effective yet motivating and interesting way, we propose a novel virtual reality (VR) interface for non-expert users. Our proposed system turns the exploration of arbitrarily complex EPCs into an interactive and multi-sensory VR experience. It automatically generates a virtual 3D environment from a process model and lets users explore processes through a combination of natural walking and teleportation. Our immersive interface leverages basic gamification in the form of a logical walkthrough mode to motivate users to interact with the virtual process. The generated user experience is entirely novel in the field of immersive data exploration and supported by a combination of visual, auditory, vibrotactile and passive haptic feedback. In a user study with N = 27 novice users, we evaluate the effect of our proposed system on process model understandability and user experience, while comparing it to a traditional 2D interface on a tablet device. The results indicate a tradeoff between efficiency and user interest as assessed by the UEQ novelty subscale, while no significant decrease in model understanding performance was found using the proposed VR interface. Our investigation highlights the potential of multi-sensory VR for less time-critical professional application domains, such as employee training, communication, education, and related scenarios focusing on user interest.”

Presence, Mixed Reality, and Risk-Taking Behavior: A Study in Safety Interventions

Sogand Hasanzadeh (Virginia Tech), Nicholas Polys (Virginia Tech), Jesus M. de la Garza (Clemson University)

TVCG

Abstract: “Immersive environments have been successfully applied to a broad range of safety training in high-risk domains. However, very little research has used these systems to evaluate the risk-taking behavior of construction workers. In this study, we investigated the feasibility and usefulness of providing passive haptics in a mixed-reality environment to capture the risk-taking behavior of workers, identify at-risk workers, and propose injury-prevention interventions to counteract excessive risk-taking and risk-compensatory behavior. Within a mixed-reality environment in a CAVE-like display system, our subjects installed shingles on a (physical) sloped roof of a (virtual) two-story residential building on a morning in a suburban area. Through this controlled, within-subject experimental design, we exposed each subject to three experimental conditions by manipulating the level of safety intervention. Workers’ subjective reports, physiological signals, psychophysical responses, and reactionary behaviors were then considered as promising measures of Presence. The results showed that our mixed-reality environment was a suitable platform for triggering behavioral changes under different experimental conditions and for evaluating the risk perception and risk-taking behavior of workers in a risk-free setting. These results demonstrated the value of immersive technology to investigate natural human factors.”

Toward Standardized Classification of Foveated Displays

Josef Spjut (NVIDIA), Ben Boudaoud (NVIDIA), Jonghyun Kim (NVIDIA), Trey Greer (NVIDIA), Rachel Albert (NVIDIA), Michael Stengel (NVIDIA), Kaan Aksit (NVIDIA), David Luebke (NVIDIA)

TVCG

Abstract: “Emergent in the field of head mounted display design is a desire to leverage the limitations of the human visual system to reduce the computation, communication, and display workload in power and form-factor constrained systems. Fundamental to this reduced workload is the ability to match display resolution to the acuity of the human visual system, along with a resulting need to follow the gaze of the eye as it moves, a process referred to as foveation. A display that moves its content along with the eye may be called a Foveated Display, though this term is also commonly used to describe displays with non-uniform resolution that attempt to mimic human visual acuity. We therefore recommend a definition for the term Foveated Display that accepts both of these interpretations. Furthermore, we include a simplified model for human visual Acuity Distribution Functions (ADFs) at various levels of visual acuity, across wide fields of view and propose comparison of this ADF with the Resolution Distribution Function of a foveated display for evaluation of its resolution at a particular gaze direction. We also provide a taxonomy to allow the field to meaningfully compare and contrast various aspects of foveated displays in a display and optical technology-agnostic manner.”
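
A simplified ADF of the kind proposed, using the common linear model in which minimum angular resolution (MAR) grows with eccentricity; the constants are illustrative, not the paper’s fitted values. A display suffices at a given gaze direction wherever its resolution distribution function meets or exceeds the ADF.

    import numpy as np

    def adf_cycles_per_degree(ecc_deg, mar0_arcmin=1.0, slope=0.3):
        """Resolvable detail at a retinal eccentricity, in cycles/degree."""
        mar_arcmin = mar0_arcmin + slope * np.asarray(ecc_deg)  # linear MAR model
        return 60.0 / (2.0 * mar_arcmin)   # one cycle spans two MAR samples

    def display_sufficient(rdf_cpd, ecc_deg):
        """Compare a display's resolution distribution to the ADF."""
        return np.asarray(rdf_cpd) >= adf_cycles_per_degree(ecc_deg)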

Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification

Mar Gonzalez-Franco (Microsoft Research), Anthony Steed (University College London), Steve Hoogendyk (Microsoft), Eyal Ofek (Microsoft Research)

TVCG

Abstract: “Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting our body: the avatar moves as we move and we see it from a first person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency in our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error prone. However, limited animation is easily achieved based on speaking. We investigate the level of avatar enfacement, that is believing that a picture of a face is one’s own face, with three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement a synchronous lip motion and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face morphing tool that morphs from the face of the participant to the face of a gender matched avatar. We find that self-identification on avatars can be increased through pre-baked animations even when these are not photorealistic nor look like the participant.”

Motor Performance in 3D Object Manipulation Tasks

Alexander Kulik (Bauhaus-Universität Weimar), André Kunert (Bauhaus-Universität Weimar), Bernd Froehlich (Bauhaus-Universität Weimar)

TVCG

Abstract: “Fitts’s law facilitates approximate comparisons of target acquisition performance across a variety of settings. Conceptually, the index of difficulty of 3D object manipulation with six degrees of freedom can also be computed, which allows the comparison of results from different studies. Prior experiments, however, often revealed much worse performance than one would reasonably expect on this basis. We argue that this discrepancy stems from confounding variables and show how Fitts’s law and related research methods can be applied to isolate and identify relevant factors of motor performance in 3D manipulation tasks. The results of a formal user study (N=21) demonstrate competitive performance in compliance with Fitts’s model and provide empirical evidence that simultaneous 3D rotation and translation can be beneficial.”
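
For reference, the Shannon formulation of Fitts’s law that such comparisons rest on, with D the movement distance and W the target width; a and b are empirically fitted constants:

    import math

    def index_of_difficulty(distance, width):
        """ID in bits: log2(D/W + 1)."""
        return math.log2(distance / width + 1.0)

    def predicted_movement_time(distance, width, a=0.1, b=0.15):
        """Fitts's law MT = a + b * ID (constants here are illustrative)."""
        return a + b * index_of_difficulty(distance, width)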

Magic Carpet: Interaction Fidelity for Flying in VR

Daniel Medeiros (University of Wellington), António Sousa (University of Lisbon), Alberto Raposo (University of Rio de Janeiro), Joaquim Jorge (University of Lisbon)

TVCG Invited

Abstract: “Locomotion in virtual environments is currently a difficult and unnatural task to perform. Normally, researchers tend to devise ground-floor-based metaphors to constrain the degrees of freedom (DoFs) during motion. These restrictions enable interactions that accurately emulate human gait to provide high interaction fidelity. However, flying allows users to reach specific locations in a virtual scene more expeditiously. Our experience suggests that high-interaction-fidelity techniques may also improve the flying experience, although it is not innate to humans, since it requires simultaneously controlling additional DoFs. We contribute the Magic Carpet, an approach to flying that combines a floor proxy with a full-body representation to avoid imbalance and cybersickness issues. This design space allows us to address direction indication and speed control as two separate phases of travel, thereby enabling techniques with higher interaction fidelity. To validate our design space, we developed two complementary studies, one for each of the travel phases. In this paper, we present the results of both studies within the Magic Carpet design space. To this end, we applied both objective and subjective measurements to determine the best set of techniques inside our design space. Our results show that this approach enables high-interaction-fidelity techniques while improving user experience.”

Effect of Avatar Appearance on Detection Thresholds for Remapped Hand Movements

Nami Ogawa (University of Tokyo), Takuji Narumi (University of Tokyo), Michitaka Hirose (University of Tokyo)

TVCG Invited

Multi-Window 3D Interaction for Collaborative Virtual Reality

André Kunert (Bauhaus-Universität Weimar), Tim Weissker (Bauhaus-Universität Weimar), Bernd Froehlich (Bauhaus-Universität Weimar), Alexander Kulik (Bauhaus-Universität Weimar)

TVCG Invited

ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props

Victor Mercado (INRIA Rennes), Maud Marchal (INRIA Rennes), Anatole Lécuyer (INRIA Rennes)

TVCG Invited

VR Disability Simulation Reduces Implicit Bias Towards Persons with Disabilities

Tanvir Irfan Chowdhury (University of Texas), Sharif Mohammad Shahnewaz Ferdous (University of Texas), John Quarles (University of Texas)

TVCG Invited

Abstract: “This paper investigates how experiencing Virtual Reality (VR) Disability Simulation (DS) affects information recall and participants’ implicit association towards people with disabilities (PwD). Implicit attitudes are our actions or judgments towards various concepts or stereotypes (e.g., race) which we may or may not be aware of. Previous research has shown that experiencing ownership over a dark-skinned body reduces implicit racial bias. We hypothesized that a DS with a tracked Head Mounted Display (HMD) and a wheelchair interface would have a significantly larger effect on participants’ information recall and their implicit association towards PwD than a desktop monitor and gamepad. We conducted a 2x2 between-subjects experiment in which participants experienced a VR DS that teaches them facts about Multiple Sclerosis (MS) with factors of display (HMD, a desktop monitor) and interface (gamepad, wheelchair). Participants took two Implicit Association Tests (IAT) before and after experiencing the DS. Our study results show that the participants in an immersive HMD condition performed better than the participants in the non-immersive Desktop condition in their information recall task. Moreover, a tracked HMD and a wheelchair interface had significantly larger effects on participants’ implicit association towards PwD than a desktop monitor and a gamepad.”

Modeling Data-Driven Dominance Traits for Virtual Characters using Gait Analysis

Tanmay Randhavane (University of North Carolina), Aniket Bera (University of North Carolina), Emily Kubin (University of Tilburg), Kurt Gray (University of North Carolina), Dinesh Manocha (University of Maryland)

TVCG Invited

Computational Phase-Modulated Eyeglasses

Yuta Itoh (Tokyo Institute of Technology), Tobias Langlotz (University of Otago), Stefanie Zollmann (University of Otago), Daisuke Iwai (Osaka University), Kiyoshi Kiyokawa (NAIST), Toshiyuki Amano (Wakayama University)

TVCG Invited

NaviBoard and NaviChair: Limited Translation Combined with Full Rotation for Efficient Virtual Locomotion

Thinh Nguyen-Vo (Simon Fraser University), Bernhard E. Riecke (Simon Fraser University), Wolfgang Stuerzlinger (Simon Fraser University), Duc Minh Pham (Simon Fraser University), Ernst Kruijff (Bonn-Rhein-Sieg University of Applied Sciences)

TVCG Invited

Automated Geometric Registration for Multi-Projector Displays on Arbitrary 3D Shapes Using Uncalibrated Devices

Mahdi Abbaspour Tehrani (University of California, Irvine), M. Gopi (University of California, Irvine), Aditi Majumder (University of California, Irvine)

TVCG Invited

Engaging Participants in Selection Studies in Virtual Reality

Difeng Yu (The University of Melbourne), Qiushi Zhou (The University of Melbourne), Benjamin Tag (The University of Melbourne), Tilman Dingler (The University of Melbourne), Eduardo Velloso (The University of Melbourne), Jorge Goncalves (The University of Melbourne)

Conference

Detection Thresholds for Vertical Gains in VR and Drone-based Telepresence Systems

Keigo Matsumoto (The University of Tokyo), Eike Langbehn (University of Hamburg), Takuji Narumi (The University of Tokyo), Frank Steinicke (University of Hamburg)

Conference

Design and Evaluation of a VR Training Simulation for Pump Maintenance

Frederik Winther (Aarhus University), Linoj Ravindran (Aarhus University), Kasper Paabøl Svendsen (Aarhus University), Tiare Feuchtner (Aarhus University)

Conference

ARCHIE: A User-Focused Framework for Testing Augmented Reality Applications in the Wild

Sarah M. Lehman (Temple University), Haibin Ling (Stony Brook University), Chiu Tan (Temple University)

Conference

Effects of Interacting with a Crowd of Emotional Virtual Humans on Users’ Affective and Non-Verbal Behaviors

Matias Volonte (Clemson University), Yu Chun Hsu (National Chiao Tung University), Kuan-yu Liu (National Chiao Tung University), Joseph P Mazer (Clemson University), Sai-Keung Wong (National Chiao Tung University), Sabarish V. Babu (Clemson University)

Conference

TEllipsoid: Ellipsoidal Display for Videoconference System Transmitting Accurate Gaze Direction

Taro Ichii (Tokyo Institute of Technology), Hironori Mitake (Tokyo Institute of Technology), Shoichi Hasegawa (Tokyo Institute of Technology)

Conference

Investigating Bubble Mechanism for Ray-casting to Improve 3D Target Acquisition in Virtual Reality

Yiqin Lu (Tsinghua University), Chun Yu (Tsinghua University), Yuanchun Shi (Tsinghua University)

Conference

Detection of Scaled Hand Interactions in Virtual Reality: The Effects of Motion Direction and Task Complexity

Shaghayegh Esmaeili (University of Florida), Brett Benda (University of Florida), Eric Ragan (University of Florida)

Conference

Asymmetric Effects of the Ebbinghaus Illusion on Depth Judgments

Hunter Finney (University of Mississippi), Adam Jones (University of Mississippi)

Conference

Automatic Synthesis of Virtual Wheelchair Training Scenarios

Wanwan Li (George Mason University), Javier Talavera (George Mason University), Amilcar Gomez (George Mason University), Jyh-Ming Lien (George Mason University), Lap-Fai Yu (George Mason University)

Conference

The Effect of a Foveated Field-of-View Restrictor on VR Sickness

Isayas Berhe Adhanom (University of Nevada, Reno), Nathan Navarro Griffin (University of Nevada, Reno), Paul MacNeilage (University of Nevada, Reno), Eelke Folmer (University of Nevada, Reno)

Conference

Effects of Locomotion Style and Body Visibility of a Telepresence Avatar

Youjin Choi (KAIST), Jeongmi Lee (KAIST), Sung-Hee Lee (KAIST)

Conference

Above Surface Interaction for Multiscale Navigation in Mobile Virtual Reality

Tim Menzner (Coburg University of Applied Sciences and Arts), Travis Gesslein (Coburg University of Applied Sciences and Arts), Alexander Otte (Coburg University of Applied Sciences and Arts), Jens Grubert (Coburg University of Applied Sciences and Arts)

Conference

Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing

Niko Wißmann (TH Köln), Martin Misiak (TH Köln), Arnulph Fuhrmann (TH Köln), Marc Erich Latoschik (University of Würzburg)

Conference

Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos

Anastasia Maria Schmitz (University College London), Andrew MacQuarrie (University College London), Simon Julier (University College London), Nicola Binetti (University College London), Anthony Steed (University College London)

Conference

Exploring Eye Gaze Visualization Techniques for Identifying Distracted Students in Educational VR

Yitoshee Rahman (University of Louisiana at Lafayette), Sarker Monojit Asish (University of Louisiana at Lafayette), Nicholas P Fisher (University of Louisiana at Lafayette), Ethan Charles Bruce (University of Louisiana at Lafayette), Arun K Kulshreshth (University of Louisiana at Lafayette), Christoph W Borst (University of Louisiana at Lafayette)

Conference

Recurrent Enhancement of Visual Comfort for Casual Stereoscopic Photography

Yuzhen Niu (Fuzhou University), Qingyang Zheng (Fuzhou University), Wenxi Liu (Fuzhou University), Wenzhong Guo (Fuzhou University)

Conference

Real-time VR Simulation of Laparoscopic Cholecystectomy based on Parallel Position-Based Dynamics in GPU

Junjun Pan (Beihang University), Leiyu Zhang (Beihang University), Peng Yu (Beihang University), Yang Shen (National Engineering Laboratory for Cyberlearning and Intelligent Technology, Faculty of Education), Haipeng Wang (Beijing General Aerospace Hospital), Hong Qin (Stony Brook University)

Conference

The Impact of Multi-sensory Stimuli on Confidence Levels for Perceptual-cognitive Tasks in VR

Sungchul Jung (University of Canterbury), Andrew Limmer-Wood (University of Canterbury), Simon Hoermann (University of Canterbury), Pram Abhayawardhana (University of Canterbury), Robert W. Lindeman (University of Canterbury)

Conference

A User Study on View-sharing Techniques for One-to-Many Mixed Reality Collaborations

Geonsun Lee (Korea University), HyeongYeop Kang (Kyung Hee University), JongMin Lee (Korea University), JungHyun Han (Korea University)

Conference

An Optical Design for Avatar-User Co-axial Viewpoint Telepresence

Kei Tsuchiya (The University of Electro-Communications), Naoya Koizumi (The University of Electro-Communications)

Conference

Real Walking in the Virtual World: HEX-CORE-PROTOTYPE Omnidirectional Treadmill

Ziyao Wang (Southeast University), Haikun Wei (Southeast University), KanJian Zhang (Southeast University), Liping Xie (Southeast University)

Conference

A Tangible Spherical Proxy for Object Manipulation in Augmented Reality

David Englmeier (LMU Munich), Julia Dörner (LMU Munich), Andreas Butz (LMU Munich), Tobias Höllerer (University of California, Santa Barbara)

Conference

Visualization and evaluation of ergonomic visual field parameters in first person virtual environments

Tobias Günther (Technische Universität Dresden), Inga-Lisa Hilgers (Technische Universität Dresden), Rainer Groh (Technische Universität Dresden), Martin Schmauder (Technische Universität Dresden)

Conference

The Effects of Virtual Audience Size on Social Anxiety during Public Speaking

Fariba Mostajeran (University of Hamburg), Melik Berk Balci (Universitätsklinikum Hamburg-Eppendorf), Frank Steinicke (University of Hamburg), Simone Kühn (Universitätsklinikum Hamburg-Eppendorf), Jürgen Gallinat (Universitätsklinikum Hamburg-Eppendorf)

Conference

Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments

Victor Rodrigo Mercado (Inria Rennes), Maud Marchal (IRISA/INSA), Anatole Lécuyer (Inria)

Conference

Alpaca: AR Graphics Extensions for Web Applications

Tanner Hobson (University of Tennessee), Jeremiah Duncan (University of Tennessee), Mohammad Raji (The University of Tennessee, Knoxville), Aidong Lu (University of North Carolina at Charlotte), Jian Huang (University of Tennessee)

Conference

Touch the Wall: Comparison of Virtual and Augmented Reality with Conventional 2D Screen Eye-Hand Coordination Training Systems

Anil Ufuk Batmaz (Simon Fraser University), Aunnoy K Mutasim (Simon Fraser University), Elham Sadr (Simon Fraser University), Morteza Malekmakan (Simon Fraser University), Wolfgang Stuerzlinger (Simon Fraser University)

Conference

HiPad: Text entry for Head-Mounted Displays using Touchpad

Haiyan Jiang (Beijing Institute of Technology), Dongdong Weng (Beijing Institute of Technology)

Conference

A Physics-based Virtual Reality Simulation Framework for Neonatal Endotracheal Intubation

Xiao Xiao (George Washington University), Shang Zhao (George Washington University), Yan Meng (George Washington University), Lamia Soghier (Children’s National Medical Center), Xiaoke Zhang (George Washington University), James Hahn (George Washington University)

Conference

Graphical Perception for Immersive Analytics

Matt Whitlock (University of Colorado), Stephen Smart (University of Colorado Boulder), Danielle Albers Szafir (University of Colorado Boulder)

Conference

Feature Guided Path Redirection for VR Navigation

Antong Cao (Beihang University), Lili Wang (Beihang University), Yi Liu (State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University), Voicu Popescu (Purdue University)

Conference

Angular Dependence of Spatial Resolution in Virtual Reality Displays

Ryan Beams (Food and Drug Administration), Brendan Collins (Food and Drug Administration), Andrea Seung Kim (Food and Drug Administration), Aldo Badano (Food and Drug Administration)

Conference

Effects of virtual hand representation on interaction and embodiment in HMD-based virtual environments using controllers

Christos Lougiakis (National and Kapodistrian University of Athens), Akrivi Katifori (University of Athens), Maria Roussou (University of Athens), Ioannis-Panagiotis Ioannidis (ATHENA Research and Innovation Centre)

Conference

Enlightening Patients with Augmented Reality

Andreas Jakl (St. Poelten University of Applied Sciences), Anna Maria Lienhart (St. Poelten University of Applied Sciences), Clemens Baumann (St. Poelten University of Applied Sciences), Arian Jalaaefar (St. Poelten University of Applied Sciences), Alexander Schlager (St. Poelten University of Applied Sciences), Lucas Schöffer (St. Poelten University of Applied Sciences), Franziska Bruckner (St. Poelten University of Applied Sciences)

Conference

CasualStereo: Casual Capture of Stereo Panoramas with Spherical Structure-from-Motion

Lewis Baker (University of Otago), Steven Mills (University of Otago), Stefanie Zollmann (University of Otago), Jonathan Ventura (California Polytechnic State University)

Conference

Implementation and Evaluation of Touch-based Interaction Using Electrovibration Haptic Feedback in Virtual Environments

Lu Zhao (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Dejiang Ye (Beijing Institute of Technology), Zhuoluo Ma (Beijing Institute of Technology), Weitao Song (Nanyang Technological University)

Conference

Detecting System Errors in Virtual Reality Using EEG Through Error-Related Potentials

Hakim Si-Mohammed (Inria), Catarina Lopes Dias (Graz University of Technology), Maria Duarte (University of Lisbon), Ferran Argelaguet Sanz (Inria), Camille Jeunet (Université de Toulouse), Géry Casiez (Université de Lille), Gernot R. Müller-Putz (Graz University of Technology), Anatole Lécuyer (Inria), Reinhold Scherer (University of Essex)

Conference

The Plausibility Paradox for Scaled-Down Users in Virtual Environments

Matti Pouke (University of Oulu), Katherine Mimnaugh (University of Oulu), Timo Ojala (University of Oulu), Steven M. LaValle (University of Oulu)

Conference

Manipulating Puppets in VR

Michael Nitsche (Georgia Institute of Technology), Pierce McBride (Georgia Institute of Technology)

Conference

Data-Driven Spatio-Temporal Analysis via Multi-Modal Zeitgebers and Cognitive Load in VR

Haodong Liao (University of Electronic Science and Technology of China), Ning Xie (University of Electronic Science and Technology of China), Huiyuan Li (University of Electronic Science and Technology of China), Yuhang Li (University of Electronic Science and Technology of China), Jianping Su (University of Electronic Science and Technology of China), Feng Jiang (University of Electronic Science and Technology of China), Weipeng Huang (University of Electronic Science and Technology of China), Heng Tao Shen (University of Electronic Science and Technology of China)

Conference

Introducing Mental Workload Assessment for the Design of Virtual Reality Training Scenarios

Tiffany Luong (b<>com), Ferran Argelaguet Sanz (Inria), Nicolas Martin (b<>com), Anatole Lécuyer (Inria)

Conference

Evaluating Virtual Reality Experiences Through Participant Choices

María Murcia-López (Facebook), Tara Collingwoode-Williams (Independent Researcher), William Steptoe (Facebook), Raz Schwartz (Facebook), Timothy J. Loving (Facebook), Mel Slater (Independent Researcher)

Conference

The Role of Viewing Distance and Feedback on Affordance Judgments in Augmented Reality

Holly C Gagnon (University of Utah), Dun Na (Vanderbilt University), Keith Heiner (University of Utah), Jeanine Stefanucci (University of Utah), Sarah Creem-Regehr (University of Utah), Bobby Bodenheimer (Vanderbilt University)

Conference

Transfer of Coordination Skill to the Unpracticed Hand in Immersive Environments

Shan Xiao (College of Information Science and Technology, Jinan University), Xupeng Ye (College of Information Science and Technology, Jinan University), Yaqiu Guo (Jinan University), BoYu Gao (Jinan University), Jinyi Long (Jinan University)

Conference

Real and Virtual Environment Mismatching Induces Arousal and Alters Movement Behavior

Christos Mousas (Purdue University), Dominic Kao (Purdue University), Alexandros Fabio Koilias (University of the Aegean), Banafsheh Rekabdar (Southern Illinois University)

Conference

SPLAT: Spherical Localization and Tracking in Large Spaces

Lewis Baker (University of Otago), Jonathan Ventura (California Polytechnic State University), Stefanie Zollmann (University of Otago), Steven Mills (University of Otago), Tobias Langlotz (University of Otago)

Conference

Measuring System Visual Latency through Cognitive Latency on Video See-Through AR devices

Robert Gruen (Microsoft), Eyal Ofek (Microsoft Research), Anthony Steed (University College London), Ran Gal (Microsoft Research), Mike Sinclair (Microsoft), Mar Gonzalez Franco (Microsoft Research)

Conference

The Self-Avatar Follower Effect in Virtual Reality

Mar Gonzalez Franco (Microsoft Research), Brian A Cohn (Microsoft Research), Dalila Burin (Tohoku University), Eyal Ofek (Microsoft Research), Antonella Maselli (Microsoft Research)

Conference

Exploring the impact of 360° movie cuts on users’ attention

Carlos Marañes (Universidad de Zaragoza), Diego Gutierrez (Universidad de Zaragoza), Ana Serrano (Universidad de Zaragoza)

Conference

Effects of volumetric capture avatars on social presence in immersive virtual environments

SungIk Cho (Korea University), Seung-wook Kim (Korea University), JongMin Lee (Korea University), JeongHyeon Ahn (Korea University), JungHyun Han (Korea University)

Conference

VR Bridges: Simulating Smooth Uneven Surfaces in VR

Khrystyna Vasylevska (TU Wien), Bálint István Kovács (TU Wien), Hannes Kaufmann (TU Wien)

Conference

Comparative Evaluation of the Effects of Motion Control on Cybersickness in Immersive Virtual Environments

Roshan Venkatakrishnan (Clemson University), Rohith Venkatakrishnan (Clemson University), Ayush Bhargava (Key Lime Interactive), Kathryn Lucaites (Clemson University), Hannah Solini (Clemson University), Matias Volonte (Clemson University), Andrew Robb (Clemson University), Wen-Chieh Lin (National Chiao Tung University), Yun-Xuan Lin (National Chiao Tung University), Sabarish V. Babu (Clemson University)

Conference

Precise and realistic grasping and manipulation in Virtual Reality without force feedback

Thibauld Delrieu (CEA LIST), Vincent Weistroffer (CEA LIST), Jean-Pierre Gazeau (Pprime Institute)

Conference

Take a Look Around – The Impact of Decoupling Gaze and Travel-direction in Seated and Ground-based Virtual Reality Utilizing Torso-directed Steering

Daniel Zielasko (University of Trier), Yuen C. Law (Costa Rica Institute of Technology), Benjamin Weyers (University of Trier)

Conference

Virtual Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality

Zubin Choudhary (University of Central Florida), Kangsoo Kim (University of Central Florida), Ryan Schubert (University of Central Florida), Gerd Bruder (University of Central Florida), Greg Welch (University of Central Florida)

Conference

Multiple-scale Simulation Method for Liquid with Trapped Air under Particle-based Framework

Sinuo Liu (University of Science and Technology Beijing), Ben Wang (Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing), Xiaojuan Ban (Beijing Key Laboratory of Knowledge Engineering for Materials Science)

Conference

Health and Safety of VR Use by Children in an Educational Use Case

Robert Rauschenberger (Exponent, Inc.), Brandon Barakat (Exponent, Inc.)

Conference

The Space Bender: Supporting Natural Walking via Overt Manipulation of the Virtual Environment

Adalberto L. Simeone (KU Leuven), Niels Christian Nilsson (Aalborg University Copenhagen), André Zenner (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), Marco Speicher (DFKI, Saarland Informatics Campus), Florian Daiber (DFKI, Saarland Informatics Campus)

Conference

A Structural Equation Modeling Approach to Understand the Relationship between Control, Cybersickness and Presence in Virtual Reality

Rohith Venkatakrishnan (Clemson University), Roshan Venkatakrishnan (Clemson University), Reza Ghaiumy Anaraky (Clemson University), Matias Volonte (Clemson University), Bart Knijnenburg (Clemson University), Sabarish V. Babu (Clemson University)

Conference

A Comparison of Visual Attention Guiding Approaches for 360° Image-Based VR Tours

Jan Oliver Wallgrün (The Pennsylvania State University), Mahda M. Bagher (The Pennsylvania State University), Pejman Sajjadi (The Pennsylvania State University), Alexander Klippel (The Pennsylvania State University)

Conference

Dyadic Acquisition of Survey Knowledge in a Shared Virtual Environment

Lauren Buck (Vanderbilt University), Timothy P McNamara (Vanderbilt University), Bobby Bodenheimer (Vanderbilt University)

Conference

Peering Under the Hull: Enhanced Decision Making via an Augmented Environment

Matthew Timmerman (United States Navy), Amela Sadagic (Naval Postgraduate School), Cynthia Irvine (Naval Postgraduate School)

Conference

SalBiNet360: Saliency Prediction on 360° Images with Local-Global Bifurcated Deep Network

Dongwen Chen (South China University of Technology), Chunmei Qing (South China University of Technology), Xiangmin Xu (South China University of Technology), Huansheng Zhu (South China University of Technology)

Conference

Analyzing Pedestrian Behavior in Augmented Reality - Proof of Concept

Philipp Maruhn (Technical University of Munich), André Dietrich (Technical University of Munich), Lorenz Prasch (Technical University of Munich), Sonja Schneider (Technical University of Munich)

Conference

Eye-Gaze Activity in Crowds: Impact of Virtual Reality and Density

Florian Berton (Inria), Ludovic Hoyet (Inria), Anne-Hélène Olivier (University of Rennes 2), Julien Bruneau (Inria), Olivier Le Meur (Univ Rennes, CNRS, Inria, IRISA), Julien Pettré (Inria)

Conference

ThermAirGlove: A Pneumatic Glove for Thermal Perception and Material Identification in Virtual Reality

Shaoyu Cai (City University of Hong Kong), Pingchuan Ke (City University of Hong Kong), Takuji Narumi (The University of Tokyo), Kening Zhu (City University of Hong Kong)

Conference

Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality

Kunal Gupta (The University of Auckland), Ryo Hajika (The University of Auckland), Yun Suen Pai (Auckland Bioengineering Institute, University of Auckland), Andreas Duenser (CSIRO), Martin Lochner (CSIRO), Mark Billinghurst (The University of Auckland)

Conference

Dynamic Artificial Potential Fields for Multi-User Redirected Walking

Tianyang Dong (Zhejiang University of Technology), Xianwei Chen (Zhejiang University of Technology), Yifan Song (Zhejiang University of Technology)

Conference

Improving Obstacle Awareness to Enhance Interaction in Virtual Reality

Ivan Valentini (University of Genoa), Giorgio Ballestin (University of Genoa), Chiara Bassano (University of Genoa), Fabio Solari (University of Genoa), Manuela Chessa (University of Genoa)

Conference

Comparative Evaluation of Viewing and Self-Representation on Passability Affordances to a Realistic Sliding Doorway in Real and Immersive Virtual Environments

Ayush Bhargava (Key Lime Interactive), Hannah Solini (Clemson University), Kathryn Lucaites (Clemson University), Jeffrey W. Bertrand (Clemson University), Andrew Robb (Clemson University), Christopher Pagano (Clemson University), Sabarish V. Babu (Clemson University)

Conference

Toward Virtual Reality-based Evaluation of Robot Navigation among People: a Pilot Study

Fabien Grzeskowiak (Inria Centre Bretagne Atlantique), Marie Babel (INSA), Julien Bruneau (Inria), Julien Pettré (Inria)

Conference

Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality

Sebastian Oberdörfer (University of Würzburg), David Heidrich (German Aerospace Center (DLR)), Marc Erich Latoschik (University of Würzburg)

Conference

Effects of Dark Mode Graphics on Visual Acuity and Fatigue with Virtual Reality Head-Mounted Displays

Austin Erickson (University of Central Florida), Kangsoo Kim (University of Central Florida), Gerd Bruder (University of Central Florida), Greg Welch (University of Central Florida)

Conference

Examining Whether Secondary Effects of Temperature-Associated Virtual Stimuli Influence Subjective Perception of Duration

Austin Erickson (University of Central Florida), Gerd Bruder (University of Central Florida), Pamela J. Wisniewski (University of Central Florida), Greg Welch (University of Central Florida)

Conference

Slicing Volume: Hybrid 3D/2D Multi-target Selection Technique for Dense Virtual Environments

Roberto A Montano-Murillo (University of Sussex), Cuong Nguyen (Adobe Research), Rubaiat Habib Kazi (Adobe Research), Sriram Subramanian (University of Sussex), Stephen DiVerdi (Adobe), Diego Martinez-Plasencia (University of Sussex)

Conference

Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter

Jan-Philipp Stauffert (University of Würzburg), Florian Niebling (University of Würzburg), Marc Erich Latoschik (University of Würzburg)

Conference

Deep Soft Procrustes for Markerless Volumetric Sensor Alignment

Vladimiros Sterzentsenko (Information Technologies Institute / Centre for Research and Technology Hellas), Alexandros Doumanoglou (Information Technologies Institute / Centre for Research and Technology Hellas), Spyridon Thermos (Centre for Research and Technology Hellas), Nikolaos Zioulis (Centre for Research and Technology Hellas), Dimitrios Zarpalas (Information Technologies Institute / Centre for Research and Technology Hellas), Petros Daras (Information Technologies Institute / Centre for Research and Technology Hellas)

Conference

Glanceable AR: Evaluating Information Access Methods for Head-Worn Augmented Reality

Feiyu Lu (Virginia Tech), Shakiba Davari (Virginia Tech), Lee Lisle (Virginia Tech), Yuan Li (Virginia Tech), Doug Bowman (Virginia Tech)

Conference

Analyzing usability and presence of a virtual reality operating room (VOR) simulator during laparoscopic surgery training

Meng Li (Delft University of Technology), Sandeep Ganni (Delft University of Technology), Jeroen Ponten (Catharina Hospital), Armagan Albayrak (Delft University of Technology), Anne-Françoise Rutkowski (Tilburg University), Jack Jakimowicz (Catharina Hospital)

Conference

A Comparative Analysis of 3D User Interaction: How to Move Virtual Objects in Mixed Reality

Hyo Jeong Kang (University of Florida), Jung-hye Shin (University of Wisconsin-Madison), Kevin Ponto (University of Wisconsin-Madison)

Conference

Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning

Andrew Best (University of North Carolina at Chapel Hill), Sahil Narang (University of North Carolina at Chapel Hill), Dinesh Manocha (University of Maryland)

Conference

Reducing Task Load with an Embodied Intelligent Virtual Assistant for Improved Performance in Collaborative Decision Making

Kangsoo Kim (University of Central Florida), Celso M. M de Melo (US Army Research Laboratory), Nahal Norouzi (University of Central Florida), Gerd Bruder (University of Central Florida), Greg Welch (University of Central Florida)

Conference

Determining Peripersonal Space Boundaries and Their Plasticity in Relation to Object and Agent Characteristics in an Immersive Virtual Environment

Lauren Buck (Vanderbilt University), Sohee Park (Vanderbilt University), Bobby Bodenheimer (Vanderbilt University)

Conference

Design and Evaluation of Interactive Small Multiples Data Visualisation in Immersive Spaces

Jiazhou Liu (Monash University), Arnaud Prouzeau (Monash University), Barrett Ens (Monash University), Tim Dwyer (Monash University)

Conference

LiveDeep: Online Viewport Prediction for Live Virtual Reality Streaming Using Lifelong Deep Learning

Xianglong Feng (Rutgers University), Yao Liu (Binghamton University), Sheng Wei (Rutgers University)

Conference

Virtual environment with smell using wearable olfactory display and computational fluid dynamics simulation

Takamichi Nakamoto (Tokyo Institute of Technology), Tatsuya Hirasawa (Tokyo Institute of Technology), Yukiko Hanyu (Tokyo Institute of Technology)

Conference

Outdoor Sound Propagation Based on Adaptive FDTD-PE

Shiguang Liu (Tianjin University), Jin Liu (Tianjin University)

Conference

Where to display? How Interface Position Affects Comfort and Task Switching Time on Glanceable Interfaces

Samat Imamov (Virginia Tech), Daniel Monzel (Virginia Tech), Wallace Lages (Virginia Tech)

Conference

Optimal Planning for Redirected Walking Based on Reinforcement Learning in Multi-user Environment with Irregularly Shaped Physical Space

Dong-Yong Lee (Yonsei University), Yong-Hun Cho (Yonsei University), Daehong Min (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Shaking Hands in Virtual Space: Recovery in Redirected Walking for Direct Interaction between Two Users

Daehong Min (Yonsei University), Dong-Yong Lee (Yonsei University), Yong-Hun Cho (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Exploring Visual Techniques for Boundary Awareness During Interaction in Augmented Reality Head-Mounted Displays

Wenge Xu (Xi’an Jiaotong-Liverpool University), Hai-Ning Liang (Xi’an Jiaotong-Liverpool University), Yuzheng Chen (Xi’an Jiaotong-Liverpool University), Xiang Li (Xi’an Jiaotong-Liverpool University), Kangyou Yu (Xi’an Jiaotong-Liverpool University)

Conference

How About the Mentor? Effective Workspace Visualization in AR Telementoring

Chengyuan Lin (Purdue University), Edgar Rojas-Muñoz (Purdue University), Maria E Cabrera (University of Washington), Natalia Sanchez-Tamayo (Purdue University), Daniel Andersen (Purdue University), Voicu Popescu (Purdue University), Juan Antonio Barragan Noguera (Purdue University), Ben Zarzaur (Indiana University School of Medicine), Pat Murphy (Indiana University School of Medicine), Kathryn Anderson (Indiana University School of Medicine), Thomas Douglas (Naval Medical Center Portsmouth), Clare Griffis (Naval Medical Center Portsmouth), Juan Wachs (Purdue University)

Conference

Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction

Mohammad Keshavarzi (University of California, Berkeley), Allen Y Yang (University of California, Berkeley), Woojin Ko (University of California, Berkeley), Luisa Caldas (University of California, Berkeley)

Conference

Design and Evaluation of a Tool to Support Air Traffic Control with 2D and 3D Visualizations

Gernot Rottermanner (St. Pölten University of Applied Sciences), Victor Adriel de Jesus Oliveira (St. Pölten University of Applied Sciences), Peter Judmaier (St. Pölten University of Applied Sciences), Mylene Kreiger (St. Pölten University of Applied Sciences), Philipp Graf (St. Pölten University of Applied Sciences), Volker Settgast (Fraunhofer Austria Research GmbH), Carl-Herbert Rokitansky (University of Salzburg, Computer Sciences Institute, Aerospace Research), Kurt Eschbacher (University of Salzburg, Computer Sciences Institute, Aerospace Research), Patrik Lechner (St. Pölten University of Applied Sciences), Michael Iber (St. Pölten University of Applied Sciences), Volker Grantz (Frequentis AG), Markus Wagner (St. Pölten University of Applied Sciences)

Conference

Influence of Perspective on Dynamic Tasks in Virtual Reality

Naval Bhandari (University of Bath), Eamonn O’Neill (University of Bath)

Conference

Reading on 3D Surfaces in Virtual Environments

Chunxue Wei (The University of Melbourne), Difeng Yu (The University of Melbourne), Tilman Dingler (University of Melbourne)

Conference

VRT: Exploration and Filtering of Trajectories in an Immersive Environment using 3D Shapes

François Homps (Ecole Centrale de Lyon), Yohan Beugin (Ecole Centrale de Lyon), Romain Vuillemot (Ecole Centrale de Lyon)

Conference

Design and Initial Evaluation of an Interactive VR based Architectural Design Discussion System

Ting-Wei Hsu (National Chiao Tung University), Ming-Han Tsai (Feng Chia University), Sabarish V. Babu (Clemson University), Pei-Hsien Hsu (National Chiao Tung University), Hsuan-Ming Chang (National Chiao Tung University), Wen-Chieh Lin (National Chiao Tung University), Jung-Hong Chuang (National Chiao Tung University)

Conference

Exploring Effects of Spatially Decoupled and Spatially Coupled Annotation on Navigation in Virtual Reality

James Dominic (Clemson University), Andrew Robb (Clemson University)

Conference

Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality

Di Laura Chen (University of Toronto), Ravin Balakrishnan (University of Toronto), Tovi Grossman (University of Toronto)

Conference

Learning in the Field: Comparison of Desktop, Immersive Virtual Reality, and Actual Field Trips for Place-Based STEM Education

Jiayan Zhao (The Pennsylvania State University), Peter LaFemina (The Pennsylvania State University), Julia Carr (The Pennsylvania State University), Pejman Sajjadi (The Pennsylvania State University), Jan Oliver Wallgrün (The Pennsylvania State University), Alexander Klippel (The Pennsylvania State University)

Conference

Exploring the Differences of Visual Discomfort Caused by Long-term Immersion between Virtual Environments and Physical Environments

Jie Guo (Beijing Institute of Technology), Dongdong Weng (Beijing Institute of Technology), Hui Fang (Beijing Institute of Technology), Zhenliang Zhang (Beijing Institute of Technology), Jiamin Ping (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Conference

Comparing the quality of highly realistic digital humans in 3DoF and 6DoF: a volumetric video case study

Shishir Subramanyam (Centrum Wiskunde & Informatica), Jie Li (Centrum Wiskunde & Informatica), Irene Viola (Centrum Wiskunde & Informatica), Pablo Cesar (Centrum Wiskunde & Informatica)

Conference