Papers


 

Wednesday March 25th

 

11:15 - 12:25 | Training and Perception                 

 

Session Chair: Ryan McMahan, University of Texas at Dallas, United States

 

Assessing Knowledge Retention of an Immersive Serious Game vs. a Traditional Education Method in Aviation Safety Long Paper

Luca Chittaro, Fabio Buttussi

Abstract: Thanks to the increasing availability of consumer head-mounted displays, educational applications of immersive VR could now reach the general public, especially if they include gaming elements (immersive serious games). Safety education of citizens could be a particularly promising domain for immersive serious games, because people tend not to pay attention to, or benefit from, current safety materials. In this paper, we propose an HMD-based immersive game for educating passengers about aviation safety that allows players to experience a serious aircraft emergency with the goal of surviving it. We compare the proposed approach to a traditional aviation safety education method (the safety card) used by airlines. Unlike most studies of VR for safety knowledge acquisition, we do not focus only on assessing learning immediately after the experience but extend our attention to knowledge retention over a longer time span. This is a fundamental requirement, because people need to retain safety procedures in order to apply them when faced with danger. A knowledge test administered before, immediately after and one week after the experimental condition showed that the immersive serious game was superior to the safety card. Moreover, subjective as well as physiological measurements employed in the study showed that the immersive serious game was more engaging and fear-arousing than the safety card, a factor that can help explain the superior retention obtained, as we discuss in the paper.

 

The Role of Dimensional Symmetry on Bimanual Psychomotor Skills Education in Immersive Virtual Environments Short Paper

Jeffrey Bertrand, David Brickler, Sabarish Babu, Kapil Madathil, Melissa Zelaya, Tianwei Wang, Jun Luo, John Wagner, Anand Gramopadhye

Abstract: The need for virtual reality applications for education and training involving bimanual dexterous activities has been increasing in recent years. However, it is unclear how the degree of correspondence between a virtual interaction metaphor and its real-world equivalent, otherwise known as dimensional symmetry, affects bimanual psychomotor skills training, and how skills learned in the virtual simulation transfer to the real world. How does the number of degrees of freedom enhance or hinder the learning process? Does the increase in dimensional symmetry affect cognitive load? In an empirical evaluation, we compare the effectiveness of a natural 6-DOF interaction metaphor to a simplified 3-DOF metaphor. Our simulation interactively educates users in the step-by-step process of taking a precise measurement using calipers and micrometers in a simulated technical workbench environment. We conducted a usability study to evaluate the user experience and pedagogical benefits using measures including a pre- and post-test cognition questionnaire covering all levels of Bloom's taxonomy, workload assessment, system usability, and real-world psychomotor assessment tasks. Results from the pre- and post-test cognition questionnaires suggest that learning outcomes improved throughout all levels of Bloom's taxonomy for both conditions, and trends in the data suggest that the 6-DOF metaphor was more effective in real-world skill transference compared to the 3-DOF metaphor.

 

Scalability Limits of Large Immersive High-resolution Displays Short Paper

Charilaos Papadopoulos, Koosha Mirhosseini, Ievgeniia Gutenko, Kaloian Petkov, Arie E. Kaufman, Bireswar Laha

Abstract: We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution, tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits on substantially large, room-shaped displays. Using the Reality Deck, a gigapixel resolution immersive display, as its apparatus, our study spans four display form-factors, starting at 100 megapixels arranged planarly and up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form-factor changes to 600 megapixels. At the 600 megapixel to 1 gigapixel transition, we observe no tangible performance improvements and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as the display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed ``overview'' and ``detail'') through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.

 

Exploring the Effects of Image Persistence in Low Frame Rate Virtual Environments Short Paper

David Zielinski, Hrishikesh Rao, Marc Sommer, Regis Kopper

Abstract: In virtual reality applications, there is an aim to provide real time graphics which run at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high persistence frames caused by low refresh rates and compare it to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time such image would be displayed. In order to isolate the visual effects, we constructed a simulator for low and high persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low persistence display technique may not negatively impact user experience or performance as compared to the high persistence case. Directions for future work on the use of low persistence displays for low frame rate situations are discussed.
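
A minimal Python sketch of the difference between high- and low-persistence presentation of low-frame-rate content, to make the display technique concrete; the refresh and simulation rates and the show/blank callbacks are illustrative assumptions, not the authors' simulator.

```python
# Hypothetical sketch: show each low-frame-rate simulation image for a single
# display refresh and blank the remaining refreshes (low persistence), instead
# of holding it for the whole simulation interval (high persistence).

DISPLAY_HZ = 90                                   # assumed display refresh rate
SIM_HZ = 15                                       # assumed simulation frame rate
REFRESHES_PER_SIM_FRAME = DISPLAY_HZ // SIM_HZ    # 6 refreshes per simulation frame

def present_low_persistence(sim_frames, show, blank):
    """Display each simulation frame once, then blank until the next frame."""
    for frame in sim_frames:
        show(frame)                               # visible for 1/DISPLAY_HZ seconds
        for _ in range(REFRESHES_PER_SIM_FRAME - 1):
            blank()                               # black frames fill the rest of the interval

def present_high_persistence(sim_frames, show, blank):
    """Baseline: hold each simulation frame for the full simulation interval."""
    for frame in sim_frames:
        for _ in range(REFRESHES_PER_SIM_FRAME):
            show(frame)
```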

 

14:00 - 15:25 | Perception and Cohabitation                                

 

Session Chair: Regis Kopper, Duke University, United States

 

Subliminal Reorientation and Repositioning in Immersive Virtual Environments using Saccadic Suppression Long Paper

Benjamin Bolte, Markus Lappe

Abstract: Virtual reality strives to provide a user with an experience of a simulated world that feels as natural as the real world. Yet, to induce this feeling, sometimes it becomes necessary for technical reasons to deviate from a one-to-one correspondence between the real and the virtual world, and to reorient or reposition the user's viewpoint. Ideally, users should not notice the change of the viewpoint to avoid breaks in perceptual continuity. Saccades, the fast eye movements that we make in order to switch gaze from one object to another, produce a visual discontinuity on the retina, but this is not perceived because the visual system suppresses perception during saccades. As a consequence, our perception fails to detect rotations of the visual scene during saccades. We investigated whether saccadic suppression of image displacement (SSID) can be used in an immersive virtual environment (VE) to unconsciously rotate and translate the observer's viewpoint. To do this, the scene changes have to be precisely time-locked to the saccade onset. We used electrooculography (EOG) for eye movement tracking and assessed the performance of two modified eye movement classification algorithms for the challenging task of online saccade detection that is fast enough for SSID. We investigated the sensitivity of participants to translations (forward/backward) and rotations (in the transverse plane) during trans-saccadic scene changes. We found that participants were unable to detect approximately +/-0.5m translations along the line of gaze and +/-5° rotations in the transverse plane during saccades with an amplitude of 15°. If the user stands still, our approach exploiting SSID thus provides the means to unconsciously change the user's virtual position and/or orientation. For future research and applications, exploiting SSID has the potential to improve existing redirected walking and change blindness techniques for unlimited navigation through arbitrarily-sized VEs by real walking.
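
As a rough illustration of the mechanism, the sketch below combines a simple velocity-threshold saccade detector with a trans-saccadic camera offset clamped to the magnitudes the study reports as undetectable; the threshold value and function names are assumptions and do not reproduce the authors' EOG classification algorithms.

```python
import math

VELOCITY_THRESHOLD_DEG_S = 200.0   # assumed gaze velocity marking saccade onset
MAX_ROTATION_DEG = 5.0             # rotation reported as undetectable per saccade
MAX_TRANSLATION_M = 0.5            # translation reported as undetectable per saccade

def saccade_detected(prev_gaze_deg, curr_gaze_deg, dt_s):
    """Velocity-threshold classification on a 1D gaze angle signal."""
    velocity = abs(curr_gaze_deg - prev_gaze_deg) / dt_s
    return velocity > VELOCITY_THRESHOLD_DEG_S

def redirect_during_saccade(cam_yaw_deg, cam_pos_xz, wanted_yaw_deg, wanted_fwd_m):
    """Apply at most the imperceptible rotation/translation during one saccade."""
    yaw_step = max(-MAX_ROTATION_DEG, min(MAX_ROTATION_DEG, wanted_yaw_deg))
    fwd_step = max(-MAX_TRANSLATION_M, min(MAX_TRANSLATION_M, wanted_fwd_m))
    cam_yaw_deg += yaw_step
    yaw_rad = math.radians(cam_yaw_deg)
    cam_pos_xz = (cam_pos_xz[0] + fwd_step * math.sin(yaw_rad),
                  cam_pos_xz[1] + fwd_step * math.cos(yaw_rad))
    return cam_yaw_deg, cam_pos_xz
```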

 

Distance Estimation in Large Immersive Projection Systems, Revisited Short Paper

Gerd Bruder, Ferran Argelaguet, Anne-Helene Olivier, Anatole Lecuyer

Abstract: When walking within an immersive projection environment, accommodation distance, parallax and angular resolution vary according to the distance between the user and the projection walls, which can influence spatial perception. As CAVE-like virtual environments get bigger, accurate spatial perception within the projection setup becomes increasingly important for application domains that require the user to be able to naturally explore a virtual environment by moving through the physical interaction space. In this paper we describe an experiment which analyzes how distance estimation is biased when the distance to the screen and parallax vary. The experiment was conducted in a large immersive projection setup with up to ten meters of interaction space. The results showed that both the screen distance and parallax have a strong asymmetric effect on distance judgments. We found an increased distance underestimation for positive parallax conditions. In contrast, we found less distance overestimation for negative and zero parallax conditions. We conclude the paper by discussing the results with a view to future large immersive projection environments.

 

 

Turbulent Motions Cannot Shake VR Short Paper

Florian Soyka, Elena Kokkinara, Markus Leyrer, Heinrich Buelthoff, Mel Slater, Betty Mohler

Abstract: The International Air Transport Association forecasts that there will be at least a 30% increase in passenger demand for flights over the next five years. In these circumstances the aircraft industry is looking for new ways to keep passengers occupied, entertained and healthy, and one of the methods under consideration is immersive virtual reality. It is therefore becoming important to understand how motion sickness and presence in virtual reality are influenced by physical motion. We were specifically interested in the use of head-mounted displays (HMDs) while experiencing in-flight motions such as turbulence. Fifty people were tested in different virtual environments varying in their context (virtual airplane versus magic carpet ride over tropical islands) and the way the physical motion was incorporated into the virtual world (matching visual and auditory stimuli versus no incorporation). Participants were subjected to three brief periods of turbulent motions realized with a motion simulator. Physiological signals (postural stability, heart rate and skin conductance) as well as subjective experiences (sickness and presence questionnaires) were measured. None of our participants experienced severe motion sickness during the experiment, and although there were only small differences between conditions, we found indications that it is beneficial for both well-being and presence to choose a virtual environment in which turbulent motions are plausible and perceived as part of the scenario. We therefore conclude that brief exposure to turbulent motions does not make participants sick.

 

Perceiving Mass in Mixed Reality through Pseudo-Haptic Rendering of Newton's Third Law Short Paper

Paul Issartel, Florimond Guéniat, Sabine Coquillart, Mehdi Ammi

Abstract: In mixed reality, real objects can be used to interact with virtual objects. However, unlike in the real world, real objects do not encounter any opposite reaction force when pushing against virtual objects. The lack of reaction force during manipulation prevents users from perceiving the mass of virtual objects. Although this could be addressed by equipping real objects with force-feedback devices, such a solution remains complex and impractical. In this work, we present a technique to produce an illusion of mass without any active force-feedback mechanism. This is achieved by simulating the effects of this reaction force in a purely visual way. A first study demonstrates that our technique indeed allows users to differentiate light virtual objects from heavy virtual objects. In addition, it shows that the illusion is immediately effective, with no prior training. In a second study, we measure the smallest mass difference (just-noticeable difference, JND) that can be perceived with this technique. The effectiveness and ease of implementation of our solution provide an opportunity to enhance mixed reality interaction at no additional cost.
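
As a rough illustration of the pseudo-haptic idea, the sketch below scales the visual motion of the pushing object's on-screen proxy by the virtual object's mass, so that heavier objects yield less to the same real push; the mapping and constants are illustrative assumptions, not the authors' rendering model.

```python
def visual_offset(real_displacement_m, virtual_mass_kg, stiffness=1.0):
    """Return how far the on-screen proxy of the real object appears to move.

    real_displacement_m : how far the tracked real object pushed into the virtual one
    virtual_mass_kg     : mass assigned to the virtual object
    stiffness           : hypothetical tuning constant
    """
    # Heavier virtual objects yield less, so the proxy lags further behind reality,
    # which is read as a stronger opposing (reaction) force.
    ratio = stiffness / (stiffness + virtual_mass_kg)
    return real_displacement_m * ratio

# The same 5 cm real push moves a light object visibly farther than a heavy one.
print(visual_offset(0.05, virtual_mass_kg=0.2))   # light object
print(visual_offset(0.05, virtual_mass_kg=5.0))   # heavy object
```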
 
 
User Cohabitation in Multi-stereoscopic Immersive Virtual Environment for Individual Navigation Tasks Short Paper

Weiya Chen, Nicolas Ladeveze, Céline Clavel, Daniel Mestre, Patrick Bourdot

Abstract: In a multi-stereoscopic immersive system, several users sharing the same restricted workspace, e.g. a CAVE, may need to perform independent navigation to achieve loosely coupled collaboration tasks in a complex scenario. In this context, a proper navigation paradigm should provide users with both efficient control of virtual navigation and a guarantee of safety in the real workspace, relative to the display system and to each other. To this end, we propose several alterations of the human joystick metaphor, introducing implicit adaptive control to allow safe individual navigation for multiple users. We conducted a user study with an object-finding task in a double-stereoscopic CAVE-like system to evaluate both users' navigation performance in the virtual world and their behavior in the real workspace under different conditions. The results highlight that the improved paradigm allows two users to navigate independently despite physical system limitations.

 

16:00 - 16:55 | 3D Interaction

 

Session Chair: Gerd Bruder, Universität Hamburg, Germany

 

3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint Long Paper

Youngkyoon Jang, Seung-Tak Noh, Hyung Jin Chang, Tae-Kyun Kim, Woontack Woo

Abstract: In this paper we present a novel framework for simultaneous detection of click action and estimation of occluded fingertip positions from egocentric viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine resolution level of a bare hand-based interaction with virtual objects in egocentric viewpoint. Our contributions include: (i) a rotation and translation invariant finger clicking action and position estimation using the combination of 2D image-based fingertip detection with 3D hand posture estimation in egocentric viewpoint. (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework. We also present (iii) a selection process utilizing the proposed clicking action detection and position estimation in an arm reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions in the process of selecting objects in AR/VR space whilst wearing an egocentric-depth camera-attached HMD.

 

3DTouch: A wearable 3D input device for 3D applications Short Paper

Anh Nguyen, Amy Banic

Abstract: 3D applications appear in every corner of life in the current technology era. There is a need for a ubiquitous 3D input device that works with many different platforms, from head-mounted displays (HMDs) to mobile touch devices, 3DTVs, and even Cave Automatic Virtual Environments. We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the missing gap of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. We envision that modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on. With 3DTouch, we attempt to bring 3D applications a step closer to users.
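
As a sketch of the relative positioning idea, the code below accumulates a 3D position by rotating the optical sensor's 2D displacement into the world frame using an IMU-derived orientation; the API and data layout are assumptions, not the 3DTouch implementation.

```python
import numpy as np

def update_position(position_m, laser_delta_m, surface_rotation):
    """Integrate one relative-positioning step.

    position_m       : current 3D position estimate (numpy array of shape (3,))
    laser_delta_m    : (dx, dy) motion reported by the optical laser sensor
    surface_rotation : 3x3 rotation matrix of the fingertip/surface frame,
                       assumed to come from the 9-DOF IMU (hypothetical source)
    """
    delta_local = np.array([laser_delta_m[0], laser_delta_m[1], 0.0])
    return position_m + surface_rotation @ delta_local

# Example: a 2 mm swipe on a surface whose frame equals the world frame.
print(update_position(np.zeros(3), (0.002, 0.0), np.eye(3)))
```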

 

Human-Scale Passive Haptic Feedback for Augmenting Interaction and Perception in Virtual Environments Short Paper

Merwan Achibet, Adrien Girard, Maud Marchal, Anatole Lécuyer, Anthony Talvas

Abstract: Haptic feedback is known to improve 3D interaction in virtual environments but current haptic interfaces remain complex and tailored to desktop interaction. In this paper, we introduce the 'Elastic-Arm', a novel approach for incorporating haptic feedback in immersive virtual environments in a simple and cost-effective way. The Elastic-Arm is based on a body-mounted elastic armature that links the user's hand to her shoulder. As a result, a progressive resistance force is perceived when extending the arm. This haptic feedback can be incorporated with various 3D interaction techniques and we illustrate the possibilities offered by our system through several use cases based on well-known examples such as the Bubble technique, Redirected Touching and pseudo-haptics. These illustrative use cases provide users with haptic feedback during selection and navigation tasks but they also enhance their perception of the virtual environment. Taken together, these examples suggest that the Elastic-Arm can be transposed in numerous applications and with various 3D interaction metaphors in which a mobile haptic feedback can be beneficial. It could also pave the way for the design of new interaction techniques based on 'human-scale' egocentric haptic feedback.

 

 

Thursday March 26th

 

8:30 - 10:00 | Training and Virtual Humans

 

Session Chair: Luciana Nedel, Federal University of Rio Grande do Sul, Brazil

 

Virtual Training: Learning Transfer of Assembly Tasks Presentation of Previously Published TVCG Paper

Patrick Carlson, Alicia Peters, Stephen Gilbert, Judy M Vance and Andy Luse

Abstract: In training assembly workers in a factory, there are often barriers such as cost and lost productivity due to shutdown. The use of virtual reality (VR) training has the potential to reduce these costs. This research compares virtual bimanual haptic training with traditional physical training and their effectiveness for learning transfer. In a mixed experimental design, participants were assigned to either virtual or physical training and trained by assembling a wooden burr puzzle as many times as possible during a twenty-minute time period. After training, participants were tested using the physical puzzle and were retested again after two weeks. All participants were trained using brightly colored puzzle pieces. To examine the effect of color, testing involved the assembly of colored physical parts and natural wood-colored physical pieces. Spatial ability, as measured using a mental rotation test, was shown to correlate with the number of assemblies participants were able to complete during training. While physical training outperformed virtual training, after two weeks the virtually trained participants actually improved their test assembly times. The results suggest that the color of the puzzle pieces helped the virtually trained participants in remembering the assembly process.
 

 

Effects of Field of View and Visual Complexity on Virtual Reality Training Effectiveness for a Visual Scanning Task Presentation of Previously Published TVCG Paper

Eric Ragan, Doug Bowman, Regis Kopper, Cheryl Stinson, Siroberto Scerbo and Ryan McMahan

Abstract: Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; higher field of view led to better performance and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.

 

Teaming Up With Virtual Humans: How Other People Change Our Perceptions of and Behavior with Virtual Teammates Long Paper

Andrew Robb, Andrew Cordar, Samsun Lampotang, Casey White, Adam Wendling, Benjamin Lok

Abstract: In this paper we present a study exploring whether the physical presence of another human changes how people perceive and behave with virtual teammates. We conducted a study (n = 69) in which nurses worked with a simulated health care team to prepare a patient for surgery. The agency of participants' teammates was varied between conditions; participants either worked with a virtual surgeon and a virtual anesthesiologist, a human confederate playing a surgeon and a virtual anesthesiologist, or a virtual surgeon and a human confederate playing an anesthesiologist. While participants perceived the human confederates to have more social presence (p < 0.01), participants did not preferentially agree with their human team members. We also observed an interaction effect between agency and behavioral realism. Participants experienced less social presence from the virtual anesthesiologist, whose behavior was less in line with participants' expectations, when a human surgeon was present.

 

Touch Sensing on Non-Parametric Rear-Projection Surfaces: A Physical-Virtual Head for Hands-On Healthcare Training Short Paper

Jason Hochreiter, Salam Daher, Arjun Nagendran, Laura Gonzalez, Greg Welch

Abstract: We demonstrate a generalizable method for unified multitouch detection and response on a human head-shaped surface with a rear-projection animated 3D face. The method helps achieve hands-on touch-sensitive training with dynamic physical-virtual patient behavior. The method, which is generalizable to other non-parametric rear-projection surfaces, requires one or more infrared (IR) cameras, one or more projectors, IR light sources, and a rear-projection surface. IR light reflected off of human fingers is captured by cameras with matched IR pass filters, allowing for the localization of multiple finger touch events. These events are tightly coupled with the rendering system to produce auditory and visual responses on the animated face displayed using the projector(s), resulting in a responsive, interactive experience. We illustrate the applicability of our physical prototype in a medical training scenario.
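
A minimal sketch of IR-based multitouch detection of the kind described, using simple thresholding and connected components; the threshold, size filter, and use of SciPy are illustrative assumptions, not the paper's calibrated pipeline.

```python
import numpy as np
from scipy import ndimage

def detect_touches(ir_frame, threshold=200, min_pixels=20):
    """Return (x, y) centroids of bright blobs in a single IR camera frame.

    Fingers reflecting the IR illumination appear as bright connected regions;
    regions smaller than min_pixels are discarded as noise.
    """
    mask = ir_frame >= threshold
    labels, count = ndimage.label(mask)
    touches = []
    for label_id in range(1, count + 1):
        ys, xs = np.nonzero(labels == label_id)
        if xs.size >= min_pixels:
            touches.append((xs.mean(), ys.mean()))
    return touches
```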

 

10:30 - 12:05 | Scouting Virtual Worlds                                

 

Session Chair: Eric Hodgson, Miami University, United States

 

Going through, going around: a study on individual avoidance of groups Long Paper

Julien Bruneau, Anne-Hélène Olivier, Julien Pettré

Abstract: When avoiding a group, a walker has two possibilities: either he goes through it or around it. Going through very dense groups or around huge ones would not seem natural and could break any sense of presence in a virtual environment. This paper aims to enable crowd simulators to handle such situations correctly. To this end, we need to understand how real humans decide to go through or around groups. As a first hypothesis, we apply the Principle of Minimum Energy (PME) to different group sizes and densities. According to this principle, a walker should go around small and dense groups whereas he should go through large and sparse groups. Such a principle has already been used for crowd simulation; the novelty here is to apply it to decide on a global avoidance strategy instead of local adaptations only. Our study quantifies decision thresholds. However, PME leaves some inconclusive situations in which the two solution paths have similar energetic costs. In a second part, we propose an experiment to corroborate PME decision thresholds with real observations. As controlling the factors of an experiment with many people is extremely hard, we propose to use Virtual Reality as a new method to observe human behavior. This work represents the first crowd simulation algorithm component directly designed from a VR-based study. We also consider the role of secondary factors in inconclusive situations. We show the influence of the group appearance and the direction of relative motion on the decision process. Finally, we draw some guidelines for integrating our conclusions into existing crowd simulators and show an example of such an integration. We evaluate the achieved improvements.
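
The following sketch shows the kind of energy comparison a PME-based decision could make when choosing between crossing and circumventing a group; the cost proxies and constants are purely illustrative assumptions, not the thresholds measured in the study.

```python
def choose_avoidance_strategy(group_radius_m, group_density, detour_extra_m,
                              crossing_penalty=2.0):
    """Return 'through' or 'around' by comparing crude energetic cost proxies.

    group_radius_m   : approximate radius of the group
    group_density    : people per square meter inside the group
    detour_extra_m   : additional path length required to walk around the group
    crossing_penalty : hypothetical cost per meter and unit density of weaving through
    """
    cost_around = detour_extra_m
    cost_through = 2.0 * group_radius_m * group_density * crossing_penalty
    return "through" if cost_through < cost_around else "around"

# A small, dense group favors going around; a large, sparse one favors crossing.
print(choose_avoidance_strategy(1.0, 2.0, 6.0))    # around
print(choose_avoidance_strategy(4.0, 0.3, 12.0))   # through
```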

 

Virtual Proxemics: Locomotion in the Presence of Obstacles in Large Immersive Projection Environments Short Paper

Fernando Argelaguet Sanz, Anne-Hélène Olivier, Gerd Bruder, Julien Pettré, Anatole Lécuyer

Abstract: In this paper, we investigate obstacle avoidance behavior during real walking in a large immersive projection setup. We analyze the walking behavior of users when avoiding real and virtual static obstacles. In order to generalize our study, we consider both anthropomorphic and inanimate objects, each having its virtual and real counterparts. The results showed that users exhibit different locomotion behaviors in the presence of real and virtual obstacles, and in the presence of anthropomorphic and inanimate objects. More precisely, the results showed a decrease in walking speed as well as an increase in the clearance distance (i.e., the minimal distance between the walker and the obstacle) when facing virtual obstacles compared to real ones. Moreover, our results suggest that users act differently due to their perception of the obstacle: users keep a greater distance when the obstacle is anthropomorphic compared to an inanimate object, and when the anthropomorphic obstacle is oriented in profile compared to a frontal position. We discuss implications for future large shared immersive projection spaces.

 

The Effect of Visual Display Properties and Gain Presentation Mode on the Perceived Naturalness of Virtual Walking Speeds Short Paper

Niels Christian Nilsson, Stefania Serafin, Rolf Nordahl

Abstract: Individuals tend to find realistic walking speeds too slow when relying on treadmill walking or Walking-In-Place (WIP) techniques for virtual travel. This paper details three studies investigating the effects of visual display properties and gain presentation mode on the perceived naturalness of virtual walking speeds: The first study compared three different degrees of peripheral occlusion; the second study compared three different degrees of perceptual distortion produced by varying the geometric field of view (GFOV); and the third study compared three different ways of presenting visual gains. All three studies compared treadmill walking and WIP locomotion. The first study revealed no significant main effects of peripheral occlusion. The second study revealed a significant main effect of GFOV, suggesting that the GFOV size may be inversely proportional to the degree of underestimation of the visual speed. The third study found a significant main effect of gain presentation mode. Allowing participants to interactively adjust the gain led to a smaller range of perceptually natural gains and this approach was significantly faster. However, the efficiency may come at the expense of confidence. Generally the lower and upper bounds of the perceptually natural speeds were higher for treadmill walking than WIP. However, not all differences were statistically significant.

 

Applying Latency to Half of a Self-Avatar's Body to Change Real Walking Patterns Short Paper

Gayani Samaraweera, Alex Perdomo, John Quarles

Abstract: Latency (i.e., time delay) in a virtual environment is known to disrupt user performance and presence, and to induce simulator sickness. However, can we utilize the effects caused by experiencing latency to benefit virtual rehabilitation technologies? We investigate this question by conducting an experiment aimed at altering gait by introducing latency applied to one side of a self-avatar viewed in a front-facing mirror. This work was motivated by previous findings in which participants altered their gait with increasing latency, even when they failed to notice latencies as high as 150 ms or 225 ms. In this paper, we present the results of a study that applies this novel technique to average healthy persons (i.e., to demonstrate the feasibility of the approach before applying it to persons with disabilities). The results indicate a tendency to create asymmetric gait in persons with symmetric gait when latency is applied to one side of their self-avatar. Thus, the study shows the potential of applying one-sided latency to a self-avatar, which could be used to develop asymmetric gait rehabilitation techniques.
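
As an illustration, one-sided latency of the kind described could be simulated by buffering tracked joint poses and rendering one body side from older data; the frame count and joint naming below are assumptions, not the authors' implementation (about 13 frames corresponds to roughly 150 ms at 90 fps).

```python
from collections import deque

class OneSidedLatency:
    """Delay the rendered joints of one body side by a fixed number of frames."""

    def __init__(self, frames_delay=13):
        self.buffer = deque(maxlen=frames_delay + 1)

    def apply(self, joint_poses):
        """joint_poses: dict joint_name -> pose; names starting with 'left_' are delayed."""
        self.buffer.append(dict(joint_poses))
        oldest = self.buffer[0]                    # oldest buffered frame available
        delayed = dict(joint_poses)
        for name in joint_poses:
            if name.startswith("left_"):
                delayed[name] = oldest[name]       # left side rendered with old data
        return delayed
```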

 

Cognitive Resource Demands of Redirected Walking Long Paper

Gerd Bruder, Paul Lubos, Frank Steinicke

Abstract: Redirected walking allows users to walk through a large-scale immersive virtual environment (IVE) while physically remaining in a reasonably small workspace. Therefore, manipulations are applied to virtual camera motions so that the user's self-motion in the virtual world differs from movements in the real world. Previous work found that the human perceptual system tolerates a certain amount of inconsistency between proprioceptive, vestibular and visual sensation in IVEs, and even compensates for slight discrepancies with recalibrated motor commands. Experiments showed that users are not able to detect an inconsistency if their physical path is bent with a radius of at least 22 meters during virtual straightforward movements. If redirected walking is applied in a smaller workspace, manipulations become noticeable, but users are still able to move through a potentially infinitely large virtual world by walking. For this semi-natural form of locomotion, the question arises if such manipulations impose cognitive demands on the user, which may compete with other tasks in IVEs for finite cognitive resources. In this article we present an experiment in which we analyze the mutual influence between redirected walking and verbal as well as spatial working memory tasks using a dual-tasking method. The results show an influence of redirected walking on verbal as well as spatial working memory tasks, and we also found an effect of cognitive tasks on walking behavior. We discuss the implications and provide guidelines for using redirected walking in virtual reality laboratories.
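
For reference, the curvature manipulation mentioned in the abstract can be expressed as a small yaw offset per distance walked; the sketch below uses the cited 22 m radius as the detection threshold, with hypothetical function names.

```python
import math

MIN_RADIUS_M = 22.0   # curvature radius reported as undetectable in prior work

def curvature_yaw_offset_deg(step_distance_m, radius_m=MIN_RADIUS_M):
    """Virtual yaw (degrees) injected while the user walks straight ahead.

    Bending a physical path of length d onto an arc of radius r corresponds to
    a heading change of d / r radians, applied gradually to the virtual camera.
    """
    return math.degrees(step_distance_m / radius_m)

# Over a 5 m virtual straight walk the physical path is bent by about 13 degrees.
print(curvature_yaw_offset_deg(5.0))
```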

 

 

 

13:45 - 15:00 | HMD Calibration                               

 

Session Chair: Holger Regenbrecht, University of Otago, New Zealand

 

Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays Long Paper

Yuta Itoh, Gudrun Klinker

Abstract: A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMDs) is to project 3D information correctly into the current viewpoint of the user -- more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors -- the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

 

Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays Long Paper

Alexander Plopski, Yuta Itoh, Christian Nitschke, Kiyoshi Kiyokawa, Gudrun Klinker, Haruo Takemura

Abstract: In recent years optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices with new models and applications being released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization. Examples include maintenance, training and medical tasks, as the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD-screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely support practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose the combination of a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. With this, we also obtain the gaze direction, which allows for instantaneous uncalibrated eye gaze tracking, without the need for additional hardware and complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye pose estimation, which results in an improved and practical calibration with a largely improved distribution of projection error.

 

Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique Long Paper

Kenneth Moser, Yuta Itoh, Kohei Oshima, Edward Swan, Gudrun Klinker, Christian Sandor

Abstract: With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a present need for robust, uncomplicated, and automatic calibration methods suited for non-expert users. This work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject-provided quality values and error between perceived and absolute registration coordinates. Our results show that all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects to be closer than intended. Surprisingly, the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall. User-assessed quality values were also the highest for Recycled INDICA, particularly when objects were shown at a distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a potential hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude by discussing the further need for examining INDICA calibration in binocular HMD systems, and the present possibility of creating a closed-loop continuous calibration method for OST Augmented Reality.

 

 

 

15:30 - 16:30 | Displays                              

 

Session Chair: Hiroyuki Kajimoto, The University of Electro-Communications, Japan

 

Extended Depth-of-Field Projector by Fast Focal Sweep Projection Long Paper

Daisuke Iwai, Shoichiro Mihara, Kosuke Sato

Abstract: A simple and cost-efficient method for extending a projector's depth-of-field (DOF) is proposed. By leveraging liquid lens technology, we can periodically modulate the focal length of a projector at a frequency that is higher than the critical flicker fusion (CFF) frequency. Fast periodic focal length modulation results in forward and backward sweeping of focusing distance. Fast focal sweep projection makes the point spread function (PSF) of each projected pixel integrated over a sweep period (IPSF; integrated PSF) nearly invariant to the distance from the projector to the projection surface as long as it is positioned within sweep range. This modulation is not perceivable by human observers. Once we compensate projection images for the IPSF, the projected results can be focused at any point within the range. Consequently, the proposed method requires only a single offline PSF measurement; thus, it is an open-loop process. We have proved the approximate invariance of the projector's IPSF both numerically and experimentally. Through experiments using a prototype system, we have confirmed that the image quality of the proposed method is superior to that of normal projection with fixed focal length. In addition, we demonstrate that a structured light pattern projection technique using the proposed method can measure the shape of an object with large depth variances more accurately than normal projection techniques.
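
One plausible way to compensate projection images for a measured IPSF is frequency-domain deconvolution; the sketch below uses a Wiener filter as an illustrative stand-in and is not necessarily the compensation used in the paper.

```python
import numpy as np

def precompensate(image, ipsf, k=0.01):
    """Pre-compensate a grayscale projection image for a measured IPSF.

    image : 2D array in [0, 1] to be projected
    ipsf  : 2D array, measured integrated PSF, assumed depth-invariant over the
            sweep range and stored with its peak at the array origin
    k     : regularization constant (hypothetical tuning value)
    """
    H = np.fft.fft2(ipsf, s=image.shape)
    I = np.fft.fft2(image)
    # Wiener deconvolution: boost frequencies the IPSF attenuates, limited by k.
    compensated = np.real(np.fft.ifft2(I * np.conj(H) / (np.abs(H) ** 2 + k)))
    return np.clip(compensated, 0.0, 1.0)
```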

 

Robust High-speed Tracking against Illumination Changes for Dynamic Projection Mapping Short Paper

Tomohiro Sueishi, Hiromasa Oku, Masatoshi Ishikawa

Abstract: Dynamic Projection Mapping, projection-based AR that keeps content aligned on a moving object by means of a high-speed optical-axis controller using rotational mirrors, involves a trade-off between the stability of high-speed tracking and high visibility for a variety of projection content. In this paper, we realize a system that provides robust, marker-free high-speed tracking that is resilient to illumination changes, including the projected images themselves, by introducing a retroreflective background combined with the optical-axis controller for Dynamic Projection Mapping. Low-intensity episcopic light is projected together with the Projection Mapping content, and the light reflected from the background is sufficient for high-speed cameras but nearly invisible to observers. In addition, we introduce adaptive windows and peripheral weighted erosion to maintain accurate high-speed tracking. Under low-light conditions, we examined the visual performance of diffuse reflection and retroreflection from both camera and observer viewpoints. We evaluated stability relative to illumination and to disturbance caused by non-target objects. Dynamic Projection Mapping with partially well-lit content in a low-intensity light environment is realized by our proposed system.
 
 

A Distributed Memory Hierarchy and Data Management for Interactive Scene Navigation and Modification on Tiled Display Walls Presentation of Previously Published TVCG Paper

Duy-Quoc Lai, Behzad Sajadi, Shan Jiang, Aditi Majumder, M. Gopi

Abstract: Simultaneous modification and navigation of massive 3D models are difficult because repeated data edits affect the data layout and coherency on a secondary storage, which in turn affect the interactive out-of-core rendering performance. In this paper, we propose a novel approach for distributed data management for simultaneous interactive navigation and modification of massive 3D data using the readily available infrastructure of a tiled display. Tiled multi-displays, projection or LCD panel based, driven by a PC cluster, can be viewed as a cluster of storage-compute-display (SCD) nodes. Given a cluster of SCD node infrastructure, we first propose a distributed memory hierarchy for interactive rendering applications. Second, in order to further reduce the latency in such applications, we propose a new data partitioning approach for distributed storage among the SCD nodes that reduces the variance in the data load across the SCD nodes. Our data distribution method takes in a data set of any size, and reorganizes it into smaller partitions, and stores it across the multiple SCD nodes. These nodes store, manage, and coordinate data with other SCD nodes to simultaneously achieve interactive navigation and modification. Specifically, the data is not duplicated across these distributed secondary storage devices. In addition, coherency in data access, due to screen-space adjacency of adjacent displays in the tile, as well as object space adjacency of the data sets, is well leveraged in the design of the data management technique. Empirical evaluation on two large data sets, with different data density distribution, demonstrates that the proposed data management approach achieves superior performance over alternative state-of-the-art methods.

 

 

Friday March 27th

 

8:30 - 9:45 | Simulation and Rendering                               

 

Session Chair: Mathias Harders, University of Innsbruck, Austria

 

WAVE: Interactive Wave-based Sound Propagation for Virtual Environments Long Paper

Ravish Mehra, Atul Rungta, Abhinav Golas, Ming Lin, Dinesh Manocha

Abstract: We present an interactive wave-based sound propagation system that generates accurate, realistic sound in virtual environments for dynamic (moving) sources and listeners. We propose a novel algorithm to accurately solve the wave equation for dynamic sources and listeners using a combination of precomputation techniques and GPU-based runtime evaluation. Our system can handle large environments typically used in VR applications, compute spatial sound corresponding to listener's motion (including head tracking) and handle both omnidirectional and directional sources, all at interactive rates. As compared to prior wave-based techniques applied to large scenes with moving sources, we observe significant improvement in runtime memory. The overall sound-propagation and rendering system has been integrated with the Half-Life 2 game engine, Oculus-Rift head-mounted display, and the Xbox game controller to enable users to experience high-quality acoustic effects (e.g., amplification, diffraction low-passing, high-order scattering) and spatial audio, based on their interactions in the VR application. We provide the results of preliminary user evaluations, conducted to study the impact of wave-based acoustic effects and spatial audio on users' navigation performance in virtual environments.

 

Fast physically accurate rendering of multimodal signatures of distributed fracture in heterogeneous materials Long Paper

Yon Visell

Abstract: This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
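
To illustrate the inverse transform sampling idea the abstract builds on, the sketch below draws a time-domain jump process with exponentially distributed waiting times and power-law distributed jump sizes; the distributions and parameters are illustrative assumptions, not the paper's lattice model.

```python
import math
import random

def sample_jump_process(duration_s, rate_hz=200.0, alpha=2.0, x_min=1e-4):
    """Return a list of (time, jump size) events representing stress drops.

    Waiting times use the inverse transform of the exponential CDF,
    t = -ln(1 - u) / rate; jump sizes use the inverse transform of a power-law
    CDF with exponent alpha, x = x_min * (1 - u) ** (-1 / (alpha - 1)).
    """
    t, events = 0.0, []
    while True:
        t += -math.log(1.0 - random.random()) / rate_hz
        if t > duration_s:
            break
        size = x_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
        events.append((t, size))
    return events

# One second of simulated micro-fracture events.
print(len(sample_jump_process(1.0)))
```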

 

Aggregate Constraints for Virtual Manipulation with Soft Fingers Long Paper

Anthony Talvas, Maud Marchal, Christian Duriez, Miguel A. Otaduy

Abstract: Interactive dexterous manipulation of virtual objects remains a complex challenge that requires both appropriate hand models and accurate physically-based simulation of interactions. In this paper, we propose an approach based on novel aggregate constraints for simulating dexterous grasping using soft fingers. Our approach aims at improving the computation of contact mechanics when many contact points are involved, by aggregating the multiple contact constraints into a minimal set of constraints. We also introduce a method for non-uniform pressure distribution over the contact surface, to adapt the response when touching sharp edges. We use the Coulomb-Contensou friction model to efficiently simulate tangential and torsional friction. We show through different use cases that our aggregate constraint formulation is well-suited for interactively simulating dexterous manipulation of virtual objects with soft fingers, and that it efficiently reduces the computation time of constraint solving.

 

13:30 - 14:15 | Mobile Devices          

 

Session Chair: Ferran Argelaguet, Inria, France

 

Mixed Reality Simulation with Physical Mobile Display Devices Short Paper

Mathieu Rodrigue, Drew Waranis, Tim Wood, Tobias Höllerer

Abstract: This paper presents the design and implementation of a system for simulating mixed reality in setups combining mobile devices and large backdrop displays. With a mixed reality simulator, one can perform usability studies and evaluate mixed reality systems while minimizing confounding variables. This paper describes how mobile device AR design factors can be flexibly and systematically explored without sacrificing the touch and direct unobstructed manipulation of a physical personal MR display. First, we describe general principles to consider when implementing a mixed reality simulator, enumerating design factors. Then, we present our implementation which utilizes personal mobile display devices in conjunction with a large surround-view display environment. Standing in the center of the display, a user may direct a mobile device, such as a tablet or head-mounted display, to a portion of the scene, which affords them a potentially annotated view of the area of interest. The user may employ gesture or touch screen interaction on a simulated augmented camera feed, as they typically would in video-see-through mixed reality applications. We present calibration and system performance results and illustrate our system's flexibility by presenting the design of three usability evaluation scenarios.

 

Object Impersonation: Towards Effective Interaction in Tablet- and HMD-Based Hybrid Virtual Environments Short Paper

Jia Wang, Robert Lindeman

Abstract: In virtual reality, hybrid virtual environment (HVE) systems provide the immersed user with multiple interactive representations of the virtual world, and can be effectively used for 3D interaction tasks with highly diverse requirements. We present a new HVE metaphor called Object Impersonation that allows the user to not only manipulate a virtual object from outside, but also become the object, and maneuver from inside. This approach blurs the line between travel and object manipulation, leading to efficient cross-task interaction in various task scenarios. Using a tablet- and HMD-based HVE system, two different designs of Object Impersonation were implemented, and compared to a traditional, non-hybrid 3D interface for three different object manipulation tasks. Results indicate improved task performance and enhanced user experience with the added orientation control from the object's point of view. However, they also revealed higher cognitive overhead to attend to both interaction contexts, especially without sufficient reference cues in the virtual environment.

 

Mobile User Interfaces for Efficient Verification of Holograms Short Paper

Andreas Hartl, Jens Grubert, Christian Reinbacher, Clemens Arth, Dieter Schmalstieg

Abstract: Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, view-dependent elements such as holograms are interesting, but the expertise of individuals performing the task varies greatly. Augmented Reality systems can provide all relevant information on standard mobile devices. However, hologram verification still takes a long time and places considerable load on the user. We aim to address this drawback by first presenting a work flow for recording and automatic matching of hologram patches. Several user interfaces for hologram verification are presented, aiming to noticeably reduce verification time. We evaluate the most promising interfaces in a user study with prototype applications running on off-the-shelf hardware. Our results indicate that there is a significant difference in capture time between interfaces but that users do not prefer the fastest interface.

 

14:15 - 14:45 | AR Lighting         

 

Session Chair: Daisuke Iwai, Osaka University, Japan

 

Image-Space Illumination for Augmented Reality in Dynamic Environments Short Paper

Lukas Gruber, Jonathan Ventura, Dieter Schmalstieg

Abstract: We present an efficient approach for probeless light estimation and coherent rendering of Augmented Reality in dynamic scenes. This approach can handle dynamically changing scene geometry and dynamically changing light sources in real time with a single mobile RGB-D sensor and without relying on an invasive light probe. We jointly filter both in-view dynamic geometry and outside-view static geometry. The resulting reconstruction provides the input for efficient global illumination computation in image space. We demonstrate that our approach can deliver state-of-the-art Augmented Reality rendering effects for scenes that are more scalable and more dynamic than previous work.

 

Light Field Projection for Lighting Reproduction Short Paper

Zhong Zhou, Tao Yu, Xiaofeng Qiu, Ruigang Yang, Qinping Zhao

Abstract: We propose a novel approach to generate 4D light field in the physical world for lighting reproduction. The light field is generated by projecting lighting images on a lens array. The lens array turns the projected images into a controlled anisotropic point light source array which can simulate the light field of a real scene. In terms of acquisition, we capture an array of light probe images from a real scene, based on which an incident light field is generated. The lens array and the projectors are geometric and photometrically calibrated, and an efficient resampling algorithm is developed to turn the incident light field into the images projected onto the lens array. The reproduced illumination, which allows per-ray lighting control, can produce realistic lighting result on real objects, avoiding the complex process of geometric and material modeling. We demonstrate the effectiveness of our approach with a prototype setup.