2017 IEEE VR, Los Angeles


Papers


Monday, March 20
  10:30am - 12:00pm   Plausibility, Emotions, and Ethics
  1:30pm - 2:40pm     Travel and Navigation
  4:00pm - 5:15pm     Visual Displays

Tuesday, March 21
  8:30am - 10:00am    360° Video Cinematic Experience
  8:30am - 10:00am    Extraordinary Environments and Abnormal Objects
  10:30am - 12:00pm   Haptics
  10:30am - 12:00pm   Walking Alone and Together
  1:30pm - 2:40pm     Touch and Vibrotactile Feedback
  4:00pm - 5:15pm     Acoustics and Auditory Displays

Wednesday, March 22
  8:30am - 10:00am    Avatars and Virtual Humans
  10:30am - 12:00pm   Motion Tracking and Capturing
  1:30pm - 3:00pm     Systems and Applications


 

Plausibility, Emotions, and Ethics

Monday, March 20, 10:30am - 12:00pm

Session Chair: Eric Hodgson

 

A Psychophysical Experiment Regarding Components of the Plausibility Illusion

TVCG

Richard Skarbez, Solène Neyret, Frederick P. Brooks Jr., Mel Slater, and Mary C. Whitton

Abstract: We report on the design and results of an experiment investigating factors influencing Slater's Plausibility Illusion (Psi) in virtual environments. Slater proposed Psi and Place Illusion (PI) as orthogonal components of virtual experience which contribute to realistic response in a VE. PI corresponds to the traditional conception of presence as "being there," so there exists a substantial body of previous research relating to PI, but very little relating to Psi. We developed this experiment to investigate the components of plausibility illusion using subjective matching techniques similar to those used in color science. Twenty-one participants each experienced a scenario with the highest level of coherence (the extent to which a scenario matches user expectations and is internally consistent), then in eight different trials chose transitions from lower-coherence to higher-coherence scenarios with the goal of matching the level of Psi they felt in the highest-coherence scenario. At each transition, participants could change one of the following coherence characteristics: the behavior of the other virtual humans in the environment, the behavior of their own body, the physical behavior of objects, or the appearance of the environment. Participants tended to choose improvements to the virtual body before any other improvements. This indicates that having an accurate and well-behaved representation of oneself in the virtual environment is the most important contributing factor to Psi. This study is the first to our knowledge to focus specifically on coherence factors in virtual environments.

 

The Plausibility of a String Quartet Performance in Virtual Reality

TVCG

Ilias Bergstrom, Antonio Sergio Azevedo, Panos Papiotis, Nuno Saldanha, and Mel Slater

Abstract: We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. ‘Plausibility’ refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, the musicians sometimes looked towards and followed the participant’s movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted the methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then, five times, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, and also probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
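
For readers unfamiliar with the matching methodology, the sketch below (not the authors' code) shows how a Markov transition matrix can be estimated from logged feature-upgrade transitions. The feature names follow the abstract; the sample log and the normalisation choice are illustrative assumptions.

```python
from collections import Counter
import numpy as np

# Hypothetical transition log: (last_upgraded_feature, next_upgraded_feature).
FEATURES = ["Gaze", "Spatialization", "Auralization", "Environment"]
log = [("Environment", "Gaze"), ("Gaze", "Auralization"),
       ("Environment", "Gaze"), ("Gaze", "Spatialization")]

counts = Counter(log)
index = {f: i for i, f in enumerate(FEATURES)}
T = np.zeros((len(FEATURES), len(FEATURES)))
for (src, dst), n in counts.items():
    T[index[src], index[dst]] = n

# Row-normalise to obtain empirical transition probabilities;
# rows with no observations are left at zero.
row_sums = T.sum(axis=1, keepdims=True)
P = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
print(P)
```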

 

Emotional Qualities of VR Space

VR Proceedings

Asma Naz, Regis Kopper, Ryan McMahan, and Mihai Nadin

Abstract: The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters affecting real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support new, minimalist lifestyles of occupants, defined as the neo-nomads, aligned with their work experience in the digital domain through the generation of emotional experiences of spaces. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities.

 

Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth

VR Proceedings

Erica Southgate, Shamus Smith, and Jill Scevak

Abstract: The increasing availability of intensely immersive virtual, augmented and mixed reality experiences using head-mounted displays (HMD) has prompted deliberations about the ethical implications of using such technology to resolve technical issues and explore the complex cognitive, behavioral and social dynamics of "human virtuality". However, little is known about the impact such immersive experiences will have on children (aged 0-18 years). This paper outlines perspectives on child development to present conceptual and practical frameworks for conducting ethical research with children using immersive HMD technologies. The paper addresses not only procedural ethics (gaining institutional approval) but also ethics-in-practice (on-going ethical decision-making).

 

Travel and Navigation

Monday, March 20, 1:30pm - 2:40pm

Session Chair: Tabitha Peck

 

Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality

VR Proceedings

Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric Ragan

Abstract: Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control that will work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so physically turning in a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we also use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers.
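
A minimal sketch of the two ingredients described above, under assumed values (the 2.5x amplification and 5 deg/s realignment rate are arbitrary choices, not the paper's parameters): physical yaw is amplified to cover the full virtual range, and while the user travels, an offset grows in the direction of the current head turn so that keeping the same virtual gaze direction draws the head back toward neutral.

```python
AMPLIFICATION = 2.5  # illustrative gain, not the paper's value

def virtual_yaw(physical_yaw_deg, offset_deg):
    """Amplified head rotation: a comfortable physical range covers 360 degrees."""
    return AMPLIFICATION * physical_yaw_deg + offset_deg

def guided_offset(offset_deg, physical_yaw_deg, moving, dt, rate_deg_per_s=5.0):
    """Guided rotation: during virtual movement, grow the offset toward the side
    the head is turned to; to keep looking at the same virtual direction, the
    user counter-rotates their head back toward the neutral pose."""
    if not moving or physical_yaw_deg == 0.0:
        return offset_deg
    step = rate_deg_per_s * dt
    return offset_deg + (step if physical_yaw_deg > 0 else -step)
```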

 

Automatic Speed and Direction Control along Constrained Navigation Paths

VR Proceedings

Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie Kaufman

Abstract: For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user's head during scene exploration. Utilizing head tracking to obtain the user's area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, abrogating the need for any navigational inputs from the user. Our techniques are applicable for any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observe the effect of automatic speed adjustment compared to traditional methods. We observed no negative impact from automatic navigation, and the users performed as well as with the manual navigation.
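
One plausible form of the head-orientation-driven speed adjustment (an illustration, not the authors' implementation): the fly-through slows down as the gaze direction deviates from the path tangent, giving the user time for off-axis examination. The speed bounds are assumed values.

```python
import numpy as np

def camera_speed(gaze_dir, path_dir, v_max=1.0, v_min=0.1):
    """Reduce fly-through speed as the gaze deviates from the path tangent.
    gaze_dir, path_dir: 3D direction vectors (need not be unit length)."""
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    path = path_dir / np.linalg.norm(path_dir)
    alignment = max(0.0, float(np.dot(gaze, path)))  # 1 = looking along the path
    return v_min + (v_max - v_min) * alignment
```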

 

Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation

TVCG Invited

Eric D. Ragan, Siroberto Scerbo, Felipe Bacim, and Doug A. Bowman

Abstract: Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user’s body and awareness of the CAVE’s physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.

 

Visual Displays

Monday, March 20, 4:00pm - 5:15pm

Session Chair: Laura Trutoiu

 

Wide Field of View Varifocal Near-Eye Display using See-Through Deformable Membrane Mirrors

TVCG

David Dunn, Cary Tippets, Kent Torell, Petr Kellnhofer, Kaan Aksit, Piotr Didyk, Karol Myszkowski, David Luebke, and Henry Fuchs

Abstract: Accommodative depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to a trade-off in the others. We tackle this problem by introducing an all-in-one solution – a new wide field of view gaze-tracked near-eye display for augmented reality applications. The key component of our solution is the use of a single see-through varifocal deformable membrane mirror for each eye reflecting a display. They are controlled by airtight cavities and change the effective focal power to present a virtual image at a target depth plane which is determined by the gaze tracker. The benefits of using the membranes include wide field of view (100° diagonal) and fast depth switching (from 20 cm to infinity within 300 ms). Our subjective experiment verifies the prototype and demonstrates its potential benefits for near-eye see-through displays.
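
As a rough illustration of the gaze-driven varifocal principle (not the authors' optical model), the optical power needed to place the virtual image of a near screen at the fixation distance can be sketched with simple vergence arithmetic; the 5 cm screen distance is an assumed value, and mirror folding and eye relief are ignored.

```python
def required_power_diopters(target_depth_m, screen_distance_m=0.05):
    """Optical power so a screen at screen_distance_m appears at target_depth_m.
    Pure vergence arithmetic; ignores eye relief and the folded mirror geometry."""
    return 1.0 / screen_distance_m - 1.0 / target_depth_m

# Switching the virtual image from 20 cm to optical infinity changes the
# required power by 1/0.2 = 5 diopters.
print(required_power_diopters(0.20), required_power_diopters(float("inf")))
```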

 

Efficient Hybrid Image Warping for High Frame-Rate Stereoscopic Rendering

TVCG

Andre Schollmeyer, Simon Schneegans, Stephan Beck, Anthony Steed, and Bernd Froehlich

Abstract: Modern virtual reality simulations require a constant high-frame rate from the rendering engine. They may also require very low latency and stereo images. Previous rendering engines for virtual reality applications have exploited spatial and temporal coherence by using image-warping to re-use previous frames or to render a stereo pair at lower cost than running the full render pipeline twice. However, these previous approaches have shown artifacts or have not scaled well with image size. We present a new image-warping algorithm that has several novel contributions: an adaptive grid generation algorithm for proxy geometry for image warping; a low-pass hole-filling algorithm to address un-occlusion; and support for transparent surfaces by efficiently ray casting transparent fragments stored in per-pixel linked lists of an A-Buffer. We evaluate our algorithm with a variety of challenging test cases. The results show that it achieves better quality image-warping than state-of-the-art techniques and that it can support transparent surfaces effectively. Finally, we show that our algorithm can achieve image warping at rates suitable for practical use in a variety of applications on modern virtual reality equipment.

 

The Problem of Persistence with Rotating Displays

TVCG

Matthew Regan and Gavin S. P. Miller

Abstract: Motion-to-photon latency causes images to sway from side to side in a VR/AR system, while display persistence causes smearing; both of these are undesirable artifacts. We show that once latency is reduced or eliminated, smearing due to display persistence becomes the dominant visual artifact, even with accurate tracker prediction. We investigate the human perceptual mechanisms responsible for this and we demonstrate a modified 3D rotation display controller architecture for driving a high-speed digital display which minimizes latency and persistence. We simulated it in software and built a testbench based on a very high frame rate (2880 fps 1-bit images) display system mounted on a mechanical rotation gantry which emulates display rotation during head rotation in an HMD.

 

360° Video Cinematic Experience

Tuesday, March 21, 8:30am - 10:00am

Session Chair: Gerd Bruder

 

MR360: Mixed Reality Rendering for 360° Panoramic Videos

TVCG

Taehyun Rhee, Lohit Petikam, Benjamin Allen, and Andrew Chalmers

Abstract: This paper presents a novel immersive system called MR360 that provides interactive mixed reality (MR) experiences using a conventional low dynamic range (LDR) 360-degree panoramic video (360-video) shown in head-mounted displays (HMDs). MR360 seamlessly composites 3D virtual objects into a live 360-video using the input panoramic video as the lighting source to illuminate the virtual objects. Image based lighting (IBL) is perceptually optimized to provide fast and believable results using the LDR 360-video as the lighting source. Regions of the most salient lights in the input panoramic video are detected to optimize the number of lights used to cast perceptible shadows. Then, the areas of the detected lights adjust the penumbra of the shadow to provide realistic soft shadows. Finally, our real-time differential rendering synthesizes illumination of the virtual 3D objects into the 360-video. MR360 provides the illusion of interacting with objects in a video, which are actually 3D virtual objects seamlessly composited into the background of the 360-video. MR360 was implemented in a commercial game engine and tested using various 360-videos. Since our MR360 pipeline does not require any pre-computation, it can synthesize an interactive MR scene using a live 360-video stream while providing realistic high performance rendering suitable for HMDs.
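
A minimal sketch in the spirit of the light-detection step described above (not the authors' pipeline): pick the brightest tiles of the LDR panorama as candidate shadow-casting lights. The tile size and light count are arbitrary assumptions.

```python
import numpy as np

def detect_lights(panorama_rgb, num_lights=4, block=32):
    """Return (x, y) pixel centres of the brightest block x block tiles of an
    LDR equirectangular frame. panorama_rgb: HxWx3 float array in [0, 1]."""
    lum = panorama_rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
    h, w = lum.shape
    hb, wb = h // block, w // block
    tiles = lum[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    top = np.argsort(tiles.ravel())[::-1][:num_lights]
    rows, cols = np.unravel_index(top, tiles.shape)
    return [(c * block + block // 2, r * block + block // 2) for r, c in zip(rows, cols)]
```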

 

6-DOF VR Videos with a Single 360-Camera

VR Proceedings

Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin

Abstract: Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base demanding immersive, full 3D VR experiences. While monoscopic 360-videos are among the most common content types for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views both with rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset, where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content.

 

Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video

VR Proceedings

Andrew MacQuarrie and Anthony Steed

Abstract: The proliferation of head-mounted displays (HMDs) in the market means that cinematic virtual reality (CVR) is an increasingly popular format. We explore several metrics that may indicate advantages and disadvantages of CVR compared to traditional viewing formats such as TV. We explored the consumption of panoramic videos in three different display systems: an HMD, a SurroundVideo+ (SV+), and a standard 16:9 TV. The SV+ display features a TV with projected peripheral content. A between-groups experiment of 63 participants was conducted, in which participants watched panoramic videos in one of these three display conditions. Aspects examined in the experiment were spatial awareness, narrative engagement, enjoyment, memory, fear, attention, and a viewer’s concern about missing something. Our results indicated that the HMD offered a significant benefit in terms of enjoyment and spatial awareness, and our SV+ display offered a significant improvement in enjoyment over traditional TV. We were unable to confirm the work of a previous study that showed incidental memory may be lower with an HMD than with a TV. Drawing attention and a viewer’s concern about missing something were also not significantly different between display conditions. It is clear that passive media viewing consists of a complex interplay of factors, such as the media itself, the characteristics of the display, as well as human aspects including perception and attention. While passive media viewing presents many challenges for evaluation, identifying a number of broadly applicable metrics will aid our understanding of these experiences, and allow the creation of better, more engaging CVR content and displays.

 

ScreenX: Public Immersive Theatres with Uniform Movie Viewing Experiences

TVCG Invited

Jungjin Lee, Sangwoo Lee, Younghui Kim, and Junyong Noh

Abstract: This paper introduces ScreenX, which is a novel movie viewing platform that enables ordinary movie theatres to become multi-projection movie theatres. This enables the general public to enjoy immersive viewing experiences. The left and right side walls are used to form surrounding screens. This surrounding display environment delivers a strong sense of immersion in general movie viewing. However, naïve display of the content on the side walls results in the appearance of distorted images according to the location of the viewer. In addition, the different dimensions in width, height, and depth among theatres may lead to different viewing experiences. Therefore, for successful deployment of this novel platform, an approach to providing similar movie viewing experiences across target theatres is presented. The proposed image representation model ensures minimum average distortion of the images displayed on the side walls when viewed from different locations. Furthermore, the proposed model assists with determining the appropriate variation of the content according to the diverse viewing environments of different theatres. The theatre suitability estimation method excludes outlier theatres that have extraordinary dimensions. In addition, the content production guidelines indicate appropriate regions to place scene elements for the side wall, depending on their importance. The experiments demonstrate that the proposed method improves the movie viewing experiences in ScreenX theatres. Finally, ScreenX and the proposed techniques are discussed with regard to various aspects and the research issues that are relevant to this movie viewing platform are summarized.

 

Extraordinary Environments and Abnormal Objects

Tuesday, March 21, 8:30am - 10:00am

Session Chair: Xubo Yang

 

A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables

VR Proceedings

Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau

Abstract: In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive and whether they would purchase non-standard (i.e. misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' level of "abnormality" that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that perform user studies within real shops, since fresh produce such as FaVs tend to rot rapidly, preventing studies from being repeated or run for a long time. In order to overcome those limitations, we created a virtual grocery store with a fresh FaVs section where 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented either "normal", "slightly misshaped", "misshaped" or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs whatever their deformity. Nevertheless, participants' perceptions of the quality of the FaV depend on the level of abnormality.

 

The Martian: Examining Human Physical Judgments Across Virtual Gravity Fields

TVCG

Tian Ye, Siyuan Qi, James Kubricht, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu

Abstract: This paper examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball to hit a target, trigger a ball to hit a target, predict the landing location of a projectile, and estimate the flight duration of a projectile. The first two experiments compared human behavior in the virtual environment with real-world performance reported in the literature. The last two experiments aimed to test the human ability to adapt to novel gravity fields by measuring their performance in trajectory prediction and time estimation tasks. The experiment results show that: 1) based on brief observation of a projectile’s initial trajectory, humans are accurate at predicting the landing location even under novel gravity fields, and 2) humans’ time estimation in a familiar earth environment fluctuates around the ground truth flight duration, although the time estimation in unknown gravity fields indicates a bias toward earth’s gravity.
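
The two prediction tasks above reduce to constant-acceleration ballistics. A worked sketch (drag ignored; not from the paper) of landing position and flight time under an arbitrary gravity value such as the Martian 3.71 m/s²:

```python
import math

def landing(p0, v0, g=3.71, ground_y=0.0):
    """Landing point and flight time of a projectile launched from p0=(x, y, z)
    with velocity v0=(vx, vy, vz) under downward gravity g (m/s^2), no drag."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    # Solve y0 + vy*t - 0.5*g*t^2 = ground_y for the positive root.
    a, b, c = -0.5 * g, vy, y0 - ground_y
    t = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return (x0 + vx * t, ground_y, z0 + vz * t), t

# The same throw stays airborne longer and lands farther under Martian gravity
# than under Earth gravity.
print(landing((0, 1.5, 0), (4, 3, 0), g=3.71))
print(landing((0, 1.5, 0), (4, 3, 0), g=9.81))
```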

 

Scaled Jump in Gravity-reduced Virtual Environments

TVCG

MyoungGon Kim, SungIk Cho, Tanh Quang Tran, Seong-Pil Kim, Ohung Kwon, and JungHyun Han

Abstract: The reduced gravity experienced on lunar or Martian surfaces can be simulated on the earth using a cable-driven system, where the cable lifts a person to reduce his or her weight. This paper presents a novel cable-driven system designed for the purpose. It is integrated with a head-mounted display (HMD) and a motion capture system. Focusing on jump motions within the system, this paper proposes a scaled jump and reports the experiments made for quantifying the extent to which a jump can be scaled without the discrepancy between physical and virtual jumps being noticed by the user. With the tolerable range of scaling computed from these experiments, an application, named retargeted jumping, is developed and tested for presence evaluation. The core techniques developed for the proposed system can be extended to develop extreme-sport simulators such as parasailing and skydiving.
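
A minimal sketch of the two quantities such a system needs (not the authors' implementation): the constant cable lift that makes a user of mass m feel a target gravity, and a scale factor applied to the tracked jump height before rendering it in the HMD. The 1.8x scale is an arbitrary illustration, not the tolerable range measured in the paper.

```python
EARTH_G = 9.81   # m/s^2

def cable_lift_force(mass_kg, target_g=1.62):
    """Upward cable force (N) so the net downward acceleration equals target_g
    (1.62 m/s^2 is roughly lunar gravity)."""
    return mass_kg * (EARTH_G - target_g)

def render_jump_height(tracked_height_m, scale=1.8):
    """Scaled jump: exaggerate the physically tracked jump height before
    displaying it, staying within the user's detection tolerance."""
    return tracked_height_m * scale
```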

 

Earthquake Safety Training through Virtual Drills

TVCG

Changyang Li, Wei Liang, Chris Quigley, Yibiao Zhao, and Lap-Fai Yu

Abstract: The recent popularity of consumer-grade virtual reality devices, such as the Oculus Rift and the HTC Vive, has enabled household users to experience highly immersive virtual environments. We take advantage of the commercial availability of these devices to provide an immersive and novel virtual reality training approach, designed to teach individuals how to survive earthquakes, in common indoor environments. Our approach makes use of virtual environments realistically populated with furniture objects for training. During training, a virtual earthquake is simulated. The user navigates in, and interacts with, the virtual environment to avoid getting hurt, while learning the observation and self-protection skills needed to survive an earthquake. We demonstrated our approach for common scene types such as offices, living rooms and dining rooms. To test the effectiveness of our approach, we conducted an evaluation by asking users to train in several rooms of a given scene type and then test in a new room of the same type. Evaluation results show that our virtual reality training approach is effective, with the participants who are trained by our approach performing better, on average, than those trained by alternative approaches in terms of the capabilities to avoid physical damage and to detect potentially dangerous objects.

 

Haptics

Tuesday, March 21, 10:30am - 12:00pm

Session Chair: Robert Lindeman

 

Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values

VR Proceedings

Mikel Sagardia and Thomas Hulin

Abstract: This work presents an evaluation study in which the effects of a penalty-based and a constraint-based haptic rendering algorithm on user performance and perception are analyzed. A total of N = 24 participants performed, in a within-subjects design, three variations of peg-in-hole tasks in a virtual environment after trials in an identically replicated real scenario as a reference. In addition to the two mentioned haptic rendering paradigms, two haptic devices were used, the HUG and a Sigma.7, and the force stiffness was also varied between the maximum and half the maximum value possible for each device. Both objective measures (time and trajectory, collision performance, and muscular effort) and subjective ratings (contact perception, ergonomics, and workload) were recorded and statistically analyzed. The results show that the constraint-based haptic rendering algorithm with a lower stiffness than the maximum possible yields the most realistic contact perception, while keeping the visual inter-penetration between the objects at roughly 15% of that caused by the penalty-based algorithm (i.e., not perceptible in many cases). This result is even more evident with the HUG, the haptic device with the highest force display capabilities, although user ratings point to the Sigma.7 as the device with the highest usability and lowest workload indicators. Altogether, the paper provides qualitative and quantitative guidelines for mapping properties of haptic algorithms and devices to user performance and perception.
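
For readers unfamiliar with the two rendering paradigms compared here, a schematic 1-D sketch (not the authors' implementation): a penalty force grows with interpenetration, while a constraint/god-object style proxy is kept on the surface. In 1-D the force magnitudes coincide and the visible difference is the non-penetrating proxy; in 3-D the proxy additionally slides along the constraint surface.

```python
def penalty_render(tool_pos, surface_pos, stiffness):
    """Penalty-based: force proportional to penetration depth (surface at
    surface_pos, inside below it); the displayed tool may visibly penetrate."""
    penetration = max(0.0, surface_pos - tool_pos)
    force = stiffness * penetration
    return force, tool_pos          # displayed position = real (penetrating) tool

def constraint_render(tool_pos, surface_pos, stiffness):
    """Constraint-based (god-object/proxy): a proxy is clamped to the surface,
    the force pulls the device toward the proxy, and the proxy is what is drawn."""
    proxy_pos = max(tool_pos, surface_pos)
    force = stiffness * (proxy_pos - tool_pos)
    return force, proxy_pos         # displayed position never penetrates
```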

 

VRRobot: Robot Actuated Props in an Infinite Virtual Environment

VR Proceedings

Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann

Abstract: We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and based on off-the-shelf components. A robotic arm moves physical props, dynamically matching pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept, the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants indicating promising results, and discuss the potential of our system.

 

Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion

VR Proceedings

Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer

Abstract: Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed to match notably the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigations in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts.
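
The core mapping described above is simple enough to sketch in a few lines; the gains and the device force limit below are illustrative assumptions, not the study's values.

```python
def haptic_motion_force(accel_long, accel_lat, k_long=0.8, k_lat=0.8, f_max=3.0):
    """Force (N) sent to the haptic device: proportional to the virtual vehicle's
    longitudinal and lateral acceleration (m/s^2), clamped to the device limit."""
    clamp = lambda v: max(-f_max, min(f_max, v))
    return clamp(k_long * accel_long), clamp(k_lat * accel_lat)
```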

 

Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality

TVCG

André Zenner and Antonio Krüger

Abstract: We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user’s perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user’s fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.

 

Walking Alone and Together

Tuesday, March 21, 10:30am - 12:00pm

Session Chair: Niels Christian Nilsson

 

Walking with virtual people: Evaluation of locomotion interfaces in dynamic environments

TVCG Invited

Anne-Hélène Olivier, Julien Bruneau, Richard Kulpa, and Julien Pettré

Abstract: Navigating in virtual environments requires using some locomotion interfaces, especially when the dimensions of the environments exceed the ones of the Virtual Reality system. Locomotion interfaces induce some biases both in the perception of self-motion and in the formation of virtual locomotion trajectories. These biases have been mostly evaluated in the context of static environments, and studies need to be revisited in the new context of populated environments where users interact with virtual characters. We focus on situations of collision avoidance between a real participant and a virtual character, and compare them to previous studies on real walkers. Our results show that, as in reality, the risk of future collision is accurately anticipated by participants, albeit with a delay. We also show that collision avoidance trajectories formed in VR have common properties with real ones, with some quantitative differences in avoidance distances. More generally, our evaluation demonstrates that reliable results can be obtained for qualitative analysis of small scale interactions in VR. We discuss these results in the perspective of a VR platform for large scale interaction applications, such as in a crowd, for which real data are difficult to gather.

 

An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces

VR Proceedings

Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg

Abstract: As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-on-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.

 

Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR

TVCG

Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke

Abstract: Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.
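
A minimal sketch of the bending-gain idea. One common convention defines the gain as the ratio of real to virtual curvature (the paper's exact convention and the measured detection thresholds should be taken from the article itself), and the example radii below are arbitrary.

```python
def bending_gain(r_virtual, r_real):
    """One convention: gain = real curvature / virtual curvature = r_virtual / r_real.
    A gain > 1 means the physical path is bent more strongly than the virtual one."""
    return r_virtual / r_real

def required_real_radius(r_virtual, gain):
    """Physical radius needed to render a virtual curve of radius r_virtual
    at a given bending gain."""
    return r_virtual / gain

# Illustrative numbers: rendering a 4 m virtual curve at gain 2.0 only needs a
# 2 m physical radius, i.e. it fits in room-scale tracking.
print(required_real_radius(4.0, 2.0))
```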

 

Altering User Movement Behaviour in Virtual Environments

TVCG

Adalberto L. Simeone, Ifigeneia Mavridou, and Wendy Powell

Abstract: In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way to prompt a decision on which path to follow based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performances to determine whether their trajectories were altered significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.

 

Touch and Vibrotactile Feedback

Tuesday, March 21, 1:30pm - 2:40pm

Session Chair: Anatole Lécuyer

 

Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World

VR Proceedings

Vibol Yem and Hiroyuki Kajimoto

Abstract: We developed “Finger Glove for Augmented Reality” (FinGAR), which combines electrical and mechanical stimulation to selectively stimulate skin sensory mechanoreceptors and provide tactile feedback of virtual objects. A DC motor provides high-frequency vibration and shear deformation to the whole finger, and an array of electrodes provides pressure and low-frequency vibration with high spatial resolution. FinGAR devices are attached to the thumb, index finger and middle finger. The device is lightweight, simple in mechanism, easy to wear, and does not disturb the natural movements of the hand. All of these attributes are necessary for a general-purpose virtual reality system. A user study was conducted to evaluate its ability to reproduce sensations of four tactile dimensions: macro roughness, friction, fine roughness and hardness. Results indicated that skin deformation and cathodic stimulation affect macro roughness and hardness, whereas high-frequency vibration and anodic stimulation affect friction and fine roughness.

 

Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment

VR Proceedings

Myungho Lee, Gerd Bruder, and Gregory F. Welch

Abstract: We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions as follows: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; while participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space.

 

Designing a Vibrotactile Head-Mounted Display for Spatial Awareness in 3D Spaces

TVCG

Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel

Abstract: Due to the perceptual characteristics of the head, vibrotactile Head-mounted Displays are built with low actuator density. Therefore, vibrotactile guidance is mostly assessed by pointing towards objects in the azimuthal plane. When it comes to multisensory interaction in 3D environments, it is also important to convey information about objects in the elevation plane. In this paper, we design and assess a haptic guidance technique for 3D environments. First, we explore the modulation of vibration frequency to indicate the position of objects in the elevation plane. Then, we assess a vibrotactile HMD made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. Results have shown that frequencies modulated with a quadratic growth function allowed a more accurate, precise, and faster target localization in an active head pointing task. The technique presented high usability and a strong learning effect for a haptic search across different scenarios in an immersive VR setup.
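
A minimal sketch of one way such a quadratically growing elevation-to-frequency mapping could look (the frequency range is an assumption, not the study's values):

```python
def elevation_to_frequency(elevation_deg, f_min=60.0, f_max=250.0):
    """Map elevation in [-90, 90] degrees to a vibration frequency (Hz) using
    a quadratic growth function (illustrative range only)."""
    t = (elevation_deg + 90.0) / 180.0   # normalise to [0, 1]
    return f_min + (f_max - f_min) * t ** 2
```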

 

Acoustics and Auditory Displays

Tuesday, March 21, 4:00pm - 5:15pm

Session Chair: Stefania Serafin

 

Sounding Solid Combustibles: Non-premixed Flame Sound Synthesis for Different Solid Combustibles

TVCG Invited

Qiang Yin and Shiguang Liu

Abstract: With the rapidly growing VR industry, in recent years more and more attention has been paid to fire sound synthesis. However, previous methods usually ignore the influences of the different solid combustibles, leading to unrealistic sounding results. This paper proposes SSC (sounding solid combustibles), which is a new recording-driven non-premixed flame sound synthesis framework accounting for different solid combustibles. SSC consists of three components: combustion noise, vortex noise and popping sounds. The popping sounds are the key to distinguishing between different solid combustibles. To improve the quality of fire sound, we extract the features of popping sounds from real fire sound examples based on a modified Empirical Mode Decomposition (EMD) method. Unlike previous methods, we take both direct combustion noise and vortex noise into account because the fire model is a non-premixed flame. In our method, we also greatly resolve the synchronization problem when blending the three components of SSC. Due to the introduction of the popping sounds, it is easy to distinguish the fire sounds of different solid combustibles by our method, with great potential in practical applications such as games and VR systems. Various experiments and comparisons are presented to validate our method.

 

Acoustic VR of Human Tongue: A Real-Time Speech-Driven Visual Tongue System

VR Proceedings

Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang

Abstract: We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during the language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike the existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation could be highly localized which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes with real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal.

 

Efficient Construction of the Spatial Room Impulse Response

VR Proceedings

Carl Schissler, Peter Stirling, and Ravish Mehra

Abstract: An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observe a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications.
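
A minimal sketch of the adaptive-order idea, using plain energy truncation as a stand-in for the paper's perceptually-driven metric; the tolerance and maximum order are assumptions.

```python
import numpy as np

def lowest_sufficient_order(sh_coeffs, tol=0.01, max_order=4):
    """sh_coeffs: SH coefficients for one RIR partition, ordered so that order l
    occupies indices l*l .. (l+1)*(l+1)-1. Returns the smallest order whose
    discarded energy is below `tol` of the total."""
    total = np.sum(sh_coeffs ** 2)
    if total == 0:
        return 0
    for order in range(max_order + 1):
        kept = np.sum(sh_coeffs[: (order + 1) ** 2] ** 2)
        if (total - kept) / total <= tol:
            return order
    return max_order
```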

 

Avatars and Virtual Humans

Wednesday, March 22, 8:30am - 10:00am

Session Chair: Eric Ragan

 

Rapid One-Shot Acquisition of Dynamic VR Avatars

VR Proceedings

Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell

Abstract: We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars.

 

Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment

VR Proceedings

Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William Thompson

Abstract: The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no-avatar, a first-person avatar arm and hand, or a first-person full body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism-exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration.

 

Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices

VR Proceedings

Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok

Abstract: In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communication behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting.

 

Paint with Me: Stimulating Creativity and Empathy While Painting with a Painter in Virtual Reality

TVCG

Lynda Gerry

Abstract: While nothing can be more vivid, immediate and real than our own sensorial experiences, emerging virtual reality technologies are playing with the possibility of being able to share someone else’s sense reality. The Painter Project is a virtual environment where users see a video from a painter’s point of view in tandem with a tracked rendering of their own hand while they paint on a physical canvas. The end result is an experiment in superimposition of one experiential reality on top of another, hopefully opening a new window into an artist’s creative process. Virtual environments have been shown to significantly alter sense of self in terms of transformed self-representations (Yee & Bailenson, 2007), spatial localization of the body (Leggenhager, Tadi, Metzinger, & Blanke, 2007), body ownership (Petkova & Ehrsson, 2008; Maister, Slater, Sanchez-Vives, & Tsarkiris, 2015), and the form, shape, and morphology of the body, termed “homuncular flexibility” (Won, Bailenson, Lee, & Lanier, 2015). Further, virtual environments impact social interactions and social awareness, specifically implicit racial bias (Groom, Bailenson, & Clifford, 2009), social perspective taking (Gehlbach et al., 2015), and helping behavior towards persons with disabilities (Ahn, Le, & Bailenson, 2015). This study hypothesized that moving along with a painter could induce motor resonance whilst still allowing users to cognitively empathize with the painter while listening to her guide them into her creative imagination and process. The mirror neuron hypothesis proposes that mirror neurons unify action perception and action production, allowing the “understanding of the actions of others from the inside” (Rizzolatti, Sinigaglia, & Gallese, 2006). Bernard Berenson writes about the tactile values of painting, and argues that the essential qualities of artistic vision are a matter of touch and movement (Berenson, 1962). Maxine Sheets-Johnstone (2012) writes, “The making of all art is quintessentially movement-dependent.” (p. 397) This explorative study demonstrates that the experience of artistic creativity can be communicated and simulated through kinesthetic empathy and the mediation of embodied experiences in virtual reality.

 

Motion Tracking and Capturing

Wednesday, March 22, 10:30am - 12:00pm

Session Chair: Regis Kopper

 

The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

TVCG Invited

Yuwang Wang, Yebin Liu, Wolfgang Heidrich, and Qionghai Dai

Abstract: We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

 

Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture

VR Proceedings

Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs

Abstract: Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement.
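For readers unfamiliar with this kind of placement optimization, the Python sketch below illustrates the general greedy-then-simulated-annealing scheme the abstract describes. The fitness form, the visible, interference, and perturb callables, and all parameters are hypothetical stand-ins for illustration, not the authors' implementation.

import math
import random

def fitness(placements, scene_points, visible, interference):
    # Reward coverage of the dynamic scene, penalize pairwise sensor interference.
    if not placements:
        return 0.0
    covered = sum(1 for p in scene_points if any(visible(s, p) for s in placements))
    penalty = sum(interference(a, b)
                  for i, a in enumerate(placements) for b in placements[i + 1:])
    return covered / len(scene_points) - penalty

def greedy_sensor_count(candidates, scene_points, visible, interference):
    # Add sensors one at a time while fitness keeps improving; the final length
    # of `chosen` is the number of sensors needed to cover the scene.
    chosen, score = [], 0.0
    while True:
        best = max(candidates,
                   key=lambda c: fitness(chosen + [c], scene_points, visible, interference))
        new_score = fitness(chosen + [best], scene_points, visible, interference)
        if new_score <= score:
            return chosen
        chosen, score = chosen + [best], new_score

def anneal_placements(initial, perturb, scene_points, visible, interference,
                      steps=5000, t0=1.0):
    # Refine the greedy result with simulated annealing over small placement moves.
    current = best = initial
    for k in range(steps):
        t = max(t0 * (1 - k / steps), 1e-6)
        candidate = perturb(current)
        delta = (fitness(candidate, scene_points, visible, interference)
                 - fitness(current, scene_points, visible, interference))
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if (fitness(current, scene_points, visible, interference)
                    > fitness(best, scene_points, visible, interference)):
                best = current
    return best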

 

Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems

VR Proceedings

Stephan Beck and Bernd Froehlich

Abstract: The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth image, the color image, and the 3D world positions can be established automatically. In order to obtain temporally synchronized correspondences between an RGBD-sensor’s data streams and the tracked target’s positions, we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table, which is used during runtime to transform depth and color information into the application’s world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy, with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel.
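The runtime 3D look-up table mentioned above can be pictured as a regular grid of precomputed correspondences that is interpolated per sample. The minimal Python sketch below shows one conventional way such a table could map sensor-space points to world coordinates; the grid layout, class, and names are assumptions for illustration, not the authors' implementation.

import numpy as np

class LookupTable3D:
    # grid: (X, Y, Z, 3) array holding a precomputed world position per grid node;
    # origin and spacing define how sensor-space coordinates map onto the grid.
    def __init__(self, grid, origin, spacing):
        self.grid = np.asarray(grid, dtype=float)
        self.origin = np.asarray(origin, dtype=float)
        self.spacing = float(spacing)

    def to_world(self, point):
        # Trilinearly interpolate the world position for a sensor-space point.
        u = (np.asarray(point, dtype=float) - self.origin) / self.spacing
        i = np.clip(np.floor(u).astype(int), 0, np.array(self.grid.shape[:3]) - 2)
        f = u - i
        cell = self.grid[i[0]:i[0] + 2, i[1]:i[1] + 2, i[2]:i[2] + 2]  # 2x2x2 corners
        for axis in range(3):
            cell = cell[0] * (1 - f[axis]) + cell[1] * f[axis]
        return cell  # interpolated (x, y, z) world position

# Example: a 2x2x2 table spanning a unit cube that happens to be the identity map.
nodes = np.stack(np.meshgrid(*[np.array([0.0, 1.0])] * 3, indexing="ij"), axis=-1)
table = LookupTable3D(nodes, origin=(0, 0, 0), spacing=1.0)
print(table.to_world((0.5, 0.25, 0.75)))  # -> [0.5, 0.25, 0.75]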

 

Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display

VR Proceedings

Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto

Abstract: We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience. A virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes.
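As an illustration of the learning step described here (per-sensor distance readings in, one of five expressions out), the hypothetical Python sketch below trains a small scikit-learn classifier on placeholder data. The sensor count, network size, and data are assumptions; this is not the authors' pipeline.

import numpy as np
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]
N_SENSORS = 16  # assumed number of reflective sensors inside the HMD

# Placeholder training data: one row of per-sensor distance values per frame,
# with an expression label per frame (real data would come from the HMD sensors).
X_train = np.random.rand(500, N_SENSORS)
y_train = np.random.randint(len(EXPRESSIONS), size=500)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

new_frame = np.random.rand(1, N_SENSORS)  # one new sensor reading
print("Estimated expression:", EXPRESSIONS[int(clf.predict(new_frame)[0])])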

 

Systems and Applications

Wednesday, March 22, 1:30pm - 3:00pm

Session Chair: Pablo Figueroa

 

Semantic Entity-Component State Management Techniques to Enhance Software Quality for Multimodal VR-Systems

TVCG

Martin Fischbach, Dennis Wiebusch, and Marc Erich Latoschik

Abstract: Modularity, modifiability, reusability, and API usability are important software qualities that determine the maintainability of software architectures. Virtual, Augmented, and Mixed Reality (VR, AR, MR) systems, modern computer games, as well as interactive human-robot systems often include various dedicated input, output, and processing subsystems. These subsystems collectively maintain a real-time simulation of a coherent application state. The resulting interdependencies between individual state representations, mutual state access, overall synchronization, and flow of control imply a conceptually close coupling, whereas software quality calls for decoupling to develop maintainable solutions. This article presents five semantics-based software techniques that address this contradiction: semantic grounding, code from semantics, grounded actions, semantic queries, and decoupling by semantics. These techniques are applied to extend the well-established entity-component-system (ECS) pattern to overcome some of this pattern's deficits with respect to the implied state access. A walk-through of central implementation aspects of a multimodal (speech and gesture) VR interface is used to highlight the techniques' benefits. This use case is chosen as a prototypical example of complex architectures with multiple interacting subsystems found in many VR, AR, and MR architectures. Finally, implementation hints are given, lessons learned regarding maintainability are pointed out, and performance implications are discussed.
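Since the article builds on the entity-component-system pattern, the minimal Python sketch below recalls the plain ECS baseline that the semantics-based techniques extend: entities are plain ids, components live in per-type stores, and systems iterate over entities that carry matching components. It is a generic illustration of the pattern, not the article's implementation.

from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float
    z: float

@dataclass
class Velocity:
    dx: float
    dy: float
    dz: float

class World:
    # Entities are plain ids; components live in per-type stores.
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component type -> {entity_id: component}

    def create_entity(self):
        self.next_id += 1
        return self.next_id

    def add(self, entity, component):
        self.components.setdefault(type(component), {})[entity] = component

    def query(self, *types):
        # Yield (entity, components...) for entities that have all requested types.
        stores = [self.components.get(t, {}) for t in types]
        for entity in set(stores[0]).intersection(*stores[1:]):
            yield (entity, *(store[entity] for store in stores))

def movement_system(world, dt):
    # A system touches only the component data it declares an interest in.
    for _, pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt
        pos.z += vel.dz * dt

world = World()
e = world.create_entity()
world.add(e, Position(0, 0, 0))
world.add(e, Velocity(1, 0, 0))
movement_system(world, dt=0.016)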

 

Enhancements to VTK Enabling Scientific Visualization in Immersive Environments

VR Proceedings

Patrick O’Leary, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, James Money, Sankhesh Jhaveri, Eric Whiting, and Sandy McKenzie

Abstract: Modern scientific, engineering, and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design, and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications.
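For orientation, the snippet below shows a conventional desktop VTK pipeline in Python (source, mapper, actor, renderer, render window). It is included only to illustrate the kind of existing visualization code that the enhancements described above carry into Vrui and OpenVR environments; it does not use any of the paper's VR-specific interfaces.

import vtk

source = vtk.vtkConeSource()          # any VTK data source or reader
source.SetResolution(32)

mapper = vtk.vtkPolyDataMapper()      # maps geometry to renderable primitives
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()                # places the mapped geometry in the scene
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                    # standard desktop interaction loop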

 

MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR

VR Proceedings

Lele Feng, Xubo Yang, and Shuangjiu Xiao

Abstract: We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor for constructing more complicated AR scenes. The model creator automatically generates textured 3D cartoon models from 2D drawings and overlays them on the real world, bringing flat cartoon drawings to life. With our interactive model editor, the user can perform several optional operations on the 3D models, such as copying and animating, in an AR context through the touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study show that our system is easier to use than traditional sketch-based modeling systems and gives children more room for creative play than AR coloring books.

 

Emulation of Physician Tasks in Eye-Tracked Virtual Reality for Remote Diagnosis of Neurodegenerative Disease

TVCG

Jason Orlosky, Yuta Itoh, Maud Ranchet, Kiyoshi Kiyokawa, John Morgan, and Hannes Devos

Abstract: For neurodegenerative conditions like Parkinson’s disease, early and accurate diagnosis is still a difficult task. Evaluations can be time-consuming, patients must often travel to metropolitan areas or different cities to see experts, and misdiagnosis can result in improper treatment. To date, only a handful of assistive or remote methods exist to help physicians evaluate patients with suspected neurological disease in a convenient and consistent way. In this paper, we present a low-cost VR interface designed to support evaluation and diagnosis of neurodegenerative disease and test its use in a clinical setting. Using a commercially available VR display with an infrared camera integrated into the lens, we have constructed a 3D virtual environment designed to emulate common tasks used to evaluate patients, such as fixating on a point, conducting smooth pursuit of an object, or executing saccades. These virtual tasks are designed to elicit eye movements commonly associated with neurodegenerative disease, such as abnormal saccades, square wave jerks, and ocular tremor. Next, we conducted experiments with 9 patients with a diagnosis of Parkinson’s disease and 7 healthy controls to test the system’s potential to emulate tasks for clinical diagnosis. We then applied eye tracking algorithms and image enhancement to the eye recordings taken during the experiment and conducted a short follow-up study with two physicians for evaluation. Results showed that our VR interface was able to elicit five common types of movements usable for evaluation, physicians were able to confirm three out of four abnormalities, and visualizations were rated as potentially useful for diagnosis.
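As one example of how eye recordings like these are commonly segmented, the sketch below applies a generic velocity-threshold rule to gaze samples. This is a standard, widely used technique shown purely for illustration, not the authors' eye tracking algorithm, and the 30 deg/s threshold is an assumption.

import numpy as np

def detect_saccades(timestamps, gaze_deg, velocity_threshold=30.0):
    # Return (start_index, end_index) pairs of samples where angular gaze velocity
    # exceeds the threshold (deg/s); gaze_deg is an (N, 2) array of gaze angles.
    dt = np.diff(timestamps)
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt
    fast = velocity > velocity_threshold
    saccades, start = [], None
    for i, is_fast in enumerate(fast):
        if is_fast and start is None:
            start = i
        elif not is_fast and start is not None:
            saccades.append((start, i))
            start = None
    if start is not None:
        saccades.append((start, len(fast)))
    return saccades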