IEEE VR

March 22nd - 26th

Posters


Doctoral Consortium - Posters / Videos 1 & 2

Spatial Referencing for Anywhere, Anytime Augmented Reality (dc-1)

Yuan Li

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Utilizing AR Glasses as Mobility Aid for People with Low Vision (dc-10)

Hein Min Htike

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Resolving Cue Conflicts in Augmented Reality (dc-16)

Haley Adams

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Gaze Analysis and Prediction in Virtual Reality (dc-11)

Zhiming Hu

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


A Neuro-VR toolbox for assessment and intervention in Autism: Brain responses to non-verbal, gaze and proxemics behaviour in Virtual Humans (dc-12)

Cliona Kelly

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Privacy-Preserving Relived Experiences in Virtual Reality (dc-13)

Cheng Yao Wang

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


The Immersive Space to Think: Immersive Analytics for Multimedia Data (dc-14)

Lee Lisle

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Quality, Presence, and Emotions in Virtual Reality Communications (dc-15)

Marta Orduna

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


The Impact of Social Interactions on an Embodied Individual’s Self-perception in Virtual Environments (dc-17)

David Mal

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Enhancing Proxy-Based Haptics in Virtual Reality (dc-2)

André Zenner

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Augmented Reality Animals: Are They Our Future Companions? (dc-3)

Nahal Norouzi

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Affective Embodiment: The effect of avatar appearance and gesture representation on emotions in VR (dc-4)

Swati Pandita

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Multimodal User-Defined inputs for Optical See Through Augmented Reality Environments (dc-5)

Adam Sinclair Williams, Francisco Raul Ortega

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Scene-aware Sound Rendering in Virtual and Real Worlds (dc-6)

Zhenyu Tang, Dinesh Manocha

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


The Modulation of Peripersonal Space Boundaries in Immersive Virtual Environments (dc-7)

Lauren Buck

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Virtual Reality for Safety and Independence in Everyday Activities (dc-8)

Sarah E. Anderson

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Immersive VR and embodied learning: the role of embodied affordances in the long-term retention of semantic knowledge (dc-9)

Mahda M. Bagher

Session: Doctoral Consortium Posters / Videos 1 & 2

Hubs Link: See in Hubs


Posters / Videos 1 & 3

Windtherm Fire: An MR System for Experiencing Breathing Fire of a Dragon (1)

Yuta Ogiwara, Masatoshi Suzuki, Akihiro Matsuura

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present an interactive MR system in which hot wind is blown at a player's face in synchrony with the virtual environment. The system uses a VR accessory called Windtherm, which attaches to an HMD, heats the air inside, and blows wind with variable force and direction. We improved the device so that it reacts interactively to virtual events. We also developed MR content in which a player interacts with a small dragon and occasionally receives its fire breath, with the simultaneous tactile stimulus of actual wind enhancing the sense of presence in the fictitious world.


Subtle Gaze Direction with Asymmetric Field-of-View Modulation in Headworn Virtual Reality (2)

Dhrubo Jyoti Paul, Eric Ragan

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Virtual reality offers freedom to look around, but designers at times want to encourage attention to certain areas of interest without creating interruptions. Our research presents asymmetric changes to field-of-view in head-worn virtual reality to subtly encourage gaze direction. Preliminary results indicate this method induces turning behavior, but reactions may not be consistent. Asymmetric reduction in the visual field creates curiosity and induces movement towards or away from the reduced direction. Results indicate users more often looked towards the reduced direction to compensate for what they were missing. Further, some users were not able to detect field-of-view modulations.


Impact of AR Display Context Switching and Focal Distance Switching on Human Performance: Replication on an AR Haploscope (3)

Mohammed Safayet Arefin, Nate Phillips, Alexander Plopski, Joseph L Gabbard, J. Edward Swan II

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In augmented reality (AR) environments, information is often distributed between real and virtual contexts, and often appears at different distances from the user. Therefore, to integrate the information, users must repeatedly switch context and refocus their eyes. Previously, Gabbard, Mehra, and Swan (2018) examined these issues using a text-based visual search task and a monocular optical see-through AR display. In this work, the authors report a replication of this earlier experiment using a custom-built AR haploscope. The successful replication, on a very different display, is consistent with the hypothesis that the findings are a general property of AR.


Rhythmic proprioceptive stimulation improves embodiment in a walking avatar when added to visual stimulation (4)

Kean Kouakoua, Cyril Duclos, David Labbe

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Synchronicity between users' movements and those of their self-avatar leads to a subjective illusion of embodiment over the virtual avatar. Tendon vibrations can be used to produce a perception of movement, and such stimulation has been combined with virtual reality to induce the ownership illusion. We investigated the ability of a virtual self-avatar combined with tendon vibrations to give standing users the impression of physically walking. The vibrations, applied to 24 participants, had different levels of complexity and congruency with the avatar's movements. Results suggest that the pattern of vibrations is not crucial to producing the ownership illusion.


Impact of Fake News in VR compared to Fake News on Social Media, a pilot study (6)

Adrien Alexandre Verhulst, Wanqi Zhao, Fumihiko Nakamura, Masaaki Fukuoka, Maki Sugimoto, Masahiko Inami

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: With the rise of easily shareable online information, fake news has become a pressing issue in our societies. News credibility depends on several factors (reliance on the information source, media outlet, etc.). Here, we investigate the immersion factor in a pilot study (N=6) by comparing the impact of fake news in VR (a 360-degree video) with fake news on social media (2D videos).


Pain Experience in Social VR: Comparing Companionship from Close Others and Stranger (7)

Angel Hwang, Yilu Sun, Neta Tamir, Andrea Stevenson Won

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The distractive qualities of VR are known to decrease pain, while social interaction in VR also increases distraction and provides social support. In a pilot study, we investigated whether interacting with an attachment figure vs. a stranger in VR would yield any difference in participants' pain threshold and ratings. We designed a within-group, single-factor experiment with three conditions in VR: being alone, interacting with an attachment figure, and interacting with a stranger. In preliminary findings, participants reported the highest pain tolerance when interacting with loved ones and the most severe perceived pain when interacting with strangers in VR.


Embodied Realistic Avatar System with Body Motions and Facial Expressions for Communication in Virtual Reality Applications (8)

Sahar Aseeri, Sebastian Marin, Richard Landers, Victoria Interrante, Evan Suma Rosenberg

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Effective communication in VR applications requires body language and facial expressions to transmit users' thoughts and behavior. Embodied VR is an effective way to realistically render users' movements onto a virtual avatar in real time. Many VR applications use this technique; however, most have limitations in photorealism, body motion, and facial expressions. In this work, we introduce a novel VR communication system that mimics users' movements, facial expressions, and speech in order to render these captured data into different types of avatar representations in real time.


SiSiMo: Towards Simulator Sickness Modeling for 360° Videos Viewed with an HMD (9)

Alexander Raake, Ashutosh Singla, Rakesh Rao Ramachandra Rao, Werner Robitza, Frank Hofmeyer

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Users may experience symptoms of simulator sickness while watching 360°/VR videos with Head-Mounted Displays (HMDs). At present, practically no solution exists that can efficiently eradicate the symptoms of simulator sickness in virtual environments. In the absence of a solution, it is necessary to at least quantify the amount of sickness. In this paper, we present initial work on our Simulator Sickness Model (SiSiMo), including a first component to predict simulator sickness scores over time. Linear regression over short-term scores already shows promising performance for predicting the scores collected from a number of user tests.
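
EXAMPLE: As a rough illustration of the score-prediction component (not the authors' implementation; the per-minute scale and values below are hypothetical), a linear regression over short-term scores can be fit and extrapolated like this:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical per-minute short-term sickness scores from one session.
    t = np.arange(1, 9).reshape(-1, 1)            # elapsed minutes
    scores = np.array([0, 1, 1, 2, 3, 3, 5, 6])   # reported scores

    model = LinearRegression().fit(t, scores)     # fit the temporal trend
    print(model.predict(np.array([[10], [15]])))  # extrapolate to later minutes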


VR Piano Learning Platform with Leap Motion and Pressure Sensors (10)

Febrina Wijaya, Ying-Chun Tseng, Wan-Lun Tsai, Tse-Yu Pan, Min-Chun Hu

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Aiming to help novices learn and practice playing the piano, we propose a virtual reality (VR) piano system that records the user's piano fingering data. Our system uses two Leap Motion sensors for wide-range hand tracking, and we developed a method to calibrate and compose the tracking results of the two sensors. We also employ multiple pressure sensors attached to the user's fingertips to more accurately check whether the user is pressing a piano key. The experimental results show that our VR piano system provides users with a great piano-playing experience and is helpful for piano learning.


XR Framework for Collaborating Remote Heterogeneous Devices (11)

Jongyong Kim, Jonghoon Song, Woong Seo, Insung Ihm, Seung-Hyun Yoon, Sanghun Park

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We have developed a framework that allows users with different types of devices, in different physical spaces, to effectively build a virtual world where collaborative work can be performed.


Auditory Stimulation on Touching a Virtual Object Outside a User’s Field of View (12)

Zentaro Kimura, Mie Sato

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We investigate the effects of auditory stimuli that improve the ease of touching a virtual object outside a user’s field of view. Our impression evaluation experiments show that a 45-degree expanded stereophonic sound contributes to the natural interaction between a user and a virtual object without using visual information.


Removal of the Infrared Light Reflection of Eyeglass Using Multi-Channel CycleGAN Applied for the Gaze Estimation Images (13)

Yoshikazu Onuki, Kosei Kudo, Itsuo Kumazawa

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In VR environments, gaze estimation is becoming increasingly important. Some HMDs allow users to wear spectacles, but eyeglass reflections often seriously obstruct detection of the corneal reflection. We propose a multi-channel CycleGAN that generates eyeglass-free eye images from images with eyeglasses. The proposed method takes a 4-channel input consisting of three normal eye images at different time points and one image captured during a blink, in order to distinguish stationary from moving reflections. The proposed method selectively removes eyeglass reflections while keeping the corneal reflections intact in gaze estimation images.


Tracking Multiple Collocated HTC Vive Setups in a Common Coordinate System (14)

Tim Weissker, Philipp Tornow, Bernd Froehlich

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Multiple collocated HTC Vive setups sharing the same base stations and room calibration files still track their devices in different coordinate systems. We present a procedure for mapping the tracking data of multiple users to a common coordinate system and show that it enables spatially consistent interactions of collocated collaborators.
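
EXAMPLE: The poster does not detail the mapping procedure, but a standard way to register two tracking coordinate systems from corresponding point measurements (e.g., one shared tracker recorded by both setups at several poses) is a least-squares rigid fit; a minimal sketch under that assumption:

    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares R, t such that Q is approximately R @ P + t.

        P, Q: (N, 3) arrays of the same physical points measured in
        the two tracking coordinate systems.
        """
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t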


Addressing Deaf or Hard-of-Hearing People in Avatar-Based Mixed Reality Collaboration Systems (15)

Kristoffer Waldow, Arnulph Fuhrmann

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Automatic Speech Recognition (ASR) technologies can be used to include people with auditory disabilities in interpersonal communication via textual visualization of speech. Especially in avatar-based Mixed Reality (MR) remote collaboration systems, speech is an important additional modality and allows natural human interaction. We therefore propose an easy-to-integrate ASR and textual visualization extension for an avatar-based MR remote collaboration system that visualizes speech via spatial floating text bubbles. In a small pilot study, our extension achieved a word accuracy of 97%, measured using the widely used word error rate.
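
EXAMPLE: The word error rate mentioned above is the word-level Levenshtein distance divided by the reference length (word accuracy is 1 - WER); a minimal sketch with hypothetical sentences:

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + insertions + deletions) / reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # Levenshtein distance via dynamic programming over word tokens.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("clean the spill now", "clean a spill now"))  # 0.25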


An Immersive and Interactive Visualization of Gravitational Waves (16)

Stefan Franz Lontschar, Krzysztof Pietroszek, Christian Gütl, Irene Humer, Christian Eckhardt

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this work we present a novel virtual learning environment (VLE) for gravitational waves using a field density representation. We identified three main areas of understanding gravitational waves (wave source, spatial irradiation distribution, and wave type) that the VLE was designed to support. Participants were tested on their knowledge prior to and after the VLE experience, and we found promising results in improving the understanding of gravitational waves.


A Low-Cost Approach to Fish Tank Virtual Reality with Semi-Automatic Calibration Support (17)

Niko Wißmann, Martin Misiak, Arnulph Fuhrmann, Marc Erich Latoschik

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We describe the components and implementation of a cost-effective fish tank virtual reality system. It is based on commodity hardware and provides accurate view tracking combined with high resolution stereoscopic rendering. The system is calibrated very quickly in a semi-automatic step using computer vision. By avoiding the resolution disadvantages of current VR headsets, our prototype is suitable for a wide range of perceptual VR studies.


Extended Realities – How Changing Scale Affects Spatial Learning (18)

Jiayan Zhao, Mark Simpson, Jan Oliver Wallgrün, Pejman Sajjadi, Alexander Klippel

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Investigating the relationship between the human body and its environment is essential to understand the process of acquiring spatial knowledge. However, few empirical evaluations have looked at how the visual accessibility of an environment affects spatial learning through direct experiences. To address this gap, this paper extends research on geographic scale (ground vs. pseudo-aerial perspectives) by incorporating active exploration in a human study. Results indicate that only low spatial ability participants benefit from the pseudo-aerial perspective in terms of spatial learning. In contrast, high spatial ability participants make more efficient use of the normal ground perspective.


CZ Investigator: Learning About Critical Zones Through a VR Serious Game (19)

Pejman Sajjadi, Mahda M. Bagher, Zheng Cui, Jessica Gall Myrick, Janet Swim, Timothy White, Alexander Klippel

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The critical zone (CZ) plays a pivotal role in the food-energy-water nexus, yet in its entirety, it is not well understood by society. Challenges range from imagining the invisible (from bedrock to soil) to understanding complex relations between the involved components. We have launched a transdisciplinary project driven by immersive technologies that allow for an extension of what physical reality offers society about the concept of CZ. We have developed a VR serious game that enables learners to have a concrete experience about what a CZ is, and how natural and human processes affect it.


Light Field Editing Propagation using 4D Convolutional Neural Networks (20)

Zhicheng Lu, Xiaoming Chen, Yuk Ying Chung, Zhibo Chen

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Light field image (LFI) editing is an under-researched problem. This poster proposes two different LFI editing schemes, a direct editing scheme and a deep-learning-based scheme using interleaved spatial and angular filters, which enable automatic propagation of the user's augmentation edits from the central view to all other sub-views of the LFI. We constructed a preliminary LFI dataset and compared the two schemes; the experimental results show that the learning-based scheme produces higher PSNR (by 0.51 dB) and more pleasing subjective editing results than direct editing.
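
EXAMPLE: The PSNR figure quoted above follows the standard definition, 10 * log10(peak^2 / MSE); a minimal sketch for 8-bit images:

    import numpy as np

    def psnr(img_a, img_b, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-shaped images."""
        mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)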


Shooter Bias in Virtual Reality: The Effect of Avatar Race and Socioeconomic Status on Shooting Decisions (21)

Katharina Seitz, Jessica J Good, Tabitha C. Peck

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Shooter bias has been extensively studied in desktop environments; however, research in more immersive environments is lacking. We evaluated the effects of target race and perceived socioeconomic status using a 2 (target race: Black or White) x 2 (target SES: low or high) x 2 (target object: gun or cellphone) within-subjects design. Participants (N = 50) completed 160 trials. We found evidence that shooter bias exists in virtual reality; more data is needed to strengthen conclusions. Virtual reality fills an important gap in shooter bias research, as it increases the realism of the task and provides a potential avenue for police training.


Do You Speak Holo? A Mixed Reality Application for Foreign Language Learning in Children with Language Disorders (22)

Emanuele Torelli, Ibrahim El Shemy, Silvia Silleresi, Lukasz Moskwa, Giulia Cosentino, Franca Garzotto

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper we present Do You Speak Holo?, an application for Microsoft HoloLens co-designed with linguistics experts that aims to facilitate the learning process of a foreign language for children and adolescents with language disorders (LD). The application includes several educational activities, aimed at improving both vocabulary and morpho-syntactic aspects of the language through the interaction with virtual content immersed in the real world. The application includes a virtual assistant, which acts as a virtual teacher and guides the user step by step in the comprehension and execution of the activities.


Detection thresholds of Tactile Perception in Virtual Environments (23)

Lu Zhao, Yue Liu, Dejiang Ye, Zhuoluo Ma

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We investigate the detection thresholds of tactile perception in virtual environments. We estimated the absolute detection thresholds for electrovibration stimuli under different excitation signal parameters (waveform and frequency). Experimental results and subjective comments suggest that participants' tactile sensitivity to electrovibration stimuli decreases in virtual reality. Frequency significantly affects the absolute detection thresholds, and under 60 Hz, lower voltages can be perceived for square waves than for sinusoidal waves. Our findings provide a foundation for future studies aimed at modelling and simulating desired tactile perception of objects in virtual reality.


The Impact of Haptic and Visual Feedback on Teaching (24)

Kern Qi, David Borland, Niall L. Williams, Emily Jackson, James Minogue, Tabitha C. Peck

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Haptic feedback, an important aspect of learning in virtual reality, has been demonstrated in contexts such as surgical training. However, deploying haptic feedback in other educational practices remains understudied. Haptically-enabled science simulations enable students to experience abstract scientific concepts through concrete and observable lessons in which students can physically experience the concepts being taught through haptic feedback. The present study aims to investigate the effect of an educational simulation on the understanding of basic physics concepts related to buoyancy.


Augmented Reality Image Generation with Optical Consistency using Generative Adversarial Networks (25)

Shunya Iketani, Masaaki Sato, Masataka Imura

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Various methods are used to estimate light source information from real objects to achieve optical consistency in augmented reality (AR), but in practice there are difficulties in using real objects. We propose a method for achieving optical consistency without estimating light source information, using generative adversarial networks (GANs) that take as input AR images without optical consistency and a mask image. The AR images generated by our proposed method show appropriate drop shadows and reflections of surrounding objects.


Affective Embodiment: Embodying emotions through postural representation in VR (26)

Swati Pandita, Jessica Yee, Andrea Stevenson Won

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The Proteus effect suggests that users try to behave according to the perceived identity of their avatar. However, what happens when the way an avatar moves communicates an emotional state? We conducted a within-subjects study to investigate the potential of transformed embodied experiences on users' emotional states. We assigned participants to embody three generic avatars that moved in ways that evoked positive and negative emotional states. We discuss the relationship between the avatar postures and participants' self-reports, along with the clinical implications of experiencing a transformed virtual body.


Investigating the Necessity of Meaningful Context Anchoring in Augmented Reality Smart Glasses Interaction for Everyday Learning (27)

Nanjie ‘Jimmy’ Rao, Lina Zhang, Sharon Lynn Chu, Katarina Jurczyk, Chelsea Candelora, Samantha Su, Cameron J Kozlin

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The wearable nature of smart glasses opens up many possibilities for these devices to support informal and everyday learning. A key question arises: is physical contextualization necessary for smart glasses to support learning? We conducted a within-subjects study comparing meaningful anchoring (MA) and arbitrary anchoring (AA) on smart glasses in a simulated learning scenario. Results showed that AA performed better on learning scores and task performance, but that AR immersion and system usability were better for MA. The necessity thus varies with the usage scenario: MA is more suitable for UX-oriented scenarios, while AA is preferred for efficiency-driven scenarios.


CARAI: A Formative Evaluation Methodology for VR Simulations (28)

Alec G Moore, Xinyu Hu, James Coleman Eubanks, Afham Aiyaz, Ryan P. McMahan

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Virtual reality (VR) has a long history of being used to simulate real-world tasks, often for training and educational purposes. Yet, developing a VR system that is usable and effective is not trivial and is still a challenging task. We present a methodology for formatively evaluating the usability of VR simulations by capturing a real-world task, automating data logging, running a rigorous user study, analyzing subtask user performances, and inspecting subpar subtasks for usability issues.


Transporting Real World Rigid and Articulated Objects into Egocentric VR Experiences (29)

Catherine Taylor, Robin McNicholas, Darren Cosker

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Traditional consumer virtual reality (VR) methods for facilitating interaction – e.g., controllers – limit immersion as they provide minimal tactile feedback and do not accurately represent real-world interactions with physical objects. To address these limitations, we propose an egocentric tracking pipeline in which physical rigid and articulated objects are tracked by a moving first-person camera attached to a VR Head-Mounted Display (HMD), and their behaviour is used to interact with their virtual counterparts. The tracking pipeline uses a neural network, VRProp-Net+, to predict model parameters from RGB images of unconstrained scenes.


Towards an Immersive Virtual Simulation for Studying Cybersickness during Spatial Knowledge Acquisition (30)

Yun-Xuan Lin, Sabarish V. Babu, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Ying-Chu Wang, Wen-Chieh Lin

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Cybersickness is one of the challenges that has prevented the widespread adoption of Virtual Reality (VR) and its applications. Due to its importance, there have been extensive studies on understanding and reducing cybersickness. Inspired by previous work that has sought to reduce cybersickness by applying the blurring effect and reducing the field of view of VR scenes, we present a simulation with controllable peripheral blur in the VR image. The simulation can be used to study cybersickness and spatial knowledge acquisition in an engaging virtual walking scenario.


Docking Haptics: Dynamic Combinations of Grounded and Worn Devices (31)

Anthony Steed, Sebastian J Friston, Vijay Pawar, David Swapp

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Grounded haptic devices can provide a variety of forces but have limited working volumes. Wearable haptic devices operate over a large volume but are relatively restricted in the types of stimuli they can generate. We propose the concept of docking haptics, in which different types of haptic devices are dynamically docked at run time. We built a proof-of-concept prototype from a force-feedback arm and a hand exoskeleton. This prototype creates the sensation of weight on the hand when it is within reach of the grounded device; away from the grounded device, hand-referenced force feedback is still available.


Map Displays and Landmark Effects on Wayfinding in Unfamiliar Environments (32)

Sabah Boustila, Paul Milgram, Greg A Jamieson

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this work, we investigated the effect of map presentation and landmarks on wayfinding performance. We carried out an experiment in virtual reality in which participants were asked to navigate a 3D environment to find targets shown on maps. We studied two kinds of maps: SkyMap, a world-scale, world-aligned head-up map, and a track-up bird's-eye-view map. Results showed that neither SkyMap nor landmarks improved target-finding performance; in fact, participants performed better with the track-up map.


A study on the effects of head-mounted displays movement and image movement on Virtual Reality sickness (33)

YanXiang Zhang, RuoYi Wang

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Virtual Reality sickness causes user discomfort. We tested 26 volunteers wearing an Oculus Rift in three different experiments: (1, 2) the effect on VR sickness of head-mounted display (HMD) movement that is synchronized or unsynchronized with the inclination of the image, and (3) the effect of image movement direction on VR sickness. Participants filled out the Simulator Sickness Questionnaire. The results showed that VR sickness symptoms are relieved when HMD movement is not synchronized with the inclination of the image. In addition, image movement direction affects the feeling of vertigo. These results will be of help for VR content creation.


Combining Wristband Display and Wearable Haptics for Augmented Reality (34)

Gianluca Paolocci, Tommaso Lisini Baldi, Davide Barcelli, Domenico Prattichizzo

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Taking advantage of widely distributed hardware such as smartphones and tablets, the Mobile Augmented Reality (MAR) market is growing rapidly. Major improvements can be envisioned in increasing the realism of virtual interaction and providing multimodal experiences. We propose a novel system prototype that places the display on the forearm using a rigid support, avoiding the constraints of hand-holding, and is equipped with hand tracking and cutaneous feedback. The hand tracking enables the manipulation of virtual objects, while the haptic rendering enhances the user's perception of the virtual entities. Subjects' personal evaluations suggest that the AR experience provided by the wrist-based approach is more engaging and immersive.


Musical Brush: Exploring Creativity in an AR-based Tool Combining Music and Drawing Generation (35)

Rafael Valer, Rodrigo Schramm, Luciana Nedel

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The 21st century's economic growth and social transformation are largely based on creativity. To help develop this skill, the concept of Creativity Support Tools (CST) was proposed. Here, we introduce Musical Brush, an artistic CST mobile application for music and drawing improvisation. We investigated different types of interaction designs and audiovisual feedback, measuring their impact on creativity support. A user study with 26 subjects was conducted to determine the Creativity Support Index (CSI) score of each design. Results showed the suitability of combining Musical Brush with Augmented Reality (AR) for creating new sounds and drawings.


A Usability Assessment of Augmented Situated Visualization (36)

Renan L Martins Guarese, João Becker, Henrique Fensterseifer, Marcelo Walter, Carla M.D.S. Freitas, Luciana Nedel, Anderson Maciel

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The present work proposes the use of data visualization techniques combined with an AR user interface to provide information that helps users choose the most convenient place to sit at an event. The system accounts for different sets of arbitrary demands by projecting 3D information directly atop the seats. Users can also rearrange the data to narrow down the search and switch the attribute being displayed. The proposed approach was tested against a comparable 2D interactive visualization of the same data; each user performed 12 location-choosing tasks in a classroom. Qualitative and quantitative data indicated that the augmented solution is promising in several respects, suggesting that AR may help users make better decisions.


Exploring the Effects of a Virtual Companion on Solitary Jogging Experience (37)

Takeo Hamada, Ari Hautasaari, Michiteru Kitazaki, Noboru Koshizuka

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Group exercise with superior partners is more effective for gaining motivation and improving athletic performance than exercising alone, but it can be difficult to always find such partners. In this paper, we propose a novel experience of jogging with a virtual companion instead of a human partner. We report qualitative analysis results from a controlled experiment investigating how joggers feel and how their behavior changes when they jog alone, with a human partner, or with a virtual companion. The virtual companion was represented as either a full-body, limb-only, or point-light avatar using smart glasses.


Relative Room Size Judgements in Impossible Spaces (38)

Catherine Barwulor, Andrew Robb

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We investigated how participants' judgments of the size of impossible spaces are influenced by the ratio of the sizes of two overlapping rooms and by the means used to report room size. Participants (n=36) were randomly assigned to one of three room conditions: one baseline condition and two impossible conditions. Results illustrate the importance of the reporting method used when evaluating spatial perception in impossible spaces, and suggest that the important spatial relationships are preserved in impossible spaces: 1) judgments concerning the size of individual rooms, and 2) judgments concerning the relative relationship between different rooms.


Front Camera Eye Tracking for Mobile VR (39)

Panagiotis Drakopoulos, George Alex Koulieris, Katerina Mania

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: User fixation is a fast and natural input method for VR interaction. Previous attempts at mobile eye tracking in VR were limited by low accuracy, long processing times, and the need for hardware add-ons such as anti-reflective lens coatings and IR emitters. We present an innovative mobile VR eye tracking methodology utilizing only the images captured by the front-facing (selfie) camera through the headset's lens, without any modifications. A preliminary study confirms that the presented eye tracking methodology performs comparably to eye trackers in commercial VR headsets when the eyes move in the central part of the headset's field of view.


The Effect of Navigational Aids on Spatial Memory in Virtual Reality (40)

Shachar Maidenbaum, Ansh Patel, Tamara Gedankien, Joshua Jacobs

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Modern navigation aids have revolutionized human navigation, but are also criticized for potential negative effects on human spatial memory. However, previous tests focused on spatial skills such as environment learning and mental rotation rather than on spatial memory directly. Here, we directly test the effect of a simple navigational aid – an augmented arrow pointing towards the next target – on users' spatial memory. We find that performance was significantly more efficient with these guiding arrows, with no significant decrease in spatial memory performance. This suggests that negative effects may stem from other features such as attention, and tempers the media's over-alarmist view.


A Pilot Study Comparing Two Naturalistic Gesture-based Interaction Interfaces to Support VR-based Public Health Laboratory Training (41)

Jessica Voge, Brian Adrian Flowers, Dan Duggan, Nicolas Herrera, Arthur Wollocko

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Training opportunities for public health laboratories (PHLs) are limited, and traditional wet-laboratory training is expensive and potentially hazardous. Virtual reality presents a promising complement to existing training and provides hands-on experience. To develop an interaction library tailored to PHLs, we deconstructed biosafety training procedures into tasks and corresponding physical interactions, leveraging our open-source VIRTUOSO software development kit (VSDK). We mapped the interaction library to interaction interfaces on two VR systems. In a pilot study, we compared the user experience of the two interaction interfaces: users completed a simulated biosafety spill procedure and provided feedback about their experience and preferences.


Automatic Detection of Cybersickness from Physiological Signal in a Virtual Roller Coaster Simulation (42)

Rifatul Islam, Yonggun Lee, Mehrad Jaloli, Imtiaz Muhammad Arafat, Dakai Zhu, John Quarles

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Virtual reality (VR) systems often induce motion-sickness-like discomfort known as cybersickness. The standard approach for detecting cybersickness collects both subjective and objective measurements while participants are exposed to VR. With recent advances in machine learning, we can train deep neural networks to detect cybersickness severity from subjective measurements (e.g., periodically self-reported sickness) and objective measurements. In this study, we collected physiological data from 31 participants while they were immersed in VR. Self-reported verbal sickness was collected at one-minute intervals to label the physiological data. Finally, a simple neural network was proposed to detect cybersickness severity.
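
EXAMPLE: The poster specifies only "a simple neural network"; as a hedged illustration (the feature count, architecture, and number of severity classes below are assumptions, not the authors' design), a small classifier over windowed physiological features could look like:

    import torch
    import torch.nn as nn

    # Assumed: 8 physiological features per one-minute window,
    # 4 severity classes from the per-minute verbal reports.
    model = nn.Sequential(
        nn.Linear(8, 32), nn.ReLU(),
        nn.Linear(32, 16), nn.ReLU(),
        nn.Linear(16, 4),                    # logits over severity classes
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    features = torch.randn(64, 8)            # stand-in batch of feature windows
    labels = torch.randint(0, 4, (64,))      # stand-in severity labels
    opt.zero_grad()
    loss_fn(model(features), labels).backward()
    opt.step()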


Real-Time Lighting Estimation via Differentiable Screen Space Rendering (43)

Celong Liu, Zhong Li, Shuxue Quan, Yi Xu

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper, we present a method that estimates the real-world lighting condition from a single RGB image of an indoor scene, with support plane information provided by commercial Augmented Reality (AR) frameworks (e.g., ARCore, ARKit). First, a Deep Neural Network (DNN) is used to segment the foreground; we focus only on the foreground objects to reduce computational complexity. Then we introduce differentiable screen-space rendering, a novel approach for jointly estimating the normals and the lighting condition. We recover the most plausible lighting condition using spherical harmonics. We show that our approach provides plausible results and considerably enhances visual realism in AR applications.
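
EXAMPLE: For orientation on the spherical harmonics step, shading a surface normal against nine second-order SH lighting coefficients (per color channel) uses the standard real SH basis; a minimal sketch, not the authors' code:

    import numpy as np

    def sh_basis(n):
        """Real spherical harmonics basis up to l=2 for a unit normal n."""
        x, y, z = n
        return np.array([
            0.282095,                                    # l=0
            0.488603 * y, 0.488603 * z, 0.488603 * x,    # l=1
            1.092548 * x * y, 1.092548 * y * z,          # l=2
            0.315392 * (3 * z * z - 1),
            1.092548 * x * z, 0.546274 * (x * x - y * y),
        ])

    def shade(normal, sh_coeffs):
        """Radiance for a normal under 9 estimated SH lighting coefficients."""
        n = normal / np.linalg.norm(normal)
        return sh_basis(n) @ sh_coeffs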


A Constrained Path Redirection for Passive Haptics (44)

Lili Wang, Zixiang Zhao, Xuefeng Yang, Huidong Bai, Amit Barde, Mark Billinghurst

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Navigation with passive haptic feedback can enhance users' immersion in virtual environments. We propose a practical constrained path redirection method to provide users with the corresponding haptic feedback at the right time and place. We quantified the practicality of VR exploration in a study, and the results show advantages over the steer-to-center method in terms of presence, and over Steinicke's method in terms of matching errors and presence.


Visual Guidance Methods in Immersive and Interactive VR Environments with Connected 360° Videos (45)

Samuel Cosgrove, Joseph LaViola

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: There is emerging research on using 360-degree panoramas in virtual reality (VR) for "360 VR" with a choice of navigation and interaction. Unlike standard VR with the freedom of synthetic graphics, there are challenges in designing appropriate user interfaces (UIs) for 360 VR navigation within the limitations of fixed assets. We designed a novel software system called RealNodes that presents an interactive and explorable 360 VR environment, and we developed four visual guidance UIs for 360 VR navigation. A comparative study determined that the choice of UI had a significant effect on task completion times, showing that one of the methods, Arrow, was best.


Learning to Match 2D Images and 3D LiDAR Point Clouds for Outdoor Augmented Reality (46)

Weiquan Liu, Baiqi Lai, Cheng Wang, Xuesheng Bian, WenTao Yang, Yan Xia, Xiuhong Lin, Shang-Hong Lai, Dongdong Weng, Jonathan Li

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Large-scale LiDAR point clouds provide basic 3D information support for outdoor AR. In particular, matching 2D images to 3D LiDAR point clouds can establish the spatial relationship between 2D and 3D space, which is a solution for the virtual-real registration of AR. This paper first provides a precise 2D-3D patch-volume dataset. Second, we propose Siam2D3D-Net to jointly learn feature representations for image patches and LiDAR point cloud volumes. Experimental results indicate that Siam2D3D-Net can match and establish 2D-3D correspondences from the query image to the LiDAR point cloud. Finally, an application demonstrates the feasibility of the proposed outdoor AR virtual-real registration.


Robust turbulence simulation for particle-based fluids using the Rankine vortex model (47)

Xiaokun Wang, Sinuo Liu, Xiaojuan Ban, Yanrui Xu, Jing Zhou, Jiri Kosinka

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We propose a novel turbulence refinement method based on the Rankine vortex model for SPH (smoothed particle hydrodynamics) simulations. Surface details are enhanced by recovering the energy lost in the rotational degrees of freedom of SPH particles. The Rankine vortex model is used to convert the diffused and stretched angular kinetic energy of particles to the linear kinetic energy of their neighbours. Our model naturally prevents the positive feedback effect between the velocity and vorticity fields since the vortex model is designed to alter the velocity without introducing external sources.
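
EXAMPLE: For context, the Rankine vortex underlying this method combines rigid rotation inside a core radius with irrotational decay outside it; its tangential velocity profile is:

    import math

    def rankine_velocity(r, circulation, core_radius):
        """Tangential speed at distance r from the vortex axis."""
        if r <= core_radius:
            # Inside the core: rigid-body rotation, velocity grows linearly.
            return circulation * r / (2.0 * math.pi * core_radius ** 2)
        # Outside the core: free (irrotational) vortex, velocity decays as 1/r.
        return circulation / (2.0 * math.pi * r)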


Egocentric Sonification of Continuous Spatial Data in Situated Analytics (48)

Markus Berger

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The combination of increasingly ubiquitous spatial data and Augmented Reality (AR) facilitates interactive, spatially embedded reasoning for users who are physically present in a data environment. Many techniques from visual and immersive analytics carry over into this “Situated Analytics” context. However, continuous, wide-area datasets like air quality records are difficult to display visually without obscuring a large part of the real environment. Established egocentric techniques usually represent a detailed view of the near field at the expense of the larger data context. We present an approach towards a situated sonification that can restore the surrounding data context.


Depth Augmented Omnidirectional Stereo for 6-DoF VR Photography (49)

Tobias Bertel, Moritz Muehlhausen, Moritz Kappel, Paul Maximilian Bittner, Christian Richardt, Marcus Magnor

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present an end-to-end pipeline that enables head-motion parallax for omnidirectional stereo (ODS) panoramas. Based on an ODS panorama containing a left and right eye view, our method estimates dense horizontal disparity fields between the stereo image pair. From this, we calculate a depth augmented stereo panorama (DASP) by explicitly reconstructing the scene geometry from the viewing circle corresponding to the ODS representation. The generated DASP representation supports motion parallax within the ODS viewing circle. Our approach operates directly on existing ODS panoramas. The experiments indicate the robustness and versatility of our approach on multiple real-world ODS panoramas.


The influence of text rotation, font and distance on legibility in VR (50)

Andre Büttner, Stefan Michael Grünvogel, Arnulph Fuhrmann

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The legibility of text in virtual environments (VEs) is especially important for simulating the user interfaces of real machines and devices. For this purpose, a study was conducted to examine the influence of rotation, distance, and font on the legibility of text in VR. In addition, the minimum readable text size under these conditions was measured using the angular unit dmm. The results of the study show that text rotated 60° or more requires a much larger text size to be legible, regardless of the distance of the text from the viewer and the font.
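
EXAMPLE: The dmm (distance-independent millimeter) unit normalizes physical text height by viewing distance: 1 dmm subtends 1 mm at 1 m. A minimal conversion sketch:

    import math

    def height_to_dmm(height_m, distance_m):
        """Text size in dmm: millimeters of height per meter of viewing distance."""
        return height_m * 1000.0 / distance_m

    def dmm_to_visual_angle_deg(dmm):
        """Visual angle subtended by a text size given in dmm."""
        return math.degrees(2.0 * math.atan(dmm / 2000.0))

    print(height_to_dmm(0.02, 2.0))  # 20 mm high text at 2 m -> 10.0 dmm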


Temporal RVL: A Depth Stream Compression Method (51)

Hanseul Jun, Jeremy Bailenson

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The advent of depth cameras has led to new opportunities, but at the same time to new challenges in the form of larger network bandwidth requirements. To address this problem, we propose a lossy compression method, Temporal RVL, which achieves better compression with little loss of depth information. Temporal RVL adds a preprocessing step to RVL and effectively exploits the similarities across frames while maintaining important depth information such as edges. With default settings, Temporal RVL achieves a compression ratio of 20.1 (4.2 times higher than RVL) while also providing faster decompression.
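
EXAMPLE: The poster describes Temporal RVL only as a preprocessing step over RVL that exploits inter-frame similarity; a hedged sketch of that general idea (thresholded temporal differencing, with the RVL run-length/variable-length coding itself elided) might be:

    import numpy as np

    def temporal_preprocess(prev, curr, threshold=10):
        """Zero out depth pixels that barely changed since the previous frame.

        Assumes 16-bit depth frames. The mostly-zero result has long runs
        that a run-length coder such as RVL compresses far better than the
        raw frame; the threshold trades depth fidelity for compression ratio.
        """
        diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
        keep = diff > threshold
        return np.where(keep, curr, 0).astype(np.uint16)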


Audio-Visual Spatial Alignment Requirements of Central and Peripheral Object Events (52)

Davide Berghi, Hanne Stenzel, Marco Volino, Adrian Hilton, Philip J.B. Jackson

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Immersive audio-visual perception relies on the spatial integration of both auditory and visual information which are heterogeneous sensing modalities with different fields of reception and spatial resolution. This study investigates the perceived coherence of audio-visual object events presented either centrally or peripherally with horizontally aligned/misaligned sound. Various object events were selected to represent three acoustic feature classes. Subjective test results in a simulated virtual environment from 18 participants indicate a wider capture region in the periphery, with an outward bias favoring more lateral sounds. Centered stimulus results support previous findings for simpler scenes.


Extracting and Transferring Hierarchical Knowledge to Robots Using Virtual Reality (53)

Zhenliang Zhang, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We study the knowledge transfer problem by training the task of folding clothes in the virtual world using an Oculus headset and validating it with a physical Baxter robot. We argue that such complex transfer is realizable if an abstract graph-based knowledge representation is adopted to facilitate the process. An And-Or Graph (AOG) grammar model is introduced to represent the knowledge, which can be learned from human demonstrations performed in Virtual Reality (VR). This is followed by a case analysis of folding clothes represented and learned by the AOG grammar model.


Comparing Motion-based Versus Controller-based Pseudo-haptic Weight Sensations in VR (54)

Yutaro Hirao, Tuukka M. Takala, Anatole Lécuyer

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This work examines whether pseudo-haptic experiences can be achieved using a game controller without motion tracking. For this purpose we implemented a virtual hand manipulation method that uses the controller’s analog stick. We compared the method’s pseudo-haptic experience to that of the conventional approach of using a hand-held motion controller. The results suggest that our analog stick manipulation can present pseudo-weight sensations in a similar way to the conventional approach. Thus, it could be a viable alternative for inducing pseudo-haptic experiences.


Asymmetric Interaction between HMD Wearers and Spectators with a Large Display (55)

Finn Welsford-Ackroyd, Andrew Chalmers, Rafael K dos Anjos, Daniel Medeiros, Hyejin Kim, Taehyun James Rhee

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: HMDs provide an immersive VR experience but make it difficult to share that experience with spectators not wearing HMDs. We propose a spectator-oriented approach for collaborative tasks and demonstrations. We render the virtual world on a large-scale tiled video wall, where the spectator can freely explore the environment independently of the HMD wearer. To improve collaboration, we implemented a pointer system with which the spectator can point at objects on the screen, mapping directly onto the objects in the virtual world. This interaction enables spectators to communicate effectively and feel semi-immersed without needing to wear an HMD.


Effects of Physical Prop Shape on Virtual Stairs Travel Techniques (56)

Connor Kasarda, Maria Swartz, Kyle Mitchell, Rajiv Khadka, Amy Banic

Session: Posters / Videos 1 & 3

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Virtual Reality training and architectural virtual environments benefit from a heightened sensation of stair climbing, which passive haptic props can provide. These methods offer a safe approach by placing short ramps on the floor rather than a physical staircase. To improve users' level of immersion, we conducted an experiment exploring how the shape of physical props changes the way users align themselves and move while traveling up or down a virtual set of stairs. We investigated three physical prop methods for ascending and descending virtual stairs. Our poster discusses our results, which indicate that elongated props provide a better experience and are preferred.


Posters / Videos 2 & 4

Evoking Pseudo-Haptics of Resistance Force by Viewpoint Displacement (57)

Shoki Tada, Takefumi Ogawa

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In conventional methods, pseudo-haptics is evoked by displacing a user’s hand in a virtual space. However, it is hard to apply this procedure in a mixed reality space, which is based on real space. In this study, we examined the pseudo-haptics evoked by dynamically displacing the viewpoint of a user in a video see-through environment. We simulated the proposed system in a virtual space, and conducted experiments to evaluate the effectiveness of the proposed method.


Real or surreal: A pilot study on creative idea generation in MR vs. VR (58)

Angel Hwang, Yilu Sun, Cameron McKee, Andrea Stevenson Won

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This pilot study examined collaborative creativity in virtual and mixed reality. Comparing equivalent MR and VR conditions, we explored whether different environments can influence users' creative production when asked to generate new ideas on water and energy saving. We adopted the Linkography approach to evaluate participants' creative performance by assessing both the final brainstorming outputs and the step-by-step processes of forming ideas. We looked at both participants' individual ideas (nodes) and their relationships with former ideas (links). Our results demonstrate that participants generated more ideas, and more unique ideas, in the VR condition.


Attractiveness and Confidence in Walking Style of Male and Female Virtual Characters (59)

Anne Thaler, Andreas Bieg, Naureen Mahmood, Michael J. Black, Betty Mohler, Nikolaus F. Troje

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Animated virtual characters are essential to many applications. Little is known so far about biological and personality inferences made from a virtual character’s body shape and motion. Here, we investigated how sex-specific differences in walking style relate to the perceived attractiveness and confidence of male and female virtual characters. The characters were generated by reconstructing body shape and walking motion from optical motion capture data. The results suggest that sexual dimorphism in walking style plays a different role in attributing biological and personality traits to male and female virtual characters. This finding has important implications for virtual character animation.


Magic Bounce: Playful Interaction on Superelastic Display (60)

Toshiki Nishino, Akihiro Matsuura

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present an interactive system using a superelastic surface on which 3D objects bounce. Ellipsoids are used as the interface owing to their bounce-back movement on the surface. Touch information for objects on the surface, including depth, is detected and used by the application projected onto the surface. Several interactive contents are demonstrated as examples of possible interactions with this system.


Evaluating the Influence of the HMD, Usability, and Fatigue in 360VR Video Quality Assessments (61)

Marta Orduna, Pablo Perez, César Díaz, Narciso García

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: VR communications present challenges, such as the influence of the HMD, usability, and fatigue on QoE, that should be addressed in subjective test methodologies. We therefore present an experiment in which video quality and presence are jointly assessed on two HMDs. We observed that while presence is affected by the evaluation mechanism and the duration of the test, quality is mainly affected by the HMD. This implies that methodologies accounting for both technical concepts, such as encoding and transmission, and socioemotional concepts are necessary to obtain reliable QoE results in VR.


Perceptual Distortions Between Windows and Screens: Stereopsis Predicts Motion Parallax (62)

Xiaoye Michael Wang, Anne Thaler, Siavash Eftekharifar, Adam Bebko, Nikolaus F. Troje

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Stereopsis and motion parallax provide depth information and can produce more realistic user experiences when integrated into a flat screen (e.g., immersive virtual reality). Extensive research shows that stereoscopic screens increase realism, while few studies have investigated users' responses to parallax screens without stereopsis. In this study, we examined users' evaluations of screens with only parallax or only stereopsis. We found that with only parallax, the mapping between observer motion and viewpoint change should be around 0.6 for a more realistic perceptual experience, and that observers were less sensitive to stereoscopic distortions resulting from a different interpupillary distance scaling.


Memory Journalist: Creating Virtual Reality Exergames for the Treatment of Older Adults with Dementia (63)

Sebastian Rings, Caspar Prasuhn, Frank Steinicke, Tobias Picker

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this poster we show the intermediate results of our first therapy exergame for older adults with neurological diseases, in particular dementia and Alzheimer's disease. It has been shown that regular exercise improves the balance of older adults when performing everyday tasks. However, creating exergames for older adults with dementia poses challenges in designing training tasks. We introduce Memory Journalist, an exergame based on moderately complex motor-cognitive tasks. In this game, patients need to memorize the locations of landmarks and photograph them using a tracked, 3D-printed wearable camera, exploiting simple motor tasks.


Exploring Effect Of Different External Stimuli On Body Association In VR (64)

Prabodh Sakhardande, Amarnath Murugan, Jayesh S. Pillai

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Body association in VR is the extent to which users perceive a virtual body as their own. Prior research has studied the effect of tactile, visual and visuomotor stimuli on body association. Additionally, studies have been conducted to test the effect of olfactory stimuli on immersion, but how it affects body association hasn’t been explored. Through a systematic study, we compare the effect of tactile, visual, visuomotor and olfactory stimuli on body association in VR. This work paves the way towards understanding olfactory sensations and how they might affect experiences in VR.


Gaze+Gesture Interface: Considering Social Acceptability (65)

Hwan Heo, Minho Lee, Sungjei Kim, Youngbae Hwang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: If smart glasses become more advanced and more popular than regular glasses in the future, socially acceptable user interfaces will be required. In this paper, we propose a user interface on HoloLens that uses gaze tracking for navigation and unobtrusive, deep-learning-based gestures for selection/manipulation, which is more socially acceptable than existing user interfaces on smart glasses. A study was conducted to investigate social acceptability from the users’ perspective, and the results showed the advantages of the proposed method in improving social acceptability.


Creating a VR Experience of Solitary Confinement (66)

Trenton Plager, Ying Zhu, Douglas A. Blackmon

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The goal of this project is to create a realistic VR experience of solitary confinement and study its impact on users. Although there have been active debates and studies on this subject, very few people have personal experience of solitary confinement. Our first aim is to create such an experience in VR to raise the awareness of solitary confinement. We also want to conduct user studies to compare the VR solitary confinement experience with other types of media experiences, such as films or personal narrations. Finally, we want to study people’s sense of time in such a VR environment.


Representing Virtual Transparent Objects on OST-HMDs Considering Accommodation and Vergence (67)

Yuto Kimura, Shinnosuke Manabe, Asako Kimura, Fumihisa Shibata

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We represent virtual transparent objects on OST-HMDs, considering accommodation and vergence. In AR with conventional stereoscopic displays, it is difficult to reproduce the defocus and the disparity between the images (surface, reflection, and refraction) of a transparent object at different depths. Our method represents them by reproducing the defocus with blur processing and the disparity with pseudo-parallax refraction. An experiment confirmed that a transparent object reproduced with the proposed method makes the images more realistic compared to an unprocessed one.


MotionNote: A Novel Human Pose Representation (68)

Dubeom Kim, Bharatesh Chakravarthi S B, Seonghun Kim, Adithya Balasubramanyam, Youngho Chai, Ashok Kumar Patil

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: 3D avatars and their motions are extensively used in robotics, HCI, the entertainment industry, fitness training, and virtual reality. This paper presents ongoing research on a novel motion notation approach called MotionNote for human bone poses, analogous to how Labanotation and musical notes are used for dance and music. MotionNote includes motion data capture and reconstruction on an avatar, motion visualization on a unit sphere, and motion notation on a 2D equirectangular perspective grid (EPG). Preliminary results show that understanding human motion through motion notation on the 2D EPG is feasible.
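
The abstract does not detail the notation format, but the sphere-to-grid step it describes corresponds to a standard equirectangular projection. The following sketch is our own illustration under that assumption (the grid resolution and function name are hypothetical), mapping a unit bone-direction vector to a cell on a 2D equirectangular grid:

```python
import math

def direction_to_epg(v, grid_w=36, grid_h=18):
    """Map a unit bone-direction vector to a cell on a 2D
    equirectangular grid (illustrative sketch only)."""
    x, y, z = v
    lon = math.atan2(y, x)                    # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, z)))   # latitude in [-pi/2, pi/2]
    col = int((lon + math.pi) / (2 * math.pi) * grid_w) % grid_w
    row = min(int((math.pi / 2 - lat) / math.pi * grid_h), grid_h - 1)
    return col, row

# A bone pointing straight up lands in the top row of the grid.
print(direction_to_epg((0.0, 0.0, 1.0)))  # -> (18, 0)
```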


Omnidirectional Motion Input: The Basis of Natural Interaction in Room-Scale Virtual Reality (69)

Ziyao Wang, Liping Xie, Haikun Wei, KanJian Zhang, JinXia Zhang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The basis of natural interaction in virtual reality is the exact input of human motions, including locomotion, body motion, facial expressions, etc. We define the input of all such human motions as omnidirectional motion input (OMI). Traditional motion input systems are designed for specific motions. This paper proposes an OMI system that provides room-scale locomotion input and body motion input simultaneously. Our system needs only a 2 m^2 area, with a time delay of less than 20 ms. Experiments demonstrate the effectiveness of our OMI system.


Tactile Presentation Device Using Sound Wave Vibration (70)

Yudai Okamoto, Yoichi Yamazaki, Masataka Imura

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this study, we focus on sound waves, and propose a method to present tactile sensations by generating vibrations via irradiating sound waves of various frequencies and amplitudes on a plate. In experiments using a parametric speaker as an acoustic device, the vibration of the plate during sound wave irradiation was measured; it was confirmed that different tactile sensations could be obtained by changing the frequency and amplitude of the irradiated sound wave.


Real-time Illumination Estimation for Mixed Reality on Mobile Devices (71)

Di Xu, Zhen Li, Yanning Zhang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present a lightweight lighting estimation method for the purpose of realistic mixed reality (MR) on mobile devices. Given a single RGB image, our method estimates the environment lighting and renders the virtual object in real-time. Compared to previous approaches, our method is more robust and efficient as it works for both indoor and outdoor scenes in real-time. Experiments show that our approach achieves realistic rendering in various MR scenarios.


Real-time Depth Estimation for Aerial Panoramas in Virtual Reality (72)

Di Xu, Xiao Jun Liu, Yanning Zhang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present a real-time depth estimation method for challenging aerial panoramas, where the viewing angle changes rapidly and the lighting conditions are more complicated. Our graph convolutional network (GCN)-based framework makes full use of the global connection information of omnidirectional images and is trained with extensive outdoor data. Experiments show that our method robustly and accurately estimates the depth of outdoor aerial panoramas captured from various angles.


Guided Sine Fitting for Latency Estimation in Virtual Reality (73)

Jan-Philipp Stauffert, Florian Niebling, Jean-Luc Lugrin, Marc Erich Latoschik

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Latency in Virtual Reality applications can lead to decreased performance and cybersickness. Multiple approaches exist to determine latency, yet many scientific publications fail to report their system’s latency despite its potentially detrimental impact. This paper extends Steed’s sine fitting approach by using KCF tracking to track the positions of physical objects in video recordings. We provide software for convenient use. Our combination of sine fitting with KCF tracking makes it possible to measure the Motion-To-Photon latency of arbitrary tracking devices without any additional preparation.
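
For readers unfamiliar with Steed’s method, the latency falls out of the phase lag between two fitted sinusoids: the tracked object is oscillated, and both its physical position and its rendered counterpart are tracked in the video recording. The sketch below illustrates only that final fitting step, assuming two time-aligned 1D position traces have already been extracted (e.g., by the KCF tracker); it is our illustration, not the authors’ released software:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sine(t, x, freq_guess=1.0):
    """Fit x(t) = A*sin(2*pi*f*t + phi) + c; return (A, f, phi, c)."""
    def model(t, A, f, phi, c):
        return A * np.sin(2 * np.pi * f * t + phi) + c
    p0 = [np.std(x) * np.sqrt(2), freq_guess, 0.0, np.mean(x)]
    popt, _ = curve_fit(model, t, x, p0=p0)
    return popt

def motion_to_photon_latency(t, physical, displayed, freq_guess=1.0):
    """Latency = phase lag of the displayed trace behind the physical
    trace, converted from radians to seconds."""
    _, f, phi_phys, _ = fit_sine(t, physical, freq_guess)
    _, _, phi_disp, _ = fit_sine(t, displayed, freq_guess)
    dphi = (phi_phys - phi_disp) % (2 * np.pi)  # displayed lags physical
    return dphi / (2 * np.pi * f)               # seconds
```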


Building the Virtual Stage: A System for Enabling Mixed Reality Theatre (74)

Jietong Chen, Kunal Shailesh Shitut, Joe Geigel, David Munnell, Marla Schweppe

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this work, we present an infrastructure for enabling live theatrical productions that make use of virtual stage elements viewed using augmented reality devices. We present challenges that guided the design of the system and illustrate use of the framework through examples from live productions that utilize our prototype implementation.


ReliveReality: Enabling Socially Reliving Experiences in Virtual Reality via a Single RGB camera (75)

Cheng Yao Wang, Shengguang Bai, Andrea Stevenson Won

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present a new time-machine-like experience sharing method, ReliveReality, which transforms photo/video memories into 3D reconstructed memories and allows users to socially relive experiences in VR. ReliveReality utilizes deep-learning-based computer vision techniques to reconstruct people in clothing, estimate multi-person 3D poses, and reconstruct 3D environments from only a single RGB camera. Integrated with a networked multi-user VR environment, ReliveReality enables people to enter a past experience, move around, and relive a memory from different perspectives in VR together.


Natural User Interfaces for Mixed Reality: Controlling Virtual Objects with your Real Hands (76)

Sergio Serra-Sanchez, Redouane Kachach, Ester Gonzalez-Sosa, Alvaro Villegas

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We explore a novel way of interacting in Mixed Reality using hand dynamics and appearance jointly. Our prototype is based on an HTC Vive, a Leap Motion sensor, and a ZED Mini stereo camera. The Leap Motion tracks the egocentric hands in 3D. To obtain the segmented hands, a color-based algorithm is applied to the egocentric images. A user interface consisting of interactive buttons is designed, one of them with a physical prop enabling haptic sensation. Tracking of physical objects is implemented using fiducial markers and the ArUco library. Results suggest that users find this novel way of interacting attractive, especially while using the physical prop.


Hand Motion with Eyes-free Interaction for Authentication in Virtual Reality (77)

Yujun Lu, BoYu Gao, Jinyi Long, Jian Weng

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Designing an authentication method is a crucial component of securing privacy in information systems. In this work, we propose a novel authentication method in which a user performs a 3D trajectory on two sides in an eyes-free manner. We designed a pilot study comparing the usability and security of eyes-engaged and eyes-free input. The initial results revealed that our proposed method can achieve a trade-off between usability and security.


Recognition of the Emotional Influence of a Character: Experimental Design and Preliminary Results (78)

Juan Sebastián Vargas Molano, Nicolas Casanova, Oscar Carrillo Carrillo, Wilson J. Sarmiento

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This paper presents the experimental setup and preliminary results of a work in progress that aims to induce and recognize the emotional influence of a character. The experiment starts with the user seated on a bus. A character boards the bus, walks over, and sits in front of the user. The character displays three different behaviors: indifference, anxiety, and aggressiveness. A set of sensors records the biometric signals of the user, and a bag-of-features approach is applied to the signals in combination with machine-learning algorithms to recognize different emotional responses.


Modulating the Gait of a Real-Time Self-Avatar to Induce Changes in Stride Length During Treadmill Walking (79)

Iris Willaert, Rachid Aissaoui, Sylvie Nadeau, Cyril Duclos, David Labbe

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In VR, it is possible to simulate visual self-representation by mapping one’s body movements to an avatar. The purpose of this study was to investigate the effects of increasing an avatar’s stride length on the stride length of embodied users. Nine healthy subjects walked on a treadmill while viewing their self-avatar in a virtual mirror through an HMD, while the stride length of the avatar was manipulated. The results demonstrated a tendency for modified visual feedback, experienced through an embodied avatar, to induce changes in the stride length of healthy participants.


A just noticeable difference for perceiving virtual surfaces through haptic interaction (80)

Jing Huang, Deng Wang, Yaoping Hu

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The perception of virtual surfaces via force feedback (i.e., haptic interaction through the sense of touch) is a popular topic in virtual environments (VEs). However, few studies have reported a just noticeable difference (JND) for perceiving convex/concave, vertically placed virtual surfaces under the influence of force directions. We postulated a JND-critical hypothesis that there is a minimum amplitude subtended by such surfaces for human perception under force directions. Our empirical results confirmed this hypothesis, suggesting an amplitude range of 1.19 ~ 1.95 mm for distinguishing the virtual surfaces in a VE. This finding has potential for creating 3D VEs for haptic interaction.


An Immersive Gesture-based Drone Command System (81)

Saksham Gupta, Pronnoy Goswami, Parth Vora, Hudson Chase, Mohak Bheda, Abhimanyu Chadha, Denis Gracanin

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: One of the primary reasons flying a drone is difficult is a limited understanding of the mapping between remote controller inputs and the physical movements and orientation of the drone. We describe an immersive gesture-based drone control approach that aims to increase drone flying accuracy and reduce the initial learning curve associated with flying a drone.


Individual differences in teleporting through virtual environments: A latent profile analysis (82)

Lucia Cherep, Alex Lim, Jonathan Kelly, Anthony Miller, Stephen B. Gilbert

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Teleportation in virtual reality (VR) affords the ability to explore beyond the physical space. Previous work has demonstrated that this interface comes at a spatial cognitive cost, though upon closer inspection not everyone appears similarly affected. A latent profile analysis identified three groups that differed significantly in spatial updating performance, and follow-up analyses showed significant differences in objective measures of spatial ability (e.g., mental rotation and perspective-taking). These results suggest that there are individual differences in domains of spatial cognition that relate to how well a user keeps track of his or her location while teleporting in VR.


Investigating the Influence of Odors’ Visual Representations on the Sense of Smell: A Pilot Study (83)

Adrien Alexandre Verhulst, Eulalie Verhulst, Minori Manabe, Hiroto Saito

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This pilot study investigates the representation of smells in Virtual Reality and its association with real smells. Olfactory stimuli in VR remain limited and difficult to use effectively. We propose a work-around with a "synesthesia-like" approach (e.g., the smell of perfume is associated with pink smoke, or seeing a violet can elicit its odor). We present two studies (N=14) in which: 1) we compare three smell representations (smoke-based, image-based, and visually physically-based) to determine the best representation; and 2) we compare good/bad real smells with good/bad virtual smells to understand how users associate them.


Vibro-vestibular Wheelchair with a Curved Pedestal Presenting a Vehicle Riding Sensation in a Virtual Environment (84)

Vibol Yem, Ryunosuke Yagi, Minori Unno, Fumiya Miyashita, Yasushi Ikei

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We developed a vehicle ride simulation system for immersive virtual reality. The system comprises a wheelchair for vibration and vestibular sensation, and a pedestal with a curved surface for the wheelchair to run on to provide a gravitational acceleration sensation. Our system was presented as a demonstration at an international conference with two examples: riding in a car and riding on a roller coaster. We gave 66 participants a questionnaire to evaluate the quality of our system. Results showed that whole-body vibration from the actuator improved immersion in the riding experience.


Patrik Goncalves, Tonja-Katrin Machulla, Jason Orlosky

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Age-related macular degeneration (AMD) is the most common cause of acquired blindness in industrialized countries. Individuals with AMD have difficulty recognizing faces or distinguishing keys on a keyboard, but their peripheral vision is not affected, so patients can still recognize objects in the periphery. We present a head-mounted display-based visual assistant that presents a magnified image to the user within a section of the screen. In this work, we examine how different sizes of this magnification section, as well as different magnification factors, affect the precision, accuracy, and speed of inputting words on a keyboard for patients with AMD, which we simulated with eye tracking.


Perception of Walking Self-body Avatar Enhances Virtual-walking Sensation (86)

Yusuke Matsuda, Junya Nakamura, Tomohiro Amemiya, Yasushi Ikei, Michiteru Kitazaki

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Various virtual walking systems have been developed using treadmills or leg-support devices to move users’ legs. We proposed and evaluated a virtual walking system for seated observers using only passive sensations such as optic flow and foot vibrations. In this study, we examined whether a virtual body representing the observer’s walking could facilitate the sensation of virtual walking in a virtual environment. We found that the virtual body enhanced the sensations of walking, leg action, and presence, but not self-motion. These results suggest that perceiving a walking self-body avatar facilitates sensations of walking and presence independently of foot vibrations.


Investigation of the effect of virtual reality on postural stability in healthy adults (87)

Jinseok Oh, Christopher J Curry, Arash Mahnan

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Previous research has shown that individuals behave differently in certain virtual reality tasks. The effect of VR on human posture and stability is an important factor that can influence future applications of VR devices. The current study investigates how a person’s postural stability differs between VR and a normal environment while attempting to replicate the influence of target distance on sway. Ten healthy subjects were tested in both environments with targets varying in distance. The results showed a significant difference in postural stability for normal anatomical stance tasks between the VR and normal environments.


AR Room: Real-Time Framework of Camera Location and Interaction for Augmented Reality Services (88)

Sangheon Park, Hyunwoo Cho, Chanho Park, Young-Suk Yoon, Sung-Uk Jung

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper, we propose an AR service framework in which multiple users can each experience content in a single shared space. To implement the proposed framework, we consider a structure and strategy for sharing views and data, such as device location and interaction. Estimated location information from each client device is sent to the server. The server synchronizes the location and interaction information based on a point cloud map and shares it with all clients. To demonstrate our framework, we built an AR service called “Winter Village” and present the strategy of our framework.


Automated Assessment System with Cross Reality for Neonatal Endotracheal Intubation Training (89)

Shang Zhao, Wei Li, Xiaoke Zhang, Xiao Xiao, Yan Meng, John Philbeck, Naji Younes, Rehab Alahmadi, Lamia Soghier, James Hahn

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Neonatal endotracheal intubation (ETI) is a resuscitation skill that requires an effective training regimen. The current manikin-based training regimen lacks visualization inside the manikin and quantification of performance, resulting in inaccurate guidance and highly variable manual assessment. We present an XR ETI simulation system that registers augmented instruments to their virtual counterparts, capturing all motions, visualizing the entire procedure, and offering instructors information for assessment. Our automated assessment model predicts ETI performance using performance parameters extracted from the collected motions and scores from an expert rater, achieving 83.5% classification accuracy.


AffordIt!: A Tool for Authoring Object Component Behavior in VR (90)

Sina Masnadi, Andrés N Vargas González, Brian Williamson, Joseph LaViola

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This paper presents AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, domain experts find themselves with complete virtual objects to which no intrinsic behaviors have been assigned, forcing them to use unfamiliar desktop-based 3D editing tools. Our solution allows a user to select a region of interest with a mesh cutter tool, assign an intrinsic behavior, and view an animation preview of their work. To evaluate the usability and workload of AffordIt!, we ran an exploratory study to gather feedback. Results show high usability and low workload ratings.


Augmented Reality for Infrastructure Inspection with Semi-autonomous Aerial Systems: An Examination of User Performance, Workload, and System Trust (91)

Jared Van Dam, Alexander Krasner, Joseph L Gabbard

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The use of augmented reality (AR) with drones in infrastructure inspection can increase human capabilities by helping workers access hard-to-reach areas and supplementing their field of view with useful information. Still unknown, though, is how these aids impact performance when they are imperfect. A total of 28 participants flew a semi-autonomous drone while completing a target detection task around a simulated bridge. Results indicated significant differences between cued and un-cued trials but not between the four cue types. Differences in trust among the four cues indicate that participants may trust some cue styles more than others.


Applying Stress Management Techniques in Augmented Reality: Stress Induction and Reduction in Healthcare Providers During Virtual Triage Simulation (92)

Jacob Stuart, Ileri Akinnola, Francisco Guido-Sanz, Mindi Anderson, Desiree Diaz, Greg Welch, Benjamin Lok

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Exposure to realistic stressful situations during an educational program may help mitigate the effects of stress on performance. We explored how virtual humans in an augmented reality environment induce stress. We also explored if users can effectively utilize stress management techniques taught during a simulation. We conducted a within-subjects pilot experiment (n=12) using an exploratory mixed-method design with a series of virtual patients using the Simple Triage and Rapid Treatment (START) system. This work proposes a need to explore how realistic scenarios using virtual humans can induce stress, and which techniques are most effective in reducing user stress in virtual simulations.


Neurophysiological Effects of Presence in Calm Virtual Environments (93)

Arindam Dey, Jane Phoon, Shuvodeep Saha, Chelsea Dobbins, Mark Billinghurst

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Presence, the feeling of being there, is an important factor that affects the overall experience of virtual reality. Presence is typically measured through post-experience subjective questionnaires. While questionnaires are a widely used method in human-based research, they suffer from participant biases, dishonest answers, and fatigue. In this paper, we measured the effects of different levels of presence (high and low) in virtual environments using physiological and neurological signals as an alternative method. Results indicated a significant effect of presence on both physiological and neurological signals.


Minimal Embodiment: Effects of a Portable Version of a Virtual Disembodiment Experience on Fear of Death (94)

Carmen Chan, Angel Hwang, Daphne Sun, Andrea Stevenson Won

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: To understand the potential of applying consumer VR in clinical settings, the present study experimented with “minimal embodiment” for body ownership using a 3-DOF consumer system. Building on a virtual out-of-body experience (OBE) in Bourdin and colleagues’ study (2017), we compared participants’ fear of death in a control condition (participants remained in control of the avatar body) and a disembodiment condition (participants drifted out of the avatar body and lost visuotactile contact). Results revealed an indirect effect of perceived embodiment increasing fear of death through a heightened degree of reported OBE in the experimental condition. Limitations and future work are addressed accordingly.


Improving Camera Travel for Immersive Colonography (95)

Soraia F Paulo, Daniel Medeiros, Pedro Brasil Borges, Joaquim P Jorge, Daniel S Lopes

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Colonography allows radiologists to navigate intricate subject-specific 3D colon images. Typically, travel is performed via Fly-Through or Fly-Over techniques that enable semi-automatic travel along a constrained, well-defined path. While these techniques have been studied in non-VR desktop environments, their performance is not yet well understood in VR setups. In this paper, we study the effect of both techniques in immersive colonography and introduce the Elevator technique, which maintains a fixed camera orientation throughout navigation. Results suggest Fly-Over was overall the best for lesion detection at the cost of slower procedures, while Fly-Through may offer a more balanced trade-off between speed and effectiveness.


Perception of Head Motion Effect on Emotional Facial Expression in Virtual Reality (96)

Qiongdan Cao, Hui Yu, Charles Nduka

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper we present a study investigating the impact of head motion on realism, its effect on perceived emotional intensity, and how it affects affinity for facial expressions. The purpose of this work is to enhance the realism of interactive virtual characters. We designed an experiment to measure this impact through a combination of methods, including behavioural data from subjects rating designed facial animations in Virtual Reality (VR) and questionnaire ratings. The results showed that head motions had a positive impact on facial expressions: they enhanced realism, perceived emotional intensity, and affinity for virtual characters.


Predicting Tolerance to Velocity Mismatch Between Virtual and Physical Head Rotation in Cloud Virtual Reality Systems (97)

Jiaqi Zhang, Minxia Yang, Lu Yu

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: A virtual velocity loss strategy is introduced to reduce the viewport quality degradation caused by the limited capacity and bandwidth of current network transmission for cloud VR services. To avoid the negative effects of velocity mismatch, this paper quantitatively models human perceptual tolerance to velocity mismatch with regard to various physical rotational velocities and velocity losses through a psychophysiological experiment. The experimental results revealed a decreased tolerance to loss at larger physical velocities. We found that the virtual rotational velocity can be tuned at least about 25.8 percent lower than the physical rotational velocity.
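
As a worked illustration of the velocity loss strategy (a deliberate simplification, since the experiment found that tolerance shrinks at larger physical velocities, so a fixed ratio is only a rough approximation of the reported result):

```python
def virtual_velocity(physical_deg_per_s, loss_ratio=0.258):
    """Render head rotation slower than the physical rotation by the
    tolerated loss ratio reported in the abstract (illustrative only)."""
    return physical_deg_per_s * (1.0 - loss_ratio)

# A 100 deg/s physical head turn could be rendered at roughly 74.2 deg/s.
print(virtual_velocity(100.0))  # -> 74.2
```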


Automatic Calibration of Commercial Optical See-Through Head-Mounted Displays for Medical Applications (98)

Xue Hu, Fabrizio Cutolo, Fabio Tatti, Ferdinando Rodriguez Y Baena

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The manual calibration of commercial Optical See-Through Head-Mounted Displays is neither accurate nor convenient for medical applications; an automatic calibration method is thus desired. State-of-the-art automatic calibrations simplify the eye-screen system as a pinhole camera and require tedious offline calibration. Furthermore, they have never been tested on commercial products. We present a gaze-based automatic calibration method that can be easily implemented in commercial headsets. The algorithm has been tested with the Microsoft HoloLens. Compared with manual calibration, user studies show that our method provides a comparably accurate but more convenient and practical solution under both monocular and binocular rendering modes.


VR2ML: A Universal Recording and Machine Learning System for Improving Virtual Reality Experiences (99)

Yuto Yokoyama, Katashi Nagao

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We developed software for recording and reusing user actions and their associated effects in various VR applications. In particular, we implemented two plugin systems that allow 3D motion data recorded in VR to be transferred to a machine learning module without programming. One, called “VRec,” runs on Unity and records actions and events in VR. The other, called “VR2ML,” uses data recorded with VRec for machine learning.


PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (100)

Ye Pan, Kenny Mitchell

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Augmented reality devices enable new approaches to character animation; e.g., given that character posing is three-dimensional in nature, it follows that interfaces with higher degrees of freedom (DoF) should outperform 2D interfaces. We present PoseMMR, which allows multiple users to animate characters in a mixed reality environment, much as a stop-motion animator manipulates a physical puppet frame by frame to create a scene. Our study of group-based expert walkthroughs showed that PoseMMR can facilitate immersive posing, animation editing, version control, and collaboration. We provide a set of guidelines to foster the development of immersive technologies as tools for collaborative authoring of character animation.


PhyAR: Determining the Utility of Augmented Reality for Physics Education in the Classroom (101)

Corey Richard Pittman, Joseph LaViola

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Physics is frequently cited as a difficult roadblock and a hindrance to retention in STEM majors. In this paper, we present the results of a study exploring the potential utility and use cases of augmented reality in secondary and post-secondary physics courses. To gather meaningful information, we developed PhyAR, a prototype physics education application in augmented reality. We collected feedback and opinions from a qualitative study of university students with STEM backgrounds. Our findings point toward a clear desire for more interactive 3D AR content in physics courses.


Place in the World or on the Screen? Investigating the Effects of Augmented Reality Head-up Display User Interfaces on Drivers’ Spatial Knowledge Acquisition and Glance Behavior (102)

Nayara Faria, Joseph L Gabbard, Missie Smith

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial in situations where technology fails or we need to find locations not included in a navigation system’s database. In this work, we present a study that examines how screen-relative and world-relative augmented reality (AR) head-up display interfaces affect drivers’ glance behavior and spatial knowledge acquisition. Results showed that both AR interfaces have a similar impact on the level of spatial knowledge acquired. However, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR interfaces, with conformal graphics demanding more visual attention from drivers.


Pre-Contact Kinematic Features for the Categorization of Contact Events as Intended or Unintended (103)

Jaime Maldonado, Thorsten Kluss, Christoph Zetzsche

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Contact events during manipulation tasks can be divided into two categories: intended and unintended. We investigated the categorization of contact events during an object-placing task executed in virtual reality, based on kinematic features measured during the movement segment preceding the contact. The experimental setup enabled us to generate unintended contacts by triggering unexpected interruptions of the placing movement. Experimental results indicate that the kinematic features enable the distinction of intended and unintended contacts independent of substantial variations in movement properties (amplitude, duration, velocity), unless the unintended contact occurs toward the end of the planned movement.


Elastic-Move: Passive Force Feedback Devices for Virtual Reality Locomotion (104)

Da-Chung Yi, Kuan Ning Chang, YunHsuan Tai, I Cheng Chen, Yi-Ping Hung

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper, we propose two haptic devices with force feedback, “Elastic-Rope” and “Elastic-Box,” for VR locomotion. Compared to dash locomotion, our devices allow users to move continuously in the virtual environment with less VR sickness. In addition, we conducted an experiment that compared the degree of VR sickness, measured by the simulator sickness questionnaire (SSQ), while moving in the virtual environment with Dash, Teleport, Elastic-Rope, and Elastic-Box. As a result, Elastic-Rope and Elastic-Box reduced users’ VR sickness. Moreover, Elastic-Box can be used in numerous applications because it is a low-cost and versatile device. This work suggests that passive force feedback can effectively reduce VR sickness.


Micro-mirror array plates simulation using ray tracing for mid-air imaging (105)

Shunji Kiuchi, Naoya Koizumi

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present a simulation of micro-mirror array plates (MMAPs) using ray tracing for displaying a mid-air image. MMAPs form a mid-air image at the position plane-symmetrical with respect to the MMAPs. However, MMAPs have two limitations: the generation of undesired images and a limited visible range. Since these limitations change depending on the structure of the mid-air imaging system and the observing position, it is difficult for non-optical designers to use such a system. To solve this problem, we provide a ray-tracing-based simulation for MMAPs. We investigated the optimum parameters to form a mid-air image using ray tracing. We then compared a simulated CG image with an actual photograph to confirm that the characteristics of MMAPs can be simulated.
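
In the ideal case the abstract describes, the mid-air image forms at the position plane-symmetric to the source with respect to the MMAP plate, which reduces to a point reflection across a plane. The full ray tracer additionally models the undesired stray images and the limited visible range, which this minimal sketch (our own, with hypothetical names) omits:

```python
import numpy as np

def midair_image_position(source, plane_point, plane_normal):
    """Reflect a display point across the MMAP plane to obtain the
    plane-symmetric position where the mid-air image forms."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    p = np.asarray(source, dtype=float) - np.asarray(plane_point, dtype=float)
    return np.asarray(plane_point, dtype=float) + p - 2.0 * np.dot(p, n) * n

# A display point 0.2 m behind the plate images 0.2 m in front of it.
print(midair_image_position([0.0, 0.0, -0.2], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
# -> [0.  0.  0.2]
```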


Effect of marker location on user detection in omnidirectional images (106)

Ricardo Eiris, Brendan John, Eakta Jain, Masoud Gheisari

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Omnidirectional images have seen increasing use in virtual tours to display remote destinations. These applications use markers or landmarks within the images to drive user interaction. Viewers must be able to efficiently locate and interact with markers for a positive user experience that retains immersion. However, the effect of marker positioning at different spatial locations on user performance remains unstudied. This work studies the positioning of visual markers within an omnidirectional image environment at three different elevation ranges. Our results show that markers positioned less than 32° from the equator were found significantly faster than markers between 32° and 64°.
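
For a marker in an equirectangular omnidirectional image, the elevation used in this kind of analysis can be recovered directly from the marker’s pixel row. A small illustrative sketch (the band boundaries follow the abstract; the function names and half-pixel offset are our assumptions):

```python
def marker_elevation_deg(row, image_height):
    """Elevation of a pixel in an equirectangular image, in degrees from
    the equator (0) toward the poles (+/-90); row 0 is the top edge."""
    return 90.0 - 180.0 * (row + 0.5) / image_height

def elevation_band(elevation_deg):
    """Bucket a marker into the elevation ranges studied."""
    a = abs(elevation_deg)
    if a < 32.0:
        return "near-equator (<32 deg)"   # found significantly faster
    if a <= 64.0:
        return "mid-elevation (32-64 deg)"
    return "near-pole (>64 deg)"
```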


3D Human Reconstruction from an Image for Mobile Telepresence Systems (107)

Yuki Takeda, Akira Matsuda, Jun Rekimoto

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In remote collaboration using a telepresence system, viewing a 3D-reconstructed workspace can help a person understand the workspace situation. However, when reconstructing the workspace in 3D using depth cameras mounted on a mobile telepresence system, the backs of workers or objects become blind spots and cannot be captured. We propose a method to reduce the blind spots of a 3D-reconstructed person by reconstructing a 3D model of the person from an RGB-D image taken with a depth camera and changing the pose of the 3D model according to the movement of the person’s body.


Improving Free-Viewpoint Video Content Production Using RGB-Camera-Based Skeletal Tracking (108)

Andrew MacQuarrie, Anthony Steed

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Free-Viewpoint Video (FVV) is a type of volumetric content in which an animated, video-textured 3D mesh of a character performance is constructed using data from an array of cameras. Previous work has demonstrated excellent results when creating motion graphs from FVV content, but these techniques are often prohibitively expensive in practice. We propose the use of skeletons to identify cut points between FVV clips, allowing a minimal set of frames to be processed into a 3D mesh. While our method performed with 2.8% poorer accuracy than the state-of-the-art for our synthetic dataset, cost and processing time requirements are dramatically reduced.
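
The abstract does not specify the matching criterion, but a natural way to identify candidate cut points from skeletons is to threshold a joint-wise pose distance between the frames of two clips. A minimal sketch under that assumption (the joint layout and threshold value are illustrative, not taken from the paper):

```python
import numpy as np

def pose_distance(joints_a, joints_b):
    """Mean Euclidean distance between corresponding joints of two
    skeleton poses, each an array of shape (n_joints, 3)."""
    return float(np.mean(np.linalg.norm(joints_a - joints_b, axis=1)))

def candidate_cut_points(clip_a, clip_b, threshold=0.05):
    """Return (i, j) frame pairs whose skeletal poses are similar enough
    to consider splicing frame i of clip_a into frame j of clip_b."""
    return [(i, j)
            for i, pa in enumerate(clip_a)
            for j, pb in enumerate(clip_b)
            if pose_distance(pa, pb) < threshold]
```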


Recurrent R-CNN: Online Instance Mapping with context correlation (109)

Chen Wang, Yue Qi, Yang Shuo

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We introduce Recurrent R-CNN, a novel neural network architecture that uses context correlation for online semantic instance segmentation in commodity RGB-D scenes. Recurrent R-CNN uses a new region convolutional neural network combined with a recurrent unit to assign instance-level semantic class labels to different objects in the scene. We combine it with an object-level RGB-D SLAM system so that information flows in a spatially consistent way across video frames. These dependencies allow significantly higher accuracy and consistency in instance segmentation than state-of-the-art alternatives. We also demonstrate a promising augmented reality application built on this object-level scene understanding.


Prop-Based Egocentric and Exocentric Virtual Object Storage Techniques (110)

Rajiv Khadka, Amy Banic

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: As Virtual and Augmented Reality tasks become increasingly complex, there is a need to better organize virtual information within the virtual environment. Users may need to organize their virtual objects or tools by hiding them until needed or by carrying them to another location. For these scenarios, there may be benefits to associating or storing virtual objects in relation to one’s avatar (Egocentric) or in relation to other virtual objects in the environment (Exocentric). This research presents these techniques and the results of a user study comparing the effects of Egocentric and Exocentric Virtual Object Storage Techniques on cognition in a virtual environment.


The Effects of Avatar Visibility on Behavioral Responses with or without Mirror-Visual Feedback in Virtual Environments (111)

BoYu Gao, Joonwoo Lee, Huawei Tu, Wonjun Seong, HyungSeok Kim

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Existing studies have shown that increasing avatar visibility alone does not improve perceptual responses. With recent advances in VR technology, full-body-tracked avatars have been adopted in social interactions and games with lightweight head-mounted displays. However, the effects of full-body avatars on behavioral responses remain unknown. Hence, in this study, we designed full-body avatar visibility conditions with and without virtual-mirror feedback, and investigated their effects on presence, embodiment, and task performance in a bow-shooting game. This study provides initial results on using avatar visibility to enhance behavioral responses in virtual environments.


Interactive Navigation System in Mixed-Reality for Neurosurgery (112)

Ehsan Azimi, Peter Kazanzides, Ruby Liu, Camilo Molina, Judy Huang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In many bedside procedures, surgeons must rely on their spatiotemporal reasoning to estimate the position of an internal target by manually measuring external anatomical landmarks. One example of such a procedure is ventriculostomy, where the surgeon inserts a catheter into the patient’s skull to divert cerebrospinal fluid and alleviate intracranial pressure. However, one-third of insertions miss the target, which can ultimately lead to undesirable surgical outcomes. We have developed an interactive navigation system using mixed reality on a head-mounted display that overlays the target directly on the patient’s anatomy and provides visual guidance for the surgeon to insert the catheter on the correct path to the target.


Potential Effects of Dynamic Parallax on Eyesight in Virtual Reality System (113)

Hui Fang, Dongdong Weng, Jie Guo, Ruiying Shen, Haiyan Jiang, Ziqi Tu

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This paper presents an investigation of dynamically changing parallax for 2D content in VR systems and its potential effects on eyesight. By controlling the movement of the left and right views, parallax is continuously and slowly adjusted within the comfort zone. This alleviates the eyes’ static fixation at the display’s fixed distance, relieving visual discomfort. The experimental results show that dynamic parallax can alleviate visual discomfort and reduce potential adverse effects on eyesight to some extent. Moreover, uniformly varying parallax has a better effect than abruptly jumping parallax.


Usability of a Foreign Body Object Scenario in VR for Nursing Education (114)

Benjamin Stephanus Botha, Lizette de Wet, Yvonne Botma

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this study, the researchers created a Virtual Environment (VE) where nursing students could practice managing a scenario of an adult patient with a respiratory foreign body object. The VE underwent rigorous usability and user experience testing with the help of two expert review panels, one made up of Computer Science experts and one of Health Science experts. Each panel evaluated the VE and scenario using heuristic evaluation and cognitive walkthroughs. The results and recommendations were implemented to improve the VE, thus enabling students to experience an accurate virtual scenario in their training. The improved VE based on the results of the expert reviews is presented in this paper.


Optical Flow, Perturbation Velocities and Postural Response In Virtual Reality (115)

Markus Santoso

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The purpose of this study was to investigate the effect of optical flow velocity in a virtual reality (VR) environment on users’ postural control. We hypothesized that the velocity of the optical flow would perturb users’ balance. Seventeen young, healthy participants were tested in one-foot support stances. Our study showed that the visual perturbations increased center-of-pressure (COP) distance and that the slowest perturbation velocity induced the largest response. Developers in the VR community could use this information to raise awareness that any sudden shift in the virtual environment, at any velocity, could reduce a user’s postural stability and place them at risk of falling, particularly at slower perturbation velocities.


Framing the Scene: An Examination of Augmented Reality Head Worn Displays for Construction Assembly Tasks (116)

Eric T Bloomquist, Joseph L Gabbard, Kyle Tanous, Yimin Qin, Tanyel Bulbul

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The aim of this work is to examine how augmented reality (AR) head-worn displays (HWDs) influence worker task performance in comparison to traditional paper blueprints when assembling three wooden frame walls of various sizes. In our study, 18 participants assembled three different-sized frames using one of three display conditions (a conformal AR interface, a tag-along AR interface, and paper blueprints). Results indicate that for large-frame assembly, the conformal AR interface reduced assembly errors, yet there were no differences in assembly times between display conditions. Additionally, traditional paper blueprints resulted in significantly faster assembly times for small-frame assembly.


Fast Hand-Object Interaction Using Gesture Guide Optimization (117)

Yunlong Che, Yue Qi

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper, we present a fast approach to modeling the interactions between the human hand and virtual objects. Given a detailed hand model, we adopt a gesture-guided optimization to find stable poses for manipulating the virtual object. During this process, we identify the user’s gesture using a predefined proposition-based description script and the re-tracked hand pose, then generate the contact points using the gesture prior. Beginning with these points, we compute the stable pose for interaction and then obtain plausible-looking motion. In practice, our real-time algorithm can perform common manipulations of virtual objects in less than 20 ms (running on a single CPU).


Phil Lopes, Tian Nana, Ronan Boulic

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Cybersickness continues to be one of the main barriers to mainstream adoption of Virtual Reality (VR). Despite the wealth of research available on this topic, it is still an unsolved problem. This paper explores the potential influence of cybersickness on player blink rate over the course of a VR experience. This investigation was conducted on the same VR task with two diverging control schemes carrying a low or high risk of inducing cybersickness. Data were collected from 34 participants across two separate playing sessions. Although no significant differences were observed, sick individuals showed a higher blink-rate frequency over the course of the VR experience.


Towards an Immersive Guided Virtual Reality Microfabrication Laboratory Training System (119)

Fang Wang, Xinhao Xu, Weiyu Feng, Jhon Bueno Vesga, Zheng Liang, Scottie D Murrell Mr

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: In this paper, we present a 3D virtual-reality-based interactive laboratory training system that provides training on how to operate a variety of machines in a microfabrication lab environment. The training system focuses on fully immersive guided learning features that help users learn the lab operations independently. The system consists of a hint system that automatically highlights lab tools, VR hand controller assistance, and an automatic scoring system. Ten participants were tested using the system. Preliminary results showed clear improvements in learning speed, independent learning ability, and error reduction in this immersive guided VR learning environment.


Design of virtual reality reach and grasp modes factoring upper limb ergonomics (120)

Alvaro Joffre Uribe Quevedo, David Rojas Gualdron, Priya Kartick

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Successfully performing tasks in virtual environments requires effective interactions between the user and the environment. However, the technology presents "one-size-fits-all" solutions that do not account for the high variability of the users who now have access to it. Efforts to improve human factors have led to custom-made user input devices that factor in ergonomics to increase task-completion effectiveness. In this paper, we present a preliminary study on the usability, engagement, and task completion of two interactive modes that factor in upper limb ergonomics, compared against a normal mode, for reaching and grasping objects.


Presenting COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality (121)

Gregoire Dupont de Dinechin, Alexis Paljic

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: From image-based virtual tours of apartments to digital museum exhibits, transforming photographs of real-world scenes into visually faithful virtual environments has many applications. In this paper, we present our development of a toolkit that places recent advances in the field of image-based rendering (IBR) into the hands of virtual reality (VR) researchers and content creators. We map out how these advances can improve the way we usually render virtual scenes from photographs. We then provide insight into the toolkit’s design as a package for the Unity game engine and share details on core elements of our implementation.


Efficient Peripheral Flicker Reduction for Foveated Rendering in Mobile VR Systems (122)

Haomiao Jiang, Tianxin Ning, Behnam Bastani

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We propose an efficient algorithm that largely mitigates the flickering artifacts induced by tile-based foveated rendering and executes in under a millisecond on a mobile VR platform. The algorithm can be easily integrated into VR applications without intrusively affecting the development process. We demonstrate the speed and visual quality of the proposed algorithm on the Oculus Quest.
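
The abstract does not describe the algorithm itself; one common way to suppress such tile-boundary flicker is a temporal low-pass over peripheral pixels, sketched below purely as an assumption about the general approach (this is not the authors’ method, and all names are hypothetical):

```python
import numpy as np

def stabilize_periphery(curr, prev, fovea_mask, alpha=0.6):
    """Blend peripheral pixels of the current frame (H x W x 3) with the
    previous output frame to damp frame-to-frame flicker, leaving the
    full-resolution fovea untouched. fovea_mask is a boolean H x W array."""
    blended = alpha * curr + (1.0 - alpha) * prev
    return np.where(fovea_mask[..., None], curr, blended)
```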


Accuracy of Commodity Finger Tracking Systems for Virtual Reality Head-Mounted Displays (123)

Daniel Schneider, Alexander Otte, Axel Simon Kublin, Per Ola Kristensson, Eyal Ofek, Michel Pahud, Alexander Martschenko, Jens Grubert

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Representing users’ hands and fingers in virtual reality is crucial for many tasks. Recently, virtual reality head-mounted displays capable of camera-based inside-out tracking and finger and hand tracking have become popular, complementing add-on solutions such as Leap Motion. However, interacting with physical objects requires an accurately grounded positioning of the virtual reality coordinate system relative to relevant objects, and good spatial positioning of the user’s fingers and hands. To better understand the capabilities of virtual reality headset finger tracking, we ran a controlled experiment comparing commodity hand and finger tracking systems (HTC Vive and Leap Motion) and report on their accuracy.


Lingering Effects Associated with Virtual Reality: An Analysis Based on Consumer Discussions Over Time (124)

John J. Porter III, Andrew Robb, Kristopher Kohm

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Since the release of the Oculus Rift CV1 in 2016, millions of VR headsets have become available to consumers across the globe. Since then, many users have logged their experiences in online discussion forums. We found that many reported experiencing various "lingering effects," generally after spending at least a full hour in VR. Two major categories were identified and divided into sub-categories: perceptual effects and behavioral effects. Users agreed that these effects completely disappeared after several weeks and reported no long-term side effects. The topics of interest identified here could serve as foundations for future research in laboratory settings.


VRiAssist: An Eye-Tracked Virtual Reality Low Vision Assistance Tool (125)

Sina Masnadi, Brian Williamson, Andrés N Vargas González, Joseph LaViola

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: We present VRiAssist, an eye-tracking-based visual assistance tool designed to help people with visual impairments interact with virtual reality environments. VRiAssist’s visual enhancements dynamically follow the user’s gaze to project corrections onto the affected area of the user’s vision. VRiAssist provides a distortion correction tool to counteract the distortions created by bumps on the retina, a color/brightness correction tool that improves contrast and color perception, and an adjustable magnification tool. The results of a small five-person user study indicate that VRiAssist helped users see better in the virtual environment, depending on their level of visual impairment.


Looking Also From Another Perspective: Exploring the Benefits of Alternative Views for Alignment Tasks (126)

Alejandro Martin-Gomez, Ulrich Eck, Nassir Navab, Javad Fotouhi

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Object alignment is an activity frequently performed in several domains. Traditional AR/VR methods provide virtual guides such as text, arrows, or animations to assist users during alignment. This work explores the feasibility of using virtual cameras and mirrors instead. We conducted a study in which participants aligned objects assisted by virtual cameras and mirrors in VR. Data regarding alignment, time to completion, distance traveled, average head velocity, usability, and mental effort were collected. Results show that the virtual helpers reduce mental effort and distance traveled and increase acceptance without negatively affecting alignment accuracy.


Observation of Presence in an Ecologically Valid Ethnographic Study Using an Immersive Augmented Reality Virtual Diorama Application (127)

Maria C. R. Harrington

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: The main research question centers on the usefulness of immersive Augmented Reality (AR) applications for informal learning activities. Can users experience presence, a sense of being there, when using AR apps? To begin to approach this question, an ethnographic study was conducted in July 2019 in a museum with 56 volunteer participants to document behavior and measure the learning, attitude, and emotional outcomes of an AR application. Reported are preliminary results on behavior indicative of presence observed in the study, along with insights useful for understanding future designs of immersive AR for informal learning activities.


Developing a VR tool for studying pedestrian movement and choice behavior (128)

Yan Feng, Dorine C. Duives, Serge P. Hoogendoorn

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: This paper presents a VR research tool for systematically studying pedestrian movement and choice behavior. This new VR tool, called CivilEvac, features a complex multi-level building that is an exact copy of an existing building. CivilEvac allows participants to navigate freely through the building and records their movements and fields of vision at 10 fps, which assists the analysis of pedestrian movement and choice behavior. By showcasing CivilEvac, this paper contributes an example of the design process of a VR experiment specifically developed to study pedestrian movement and choice behavior, thereby adding to the discussion surrounding the use of VR technologies for studying pedestrian behavior.


Panoramic Image Quality-Enhancement by Fusing Neural Textures of the Adaptive Initial Viewport (129)

Shiyuan Li, Chunyu Lin, Kang Liao, Yao Zhao, Xue Zhang

Session: Posters / Videos 2 & 4

Hubs Link: See in Hubs

Teaser Video: Watch Now

ABSTRACT: Due to the large size of panoramic images, existing streaming methods transmit in high quality only the content corresponding to the current viewport. This viewport-based transmission suffers from severe delay or large quality degradation as the viewport changes. To solve this problem, we introduce neural textures of the adaptive initial viewport to improve the quality of the regions around it. When the viewport changes but the high-resolution content has not yet arrived, the enhanced image is still acceptable. With our quality-enhancement strategy, only about 30% of the original data is required to achieve a satisfying visual experience.