IEEE VR 2017, Los Angeles


Research Demos


Monday, March 20: 10:30am - 5:15pm
Tuesday, March 21: 8:30am - 5:15pm
Wednesday, March 22: 8:30am - 3:00pm

FACETEQ Interface Demo for Emotion Expression in VR

Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka

Abstract: Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental Virtual Reality studies. Developed by the Emteq Ltd laboratory, Faceteq can open new avenues for virtual reality research through a combination of high-performance patented dry-sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing a human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year.

 

Immerj: A Novel System for Democratizing Immersive Storytelling

Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann

Abstract: Immersive technologies such as 360-degree cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country’s newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360-degree video projects. We introduce Immerj, an open-source abstraction layer that simplifies the Unity3D game engine’s interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos between our team and journalists and designers from top news organizations across the country so that we could follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology.

 

Diminished Hand: A Diminished Reality-Based Work Area Visualization

Shohei Mori, Momoko Maezawa, Naoto Ienaga, and Hideo Saito

Abstract: Live videos from the instructor’s perspective are useful for presenting intuitive visual instructions to trainees in medical and industrial settings. In such videos, however, the instructor’s hands often hide the work area. In this demo, we present a diminished hand that visualizes the work area hidden by the hands by capturing it with multiple cameras. To achieve the diminished reality, we use a light field rendering technique in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from the multiple-viewpoint images.
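
The core step is a penalty-based ray choice in unstructured light field rendering. Below is a minimal sketch of that selection under the assumption that the light field is given as a set of camera centers and that each candidate ray toward a desired scene point can be scored by its clearance from 3D penalty points marking the occluding hand; all names and thresholds are illustrative, not the authors' implementation.

```python
import numpy as np

def ray_point_distance(origin, direction, point):
    """Shortest distance from a 3D point to the ray origin + t*direction (t >= 0)."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    t = max(np.dot(v, d), 0.0)
    closest = origin + t * d
    return np.linalg.norm(point - closest)

def select_ray(camera_centers, target_point, penalty_points, min_clearance=0.05):
    """Pick the source camera whose ray to the target stays farthest from all
    penalty points (e.g. points sampled on the occluding hand). Returns the
    index of the chosen camera, or None if every ray passes too close."""
    best_idx, best_clearance = None, -np.inf
    for i, c in enumerate(camera_centers):
        direction = target_point - c
        clearance = min(ray_point_distance(c, direction, p) for p in penalty_points)
        if clearance > best_clearance:
            best_idx, best_clearance = i, clearance
    if best_clearance < min_clearance:
        return None  # no camera sees this point past the hand
    return best_idx

# toy example: three cameras, one penalty point blocking the middle view
cams = [np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0]), np.array([-0.3, 0.0, 0.0])]
target = np.array([0.0, 0.0, 1.0])
hand = [np.array([0.0, 0.0, 0.5])]
print(select_ray(cams, target, hand))  # picks a side camera whose ray clears the hand
```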

 

Jogging with a Virtual Runner using a See-Through HMD

Takeo Hamada, Michio Okada, and Michiteru Kitazaki

Abstract: We present a novel assistive method for leading casual joggers by showing a virtual runner on a see-through head-mounted display worn by the user. The virtual runner moves at a constant pace specified in advance by the user, and its motion is synchronized with the user’s. The user can always check the pace visually by looking at the runner, which acts as a personal pacemaker, and is also motivated to keep running by regarding it as a jogging companion. Moreover, the proposed method addresses the safety problems of AR applications: most of the runner’s body is transparent, so it does not obstruct the user’s view. This study may thus contribute to augmenting the daily jogging experience.
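
Since the demo hinges on a runner that advances at a user-specified constant pace, here is a minimal sketch of that pacing logic, assuming the jogging route is a known polyline and the runner is drawn a small fixed distance ahead of the user's progress; function names and the 3 m lead are illustrative, not taken from the authors' system.

```python
import numpy as np

def arc_lengths(route):
    """Cumulative distance along a polyline of shape (N, 3)."""
    seg = np.linalg.norm(np.diff(route, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(seg)])

def point_at_distance(route, cum, s):
    """Interpolate the 3D point that lies s meters along the route."""
    s = np.clip(s, 0.0, cum[-1])
    i = np.searchsorted(cum, s, side="right") - 1
    i = min(i, len(route) - 2)
    t = (s - cum[i]) / max(cum[i + 1] - cum[i], 1e-9)
    return route[i] * (1.0 - t) + route[i + 1] * t

def pacemaker_position(route, pace_mps, elapsed_s, lead_m=3.0):
    """Virtual runner position: constant pace from the start, plus a small lead
    so it stays visible in the see-through HMD."""
    cum = arc_lengths(route)
    return point_at_distance(route, cum, pace_mps * elapsed_s + lead_m)

route = np.array([[0, 0, 0], [100, 0, 0], [100, 0, 100]], dtype=float)
print(pacemaker_position(route, pace_mps=2.5, elapsed_s=30.0))  # 75 m run + 3 m lead
```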

 

Demonstration: Rapid One-Shot Acquisition of Dynamic VR Avatars

Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell

Abstract: In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture (in our application the body is in an astronaut character suit, thus we do not customize the texture of the body). A lightweight capture setup consisting of a stereo DSLR rig along with a depth sensor is used. A single snapshot of each participant in a T-pose is captured, and after only a few seconds of processing time on a standard desktop PC, the avatars are available in the VR environment. The parametric avatar model contains blendshapes for face and body identity, as well as for animation of the face via audio-based lip animation. Blendweights for the rig are fitted from segmented depth data and 3D facial landmarks, while a gender classification on the face image is used to drive a subtle male/female stylization of the avatar. The custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The result is a lightweight, fully automatic system for end-to-end capture of avatars with minimal hardware requirements, a simple, instant capture procedure and fast processing time. The output avatars have custom shape and texture and are skeleton-rigged, ready for input into a game engine, where they can be animated directly.
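
The abstract mentions fitting blendweights of the parametric model to segmented depth data. A minimal sketch of such a fit, posed here as a regularized linear least-squares problem over identity blendshapes, is shown below; the array shapes and variable names are assumptions for illustration, not the authors' actual rig or solver.

```python
import numpy as np

def fit_blendweights(base, blendshapes, target, reg=1e-2):
    """Fit weights w so that base + blendshapes @ w approximates the target
    point set, with Tikhonov regularization to keep the weights small.

    base:        (3N,)   neutral shape, vertices flattened
    blendshapes: (3N, K) per-vertex offsets of K identity blendshapes
    target:      (3N,)   corresponding points from the segmented depth scan
    """
    A = blendshapes
    b = target - base
    K = A.shape[1]
    # solve (A^T A + reg * I) w = A^T b
    return np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)

# toy example with 4 vertices and 2 blendshapes
rng = np.random.default_rng(0)
base = rng.normal(size=12)
B = rng.normal(size=(12, 2))
target = base + B @ np.array([0.7, -0.3])
print(fit_blendweights(base, B, target, reg=1e-6))  # close to [0.7, -0.3]
```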

 

Application of Redirected Walking in Room-Scale VR

Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke

Abstract: Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and to apply RDW to room-scale VR, i.e., up to approximately 5 m × 5 m. This is done by using curved paths in the VE instead of straight paths, and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25 m × 25 m.
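
As a rough illustration of how curvature-based redirection works, the sketch below injects a per-step yaw correction proportional to the distance walked, which bends the user's physical path onto a circle of radius r. This is the generic RDW curvature gain, not the authors' specific curved-path layout; the walking speed and frame rate in the example are assumptions.

```python
import numpy as np

def curvature_yaw_offset(step_distance_m, radius_m=5.0):
    """Extra virtual-camera yaw (radians) injected for a given step so that a
    straight virtual walk maps onto a real circle of the given radius:
    d_theta = d_s / r."""
    return step_distance_m / radius_m

# e.g. 1.4 m/s walking speed rendered at 90 fps
step = 1.4 / 90.0
per_frame = curvature_yaw_offset(step, radius_m=5.0)
print(np.degrees(per_frame))       # ~0.18 degrees of yaw injected per frame
print(np.degrees(per_frame) * 90)  # ~16 degrees of redirection per second
```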

 

Immersive Virtual Training for Substation Electricians

Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone

Abstract: This research demonstration presents an Immersive Virtual Substation for Electrician Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation in order to deliver high standards of service and power quality and to avoid blackouts for consumers. It is therefore necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved.

 

Experiencing Guidance in 3D Spaces with a Vibrotactile Head-Mounted Display

Victor Adriel, Luca Brayda, Luciana Nedel, and Anderson Maciel

Abstract: Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication and attentional redirection, or to enhance the sense of presence in virtual environments. We therefore aim to add a haptic component to the most popular wearable used in VR applications: the VR headset. After studying the acuity around the head for vibrating stimuli, and after trying different parameters, actuators and configurations, we developed a haptic guidance technique for a vibrotactile head-mounted display (HMD). Our vibrotactile HMD was made to render the position of objects in the 3D space around the subject by varying both stimulus locus and vibration frequency. In this demonstration, participants will interact with different scenarios in which the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues will provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining.
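
To make the rendering idea concrete, the sketch below maps a 3D target position to an actuator and a vibration frequency, varying both stimulus locus and frequency as the abstract describes. The specific mapping (azimuth to actuator, proximity to frequency), the eight-actuator headband layout, and the frequency range are assumptions for illustration, not the authors' calibrated design.

```python
import numpy as np

def vibrotactile_cue(target, head_pos, head_yaw, n_actuators=8,
                     f_min=50.0, f_max=250.0, max_dist=5.0):
    """Map a 3D target position to (actuator index, vibration frequency).

    Locus encodes azimuth: actuators are assumed evenly spaced around a head
    band (+z is the gaze direction, +x is to the right). Frequency encodes
    proximity: closer targets vibrate faster."""
    v = np.asarray(target, float) - np.asarray(head_pos, float)
    azimuth = np.arctan2(v[0], v[2]) - head_yaw          # angle relative to gaze
    azimuth = (azimuth + 2 * np.pi) % (2 * np.pi)
    actuator = int(round(azimuth / (2 * np.pi / n_actuators))) % n_actuators
    closeness = 1.0 - min(np.linalg.norm(v) / max_dist, 1.0)
    frequency = f_min + closeness * (f_max - f_min)
    return actuator, frequency

print(vibrotactile_cue(target=[1.0, 0.0, 1.0], head_pos=[0, 0, 0], head_yaw=0.0))
# -> front-right actuator, frequency scaled by distance
```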

 

3DPS: An Auto-calibrated Three-Dimensional Perspective-Corrected Spherical Display

Qian Zhou, Kai Wu, Gregor Miller, Ian Stavness, and Sidney Fels

Abstract: We describe an auto-calibrated 3D perspective-corrected spherical display that uses multiple rear-projected pico-projectors. The display system is auto-calibrated via 3D reconstruction of each projected pixel on the display using a single inexpensive camera. With the automatic calibration, the multi-projector system supports seamless blended imagery on the spherical screen. Furthermore, we incorporate head tracking with the display to present 3D content with motion parallax by rendering perspective-corrected images based on the viewpoint. To show the effectiveness of this design, we implemented a view-dependent application that allows walk-around visualization from all angles for a single head-tracked user. We also implemented a view-independent application that supports wall-papered rendering for multi-user viewing. Thus, both view-dependent 3D VR content and spherical 2D content, such as a globe, can be easily experienced with this display.
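
To illustrate the view-dependent part: once calibration has assigned each projector pixel a 3D position on the sphere surface, rendering a head-tracked image reduces to shading each pixel along the ray from the tracked eye through its surface point. A minimal sketch of that lookup with a placeholder scene function is below; the data layout and names are assumptions, not the authors' renderer.

```python
import numpy as np

def perspective_corrected_colors(pixel_points_on_sphere, eye_pos, shade_ray):
    """For each projector pixel, whose 3D surface position was recovered by the
    auto-calibration, shade along the ray from the tracked eye through it.

    pixel_points_on_sphere: (N, 3) calibrated surface positions
    eye_pos:                (3,)   tracked viewpoint
    shade_ray:              callable(origin, direction) -> RGB
    """
    colors = np.zeros((len(pixel_points_on_sphere), 3))
    for i, p in enumerate(pixel_points_on_sphere):
        d = p - eye_pos
        d = d / np.linalg.norm(d)
        colors[i] = shade_ray(eye_pos, d)
    return colors

# placeholder scene: color simply encodes the ray direction
def shade_ray(origin, direction):
    return 0.5 * (direction + 1.0)

pts = np.array([[0.0, 0.0, 0.5], [0.5, 0.0, 0.0]])  # two calibrated pixels
print(perspective_corrected_colors(pts, eye_pos=np.array([0.0, 0.0, 2.0]), shade_ray=shade_ray))
```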

 

WebVR Meets WebRTC: Towards 360-Degree Social VR Experiences

Simon Gunkel, Martin Prins, Hans Stokking, and Omar Niamut

Abstract: Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016, new 360-degree cameras and VR headsets entered the consumer market, distribution platforms were established, and new production studios emerged. VR is increasingly becoming a hot topic in research and industry, and many new and exciting interactive VR experiences are emerging. The biggest gap we see in these experiences is their social and shared aspect. In this demo we present our ongoing efforts towards social and shared VR: a modular web-based VR framework that extends current video-conferencing capabilities with new Virtual and Mixed Reality functionality. It allows us to connect two people for mediated audio-visual interaction while they engage with interactive content. Our framework lets us run extensive technological and user-based trials in order to evaluate VR experiences and to build immersive multi-user interaction spaces. Our first results indicate that a high level of engagement and interaction between users is possible in our 360-degree VR set-up using current web technologies.

 

mpCubee: Towards a Mobile Perspective Cubic Display using Mobile Phones

Jens Grubert and Matthias Kranz

Abstract: While we are witnessing significant changes in display technologies, to date the majority of display form factors remain flat. The research community has investigated other geometric display configurations, giving rise to cubic displays that create the illusion of a 3D virtual scene within the cube. We present a self-contained mobile perspective cubic display (mpCubee) assembled from multiple smartphones. We achieve perspective-correct projection of 3D content through head tracking using the smartphones' built-in cameras. Furthermore, our prototype allows users to spatially manipulate 3D objects on individual axes thanks to the orthogonal configuration of the touch displays.
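
Head-coupled rendering on each cube face amounts to an off-axis perspective projection from the tracked eye through the face's rectangle. Below is a minimal sketch of the well-known generalized perspective projection construction, assuming each phone's screen corners are known in a common coordinate frame; it is an illustration of the standard technique, not the authors' exact implementation.

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """OpenGL-style off-axis perspective projection matrix."""
    return np.array([
        [2*n/(r-l), 0,          (r+l)/(r-l),  0],
        [0,         2*n/(t-b),  (t+b)/(t-b),  0],
        [0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)],
        [0,         0,         -1,            0]])

def off_axis_projection(pa, pb, pc, eye, near=0.01, far=100.0):
    """Projection * view matrix for a screen rectangle given by its lower-left
    (pa), lower-right (pb) and upper-left (pc) corners and a tracked eye."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)           # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)    # screen normal toward the eye
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                                # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])  # rotate into screen space
    T = np.eye(4); T[:3, 3] = -eye                      # translate eye to origin
    return frustum(l, r, b, t, near, far) @ M @ T

# one 10 cm x 10 cm phone face in the z=0 plane, eye 30 cm in front of it
pa, pb, pc = np.array([-.05, -.05, 0]), np.array([.05, -.05, 0]), np.array([-.05, .05, 0])
print(off_axis_projection(pa, pb, pc, eye=np.array([0.02, 0.0, 0.3])))
```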

 

Towards Ad Hoc Mobile Multi-display Environments on Commodity Mobile Devices

Jens Grubert and Matthias Kranz

Abstract: We present a demonstration of HeadPhones (Headtracking + smartphones), a novel approach to the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user’s head as an external reference frame for registering multiple mobile devices into a common coordinate system. Our approach allows devices to be repositioned dynamically at runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. In this way, HeadPhones enables spatially aware multi-display applications in mobile contexts.
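
The registration idea, using the head as the shared reference frame, can be written as a simple composition of rigid transforms: if each phone's front camera yields the head pose in that phone's coordinates, the phone-to-phone transform follows by composing one estimate with the inverse of the other. A minimal sketch with 4x4 homogeneous matrices is below; it illustrates the geometry only, with made-up example poses.

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid transform (rotation + translation)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def device_a_to_device_b(T_head_in_a, T_head_in_b):
    """Given the head pose measured in each device's camera frame, return the
    transform mapping device A coordinates into device B coordinates:
    T_b_from_a = T_head_in_b @ inv(T_head_in_a)."""
    return T_head_in_b @ invert_rigid(T_head_in_a)

# toy example: head 0.4 m in front of both phones, phone B rotated 90 deg about y
T_head_in_a = np.eye(4); T_head_in_a[:3, 3] = [0, 0, 0.4]
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], float)
T_head_in_b = np.eye(4); T_head_in_b[:3, :3] = Ry; T_head_in_b[:3, 3] = [0, 0, 0.4]
print(device_a_to_device_b(T_head_in_a, T_head_in_b))
```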

 

ArcheoVR: Exploring Itapeva’s Archeological Site

Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Zuffo, Astolfo Araujo, and Regis Kopper

Abstract: This demo presents a fully immersive and interactive virtual environment (VE), ArcheoVR, which represents the Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. Our workflow started with real-world data capture using laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed, realistic 3D scene with interactive features that allow users to experience the virtual archeological site in real time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.

 

NIVR: Neuro Imaging in Virtual Reality

Tyler Ard, David Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga

Abstract: Visualization is a critical component of neuroimaging, and how best to view data that is naturally three-dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that, with the recent commercialization and popularization of VR, it can offer the next step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis for how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to inform and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience with which to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest in a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information is challenging, and real-time exploration of it in VR even more so. NIVR explores pathways that make this possible, and offers preliminary stereo visualizations of these types of massive data.
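
One of the established techniques mentioned, raymarching volume visualization, is sketched below in a deliberately simplified single-ray form (nearest-neighbor sampling, toy transfer function, front-to-back compositing). A real neuroimaging renderer would run this per pixel on the GPU with trilinear sampling; the example volume and parameters here are placeholders.

```python
import numpy as np

def raymarch_volume(volume, origin, direction, step=0.5, max_steps=256):
    """Front-to-back compositing along a single ray through a scalar volume."""
    d = np.asarray(direction, float); d /= np.linalg.norm(d)
    pos = np.asarray(origin, float).copy()
    color, alpha = 0.0, 0.0
    for _ in range(max_steps):
        i, j, k = np.round(pos).astype(int)          # nearest-neighbor sample
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break                                    # ray left the volume
        sample = float(volume[i, j, k])
        a = np.clip(sample, 0.0, 1.0) * 0.1          # toy transfer function
        color += (1.0 - alpha) * a * sample          # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                             # early ray termination
            break
        pos += step * d
    return color

# toy 32^3 volume with a bright blob in the middle
vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 1.0
print(raymarch_volume(vol, origin=[16, 16, 0], direction=[0, 0, 1]))
```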

 

VRAIN: Virtual Reality Assisted Intervention for Neuroimaging

Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga

Abstract: The Laboratory of Neuro Imaging (http://loni.usc.edu) at the USC Stevens Neuroimaging and Informatics Institute has the largest repository of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer [9], currently the best software for this purpose). This algorithm is imprecise, and users must tediously correct errors manually, using a mouse and keyboard to edit individual MRI slices one at a time. We demonstrate preliminary work to improve the efficiency of this task by translating it into three dimensions and utilizing virtual reality user interfaces to edit multiple slices of data simultaneously.
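
The editing task described, correcting labeled regions across multiple slices at once, can be illustrated by a 3D spherical brush that relabels every voxel within a radius of the controller position. The array layout, label values, and brush shape below are assumptions for illustration, not the VRAIN implementation.

```python
import numpy as np

def apply_3d_brush(labels, center_vox, radius_vox, new_label):
    """Relabel all voxels inside a spherical brush, touching many MRI slices
    in one stroke instead of editing slice by slice.

    labels:     3D integer array of anatomical region labels
    center_vox: (i, j, k) brush center in voxel coordinates
    radius_vox: brush radius in voxels
    new_label:  label id to paint
    """
    zi, yi, xi = np.indices(labels.shape)
    c = np.asarray(center_vox)
    dist2 = (zi - c[0])**2 + (yi - c[1])**2 + (xi - c[2])**2
    labels[dist2 <= radius_vox**2] = new_label
    return labels

seg = np.zeros((64, 64, 64), dtype=np.int16)
apply_3d_brush(seg, center_vox=(32, 32, 32), radius_vox=4, new_label=17)
slices_touched = np.unique(np.argwhere(seg == 17)[:, 0]).size
print(int((seg == 17).sum()), "voxels relabeled across", slices_touched, "slices")
```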

 

Gesture-Based Augmented Reality Annotation

Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan

Abstract: Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and for multi-user collaboration through annotation of the real world.
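
Since distant annotation relies on projecting a mid-air stroke onto the approximate scene model, here is a minimal sketch of that anchoring step: each stroke sample becomes a pointing ray that is intersected with a scene triangle (Moller-Trumbore). The mesh, gesture inputs, and function names are placeholders, not the application's code.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns the hit point or None."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                      # ray parallel to the triangle
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return origin + t * direction if t > eps else None

def anchor_stroke(head_pos, stroke_dirs, scene_triangles):
    """Project each pointing direction of a mid-air stroke onto the scene model,
    returning the 3D anchor points of the drawn annotation."""
    anchors = []
    for d in stroke_dirs:
        for tri in scene_triangles:
            hit = ray_triangle(head_pos, d, *tri)
            if hit is not None:
                anchors.append(hit)
                break
    return anchors

wall = (np.array([-2., -2., 3.]), np.array([2., -2., 3.]), np.array([0., 3., 3.]))
dirs = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.05, 1.0])]
print(anchor_stroke(np.zeros(3), dirs, [wall]))  # stroke anchored on the wall at z = 3
```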

 

Virtual Field Trips with Networked Depth-Camera-Based Teacher, Heterogeneous Displays, and Example Energy Center Application

Jason Woodworth, Sam Ekong, and Christoph W. Borst

Abstract: This demo presents an approach to networked educational virtual reality for virtual field trips and guided exploration. It shows an asymmetric collaborative interface in which a remote teacher stands in front of a large display and depth camera (Kinect) while students are immersed in HMDs. The teacher’s front-facing mesh is streamed into the environment to assist students and deliver instruction. Our project uses commodity virtual reality hardware and high-performance networks to provide students who are unable to visit a real facility with an alternative that offers similar educational benefits. Virtual facilities can further be augmented with educational content through interactables or small games. We discuss motivation, features, interface challenges, and ongoing testing.
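
As a rough illustration of the teacher-capture path, the sketch below back-projects a depth image into a 3D point set using pinhole intrinsics, the usual first step before meshing and streaming a front-facing Kinect capture. The intrinsics, depth cutoff, and names are placeholders, not the authors' pipeline.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy, max_depth=3.0):
    """Back-project a depth image (meters) into camera-space 3D points using a
    pinhole model; points beyond max_depth (background) are discarded so that
    roughly only the front-facing person remains."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = (z > 0) & (z < max_depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# toy 4x4 depth image with the "teacher" at ~1.5 m and background at 4 m
depth = np.full((4, 4), 4.0)
depth[1:3, 1:3] = 1.5
pts = depth_to_points(depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
print(pts.shape)  # (4, 3): only the near-range pixels survive the cutoff
```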

 

Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras

Chih-Fan Chen, Mark Bolas, and Evan Suma Rosenberg

Abstract: Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting a detailed 3D model from a real object can be time- and labor-intensive. An alternative is to build a structured camera array, such as a light stage, to reconstruct the model from a real object. However, such setups are very expensive and not practical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from an RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
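
The view-dependent rendering step, selecting captured images according to the HMD position, can be sketched as choosing the k capture viewpoints whose viewing directions toward the object are closest in angle to the current one and weighting their textures accordingly. The camera representation, weighting scheme, and k value are assumptions, not the authors' exact selection criterion.

```python
import numpy as np

def view_dependent_weights(hmd_pos, capture_positions, object_center, k=3):
    """Pick the k captured viewpoints whose directions toward the object are
    closest in angle to the current HMD viewing direction, and return
    normalized blending weights for their textures."""
    def unit(v):
        return v / np.linalg.norm(v)
    cur = unit(np.asarray(object_center, float) - np.asarray(hmd_pos, float))
    dirs = [unit(np.asarray(object_center, float) - np.asarray(p, float))
            for p in capture_positions]
    angles = np.array([np.arccos(np.clip(np.dot(cur, d), -1.0, 1.0)) for d in dirs])
    nearest = np.argsort(angles)[:k]
    w = 1.0 / (angles[nearest] + 1e-3)   # closer views weigh more
    return nearest, w / w.sum()

captures = [[0, 0, 2], [2, 0, 0], [-2, 0, 0], [0, 2, 0]]
idx, w = view_dependent_weights(hmd_pos=[0.5, 0, 2], capture_positions=captures,
                                object_center=[0, 0, 0], k=2)
print(idx, w)  # indices of the two best capture views and their blend weights
```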

 

Travel in Large-Scale Head-Worn VR: Pre-oriented Teleportation with WIMs and Previews

Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky

Abstract: We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar’s head with a pair of 6DoF-tracked controllers. We describe our design decisions and contrast the technique with alternative approaches to virtual travel.
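
The core coordinate change, converting a position specified inside the world-in-miniature into the full-scale teleport destination, is sketched below, assuming the miniature is a uniformly scaled copy of the virtual environment whose axes align with it; the avatar's pre-set yaw and pitch carry over unchanged since rotations are scale-invariant. The matrix convention and names are assumptions, not the authors' code.

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def teleport_target(pointer_pos, wim_pose, wim_scale):
    """Convert a physical pointing position inside the world-in-miniature into
    the corresponding full-scale position in the virtual environment.

    pointer_pos: (3,) controller/avatar position in tracking coordinates
    wim_pose:    4x4 rigid transform placing the miniature in tracking space
    wim_scale:   full-scale meters represented by one meter of miniature
    """
    p = np.append(pointer_pos, 1.0)
    p_local = invert_rigid(wim_pose) @ p   # position inside the miniature
    return wim_scale * p_local[:3]         # miniature axes assumed aligned with the VE

wim = np.eye(4); wim[:3, 3] = [0.0, 1.0, 0.5]   # miniature floating in front of the user
print(teleport_target([0.12, 1.0, 0.84], wim, wim_scale=1000.0))  # -> [120, 0, 340] in the VE
```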