IEEE VR 2019 Osaka

March 23rd - 27th

IEEE Computer Society / IEEE / VRSJ


Sponsors


Diamond

Osaka International Convention Center

Platinum

DELL + Intel Japan
Mercari
National Science Foundation
OSAKA CONVENTION & TOURISM BUREAU

Gold


Tateishi Science and Technology Foundation

The Telecommunications Advancement Foundation

Silver


DAQRI

Bronze

BARCO
Huawei Japan
Knowledge Service Network
Mozilla Corporation
Osaka Electro-Communication University
SenseTime Japan

Flower / Misc

GREE, Inc.
KYOHRITSU ELECTRONIC INDUSTRY Co.,Ltd.
Beijing Nokov Science & Technology Co., Ltd.
PoSTMEDIA
SoftCube Corporation
Sumitomo Electric Industries
Vicon

Exhibitors

Advanced Realtime Tracking (ART)
Archivetips
China's State Key Laboratory of Virtual Reality Technology and Systems
Computer Network Information Center, Chinese Academy of Sciences
Creact
Crescent
DELL + Intel Japan
Fujitsu
Fun Life Inc.
Haption
Kyohritsu
Nihon Binary Co., Ltd.
NIST - Public Safety Communications Research
Nokov
Optitrack Japan, Ltd.
PhaseSpace
QD Laser, Inc.
Qualisys
Solidray Co.,Ltd.
WESTUNITIS Co., Ltd.

Supporters


IEEE Kansai Section

Society for Information Display Japan Chapter

VR Consortium

The Institute of Systems, Control and Information Engineers

Human Interface Society

The Japanese Society for Artificial Intelligence

The Visualization Society of Japan

Information Processing Society of Japan

The Robotics Society of Japan

Japan Society for Graphic Science

The Japan Society of Mechanical Engineers

Japanese Society for Medical and Biological Engineering

The Institute of Image Information and Television Engineers

The Society of Instrument and Control Engineers

The Institute of Electronics, Information and Communication Engineers

The Institute of Electrical Engineers of Japan

The Society for Art and Science

Japan Ergonomics Society

The Japanese Society of Medical Imaging

Videos

A Dream Within A Dream

Duo Wang, Xiwei Wang, Yifeng Miao, Qingxiao Zheng

A Dream within A Dream is a story-based immersive 360° animated short film inspired by Edgar Allan Poe's poem A DREAM WITHIN A DREAM, exploring the border between dream and reality. The film takes place in a fully realistic 3D environment, generated in Maya and assembled in Unreal Engine. In this VR experience, viewers see different scenes including a cave, floating islands, and a waterfall. The animation follows the journey of a girl's unconscious mind during her dream: she has a dream within a dream and wakes up inside it, unable to tell whether she is in the dream or in reality.

EMBRACE - a VR piece about disability and inclusion (2018)

Franziska Schroeder

“Embrace” is a work created as part of a UK AHRC (Arts and Humanities Research Council) funded project on Immersive and Inclusive Music Technologies. The piece is for VR headset and was developed as one of the grant's proposed outputs. The research investigated how emerging technologies (such as VR) can best be adopted to suit people with different abilities (for example, people with movement impairments). “Embrace” allows the viewer to experience issues around disability. It tells the story of two disabled musicians (one visually impaired and one a wheelchair user) and how both experience exclusion before a concert. We also learn some background about the nature of their disabilities. The work aims to encourage the viewer to embrace difference; hence the title “Embrace”.

Using Culturally Responsive Narratives in Virtual Reality To Influence Cognition and Self Efficacy

Hope Idaewor

NeuroSpeculative AfroFeminism (NSAF) is a transformative Virtual Reality (VR) experience that gives women of color an entry point into the tech narrative at the intersection of neuroscience and speculative design [1]. As the experience begins, the user arrives in a speculative world where they explore a futuristic hair salon. The salon is a Neuro Cosmetology lab owned and occupied by women of color who are the lead scientists in the space. NSAF was created in response to the lack of narratives that include women of color at the center of New Media such as VR. This project is the focus of ongoing research to inform the design of culturally responsive VR experiences (tailored to minorities in Computing) in hopes of influencing self-efficacy within these groups. Research has shown that VR can be used to increase functional activity and influence brain reorganization [2]. However, there is little research that explores how VR could be used to influence self-efficacy through its affordance of embodied cognition [3]. The goal of this study is to show how VR—compared to traditional storytelling methods—could be used to tailor learning experiences. This video briefly describes the experience and introduces the project methodology, goals, and expected outcomes.

EPICSAVE Lifesaving Decisions - a Collaborative VR Training Game Sketch for Paramedics

Jonas Schild, Leonard Flock, Patrick Martens, Benjamin Roth, Niklas Schünemann, Eduard Heller, Sebastian Misztal

Practical, collaborative training of severe emergencies that occur too rarely within regular curricular training programs (e.g., anaphylactic shock in child patients) is difficult to realize. Multi-user virtual reality and serious game technologies can be used to provide collaborative training in dynamic settings [1,2]. However, actual training effects seem to depend on high presence and supportive usability [2]. EPICSAVE Lifesaving Decisions shows a novel approach that aims to further improve these factors using an emotional scenario and collaborative game mechanics. We present a trailer video of a game sketch that creatively explores serious game design for collaborative virtual reality training systems. The game invites two paramedic trainees and one paramedic trainer into a dramatic scenario at a family theme park: a 5-year-old child shows symptoms of anaphylactic shock. While the trainees begin their diagnostic procedures, a bystander, the girl's grandfather, intervenes and challenges the players' authority. Our research explores how VR game mechanics, i.e., optional narrative, authority skills and rewards, mini games, and interactive virtual characters, may extend training quality and user experience over pure VR training simulations. The video exemplifies a concept that extends prior developments of a multi-user VR training simulation setup presented in [2,3].

A Case-study of Contemporary Presence Theory inside a Commercial Virtual Reality Game

Johannes Schirm, Gabriela Tullius, M. P. Jacob Habgood

A large body of literature is concerned with models of presence—the sensory illusion of being part of a virtual scene—but there is still no general agreement on how to measure it in an objective and reliable way. For the presented case study, we applied and analyzed contemporary theory in order to measure presence in the context of a comparison between continuous locomotion and teleportation in virtual reality. Thirty-seven participants played through an existing virtual environment of commercial quality, in which they had to collect several hidden items. Three special events were naturally embedded in the environment to evoke physical reactions. During these, head and controller tracking data were recorded as real-time behavioral measures. A single-item questionnaire was used to repeatedly collect real-time presence assessments, and in order to analyze dependencies, we also included a post-study presence questionnaire. The results of the case study suggest that continuity of locomotion has no significant effect on presence. However, our novel approach of employing behavioral measures led to insights which could inform further research on presence. We propose a presence measure in which a startle reflex is evoked through unexpected social presence in private space and compared to the head movement speed profile of baseline interactions.
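
The head movement speed profile referred to above can be derived directly from logged head positions. Below is a minimal sketch of that computation, assuming positions are sampled at a fixed rate; the 90 Hz rate and array names are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def head_speed_profile(positions, sample_rate_hz=90.0):
    """Per-frame head movement speed (m/s) from tracked head positions.

    positions: (N, 3) array of head positions in metres, sampled at a fixed
    rate (90 Hz is assumed here; the study's actual rate is not stated).
    """
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / sample_rate_hz
    # Frame-to-frame displacement vectors, then their magnitude per time step.
    deltas = np.diff(positions, axis=0)
    return np.linalg.norm(deltas, axis=1) / dt

# Example: speed profile for a synthetic 10-second recording.
rng = np.random.default_rng(0)
fake_positions = np.cumsum(rng.normal(0, 0.001, size=(900, 3)), axis=0)
profile = head_speed_profile(fake_positions)
print("mean head speed (m/s):", profile.mean())
```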

Enchanting Your Noodles: GAN-based Real-time Food-to-Food Translation and Its Impact on Vision-induced Gustatory Manipulation

Kizashi Nakano, Daichi Horita, Nobuchika Sakata, Kiyoshi Kiyokawa, Keiji Yanai, Takuji Narumi

This video shows a novel gustatory manipulation interface which utilizes the cross-modal effect of vision on taste, elicited with AR-based real-time food appearance modulation using a generative adversarial network (GAN). Unlike existing systems, which only change the color or texture pattern of a particular type of food in an inflexible manner, our system changes the appearance of food into multiple types of food in real time, flexibly, dynamically and interactively, in accordance with the deformation of the food that the user is actually eating, by using GAN-based image-to-image translation. We detail the system implementation and report on user studies that investigated the impact of our system on gustatory sensations, in which somen noodles were turned into ramen noodles or fried noodles, or steamed rice into curry and rice or fried rice. The experimental results reveal that our system successfully manipulates gustatory sensations to some extent and that the effectiveness depends on the original and target types of food as well as each user's food experience.
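
The real-time translation step described above can be pictured as running each camera frame through a pre-trained image-to-image generator before compositing it back into the AR view. The following is a minimal sketch of that per-frame loop, assuming a TorchScript generator trained for food-to-food translation; the model file name, 256x256 input size, and preprocessing are assumptions, not the authors' implementation.

```python
import cv2
import torch

# Hypothetical pre-trained generator (e.g., a pix2pix/CycleGAN-style network)
# exported with torch.jit; the file name and input size are assumptions.
generator = torch.jit.load("somen_to_ramen_generator.pt").eval()

cap = cv2.VideoCapture(0)  # camera feed of the food being eaten
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: BGR -> RGB, resize, scale to [-1, 1], NCHW tensor.
    rgb = cv2.cvtColor(cv2.resize(frame, (256, 256)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0
    with torch.no_grad():
        y = generator(x)  # translated food appearance
    # Postprocess back to an 8-bit BGR image for display/compositing.
    out = ((y.squeeze(0).permute(1, 2, 0).clamp(-1, 1) + 1.0) * 127.5).byte().numpy()
    cv2.imshow("translated", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```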

VR Kino+Theater

Luiz Velho

VR Kino+Theatre is a media platform that combines theatrical performance with live cinema using virtual reality technology. The platform integrates traditional forms of entertainment (theater and cinema) with advanced interactive media (virtual reality and gaming). In this way, it addresses audience scalability and presentation familiarity while providing greater flexibility for innovative formats. The foundation of our solution rests on three pillars. On the technology front: i) 3D content captured from real data with the help of advanced sensors and machine learning; ii) procedural and real-time physical simulations powered by high-end graphics hardware; iii) distributed systems interconnected by low-latency wireless networks. On the production side: i) a unified process, in terms of ubiquitous data access and augmented content generation; ii) collaborative real-time integrated authoring shared by all members of the creative teams. On the delivery side: i) diversified media and application options; ii) a multiplicity of presentation formats; iii) stratified and complementary modes of fruition that allow the content to be fully explored in many forms. The operation of an ecosystem based on these principles entails new roles for producers, performers and participants. As a demonstration of the platform we produced a play, “The Tempest” by William Shakespeare. The project was developed by a multidisciplinary group at IMPA [1].

Echoes of Murlough

Michael McKnight

Echoes of Murlough is an electroacoustic composition presented in VR. The listener is enveloped in a virtual space that explores the intersection of musical and environmental sonic materials gathered from Murlough Beach, Co. Down in Northern Ireland. The user is a passive observer in the piece, which unfolds around them in a spatially head-tracked experience. The auditory virtual environment (AVE) takes an authentic approach that moves into the creational, as described by Novo [1], taking the listener on a journey from the real to the created, where the sounds retain a connection to place. All sounds were gathered at the beach using a combination of ambisonic, MS stereo, mono and contact microphones, including the electric guitar, except for two parts recorded in the studio. The music was composed of improvised guitar parts played with an “Ebow” and recorded on the beach using an ambisonic microphone in conjunction with a wireless lavalier system, allowing a freedom of movement that would be captured and become integral to the piece. The contrast and interplay between environmental and instrumental sources are explored in relation to space. The intention is that the listener and the composition itself will be rooted in a sense of place that provides a foundation for immersion.

Creative learning in VR: an antidisciplinary approach

Michael Vallance, Yuto Kurashige, Takurou Magaki

Joichi Ito, MIT Media Lab Director, suggests that the way ahead in education is to support endeavors where learning processes and peer collaborations are valued above end products such as exam scores. He terms this antidisciplinary [1]. To engage students in an antidisciplinary construction of their learning environments, a 3D virtual Fukushima Dai-ichi nuclear power plant scenario is designed to build familiarity with operator challenges, nuclear plant risks, and basic nuclear power content [2]. Donning the Oculus Rift HMD, students are immersed in the Fukushima nuclear power plant and tasked with retrieving 5 radioactive bins randomly positioned throughout the plant. Due to the radiation levels, students must maneuver a robot throughout the plant. To locate the bins, the students can maneuver a drone over the plant. While retrieving the bins, the student also collects virtual ‘Information cards’ for later questioning. In addition, the student must locate the entrance (a teleport) to Reactor 2 and turn on the cooling water pump. The activity is timed so that at 8 minutes a tsunami alarm sounds and after 10 minutes the water rises. To learn about the accident, the student enters a Control Room and undertakes a ‘cause-and-effect’ quiz utilizing the collected Information cards.

Coretet: A Dynamic Virtual Musical Instrument for the Twenty-First Century

Rob Hamilton

Coretet is a virtual reality musical instrument that explores the translation of performance gestures and mechanics from traditional bowed string instruments into an inherently non-physical implementation. Built using Unreal Engine 4 and Pure Data, Coretet offers musicians both a flexible and articulate musical instrument to play and a networked performance environment capable of supporting and presenting a traditional four-member string quartet. Building on traditional stringed instrument performance practices, Coretet was designed as a futuristic ‘21st Century’ implementation of the core gestural and interaction modalities that generate musical sound in the violin, viola and cello. Coretet exists as a client-server software system designed to be controlled using an Oculus Rift head-mounted display (HMD) and the Oculus Touch hand-tracking controllers. The instrument and performance environment are built using Unreal Engine 4. Gesture and audio output are generated using interaction data from the engine, streamed to a Pure Data (PD) [2] server via Open Sound Control (OSC) [3]. Within PD, gestural control data from Coretet is processed and used to control a variety of audio generation and manipulation processes, including the [bowed~] string physical model from the Synthesis Toolkit (STK) [1].
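
The gesture-to-sound path described above reduces to streaming control values over OSC to the Pure Data server. Below is a minimal sketch of such a sender, using the python-osc library; the address pattern, port, and parameter names are hypothetical stand-ins, not Coretet's actual protocol.

```python
import math
import time

from pythonosc.udp_client import SimpleUDPClient

# Pure Data patch assumed to be listening on this host/port.
client = SimpleUDPClient("127.0.0.1", 9000)

# Stream a simulated bow gesture: position along the string and bow pressure.
start = time.time()
while time.time() - start < 5.0:
    t = time.time() - start
    bow_position = 0.5 + 0.4 * math.sin(2 * math.pi * 0.5 * t)   # back and forth
    bow_pressure = 0.3 + 0.2 * math.sin(2 * math.pi * 0.25 * t)
    # One OSC message per control frame; the Pd patch would route it to the
    # bowed string model.
    client.send_message("/coretet/bow", [bow_position, bow_pressure])
    time.sleep(1 / 60)  # ~60 Hz control rate
```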

Color Space: 360 VR Hanbok Art Performance

Seonock Park, Jusub Kim

This project explores the possibility of VR as an alternative theatre form for the performing arts. For hundreds of years, the proscenium stage has been the most widely used stage form in the performing arts. On the proscenium stage, the audience sees the dramatic action through the frame, which has the advantage of keeping the audience's attention from being dispersed. However, it has the disadvantage of creating a sense of distance from the stage, since the world on the stage is completely separated from the world of the audience. In this 360 VR performance work, we remove the barrier between the audience and the stage, allowing the audience to immerse themselves more in the performance, and experiment with a new performance type in which the performance takes place around the audience rather than the audience surrounding the performers. For this work, we used a 360 video camera (a rig of 6 GoPro cameras) to capture the stage, where a group of dancers wearing the Hanbok, the Korean traditional costume, performed a traditional dance specially choreographed for this show. This video was created to promote the beauty of the Hanbok through a more immersive approach.

Augmented Reality Floods and Smoke Smartphone App Disaster Scope utilizing Real-time Occlusion

Tomoki Itamiya, Hideaki Tohara, Yohei Nasuda

Natural disasters occur frequently in Japan. In the Great East Japan Earthquake in 2011 and the heavy rain disaster in western Japan in 2018, many people lacked sufficient crisis awareness to evacuate safely. We developed the augmented reality smartphone application Disaster Scope, which enables an immersive experience in order to improve crisis awareness of disasters in peacetime. The application can superimpose disaster situations such as CG floods, debris, and fire smoke onto the actual scenery, using only a smartphone and a paper headset. By using a smartphone equipped with a 3D depth sensor, it is possible to sense the height from the ground and recognize surrounding objects. Real-time occlusion processing is enabled using only a smartphone, and collision detection between real-world objects and CG debris is possible. The flood height and flow speed can be changed in each user's settings. As a result, it has become possible to understand more realistically the dangers of floods and fire smoke. We used this system in evacuation drills organized by elementary schools and municipalities. The results of our survey and verification show that it was very useful for improving crisis awareness among students and citizens.
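
The real-time occlusion the app performs can be understood as a per-pixel depth comparison between the sensed scene and the rendered CG layer: a flood pixel is drawn only where it lies in front of the real surface measured by the depth sensor. Below is a minimal sketch of that test with NumPy arrays; the array names, units, and synthetic example are illustrative, not the app's implementation.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, sensed_depth, cg_rgb, cg_depth):
    """Overlay CG content only where it is nearer than the real scene.

    camera_rgb, cg_rgb: (H, W, 3) uint8 images.
    sensed_depth, cg_depth: (H, W) float32 depth in metres; np.inf where the
    CG layer has no content.
    """
    # A CG pixel is visible only if it lies in front of the sensed surface.
    cg_in_front = cg_depth < sensed_depth
    out = camera_rgb.copy()
    out[cg_in_front] = cg_rgb[cg_in_front]
    return out

# Synthetic example: a real surface at 2 m occludes CG water placed beyond it.
h, w = 240, 320
camera = np.full((h, w, 3), 80, dtype=np.uint8)
real_depth = np.full((h, w), 2.0, dtype=np.float32)
water = np.zeros((h, w, 3), dtype=np.uint8)
water[..., 2] = 200  # tint the CG layer
water_depth = np.full((h, w), 3.0, dtype=np.float32)
water_depth[:, :w // 2] = 1.0  # near half appears, far half is occluded
result = composite_with_occlusion(camera, real_depth, water, water_depth)
```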

Stand-alone, Wearable System for Full Body VR Avatars: Towards Physics-based 3D Interaction

Tuukka M. Takala, Chen Chun Hsin, Takashi Kawai

We introduce a stand-alone, wearable system with full body and finger tracking for first-person virtual reality (VR) avatars. The system does not rely on any external trackers or components. It comprises a head-mounted display, an inertial motion capture suit, VR gloves, and a VR backpack PC. Making use of the wearable system and the RUIS toolkit [1], we present an example implementation of our vision for physics-based full body avatar interaction. This envisioned interaction involves three elements from the reality-based interaction framework of Jacob et al. [2]: naïve physics, body awareness, and environment awareness. These elements lend common sense affordances within the virtual world and allow users to employ their everyday knowledge of the real world. We argue that when it comes to full body avatar interfaces, it is not only users but also developers who benefit from utilizing physics simulation as the basis upon which different interaction techniques are built. This physics-based approach provides intuitive manipulation and locomotion interactions without requiring individually crafted scripts. Our example implementation presents several such interactions. Furthermore, the many interaction techniques emerging from physical simulation are congruous with each other, which promotes user interface consistency. We also introduce the idea of using physics components (colliders, joints, materials, etc.) as 3D user interface building blocks, as opposed to scripting or visual programming.

Augmentation of Virtual Agents in Real Crowd Videos

Yalım Doğan, Serkan Demirci, Uğur Güdükbay

Augmentation of virtual agents in real crowd videos is an important task for applications ranging from design simulations of social environments to modeling abnormalities in crowd behavior. We propose a framework for this task, namely for augmenting virtual agents in real crowd videos. Our framework utilizes homography-based video stabilization, the Dalal-Triggs detector [1] for pedestrian detection, and state-based tracking algorithms to automatically locate the pedestrians in video frames and project them into our 3D simulated environment, where the navigable area is available as a manually designed and positioned navigation mesh. We represent the real pedestrians in the video as simple three-dimensional (3D) models in our simulation environment. The 3D models representing the real, projected agents and the augmented virtual agents are simulated using local path planning coupled with a collision detection and avoidance algorithm, Reciprocal Velocity Obstacles (RVO) [2]. The virtual agents augmented into the video move plausibly without colliding with static and dynamic obstacles, including other virtual agents and real pedestrians. We provide an extensive graphical user interface for controlling the virtual agents in the scene, including adjusting collision avoidance parameters, adjusting the camera, and standard video player options.
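
The pedestrian detection stage cited above ([1], the Dalal-Triggs HOG detector) is available off the shelf in OpenCV. Below is a minimal per-frame detection sketch under that assumption; the video file name and detection parameters are illustrative, not the authors' settings.

```python
import cv2

# HOG descriptor with OpenCV's default people detector (Dalal-Triggs style).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("crowd.mp4")  # hypothetical input crowd video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect pedestrians; returns bounding boxes and confidence weights.
    boxes, weights = hog.detectMultiScale(
        frame, winStride=(8, 8), padding=(8, 8), scale=1.05
    )
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```

In the full framework these detections would then be handed to the tracking stage and projected onto the navigation mesh of the 3D environment.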

Sustainable Production and Consumption in 360

Reese Muntean, Mei-Ling Park, Yulia Rubleva, Kate Hennessy

SCP in 360: Sustainable Production and Consumption in 360 Degrees [1] is a series of six 360° videos aiming to make sustainable production and consumption engaging, memorable, and relatable to a wider audience. Produced by the United Nations Environment Programme and researchers at Simon Fraser University, SCP in 360 takes viewers around the world to see the work of the One Planet Network on the ground, including projects on sustainable building in Nepal, community tourism in South Africa, consumer information in Chile, sustainable and healthy gastronomy in Costa Rica, circular procurement in the Netherlands, and low-carbon sustainable lifestyle initiatives in rural Armenia. For example, viewers visit organic vineyards and mushroom farms in Chile to learn about a project to inform Chilean citizens about the environmental and social impacts of everyday consumer products. Researchers at Simon Fraser University are using these videos to examine the use of new media technology and to investigate if viewers better understand concepts and values around sustainability or if they care more about sustainability after viewing such videos in a 360° environment. This research will explore if and how new immersive visual technologies might better communicate and transmit values and the importance of sustainability efforts.