March 23rd - 27th
The following tutorials will be held at IEEE Virtual Reality 2019:
Abstract: Spatial augmented reality (SAR), a.k.a. projection mapping, alters the appearance of a real surface by visually overlaying computer-generated images onto it. Compared to other AR approaches, which rely on either video or optical see-through displays, SAR has an important advantage: no display device blocks the observer's sight, so observers can see the augmentation directly on the surfaces with a wide field of view and natural 3D cues, free of any physical constraints on their bodies. The ultimate goal of SAR is "believably manipulating the material properties of real-world surfaces." To this end, previous efforts solved fundamental technical problems such as geometric registration and color correction, enabling desired images to be displayed on non-planar and textured surfaces. However, these techniques work only in limited situations: the surface must typically be static, only view-independent material properties can be manipulated, and so on.
In the 2010s, Japanese researchers from multidisciplinary fields including computer science, psychophysics, and neuroscience tackled challenging technical issues to relax these limitations, under the support of Grants-in-Aid for Scientific Research on Innovative Areas by MEXT, Japan: "Brain and Information Science on SHITSUKAN" and "Understanding human recognition of material properties for innovation in SHITSUKAN science and technology". By adopting advanced technologies such as high-speed imaging and digital fabrication, they achieved various technical innovations: projection mapping on dynamic, even deformable, objects; view-dependent material property manipulation; and truthful appearance control in the spectral domain. Beyond relaxing the previously recognized limitations, they have also opened up new SAR research directions such as shape manipulation of real surfaces and material property manipulation in modalities other than vision. In this tutorial, we would like to share these advanced technologies with the audience. (SHITSUKAN is a Japanese word meaning the perceptual qualities of a material.)
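The color correction mentioned above can be illustrated with a minimal sketch of per-pixel radiometric compensation: under a simplified linear model (observed color = surface reflectance × projector input + ambient light), the projector input needed to produce a target appearance on a textured surface can be solved directly. The function name, the linear model, and the toy values below are illustrative assumptions, not part of the tutorial itself.

```python
import numpy as np

def compensate(target, reflectance, ambient):
    """Per-pixel radiometric compensation for projection mapping.

    Assumes the simplified linear model C = reflectance * P + ambient,
    solved for the projector input P. All arguments are float images
    in [0, 1] with matching shapes.
    """
    # Avoid division by zero on very dark surface regions.
    safe_reflectance = np.maximum(reflectance, 1e-3)
    projector_input = (target - ambient) / safe_reflectance
    # A physical projector can only emit values in [0, 1]; saturated
    # pixels here are appearances the system cannot fully reproduce.
    return np.clip(projector_input, 0.0, 1.0)

# Toy example: a mid-gray target on a reddish textured surface.
target = np.full((1, 1, 3), 0.5)
reflectance = np.array([[[0.9, 0.4, 0.4]]])
ambient = np.full((1, 1, 3), 0.05)
print(compensate(target, reflectance, ambient))
```

Note how the green and blue channels clip at 1.0: the surface reflects too little in those bands to reach the target, which is exactly the kind of limitation the spectral-domain work above addresses.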
Abstract: As eye-trackers are being built into commodity head-mounted displays, applications such as gaze-based interaction are poised to enter the mainstream. Gaze is a natural indicator of what the user is interested in. Eye-tracking in virtual environments offers the opportunity to study human behavior in simulated settings, both for the purpose of creating realistic virtual avatars and to learn models of saliency that apply to a three-dimensional scene. Research findings, such as consistency in where people look in images and videos, and biases in two-dimensional eye-tracking (e.g. center bias), will need to be replicated and/or rediscovered in VR (e.g. equator bias as a generalization of center bias). These are only a few examples of the rich lines of inquiry waiting to be explored by VR researchers and practitioners who have a working knowledge of eye-tracking.
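A basic building block of the gaze-based interaction mentioned above is casting the tracked gaze direction into the 3D scene to find what the user is looking at. The sketch below assumes the eye tracker already supplies a gaze direction in world coordinates (i.e., head pose has been applied) and uses a spherical proxy for the target object; the function name and values are illustrative, not from any particular SDK.

```python
import numpy as np

def gaze_ray_hit(eye_pos, gaze_dir, sphere_center, sphere_radius):
    """Return the distance along a gaze ray to a spherical target,
    or None if the gaze misses it.

    Solves |eye_pos + t * d - center|^2 = r^2 for the smallest
    non-negative t, where d is the normalized gaze direction.
    """
    d = gaze_dir / np.linalg.norm(gaze_dir)
    oc = eye_pos - sphere_center
    # Quadratic coefficients (a = 1 since d is unit length).
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # gaze ray misses the target entirely
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t >= 0 else None

# User at the origin looking straight down -Z; target 2 m ahead.
hit = gaze_ray_hit(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                   np.array([0.0, 0.0, -2.0]), 0.3)
print(hit)  # distance to the near surface of the sphere, 1.7
```

In practice a selection technique would run this test against every candidate object per frame and combine it with a dwell-time or confirmation mechanism, but the ray cast itself is the common core.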
In this tutorial, we will cover three topic areas:
Intended Audience The tutorial will be of interest to students, faculty, and researchers interested in quantifying user priorities and preferences using eye-tracking data, developing gaze-based interaction techniques, and applying eye-tracking data to generate virtual avatars.
Expected Value for Audience Eye-trackers are now being built into commodity VR headsets (e.g. the FOVE headset, Tobii eye-trackers built into HTC Vive headsets). As a result, researchers and practitioners of VR must quickly develop a working understanding of eye-tracking. The audience members for this tutorial can expect to leave with the following:
For more information, to inquire about a particular tutorial topic, or to submit a proposal, please contact the Tutorials Chairs:
tutorials2019 [at] ieeevr.org