Perception in Augmented Reality

NII Shonan Meeting:

@ Shonan Village Center, November 14-18, 2016

Organizers

  • Christian Sandor, Nara Institute of Science and Technology, Japan
  • Dieter Schmalstieg, Institute of Computer Graphics and Vision,
    Graz University of Technology, Austria
  • J. Edward Swan II, Mississippi State University, USA

Overview

[Group photo]

Description of the meeting

Over the years, research on head-worn Augmented Reality (AR) has been complemented by work on new platforms such as handheld AR (Wagner et al., 2008) and projector-camera systems (Bandyopadhyay et al., 2001). With the rapid advent of applications on cell phones, AR has become almost mainstream. However, researchers and practitioners are still attempting to solve many fundamental problems in the design of effective AR. Although many researchers are tackling registration problems caused by tracking limitations, perceptually correct augmentation remains a crucial challenge (see, e.g., Moser et al., 2015).

Some of the barriers to perceptually correct augmentation can be traced to issues with depth and illumination, which are often interconnected, or to issues related to the appearance of an environment. These problems may cause scene and depth distortions as well as visibility issues, which can lead to poor task performance (e.g., Swan II et al., 2015). Some of these issues result from technological limitations. However, many are caused by limited understanding or by inadequate methods for displaying information.

In the mid-1990s, Drascic and Milgram (1996) attempted to identify and classify these perceptual issues. Focusing on stereoscopic head-worn displays (HWDs), they provided useful insights into some of the perceptual issues in AR. Since then, considerable research has provided new insights into perceptual factors (e.g., Kruijff et al., 2010). Even though HWDs are still the predominant platform for perceptual experiments, the emphasis on a broader range of AR platforms has changed the problem space, resulting in the need to address new issues.

We believe that an overarching approach is needed to address these issues by bringing together both researchers who focus on technology issues and researchers who focus on related areas of psychology and cognitive science. We strongly feel that bringing both sides into an open meeting holds substantial promise for developing a research agenda that can address the serious challenges in AR, and thereby accelerate the realization of AR technology's promise.

In 1950, Alan Turing introduced the Turing Test, an essential concept in the philosophy of Artificial Intelligence (AI). He proposed an “imitation game” to test the sophistication of AI software. At its core is a precise test protocol, in which human participants must determine whether a conversation partner is an actual human or an AI simulation. Similar tests have been suggested for fields including Computer Graphics (McGuigan, 2006) and Visual Computing (Shan et al., 2013). We strongly believe that an AR Turing Test must be added to the AR research agenda. Although such a test is not straightforward to define, it is nevertheless crucial: erasing the boundary between real and virtual requires perceptually correct augmentations, and we need a measurable goal against which to test them. We think that this meeting can be a crucial stepping stone toward defining an AR Turing Test.

Format of the Meeting

  • Monday: PechaKucha-style self-introductions
  • Tuesday: State of the art sessions
    • Morning: Technology for augmented reality (Chair: Schmalstieg)
    • Afternoon: Perception research for augmented reality (Chair: Swan)
  • Wednesday
    • Morning: Bridging the gap between technology and perception research (Chair: Sandor)
    • Afternoon: Excursion
  • Thursday
    • Morning: Late-breaking presentations
    • Afternoon: Wrap-up
  • Friday: Check-out after breakfast

Topics

  • Information Presentation
    • Visual, aural, haptic, and olfactory augmentation
    • Multisensory rendering, registration, and synchronization
    • Mediated and diminished reality
    • Photo-realistic and non-photo-realistic rendering
    • Real-time and non-real-time interactive rendering
  • Input
    • Acquisition of 3D video and scene descriptions
    • Calibration and registration of sensing systems
    • Sensor fusion
    • Touch, tangible and gesture interfaces
  • Output
    • Display hardware, including 3D, stereoscopic, and multi-user
    • Live video stream augmentation (e.g., in robotics and broadcast)
    • Wearable and situated displays (e.g., eyewear, smart watches, pico-projectors)
    • Wearable actuators and augmented humans
  • User Experience Design
    • Collaborative interfaces
    • Interaction techniques
    • Multi-modal input and output
    • Usability studies and experiments
    • Technology acceptance and social implications

References

Deepak Bandyopadhyay, Ramesh Raskar, and Henry Fuchs. Dynamic shader lamps: Painting on movable objects. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality, pages 207–216, Washington, DC, USA, 2001. IEEE Computer Society.

Dragan Drascic and Paul Milgram. Perceptual issues in augmented reality. In Proceedings of SPIE, 2653: Stereoscopic Displays and Virtual Reality Systems III, pages 123–124, 1996.

Ernst Kruijff, J. Edward Swan II, and Steve Feiner. Perceptual issues in augmented reality revisited. In Proceedings of the Ninth IEEE International Symposium on Mixed and Augmented Reality (ISMAR ’10), pages 3–12, Seoul, Korea, October 13–16, 2010.

Michael D. McGuigan. Graphics Turing Test. The Computing Research Repository, arxiv:cs/0603132v1, 2006.

Kenneth Moser, Yuta Itoh, Kohei Oshima, J. Edward Swan II, Gudrun Klinker, and Christian Sandor. Subjective evaluation of a semi-automatic optical see-through head-mounted display calibration technique. IEEE Transactions on Visualization and Computer Graphics, 21(4):491–500, March 2015.

Qi Shan, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M. Seitz. The Visual Turing Test for scene reconstruction. In Proceedings of the International Conference on 3D Vision, pages 25–32, 2013.

J. Edward Swan II, Gurjot Singh, and Stephen R. Ellis. Matching and reaching depth judgments with real and augmented reality targets. IEEE Transactions on Visualization and Computer Graphics (Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2015), 2015. In press.

Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, Tom Drummond, and Dieter Schmalstieg. Pose tracking from natural features on mobile phones. In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR ’08), pages 125–134, Washington, DC, USA, 2008. IEEE Computer Society.