Seminars

NO.135 Augmented Reality in Human-Computer Interaction

Shonan Village Center

June 4 - 7, 2018 (Check-in: June 3, 2018)

Organizers

  • Yuta Itoh
    • Keio University, Japan
  • Kai Kunze
    • Keio University, Japan
  • Alexander Plopski
    • Nara Institute of Science and Technology, Japan
  • Christian Sandor
    • Nara Institute of Science and Technology, Japan

Overview

Description of the meeting

The first Shonan Meeting on Perceptual Issues in Augmented Reality (AR) was successfully organized by Christian Sandor, Dieter Schmalstieg, and J. Edward Swan II in 2016. Building on that success, we propose to conduct a Shonan Meeting on Augmented Reality in Human-Computer Interaction (HCI) to investigate another key issue facing the AR community.

AR expands the user's world by presenting information, entertainment, or interaction surfaces as virtual content embedded in the environment. Research into AR began more than 50 years ago with Sutherland's vision of the Ultimate Display and his head-mounted three-dimensional display (Sutherland, 1968). Over the years, research on head-worn AR has been complemented by work on new platforms such as handheld AR (Dey et al., 2012) and projector-camera systems (Bandyopadhyay et al., 2001). With the rapid spread of AR applications on mobile phones, the technology has become almost mainstream.

One major goal of AR is to expand the user's perception of the surroundings by visualizing invisible data, such as connections between different objects and the corresponding information (Sandor et al., 2005; Heun et al., 2013). By nature, AR is a technology for creating intuitive human-computer interfaces using computers, sensors, and mobile devices. The AR and HCI communities often focus on similar questions, e.g., eye-gaze tracking (Sewell and Komogortsev, 2010), haptic feedback (Bidmon et al., 2007), sensor-based tracking (Wagner et al., 2008), and how users perceive virtual content (Moser et al., 2015).

Many premier venues in HCI, such as the ACM Conference on Human Factors in Computing Systems (CHI), the ACM Symposium on User Interface Software and Technology (UIST), and the ACM Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH), have recognized the impact of AR, with many of their best papers focusing on it. However, these papers commonly do not develop the novel technologies that enable AR; instead, they apply well-known algorithms and commercial devices to explore the human-factors side of AR. In contrast, the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) community focuses on the technological aspects of AR and has been pushing the boundaries of what is possible with papers such as PTAM (Klein and Murray, 2007) and KinectFusion (Newcombe et al., 2011). In that sense, the two communities complement each other.

The goal of this meeting is to build a bridge between leading researchers from the HCI and ISMAR communities. Exchanging ideas in an open meeting will promote the dissemination of results and technologies between the communities. This will improve the integration of AR technology into HCI, lead to the development of novel interaction techniques, and provide new research questions of interest to both communities.

Format of the Meeting

  • Monday: PechaKucha-style self-introductions
  • Tuesday: State-of-the-art sessions
    – Morning: Technology for augmented reality (Chair: Plopski)
    – Afternoon: Challenges in human-computer interaction (Chair: Kunze)
  • Wednesday
    – Morning: Bridging the gap between CHI and AR (Chair: Sandor)
    – Afternoon: Excursion
  • Thursday
    – Morning: Late-breaking presentations (Chair: Itoh)
    – Afternoon: Wrap-up
  • Friday: Check-out after breakfast

Topics

  • Interaction
    – Collaborative interfaces
    – Interaction techniques
    – Multi-modal input and output
    – Usability studies and experiments
    – Technology acceptance and social implications
    – Touch, tangible, and gesture interfaces

  • Information Presentation
    – Visual, aural, haptic, and olfactory augmentation
    – Multisensory rendering, registration, and synchronization
    – Mediated and diminished reality
    – Photo-realistic and non-photo-realistic rendering
    – Real-time and non-real-time interactive rendering

  • Output
    – Display hardware, including 3D, stereoscopic, and multi-user
    – Live video stream augmentation (e.g., in robotics and broadcast)
    – Wearable and situated displays (e.g., eyewear, smart watches, pico-projectors)
    – Wearable actuators and augmented humans

  • Perceptual Issues
    – Technology acceptance and social implications
    – Cognitive load, mental workload, and other cognitive states
    – Perception manipulation
    – Visualization techniques addressing perceptual or cognitive issues

References

Deepak Bandyopadhyay, Ramesh Raskar, and Henry Fuchs. Dynamic Shader Lamps: Painting on Movable Objects. In Proceedings of the IEEE/ACM International Symposium on Augmented Reality, pages 207–216, Washington, USA, October 2001.

Katrin Bidmon, Guido Reina, Fabian Bös, Jürgen Pleiss, and Thomas Ertl. Time-Based Haptic Analysis of Protein Dynamics. In Proceedings of the Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 537–542, Tsukuba, Japan, 2007.

Arindam Dey, Graeme Jarvis, Christian Sandor, and Gerhard Reitmayr. Tablet versus Phone: Depth Perception in Handheld Augmented Reality. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, pages 187–196, Atlanta, USA, November 2012.

Valentin Heun, James Hobin, and Pattie Maes. Reality Editor: Programming Smarter Objects. In Proceedings of the UbiComp (Adjunct Publication), pages 307–310, Zurich, Switzerland, September 2013.

Georg Klein and David Murray. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 225–234, Nara, Japan, November 2007.

Kenneth Moser, Yuta Itoh, Kohei Oshima, Edward Swan, Gudrun Klinker, and Christian Sandor. Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique. IEEE Transactions on Visualization and Computer Graphics, 21(4):491–500, March 2015.

Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon. KinectFusion: Real-Time Dense Surface Mapping and Tracking. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, pages 127–136, 2011.

Christian Sandor, Alex Olwal, Blaine Bell, and Steven Feiner. Immersive Mixed-Reality Configuration of Hybrid User Interfaces. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 110–113, Washington, DC, USA, October 2005.

Weston Sewell and Oleg Komogortsev. Real-time Eye Gaze Tracking with an Unmodified Commodity Webcam Employing a Neural Network. In Proceedings of the ACM Conference on Human Factors in Computing Systems: Extended Abstracts on Human Factors in Computing Systems, pages 3739–3744, 2010.

Ivan E. Sutherland. A Head-Mounted Three Dimensional Display. In Proceedings of the Fall Joint Computer Conference, Part I, pages 757–764, New York, USA, December 1968.

Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, Tom Drummond, and Dieter Schmalstieg. Pose Tracking from Natural Features on Mobile Phones. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 125–134, Washington, USA, September 2008.

Report

No-135.pdf