eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues




Abstract: In this poster, we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user, who sees the real world through an Augmented Reality (AR) view, and a remote collaborator in Virtual Reality (VR). By recognising specific speech phrases that turn virtual gaze visualisations on, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is more natural gaze awareness, leading to better communication and more effective collaboration.
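The poster itself gives no implementation details, but the core mechanism it describes, matching transcribed speech against trigger phrases to toggle a shared gaze cue, can be sketched as below. This is a minimal illustration rather than the authors' code: the trigger phrases, function names, and message format are all hypothetical, and it assumes speech has already been transcribed to text by a recogniser.

    import json

    # Hypothetical trigger phrases; the poster mentions "specific speech
    # phrases" but does not list them, so these are illustrative only.
    SHOW_PHRASES = ("look here", "over here", "do you see")
    HIDE_PHRASES = ("got it", "never mind")

    def gaze_cue_visible(transcript: str, visible: bool) -> bool:
        """Return the gaze cue's new visibility after a transcribed utterance."""
        text = transcript.lower()
        if any(phrase in text for phrase in SHOW_PHRASES):
            return True
        if any(phrase in text for phrase in HIDE_PHRASES):
            return False
        return visible  # no trigger phrase heard: keep the current state

    def gaze_cue_message(visible: bool, gaze_ray: dict) -> bytes:
        """Encode the local user's gaze ray for sending to the remote collaborator."""
        return json.dumps({"type": "gaze_cue",
                           "visible": visible,
                           "ray": gaze_ray}).encode("utf-8")

    # Example: the local AR user says a trigger phrase, so the shared gaze
    # visualisation is switched on and its payload prepared for the VR peer.
    visible = gaze_cue_visible("Hey, look here at this valve", visible=False)
    payload = gaze_cue_message(visible, {"origin": [0.0, 1.6, 0.0],
                                         "direction": [0.1, -0.2, 0.97]})

Keeping the trigger check as a pure function of the transcript and the current state makes the contextual speech-gaze coupling easy to test independently of the speech recogniser and the network layer.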

Author(s)/Presenter(s):
Allison Jing, University of South Australia, Australia
Brandon Matthews, University of South Australia, Australia
Kieran May, University of South Australia, Australia
Thomas Clarke, University of South Australia, Australia
Gun Lee, University of South Australia, Australia
Mark Billinghurst, University of South Australia, Australia

