RealitySketch: Augmented Reality Sketching for Real-time Embedded and Responsive Visualizations


*All presentations are available in the virtual platform on-demand. There will be a Real-Time Live! gallery onsite in Hall E, Tokyo International Forum, 15–17 December 2021. Watch the LIVE demonstrations from 10:30am to 12:30pm on 17 December in Hall C, Tokyo International Forum, and online.


Description: We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time and improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion.
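The sentence above captures the core mechanism: sketched elements are bound to tracked physical objects and re-rendered every frame as those objects move. The Python below is a minimal illustrative sketch of that binding idea, not the authors' implementation; TrackedObject, SketchedLine, and the simulated motion are all hypothetical stand-ins for a real computer-vision tracker and AR renderer.

```python
import math
from dataclasses import dataclass


@dataclass
class TrackedObject:
    """Stand-in for a vision-based tracker reporting a screen-space position per frame."""
    x: float = 0.0
    y: float = 0.0

    def update(self, t: float) -> None:
        # Simulated pendulum-like motion standing in for real object tracking.
        self.x = 160.0 + 80.0 * math.sin(t)
        self.y = 120.0 + 10.0 * math.cos(2 * t)


@dataclass
class SketchedLine:
    """A sketched line segment with one endpoint bound to a tracked physical object."""
    anchor_x: float
    anchor_y: float
    target: TrackedObject

    def endpoints(self):
        # The bound endpoint follows the object's tracked position each frame,
        # so the sketched element dynamically moves with the physical motion.
        return (self.anchor_x, self.anchor_y), (self.target.x, self.target.y)


if __name__ == "__main__":
    obj = TrackedObject()
    line = SketchedLine(anchor_x=160.0, anchor_y=20.0, target=obj)
    for frame in range(5):
        obj.update(t=frame * 0.1)  # in a real system, this comes from the AR tracker
        print(f"frame {frame}: line = {line.endpoints()}")
```

In the actual system, the per-frame position would come from the mobile AR tracker rather than a simulated update, and the line would be redrawn on the AR screen each frame.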

Presenter(s):
Ryo Suzuki, University of Calgary, Canada
Rubaiat Habib Kazi, Adobe Research, United States of America
Li-Yi Wei, Adobe Research, United States of America
Stephen DiVerdi, Adobe Research, United States of America
Wilmot Li, Adobe Research, United States of America
Daniel Leithinger, University of Colorado, Boulder, United States of America

Ryo Suzuki (http://ryosuzuki.org/) is an Assistant Professor in Computer Science at the University of Calgary, starting January 2021, where he directs the Programmable Reality Lab (https://programmable-reality-lab.github.io/). Prior to joining UCalgary, he received his PhD from the University of Colorado Boulder in 2020, where he was advised by Daniel Leithinger and Mark Gross. His research lies at the intersection of Human-Computer Interaction (HCI) and robotics: he explores how AR/VR and robotics technologies can be combined to make our environments programmable and further blend the virtual and physical worlds. In the past five years, he has published more than sixteen peer-reviewed conference papers at top HCI and robotics venues such as CHI, UIST, and IROS, three of which received paper awards. He has previously worked as a research intern at Stanford University, UC Berkeley, the University of Tokyo, Adobe Research, and Microsoft Research.

I am a Senior Research Scientist at Adobe Research. I design and develop computing tools that facilitate powerful ways of thinking, designing, and communicating with sketching and gestures. My research in animation and dynamic drawings has been turned into new products that reach a global audience; among them, Apple recognized SketchBook Motion as the best iPad app of 2016. Prior to Adobe, I worked at Autodesk Research, Microsoft Research, and the Japan Science & Technology Agency.

Li-Yi Wei is a research scientist/manager with Adobe Research.

Stephen is a principal scientist who strives to develop new creative tools for digital artists by exploring novel interfaces and interaction modalities. His research covers a number of topics, including virtual reality, 360-degree video, augmented reality, natural media painting, vector graphics, color theory, and GPU computing. Over more than ten years combined at Adobe, Stephen has shipped a number of features in Photoshop, Illustrator, Premiere, After Effects, and Adobe's iOS apps, including Sketch, Capture, Eazel, and Color Lava. Stephen received his B.S. in computer science from Harvey Mudd College in 2002 and his Ph.D. in augmented reality in 2007 from the University of California, Santa Barbara, where he was advised by Tobias Höllerer in the Four Eyes Lab. He interned with Adobe in 2000, 2001, and 2003, worked there full time from 2007 to 2012, spent three years at Google, and returned to Adobe Research in 2015.

I am a Principal Scientist in the Creative Intelligence Lab at Adobe Research. Before that, I earned my Ph.D. in computer science at the University of Washington, where I was a member of the Graphics and Imaging Laboratory (GRAIL) from 2000 to 2007. My thesis work presented new interactive visualization techniques that help users understand and explore complex 3D objects with many constituent parts (e.g., CAD models, anatomical datasets). I have also worked on interactive texture synthesis, adaptive document layout, and non-photorealistic rendering for virtual environments. From 1996 to 2000, I attended Princeton University, where I earned a BSE in computer science. I was born and raised in Toronto, Canada.

Daniel Leithinger, an assistant professor (ATLAS Institute & Computer Science), creates shape-changing human-computer interfaces that push digital information past the boundaries of flat displays and into the real world. Motivated by the belief that computers must embrace the dexterity and expressiveness of the human body, his interfaces allow users to touch, grasp, and deform data physically. Daniel received his PhD at the MIT Media Lab in 2015. His work has been published at the ACM UIST, TEI, and CHI conferences, and he has received design awards from Fast Company, Red Dot, and IDEA. Projects like "inFORM" have been exhibited at the Cooper Hewitt Design Museum, the Ars Electronica Museum, and Milan Design Week.
