Interactivity and Simulation [Q&A Session]

  • Full Access
  • Virtual Full Access

Date/Time: 06 – 17 December 2021
All presentations are available in the virtual platform on-demand.


Autocomplete Repetitive Stroking with Image Guidance

Contributor(s):
Yilan Chen, City University of Hong Kong, Hong Kong
Kin Chung Kwan, University of Konstanz, Germany; City University of Hong Kong, Hong Kong
Li-Yi Wei, Adobe Research, United States of America
Hongbo Fu, City University of Hong Kong, Hong Kong

Description: We present a tool that helps users autocomplete repetitive strokes while drawing over a reference image, reducing manual labor and improving user satisfaction.
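The listing names the interaction (suggesting repetitive strokes) but not its internals. As a rough illustration of one common approach to stroke autocompletion, the sketch below extrapolates the similarity transform between the two most recent strokes to propose the next one; the function names are hypothetical, and the paper's actual image-guided method is more involved.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping stroke points src onto dst (Umeyama-style fit).
    src, dst: (N, 2) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)            # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))        # guard against reflection
    R = U @ np.diag([1.0, sign]) @ Vt
    scale = (S * [1.0, sign]).sum() / (s ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def predict_next_stroke(prev, last):
    """Suggest the next stroke by applying the prev->last transform
    once more to the latest stroke."""
    scale, R, t = fit_similarity(prev, last)
    return scale * (last @ R.T) + t

# Toy usage: hatching strokes drifting right; the suggestion continues the drift.
prev = np.stack([np.linspace(0, 1, 8), np.zeros(8)], axis=1)
last = prev + [0.2, 0.0]
print(predict_next_stroke(prev, last)[0])        # ~[0.4, 0.0]
```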


GPU Cloth Simulation Pipeline in Lightchaser Animation Studio

Contributor(s):
Tiantian Liu, Taichi Graphics, China
Haowei Han, Lightchaser Animation Studio, China
Meng Sun, Lightchaser Animation Studio, China
Dongying Liu, Lightchaser Animation Studio, China
Siyu Zhang, Lightchaser Animation Studio, China

Description: Our in-house GPU cloth simulation pipeline achieves better performance and alleviates jittering artifacts in multi-layered cloth.
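The listing does not reveal the solver. As generic background only (a standard position-based-dynamics projection, not Lightchaser's in-house method), the sketch below shows the per-frame constraint pass that GPU cloth pipelines typically parallelize over edges.

```python
import numpy as np

def pbd_distance_step(x, inv_mass, edges, rest_len, iters=8):
    """One position-based-dynamics constraint pass over cloth edges:
    project each edge back toward its rest length.
    x: (N, 3) particle positions; edges: (E, 2) index pairs."""
    for _ in range(iters):
        for (i, j), L in zip(edges, rest_len):
            d = x[j] - x[i]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0:
                continue
            corr = (dist - L) / (w * dist) * d   # mass-weighted correction
            x[i] += inv_mass[i] * corr
            x[j] -= inv_mass[j] * corr
    return x
```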


Inverse Free-Form Deformation for Interactive UV Map Editing

Contributor(s):
Seung-Tak Noh, The University of Tokyo, Japan
Takeo Igarashi, The University of Tokyo, Japan

Description: We present a novel inverse free-form deformation (FFD) method that converts a dense image-to-texture mapping into a coarse FFD mapping, facilitating manual editing of the mapping.
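Because an FFD is linear in its control points, fitting a coarse FFD to a dense mapping reduces to linear least squares. The sketch below illustrates this under simplifying assumptions (a bilinear control lattice over the unit square, rather than whatever basis the paper uses; all names are hypothetical).

```python
import numpy as np

def bilinear_weights(uv, nx, ny):
    """Basis weights of each sample w.r.t. an (nx+1) x (ny+1) bilinear
    control lattice over the unit square. uv: (M, 2) in [0, 1]^2."""
    M = uv.shape[0]
    W = np.zeros((M, (nx + 1) * (ny + 1)))
    cell = np.minimum((uv * [nx, ny]).astype(int), [nx - 1, ny - 1])
    f = uv * [nx, ny] - cell                     # local coords within cell
    for dx, dy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        w = ((1 - f[:, 0]) if dx == 0 else f[:, 0]) * \
            ((1 - f[:, 1]) if dy == 0 else f[:, 1])
        idx = (cell[:, 1] + dy) * (nx + 1) + (cell[:, 0] + dx)
        W[np.arange(M), idx] = w
    return W

def fit_ffd(src_uv, dst_uv, nx=4, ny=4):
    """Least-squares control points of a coarse FFD that best
    reproduces a dense src->dst UV mapping."""
    W = bilinear_weights(src_uv, nx, ny)
    P, *_ = np.linalg.lstsq(W, dst_uv, rcond=None)
    return P.reshape(ny + 1, nx + 1, 2)

# Toy usage: a dense mapping that translates UVs is recovered exactly.
src = np.random.rand(500, 2)
P = fit_ffd(src, src + [0.1, 0.0])
```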


Skeleton2Stroke: Interactive Stroke Correspondence Editing with Pose Features

Contributor(s):
Ryoma Miyauchi, Japan Advanced Institute of Science and Technology (JAIST), Japan
Yichen Peng, Japan Advanced Institute of Science and Technology (JAIST), Japan
Tsukasa Fukusato, The University of Tokyo, Japan
Haoran Xie, Japan Advanced Institute of Science and Technology (JAIST), Japan

Description: This work proposes an editing interface for interactively constructing stroke correspondences between two hand-drawn character illustrations, based on closed-area correspondences estimated from shape and pose features.
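The automatic initial estimate that such an interface refines can be cast as an assignment problem. As a minimal sketch only (the per-stroke feature construction from closed areas and pose is the paper's contribution and is not shown; the function name is hypothetical), a Hungarian matching over feature distances looks like this:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_strokes(feat_a, feat_b):
    """Globally optimal one-to-one stroke assignment that minimizes
    total feature distance (Hungarian algorithm).
    feat_a, feat_b: (n, d) per-stroke feature vectors, e.g. descriptors
    of the closed areas a stroke bounds plus pose features."""
    cost = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy usage: three strokes per drawing, drawn in a different order.
a = np.array([[0.0, 0], [1, 0], [2, 0]])
b = np.array([[2.1, 0], [0.1, 0], [1.1, 0]])
print(match_strokes(a, b))   # [(0, 1), (1, 2), (2, 0)]
```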

