03. Physically-based Simulation and Motion Control [Q&A Session]

Date/Time: 06 – 17 December 2021
All presentations are available in the virtual platform on-demand.


A Material Point Method for Nonlinearly Magnetized Materials

Abstract: We propose a novel numerical scheme to simulate interactions between a magnetic field and nonlinearly magnetized objects immersed in it. Under our nonlinear magnetization framework, the strength of magnetic forces is effectively saturated to produce stable simulations without requiring any hyper-parameter tuning. The mathematical model of our approach is based upon Langevin’s nonlinear theory of paramagnetism, which bridges microscopic structures and macroscopic equations after a statistical derivation. We devise a hybrid Eulerian-Lagrangian numerical approach to simulating this strongly nonlinear process by leveraging the discrete material points to transfer both material properties and the number density of magnetic micro-particles in the simulation domain. The magnetic equations can then be built and solved efficiently on a background Cartesian grid, followed by a finite difference method to incorporate magnetic forces. The multi-scale coupling can be processed naturally by employing the established particle-grid interpolation schemes in a conventional MLS-MPM framework. We demonstrate the efficacy of our approach with a host of simulation examples governed by magnetic-mechanical coupling effects, ranging from magnetic deformable bodies to magnetic viscous fluids with nonlinear elastic constitutive laws.
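
Concretely, the saturation in Langevin's model of paramagnetism comes from the function L(x) = coth(x) - 1/x, which tends to 1 as the field grows. Below is a minimal NumPy sketch of how a per-point number density could be turned into a saturated magnetization; the function names and parameter handling are illustrative assumptions, not the paper's implementation, and unit constants such as the vacuum permeability are folded into the thermal scale kT for brevity.

```python
import numpy as np

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x: the classical saturation
    # curve from Langevin's theory of paramagnetism. L(x) -> 1 as
    # x -> inf, which is what keeps the magnetic forces bounded.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    small = np.abs(x) < 1e-4
    out[small] = x[small] / 3.0              # series expansion near zero
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

def magnetization(H, n, m_s, kT):
    # Macroscopic magnetization M = n * m_s * L(m_s*|H|/kT) * H_hat.
    # H  : (N, 3) magnetic field sampled at N grid points
    # n  : (N,) number density of magnetic micro-particles (in the paper,
    #      carried on material points and transferred to the grid)
    # m_s: dipole moment of a single micro-particle (assumed constant)
    # kT : thermal energy scale (unit constants folded in here)
    H = np.atleast_2d(H)
    Hmag = np.linalg.norm(H, axis=1)
    safe = np.maximum(Hmag, 1e-12)           # avoid division by zero
    return (n * m_s * langevin(m_s * Hmag / kT) / safe)[:, None] * H
```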

Author(s)/Presenter(s):
Yuchen Sun, CFCS, Peking University, China
Xingyu Ni, CFCS, Peking University, China
Bo Zhu, Dartmouth College, United States of America
Bin Wang, Beijing Institute for General Artificial Intelligence, China
Baoquan Chen, CFCS, Peking University, China


Camera Keyframing with Style and Control

Abstract: In this work, we present a tool that enables artists to synthesize camera motions that follow a learned camera behavior while enforcing user-designed keyframes as constraints along the sequence. To solve this motion in-betweening problem, we train a camera motion generator on a collection of trajectories, with additional conditioning on target keyframes. We also condition the generator on a style code automatically extracted from real film clips through the design of a gating LSTM network. This style code encodes the camera behavior, defined as the correlation between character and camera motions. We further extend the system with fine control of camera speed and direction via a hidden-state mapping module. We then evaluate our method on two aspects: i) its capacity to synthesize camera trajectories by extracting camera behaviors from real movie clips and constraining them with user-defined keyframes; and ii) its capacity to ensure that in-between motions still comply with the reference camera behavior while satisfying the keyframe constraints. Our system is thus the first behavior-aware keyframe in-betweening technique for camera control that balances behavior-driven automation with precise and interactive control.
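
To make the conditioning concrete, the hedged PyTorch sketch below feeds a style code and a target keyframe into an LSTM alongside the current camera pose at every step. The class name, the 7-D pose (position plus quaternion), and the layer layout are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CameraInbetweener(nn.Module):
    # A minimal sketch of a keyframe- and style-conditioned camera
    # motion generator (not the paper's exact network).
    def __init__(self, pose_dim=7, style_dim=16, hidden=128):
        super().__init__()
        # Input at each step: current pose + style code + target keyframe.
        self.lstm = nn.LSTM(pose_dim + style_dim + pose_dim, hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, poses, style, target_keyframe):
        # poses: (B, T, pose_dim); style: (B, style_dim);
        # target_keyframe: (B, pose_dim), broadcast to every timestep.
        T = poses.shape[1]
        cond = torch.cat([style, target_keyframe], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, T, -1)
        h, _ = self.lstm(torch.cat([poses, cond], dim=-1))
        return self.head(h)   # predicted next poses, (B, T, pose_dim)
```

In this sketch the keyframe constraint acts only as soft conditioning; the paper additionally enforces keyframes as hard constraints along the sequence.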

Author(s)/Presenter(s):
Hongda Jiang, Center on Frontiers of Computing Studies, Peking University, China
Marc Christie, IRISA, INRIA, Univ Rennes, CNRS, France
Xi Wang, IRISA, INRIA, Univ Rennes, CNRS, France
Libin Liu, Center on Frontiers of Computing Studies, Peking University, China
Bin Wang, Beijing Institute for General Artificial Intelligence, China
Baoquan Chen, Center on Frontiers of Computing Studies, Peking University, China


Foids: Bio-Inspired Fish Simulation for Generating Synthetic Datasets

Abstract: We present a bio-inspired fish simulation platform, which we call "Foids", to generate realistic synthetic datasets for use in computer vision algorithm training. This is a first-of-its-kind synthetic dataset platform for fish, generating complete 3D scenes purely through simulation. One of the major challenges in deep-learning-based computer vision is the preparation of the annotated dataset. It is already hard to collect a good-quality video dataset with enough variation; moreover, it is a painful process to annotate a sufficiently large video dataset frame by frame. This is especially true for a fish dataset, because it is difficult to set up a camera underwater and the number of fish (target objects) in a single cage on a fish farm can reach 30,000. All of these fish need to be annotated with labels such as a bounding box or silhouette, which can take hours to complete manually, even for only a few minutes of video. We solve this challenge by introducing a realistic synthetic dataset generation platform that incorporates details of biology and ecology studied in the aquaculture field. Because the scene is simulated, scene data with annotation labels can be generated directly from the 3D mesh geometry and transformation matrices. Building on this, we develop an automated fish counting system, trained in part on the synthetic dataset, that achieves counting accuracy comparable to the human eye, reduces the time required relative to the manual process, and reduces physical injuries sustained by the fish.
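
The "annotations for free" point can be illustrated with a small NumPy sketch: projecting a fish mesh through a model-view-projection matrix immediately yields a 2D bounding-box label. The function name and the assumption that all vertices lie in front of the camera (clip-space w > 0) are illustrative, not the platform's actual pipeline.

```python
import numpy as np

def bbox_label(vertices, mvp, width, height):
    # vertices: (N, 3) mesh positions of one fish in world space.
    # mvp:      (4, 4) model-view-projection matrix of the virtual camera.
    # Returns an axis-aligned pixel-space bounding box (x0, y0, x1, y1).
    v = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
    clip = v @ mvp.T
    ndc = clip[:, :2] / clip[:, 3:4]          # assumes w > 0 for all verts
    px = (ndc[:, 0] * 0.5 + 0.5) * width      # NDC -> pixel coordinates
    py = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height
    return px.min(), py.min(), px.max(), py.max()
```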

Author(s)/Presenter(s):
Yuko Ishiwaka, SoftBank Corp., Japan
Masaki Nakada, NeuralX Inc., United States of America
Xiao S. Zeng, NeuralX Inc., United States of America
Michael Lee Eastman, SoftBank Corp., Japan
Sho Kakazu, SoftBank Corp., Japan
Sarah Gross, NeuralX Inc., United States of America
Ryosuke Mizutani, Nosan Corporation, Japan


Human Dynamics from Monocular Video with Dynamic Camera Movements

Abstract: We propose a new method that reconstructs 3D human motion from in-the-wild video by making full use of prior knowledge of the laws of physics. Previous studies focus on reconstructing joint angles and positions in the body's local coordinate frame; body translations and rotations in the global reference frame are partially reconstructed only when the video has a static camera view. We are interested in overcoming this static-view limitation to deal with dynamic-view videos, in which the camera may pan, tilt, and zoom to track the moving subject. Since we do not assume any limitations on camera movements, body translations and rotations in the video do not correspond to absolute positions in the reference frame. The key technical challenge is inferring body translations and rotations from a sequence of 3D full-body poses when the root motion is not directly observed. This inference is possible because human motion obeys the laws of physics. Our reconstruction algorithm produces a control policy that drives a physically simulated character to imitate the motion in the video. It is particularly useful for reconstructing highly dynamic movements, such as sports, dance, gymnastics, and parkour actions.
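
One common way such a control policy is trained is with a DeepMimic-style imitation reward; the sketch below is a hedged illustration of that general idea (the weights, scales, and reward terms are assumptions, not the paper's exact objective). The simulator then supplies the physically consistent root translation and rotation as a by-product of tracking the video-derived poses.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_com_vel, ref_com_vel,
                     w_pose=0.7, w_vel=0.3):
    # sim_pose / ref_pose:       flattened joint rotations of the
    #                            simulated character and video reference.
    # sim_com_vel / ref_com_vel: center-of-mass velocities (3,).
    # The policy is rewarded for matching the reference; the root
    # trajectory emerges from the physics simulation itself.
    e_pose = np.sum((sim_pose - ref_pose) ** 2)
    e_vel = np.sum((sim_com_vel - ref_com_vel) ** 2)
    return w_pose * np.exp(-2.0 * e_pose) + w_vel * np.exp(-0.1 * e_vel)
```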

Author(s)/Presenter(s):
Ri Yu, Seoul National University, Seoul National University Hospital, South Korea
Hwangpil Park, Seoul National University, Samsung Electronics, South Korea
Jehee Lee, Seoul National University, South Korea


Weatherscapes: Nowcasting Heat Transfer and Water Continuity

Abstract: Due to the complex interplay of various meteorological phenomena, simulating weather is a challenging and open research problem. In this contribution, we propose a novel physics-based model that enables simulating weather at interactive rates. By considering both the atmosphere and the pedosphere, we can model the hydrologic cycle – and consequently weather – in unprecedented detail. Specifically, our model captures different warm and cold clouds, such as mammatus, hole-punch, multi-layer, and cumulonimbus clouds, as well as their dynamic transitions. We also model different precipitation types, such as rain, snow, and graupel, by introducing a comprehensive microphysics scheme. The Wegener-Bergeron-Findeisen process is incorporated into our Kessler-type microphysics formulation, covering ice crystal growth in mixed-phase clouds. Moreover, we model water run-off from the ground surface, its infiltration into the soil, and its subsequent evaporation back to the atmosphere. We account for daily temperature changes, as well as heat transfer between the pedosphere and the atmosphere, leading to a complex feedback loop. Our framework enables us to interactively explore various complex weather phenomena. Our results are assessed visually and validated by simulating weatherscapes for various setups covering different precipitation events and environments, by showcasing the hydrologic cycle, and by reproducing common effects such as Foehn winds. We also provide quantitative evaluations, creating high-precipitation cumulonimbus clouds by prescribing atmospheric conditions based on infrared satellite observations. With our model, we can generate dynamic 3D scenes of weatherscapes with high visual fidelity and even nowcast real weather conditions by streaming weather data into our framework.
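
For readers unfamiliar with Kessler-type microphysics, the NumPy sketch below shows the classic warm-cloud core of such a scheme: condensation of excess vapor, autoconversion of cloud water to rain, and accretion of cloud droplets by falling rain. The coefficients follow the widely used Kessler (1969) values, and the saturation mixing ratio is passed in as an input; this is a textbook illustration, not the paper's full mixed-phase formulation.

```python
import numpy as np

def kessler_step(qv, qc, qr, qv_sat, dt, k1=1e-3, k2=2.2, a=1e-3):
    # qv, qc, qr: mixing ratios (kg/kg) of water vapor, cloud water,
    #             and rain water on the simulation grid.
    # qv_sat:     saturation mixing ratio from the thermodynamic state.
    cond = np.maximum(qv - qv_sat, 0.0)   # condense super-saturated vapor
    auto = k1 * np.maximum(qc - a, 0.0)   # cloud -> rain above threshold a
    accr = k2 * qc * qr ** 0.875          # rain collects cloud droplets
    qv = qv - cond * dt
    qc = qc + (cond - auto - accr) * dt
    qr = qr + (auto + accr) * dt
    return qv, qc, qr
```

The paper extends a formulation of this type with ice processes (the Wegener-Bergeron-Findeisen process) to cover mixed-phase clouds, snow, and graupel.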

Author(s)/Presenter(s):
Daniel T. Banuti, University of New Mexico, United States of America
Sören Pirk, Google Research, United States of America
Dominik L. Michels, KAUST, Saudi Arabia
Jorge Alejandro Amador Herrera, KAUST, Saudi Arabia
Torsten Hädrich, KAUST, Saudi Arabia
Wojtek Pałubicki, University of Poznan, Poland

