08: Natural Phenomena [Q&A Session]
- Full Access
- Onsite Student Access
- Virtual Full Access
Date/Time: 06 – 17 December 2021
All presentations are available on-demand on the virtual platform.
ICTree: Automatic Perceptual Metrics for Tree Models
Abstract: Many algorithms for virtual tree generation exist, but the visual realism of the resulting 3D models is unknown. This problem is usually addressed by performing limited user studies or by side-by-side visual comparison. We introduce an automated system for assessing the realism of tree models based on their perception. We conducted a user study in which 4,000 participants compared over one million pairs of images to collect subjective perceptual scores for a large dataset of virtual trees. The scores were used to train two neural-network-based predictors. The first, the view-independent ICTreeF, uses geometric features of the tree model that are easy to extract from any model. The second, ICTreeI, estimates the perceived visual realism of a tree from its image. Moreover, to provide insight into the problem, we deduce intrinsic attributes and evaluate which features make trees look real. In particular, we show that branching angles, branch lengths, and widths are critical for perceived realism. We also provide three datasets: carefully curated 3D tree geometries and tree skeletons with their perceptual scores, multiple views of the tree geometries with their scores, and a large dataset of images with scores suitable for training deep neural networks.
Author(s)/Presenter(s):
Tomas Polasek, Brno University of Technology, CPhoto@FIT, Czech Republic
David Hrusa, Purdue University, United States of America
Bedrich Benes, Purdue University; Czech Technical University in Prague, FEL, United States of America
Martin Čadík, Brno University of Technology, CPhoto@FIT; Czech Technical University in Prague, FEL, Czech Republic
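The abstract above does not specify how pairwise comparisons were converted into per-tree perceptual scores; a standard way to do this is the Bradley-Terry model, sketched below as a generic illustration (not necessarily the authors' exact procedure). `bradley_terry_scores` and the toy win matrix are hypothetical names invented for this sketch.

```python
import numpy as np

def bradley_terry_scores(wins, n_iters=200):
    """Estimate latent quality scores from a pairwise win-count matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Returns scores normalized to sum to 1 (higher = preferred more often).
    Uses the standard minorization-maximization (MM) update.
    """
    n = wins.shape[0]
    p = np.ones(n) / n
    total = wins + wins.T                  # comparisons between each pair
    W = wins.sum(axis=1)                   # total wins of each item
    for _ in range(n_iters):
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = W / denom
        p /= p.sum()
    return p

# Toy example: item 0 is consistently preferred over items 1 and 2.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
scores = bradley_terry_scores(wins)
```

At the study's scale (over one million pairs), such a model would be fitted once over the full comparison matrix, and the resulting scores used as regression targets for the two predictors.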
Learning to Reconstruct Botanical Trees from Single Images
Abstract: We introduce a novel method for reconstructing the 3D geometry of botanical trees from single photographs. Faithfully reconstructing a tree from single-view sensor data is a challenging and open problem because many possible 3D trees exist that fit the tree's shape observed from a single view. We address this challenge by defining a reconstruction pipeline based on three neural networks. The networks simultaneously mask out trees in input photographs, identify a tree's species, and obtain its 3D radial bounding volume -- our novel 3D representation for botanical trees. Radial bounding volumes (RBV) are used to orchestrate a procedural model primed on learned parameters to grow a tree that matches the main branching structure and the overall shape of the captured tree. While the RBV allows us to reconstruct the main branching structure faithfully, we use the procedural model's morphological constraints to generate realistic branching for the tree crown. This constrains the number of possible tree models for a given photograph of a tree. We show that our method reconstructs various tree species even when the trees are captured in front of complex backgrounds. Moreover, although our neural networks have been trained on synthetic data with data augmentation, we show that our pipeline performs well for real tree photographs. We evaluate the reconstructed geometries with a number of metrics, including leaf area index and maximum radial tree distances.
Author(s)/Presenter(s):
Bosheng Li, Purdue University, United States of America
Jacek Kałużny, University of Poznan, Poland
Jonathan Klein, University of Bonn, Germany
Dominik L. Michels, KAUST, Saudi Arabia
Wojtek Palubicki, University of Poznan, Poland
Bedrich Benes, Purdue University, United States of America
Soren Pirk, Google Research, United States of America
Modeling Flower Pigmentation Patterns
Abstract: Although many simulation models of natural phenomena have been developed to date, little attention has been given to a major contributor to the beauty of nature: the colorful patterns of flowers. We survey typical patterns and propose methods for simulating them inspired by the current understanding of the biology of floral patterning. The patterns are generated directly on geometric models of flowers, using different combinations of key mathematical models of morphogenesis: vascular patterning, positional information, reaction-diffusion, and random pattern generation. The integration of these models makes it possible to capture a wide range of the flower pigmentation patterns observed in nature.
Author(s)/Presenter(s):
Lee Ringham, University of Calgary, Canada
Andrew Owens, University of Calgary, Canada
Mikolaj Cieslak, University of Calgary, Canada
Lawrence Harder, University of Calgary, Canada
Przemyslaw Prusinkiewicz, University of Calgary, Canada
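Of the morphogenesis models listed in the abstract, reaction-diffusion is the easiest to illustrate in isolation. Below is a minimal Gray-Scott reaction-diffusion simulation on a flat periodic grid; it is a generic sketch with assumed parameters, not the authors' on-surface flower implementation. Patterns emerge in the activator field `V`.

```python
import numpy as np

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Minimal Gray-Scott reaction-diffusion on a periodic 2D grid.

    U: substrate concentration, V: activator concentration.
    Parameters are in a regime known to produce spot-like patterns.
    """
    U = np.ones((n, n))
    V = np.zeros((n, n))
    c = n // 2
    # seed a small square of activator in the centre
    V[c-3:c+3, c-3:c+3] = 0.5
    U[c-3:c+3, c-3:c+3] = 0.25

    def lap(A):  # 5-point Laplacian with wrap-around boundaries
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * lap(U) - uvv + f * (1 - U)
        V += Dv * lap(V) + uvv - (f + k) * V
    return U, V

U, V = gray_scott()
```

Mapping `V` through a color ramp yields spotted or striped pigmentation; the paper's contribution is combining such a mechanism with vascular patterning, positional information, and randomness directly on flower geometry.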
Practical Pigment Mixing for Digital Painting
Abstract: There is a significant flaw in today's painting software: the colors do not mix like actual paints. For example, blue and yellow make gray instead of green. This is because the software is built around the RGB representation, which models the mixing of colored lights. Paints, however, get their color from pigments, whose mixing behavior is predicted by the Kubelka-Munk model (K-M). Although it was introduced to computer graphics almost 30 years ago, the K-M model has never been adopted by painting software in practice, as it would require giving up the RGB representation, growing the number of per-pixel channels substantially, and depriving users of painting with arbitrary RGB colors. In this paper, we introduce a practical approach that enables mixing colors with K-M while keeping everything in RGB. We achieve this by establishing a latent color space, where RGB colors are represented as mixtures of primary pigments together with additive residuals. The latents can be manipulated with linear operations, leading to expected, plausible results. We describe the conversion between RGB and our latent representation and show how to implement it efficiently. We prove the viability of our approach on the case of major painting software whose developers integrated our mixing method with minimal effort, making it the first real-world software to provide realistic color mixing.
Author(s)/Presenter(s):
Šárka Sochorová, Czech Technical University in Prague, Faculty of Electrical Engineering; Secret Weapons, Czech Republic
Ondřej Jamriška, Czech Technical University in Prague, Faculty of Electrical Engineering; Secret Weapons, Czech Republic
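The single-constant Kubelka-Munk relations underlying this line of work are standard: reflectance R maps to an absorption-to-scattering ratio K/S = (1 - R)^2 / (2R), mixtures average K/S by concentration, and the inverse map recovers reflectance. The sketch below applies them channel-wise to RGB triples as a rough proxy; this illustrates plain K-M behavior, not the paper's latent-space method, and real K-M operates on spectral reflectance.

```python
import numpy as np

def km_mix(refl_a, refl_b, t=0.5):
    """Single-constant Kubelka-Munk mix of two reflectances.

    refl_a, refl_b: per-channel reflectances in (0, 1].
    t: concentration of paint b. Applied channel-wise to RGB,
    which is only a crude stand-in for spectral reflectance.
    """
    a = np.clip(np.asarray(refl_a, float), 1e-4, 1.0)
    b = np.clip(np.asarray(refl_b, float), 1e-4, 1.0)
    ks_a = (1 - a) ** 2 / (2 * a)          # K/S from reflectance
    ks_b = (1 - b) ** 2 / (2 * b)
    ks = (1 - t) * ks_a + t * ks_b         # mix absorption/scattering ratios
    return 1 + ks - np.sqrt(ks ** 2 + 2 * ks)  # K/S back to reflectance

# Blue-ish and yellow-ish reflectances: K-M pushes the mixture toward
# green, whereas linear RGB averaging of the same inputs gives gray.
blue = [0.05, 0.10, 0.80]
yellow = [0.80, 0.75, 0.05]
mix = km_mix(blue, yellow)
```

Even in this crude form, the green channel of the mixture dominates the red and blue channels, reproducing the blue-plus-yellow-makes-green behavior the abstract highlights.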
TreePartNet: Neural Decomposition of Point Clouds for 3D Tree Reconstruction
Abstract: We present TreePartNet, a neural network aimed at reconstructing tree geometry from point clouds obtained by scanning real trees. Our key idea is to learn a natural neural decomposition exploiting the assumption that a tree comprises locally cylindrical shapes. In particular, reconstruction is a two-step process. First, two networks are used to detect priors from the point clouds: one detects semantic branching points, and the other is trained to learn a cylindrical representation of the branches. In the second step, we apply a neural merging module to reduce the cylindrical representation to a final set of generalized cylinders joined at the branches. We demonstrate reconstruction of realistic tree geometry for a variety of input models and with varying input point quality, e.g., noise, outliers, and incompleteness. We intensively evaluate our approach using data from both synthetic and real trees and compare it with alternative methods.
Author(s)/Presenter(s):
Yanchao Liu, University of Chinese Academy of Sciences, Shenzhen University, China
Jianwei Guo, NLPR, Institute of Automation, Chinese Academy of Sciences, China
Bedrich Benes, Purdue University, United States of America
Oliver Deussen, University of Konstanz, Germany
Xiaopeng Zhang, NLPR, Institute of Automation, Chinese Academy of Sciences, China
Hui Huang, Shenzhen University, China
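The locally-cylindrical assumption behind TreePartNet can be illustrated with a classical, non-neural baseline: fit a cylinder to a local point cluster by taking the principal direction as the axis and the mean point-to-axis distance as the radius. This sketch is a generic illustration of that assumption, not the paper's learned representation; `fit_cylinder` and the synthetic branch are invented for the example.

```python
import numpy as np

def fit_cylinder(points):
    """Crude cylinder fit: axis from PCA, radius as the mean
    distance of the points to that axis.
    Returns (centroid, unit axis direction, radius)."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    Q = P - c
    # principal direction = right singular vector of the largest singular value
    _, _, vt = np.linalg.svd(Q, full_matrices=False)
    axis = vt[0]
    proj = Q @ axis                        # coordinates along the axis
    radial = Q - np.outer(proj, axis)      # components orthogonal to the axis
    r = np.linalg.norm(radial, axis=1).mean()
    return c, axis, r

# Synthetic "branch": points on a cylinder of radius 0.1 along the z-axis.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
z = rng.uniform(0, 1, 500)
pts = np.stack([0.1 * np.cos(theta), 0.1 * np.sin(theta), z], axis=1)
c, axis, r = fit_cylinder(pts)
```

A real scan violates this model at branching points and under noise and occlusion, which is precisely where the paper's learned branching-point detection and neural merging are needed.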