20. Light Interactions and Differentiable Rendering [Q&A Session]
- Full Access
- Onsite Student Access
- Virtual Full Access
Date/Time: 06 – 17 December 2021
All presentations are available in the virtual platform on-demand.
Beyond Mie Theory: Systematic Computation of Bulk Scattering Parameters based on Microphysical Wave Optics
Abstract: Light scattering in participating media and translucent materials is typically modeled using radiative transfer theory. Under the assumption of independent scattering between particles, it utilizes several bulk scattering parameters to statistically characterize light-matter interactions at the macroscale. To calculate these parameters based on microscale material properties, the Lorenz-Mie theory has been considered the gold standard. In this paper, we present a generalized framework capable of systematically and rigorously computing bulk scattering parameters beyond the far-field assumption of the Lorenz-Mie theory. Our technique accounts for microscale wave-optics effects such as diffraction and interference, as well as interactions between nearby particles. Our framework is general, can be plugged into any renderer supporting Lorenz-Mie scattering, and allows arbitrary packing rates and particle correlations; we demonstrate this generality by computing bulk scattering parameters for a wide range of materials, including anisotropic and correlated media.
Author(s)/Presenter(s):
Yu Guo, University of California, Irvine, United States of America
Adrian Jarabo, Universidad de Zaragoza, Spain
Shuang Zhao, University of California, Irvine, United States of America
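For context, the classical far-field pipeline that this paper generalizes converts per-particle Lorenz-Mie cross sections into bulk coefficients under the independent-scattering assumption. A minimal sketch of that baseline follows; the function name, the example values, and the monodisperse setup are illustrative, not the authors' code, and the cross sections would in practice come from a Mie solver.

```python
import numpy as np

def bulk_parameters(number_density, c_ext, c_sca):
    """Independent-scattering (Lorenz-Mie) bulk parameters for a
    polydisperse medium, one entry per particle-size bin.

    number_density -- particles per m^3 in each bin
    c_ext, c_sca   -- per-particle extinction / scattering cross sections [m^2]
    """
    sigma_t = np.sum(number_density * c_ext)    # extinction coefficient [1/m]
    sigma_s = np.sum(number_density * c_sca)    # scattering coefficient [1/m]
    return sigma_t, sigma_s, sigma_s / sigma_t  # ..., single-scattering albedo

# Hypothetical monodisperse example; the cross sections are placeholders.
sigma_t, sigma_s, albedo = bulk_parameters(
    number_density=np.array([1e10]),
    c_ext=np.array([1.6e-10]),
    c_sca=np.array([1.5e-10]))
print(sigma_t, albedo)   # 1.6 [1/m], ~0.94
```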
Differentiable Time-Gated Rendering
Abstract: The continued advancements of time-of-flight imaging devices have enabled new imaging pipelines with numerous applications. Consequently, several forward rendering techniques capable of accurately and efficiently simulating these devices have been introduced. However, general-purpose differentiable rendering techniques that estimate derivatives of time-of-flight images are still lacking. In this paper, we introduce a new theory of differentiable time-gated rendering that enjoys the generality of differentiating with respect to arbitrary scene parameters. Our theory also allows the design of advanced Monte Carlo estimators capable of handling cameras with near-delta or discontinuous time gates. We validate our theory by comparing derivatives generated with our technique against finite differences. Further, we demonstrate the usefulness of our technique using a few proof-of-concept inverse-rendering examples that simulate several time-of-flight imaging scenarios.
Author(s)/Presenter(s):
Lifan Wu, NVIDIA, United States of America
Guangyan Cai, University of California, Irvine, United States of America
Ravi Ramamoorthi, University of California, San Diego, United States of America
Shuang Zhao, University of California, Irvine, United States of America
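The finite-difference check mentioned in the abstract is easy to illustrate on a toy time-gated measurement. The sketch below is purely illustrative (the single-path "measurement", the Gaussian gate, and all constants are stand-ins, not the authors' estimators): it differentiates a gated path contribution with respect to a scene parameter and compares against central differences.

```python
import numpy as np

def finite_difference_check(f, grad_f, theta, eps=1e-4):
    """Compare an analytic derivative against central finite differences."""
    return grad_f(theta), (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

# Toy "time-gated measurement": a single path whose length (hence time of
# flight t = d / c) depends on a scene parameter theta, weighted by a smooth
# Gaussian time gate. All constants and functions here are stand-ins.
C = 3e8                      # speed of light [m/s]
T0, SIGMA = 2.0e-8, 2.0e-9   # gate center and width [s]

def gate(t):
    return np.exp(-0.5 * ((t - T0) / SIGMA) ** 2)

def measurement(theta):      # path length d(theta) = 5 + theta meters
    return gate((5.0 + theta) / C)

def measurement_grad(theta): # chain rule through the gate and path length
    t = (5.0 + theta) / C
    return gate(t) * (-(t - T0) / SIGMA**2) / C

print(finite_difference_check(measurement, measurement_grad, theta=1.5))
```

As SIGMA shrinks toward a near-delta gate, this derivative develops sharp spikes, which is exactly the regime the paper's specialized Monte Carlo estimators are designed to handle.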
Differentiable Transient Rendering
Abstract: Recent differentiable rendering techniques have become key tools for tackling many inverse problems in graphics and vision. Existing models, however, assume steady-state light transport, i.e., an infinite speed of light. While this is a safe assumption for many applications, recent advances in ultrafast imaging leverage the wealth of information that can be extracted from the exact time of flight of light. In this context, physically-based transient rendering allows us to efficiently simulate and analyze light transport while taking into account that the speed of light is in fact finite. In this paper, we introduce a novel differentiable transient rendering framework to help bring the potential of differentiable approaches into the transient regime. To differentiate the transient path integral, we need to take into account that scattering events at path vertices are no longer independent; instead, tracking the time of flight of light requires treating such scattering events jointly as a multidimensional, evolving manifold. We thus turn to the generalized transport theorem and introduce a novel correlated importance term, which links the time-integrated contribution of a path to its light throughput and allows us to handle discontinuities in the light and sensor functions. Lastly, we present results in several challenging scenarios where the time of flight of light plays an important role, such as optimizing indices of refraction, non-line-of-sight tracking with nonplanar relay walls, and non-line-of-sight tracking around two corners.
Author(s)/Presenter(s):
Shinyoung Yi, Korea Advanced Institute of Science and Technology (KAIST), South Korea
Donggun Kim, Korea Advanced Institute of Science and Technology (KAIST), South Korea
Kiseok Choi, Korea Advanced Institute of Science and Technology (KAIST), South Korea
Adrian Jarabo, Universidad de Zaragoza, Spain
Diego Gutierrez, Universidad de Zaragoza, Spain
Min H. Kim, Korea Advanced Institute of Science and Technology (KAIST), South Korea
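For readers unfamiliar with the transient setting, the forward problem that this paper differentiates amounts to binning Monte Carlo path contributions by their time of flight instead of summing them into a single steady-state value. Below is a minimal sketch of that forward step only; the function and its inputs are illustrative, not the authors' framework, which additionally introduces the correlated importance term described above.

```python
import numpy as np

def transient_histogram(throughputs, path_lengths, n_bins=64,
                        t_max=1e-7, c=3e8):
    """Bin Monte Carlo path contributions by time of flight; the
    steady-state pixel value is the time integral of this histogram.
    Function name and inputs are illustrative, not the authors' API."""
    times = path_lengths / c                           # time of flight [s]
    bins = np.clip((times / t_max * n_bins).astype(int), 0, n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, throughputs)                 # scatter-add per bin
    return hist

# Toy usage with 1000 random paths.
rng = np.random.default_rng(0)
h = transient_histogram(rng.random(1000), rng.uniform(1.0, 25.0, 1000))
print(h.sum())   # equals the steady-state (time-integrated) value
```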
Generative Modelling of BRDF Textures from Flash Images
Abstract: We learn a latent space for easy capture, consistent interpolation, and efficient reproduction of visual material appearance. When a user provides a photo of a stationary natural material captured under flash illumination, it is first converted into a latent material code. In the second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters (diffuse albedo, normals, roughness, specular albedo) that subsequently allows rendering in complex scenes and illuminations, matching the appearance of the input picture. Technically, we jointly embed all flash images into a latent space using a convolutional encoder and -- conditioned on these latent codes -- convert random spatial fields into fields of BRDF parameters using a convolutional neural network (CNN). We condition these BRDF parameters to match the visual characteristics (statistics and spectra of visual features) of the input under matching light. A user study compares our approach favorably to previous work, including methods with access to BRDF supervision.
Author(s)/Presenter(s):
Philipp Henzler, University College London, United Kingdom
Valentin Deschaintre, Adobe Research, Imperial College London, United Kingdom
Niloy J. Mitra, University College London (UCL), Adobe Research, United Kingdom
Tobias Ritschel, University College London (UCL), United Kingdom
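A minimal PyTorch sketch of the two-stage pipeline described above: a convolutional encoder produces the latent material code, and a fully convolutional generator, conditioned on that code, turns a random spatial field into per-pixel BRDF parameters. All module names, channel counts, and layer choices are illustrative stand-ins, not the authors' network.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Flash photo (B,3,H,W) -> latent material code (B,z_dim)."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim))
    def forward(self, photo):
        return self.net(photo)

class BRDFGenerator(nn.Module):
    """Random field + material code -> 9-channel BRDF parameter maps
    (assumed split: 3 diffuse + 2 normal + 1 roughness + 3 specular)."""
    def __init__(self, z_dim=64, noise_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 9, 3, padding=1))
    def forward(self, noise, z):
        # Broadcast the code over the spatial field, then decode jointly.
        zmap = z[:, :, None, None].expand(-1, -1, *noise.shape[2:])
        return self.net(torch.cat([noise, zmap], dim=1))

enc, gen = Encoder(), BRDFGenerator()
z = enc(torch.rand(1, 3, 128, 128))          # latent material code
brdf = gen(torch.rand(1, 8, 256, 256), z)    # BRDF parameter field
print(brdf.shape)                            # torch.Size([1, 9, 256, 256])
```

Because the generator is fully convolutional, it can be evaluated on noise fields of any size, which is what makes an "infinite" spatial field of BRDF parameters possible.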
Neural Radiosity
Abstract: We introduce Neural Radiosity, an algorithm to solve the rendering equation by minimizing the norm of its residual, similar to traditional radiosity techniques. Traditional basis functions used in radiosity techniques, such as piecewise polynomials or meshless basis functions, are typically limited to representing isotropic scattering from diffuse surfaces. Instead, we propose to leverage neural networks to represent the full four-dimensional radiance distribution, and we optimize the network parameters directly to minimize the norm of the residual. Our approach decouples solving the rendering equation from rendering (perspective) images, as in traditional radiosity techniques, and allows us to efficiently synthesize arbitrary views of a scene. In addition, we propose a network architecture using learnable geometric features that improves the convergence of our solver compared to previous techniques. Our approach leads to an algorithm that is simple to implement and effective on a variety of scenes with non-diffuse surfaces.
Author(s)/Presenter(s):
Saeed Hadadan, University of Maryland, College Park, United States of America
Shuhong Chen, University of Maryland, College Park, United States of America
Matthias Zwicker, University of Maryland, College Park, United States of America
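The residual being minimized is the gap between the two sides of the rendering equation, L(x, w_o) = E(x, w_o) + integral of f(x, w_i, w_o) L_i(x, w_i) cos(theta_i) dw_i, with the network standing in for L on both sides. A minimal PyTorch sketch of such a loss follows; the scene-query callbacks (emission, brdf_cos, trace) and the uniform sphere sampling are hypothetical placeholders, not the authors' implementation.

```python
import torch

def residual_loss(L, x, wo, emission, brdf_cos, trace, n_dirs=16):
    """Monte Carlo estimate of ||L - E - T(L)||^2 over a batch of samples.

    L        -- network mapping (points, directions) -> radiance (B,3)
    x, wo    -- surface points and outgoing directions, each (B,3)
    emission -- callback E(x, wo) -> (B,3)
    brdf_cos -- callback f(x, wi, wo) * cos(theta_i) / pdf(wi) -> (B,3)
    trace    -- callback (x, wi) -> (hit points (B,3), hit mask (B,1))
    """
    lhs = L(x, wo)                               # left-hand side: L itself
    rhs = emission(x, wo)                        # right-hand side: E + T(L)
    for _ in range(n_dirs):
        # Uniform directions on the sphere; the pdf is folded into brdf_cos.
        wi = torch.nn.functional.normalize(torch.randn_like(wo), dim=-1)
        x_hit, mask = trace(x, wi)
        incoming = L(x_hit, -wi) * mask          # radiance arriving along wi
        rhs = rhs + brdf_cos(x, wi, wo) * incoming / n_dirs
    return ((lhs - rhs) ** 2).mean()             # squared residual norm
```

Optimizing this loss over the network parameters solves the rendering equation for the whole scene at once; rendering a view is then just evaluating L along camera rays, which is the decoupling the abstract describes.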
Physical Light-Matter Interaction in Hermite-Gauss Space
Abstract: Our purpose in this paper is two-fold: to introduce a computationally-tractable decomposition of the coherence properties of light, and to present a general-purpose light-matter interaction framework for partially-coherent light. In a recent publication, Steinberg and Yan [2021] introduced a framework that generalizes classical radiometry-based light transport to physical optics. This facilitates a qualitative increase in the scope of optical phenomena that can be rendered; however, with the additional expressibility comes greater analytic difficulty: the coherence of light, which is the core quantity of physical light transport, depends initially on the characteristics of the light source and mutates on interaction with matter and on propagation. Furthermore, current tools that aim to quantify the interaction of partially-coherent light with matter remain limited to specific materials and are computationally intensive. To practically represent a wide class of coherence functions, we decompose their modal content in Hermite-Gauss space and derive a set of light-matter interaction formulae, which quantify how matter scatters light and affects its coherence properties. Then, we model matter as a locally-stationary random process, generalizing the prevalent deterministic and stationary stochastic descriptions. This gives rise to a framework that is able to formulate the interaction of arbitrary partially-coherent light with a wide class of matter. Indeed, we show that our formalism unifies several state-of-the-art scattering and diffraction formulae into one cohesive theory. These formulae include the sourcing of partially-coherent light, scattering by rough surfaces and microgeometry, diffraction by gratings, and interference by layered structures.
Author(s)/Presenter(s):
Shlomi Steinberg, University of California Santa Barbara, United States of America
Ling-Qi Yan, University of California Santa Barbara, United States of America
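The Hermite-Gauss decomposition at the heart of the paper projects a function onto the orthonormal basis psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)). The 1D sketch below only illustrates that modal projection and reconstruction (the paper works with higher-dimensional coherence functions); the example function and the number of modes are arbitrary.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hg_mode(n, x):
    """Orthonormal Hermite-Gauss function psi_n evaluated at x."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                                    # select H_n
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2.0) / norm

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = np.exp(-((x - 0.5) ** 2))                          # example function

n_modes = 16                                           # arbitrary truncation
coeffs = [np.sum(f * hg_mode(n, x)) * dx for n in range(n_modes)]
f_rec = sum(c * hg_mode(n, x) for n, c in enumerate(coeffs))

print(np.max(np.abs(f - f_rec)))                       # small truncation error
```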