18. Sampling and Denoising [Q&A Session]
Date/Time: 06 – 17 December 2021
All presentations are available in the virtual platform on-demand.
Cascaded Sobol' Sampling
Abstract: Rendering quality is largely influenced by the samplers used in Monte Carlo integration. Important factors include sample uniformity (e.g., low discrepancy) in the high-dimensional integration domain, sample uniformity in lower-dimensional projections, and lack of dominant structures that could result in aliasing artifacts. A widely used and successful construction is the Sobol' sequence that guarantees good high-dimensional uniformity and consequently results in faster convergence of quasi-Monte Carlo integration. We show that this sequence exhibits low uniformity and dominant structures in low-dimensional projections. These structures impair quality in the context of rendering, as they precisely occur in the 2-dimensional projections used for sampling light sources, reflectance functions, or the camera lens or sensor. We propose a new cascaded construction, which, despite dropping the sequential aspect of Sobol' samples, produces point sets exhibiting provably perfect dyadic partitioning (and therefore, excellent uniformity) in consecutive 2-dimensional projections, while preserving good high-dimensional uniformity. By optimizing the initialization parameters and performing Owen scrambling at finer levels of binary representations, we further improve on the integration convergence rate of the Sobol' sequence. Our method incurs no overhead compared to generating the Sobol' sequence, is compatible with Owen scrambling, and can be used in rendering applications.
Author(s)/Presenter(s):
Loïs Paulin, Université de Lyon, LIRIS, France
David Coeurjolly, Université de Lyon, CNRS, LIRIS, France
Jean-Claude Iehl, Université de Lyon, CNRS, LIRIS, France
Nicolas Bonneel, Université de Lyon, CNRS, LIRIS, France
Alexander Keller, NVIDIA, Germany
Victor Ostromoukhov, Université Claude Bernard Lyon 1, France
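As context for the construction above, the following sketch shows the two textbook building blocks the abstract relies on: a plain base-2 Sobol' point set and nested (Owen) scrambling of one coordinate. The direction numbers, bit depths, and hash used here are illustrative defaults, not the paper's optimized initialization or its cascaded construction.

```python
import numpy as np

BITS = 32
SCALE = 1.0 / (1 << BITS)

def sobol_2d(n):
    """First n points of a plain 2-D Sobol' sequence (no scrambling).

    Dimension 0 is the van der Corput sequence in base 2; dimension 1 uses
    the standard primitive polynomial x + 1 with initial direction number
    m_1 = 1. These are textbook choices, not the optimized initialization
    parameters of the cascaded construction described above.
    """
    # Direction numbers as 32-bit fixed-point integers.
    v0 = [1 << (BITS - 1 - b) for b in range(BITS)]        # identity generator matrix
    m = [1]
    for j in range(1, BITS):
        m.append((2 * m[j - 1]) ^ m[j - 1])                # m_j = 2 m_{j-1} xor m_{j-1}
    v1 = [m[b] << (BITS - 1 - b) for b in range(BITS)]

    pts = np.empty((n, 2))
    for i in range(n):
        x = y = 0
        k, b = i, 0
        while k:
            if k & 1:                                      # bit b of the index is set
                x ^= v0[b]
                y ^= v1[b]
            k >>= 1
            b += 1
        pts[i] = (x * SCALE, y * SCALE)
    return pts

def owen_scramble(u, seed, bits=16):
    """Nested uniform (Owen) scrambling of a single coordinate in [0, 1).

    Each node of the implicit binary tree gets its own random bit flip,
    derived here from Python's built-in hash of the tree path for brevity.
    """
    xi = int(u * (1 << bits))
    out, path = 0, (seed,)
    for b in range(bits - 1, -1, -1):
        digit = (xi >> b) & 1
        flip = hash(path) & 1                              # one random bit per tree node
        out = (out << 1) | (digit ^ flip)
        path = path + (digit,)                             # descend to the child node
    return out / (1 << bits)
```

For example, scrambling each coordinate of sobol_2d(256) with an independent seed gives a randomized point set while preserving the dyadic stratification the abstract discusses.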
Ensemble Denoising for Monte Carlo Renderings
Abstract: Various denoising methods have been proposed to clean up the noise in Monte Carlo (MC) renderings, each having different advantages, disadvantages, and applicable scenarios. In this paper, we present Ensemble Denoising, an optimization-based technique that combines multiple individual MC denoisers. The combined image is modeled as a per-pixel weighted sum of output images from the individual denoisers. Computation of the optimal weights is formulated as a constrained quadratic programming problem, where we apply a dual-buffer strategy to estimate the overall MSE. We further propose an iterative solver to overcome practical issues involved in the optimization. Besides its desirable theoretical properties, our ensemble denoiser is demonstrated to be effective and robust, outperforming every individual denoiser across dozens of scenes and a range of sampling rates. We also perform a comprehensive analysis of how to select the individual denoisers to combine, providing practical guidance for users.
Author(s)/Presenter(s):
Shaokun Zheng, Tsinghua University, China
Fengshi Zheng, Tsinghua University, China
Kun Xu, Tsinghua University, China
Ling-Qi Yan, University of California Santa Barbara, United States of America
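To make the weighted-combination idea concrete, here is a minimal per-pixel sketch under simplifying assumptions: the error covariance is estimated from a crude cross-buffer residual, and only the sum-to-one equality constraint is enforced, which admits a closed-form solution. The function names, array shapes, and the reference used as an MSE proxy are illustrative, not the paper's estimator or its iterative solver.

```python
import numpy as np

def ensemble_weights(denoised_a, denoised_b, eps=1e-4):
    """Per-pixel ensemble weights for K denoisers (simplified sketch).

    denoised_a, denoised_b: arrays of shape (K, H, W, 3) holding the K
    denoised results of two independently rendered half-sample buffers.
    The residual of each buffer-A result against a crude buffer-B reference
    stands in for the paper's dual-buffer MSE estimate; the actual method
    uses a more careful estimator and an iterative solver that also
    enforces non-negativity, both omitted here.
    """
    K = denoised_a.shape[0]
    ref_b = denoised_b.mean(axis=0)                     # crude reference from buffer B
    resid = denoised_a - ref_b                          # (K, H, W, 3) residuals
    # Per-pixel K x K residual covariance, averaged over color channels.
    C = np.einsum('khwc,jhwc->hwkj', resid, resid) / 3.0
    C += eps * np.eye(K)                                # regularize for invertibility
    b = np.ones(C.shape[:-1] + (1,))                    # right-hand side: all-ones vector
    Cinv_1 = np.linalg.solve(C, b)[..., 0]              # C^{-1} 1 per pixel
    w = Cinv_1 / Cinv_1.sum(axis=-1, keepdims=True)     # w = C^{-1} 1 / (1^T C^{-1} 1)
    return w                                            # (H, W, K), sums to 1 per pixel

def combine(denoised, w):
    """Blend the K denoised images with the per-pixel weights."""
    return np.einsum('khwc,hwk->hwc', denoised, w)
```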
Learning to Cluster for Rendering with Many Lights
Abstract: We present an unbiased online Monte Carlo method for rendering with many lights. Our method adapts both the hierarchical light clustering and the sampling distribution to our collected samples. Designing such a method requires us to make clustering decisions under noisy observations and to ensure that the sampling distribution adapts to our target. Our method is based on two key ideas: a coarse-to-fine clustering scheme that can find good clustering configurations even with noisy samples, and a discrete stochastic successive approximation method that starts from a prior distribution and provably converges to a target distribution. We compare against other state-of-the-art light sampling methods and show better results both numerically and visually.
Author(s)/Presenter(s):
Yu-Chen Wang, National Taiwan University, Taiwan
Yu-Ting Wu, National Taiwan University, Taiwan
Tzu-Mao Li, MIT CSAIL; University of California, San Diego, United States of America
Yung-Yu Chuang, National Taiwan University, Taiwan
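The sketch below illustrates only the distribution-adaptation half of the idea: a discrete sampling distribution over a fixed set of light clusters is nudged from a uniform prior toward the observed target using decaying step sizes. The callback name, step-size schedule, and clamping are assumptions; the coarse-to-fine clustering refinement is not reproduced.

```python
import numpy as np

def adaptive_cluster_sampling(estimate_contribution, n_clusters, n_samples, seed=0):
    """Toy stochastic-approximation update of a light-cluster sampling distribution.

    `estimate_contribution(k)` is assumed to return a noisy, non-negative,
    unbiased estimate of cluster k's contribution at the shading point.
    Starting from a uniform prior, the distribution q is pushed toward the
    observed target with decaying step sizes, in the spirit of the discrete
    stochastic successive approximation described above.
    """
    rng = np.random.default_rng(seed)
    q = np.full(n_clusters, 1.0 / n_clusters)       # prior sampling distribution
    tally = np.zeros(n_clusters)                    # reweighted contribution tallies
    radiance = 0.0
    for t in range(1, n_samples + 1):
        k = rng.choice(n_clusters, p=q)             # importance-sample a cluster
        f = estimate_contribution(k)                # noisy contribution estimate
        radiance += f / (q[k] * n_samples)          # unbiased: f / pdf, averaged
        tally[k] += f / q[k]
        target = tally / tally.sum() if tally.sum() > 0 else q
        alpha = 1.0 / (t + 1)                       # decaying (Robbins-Monro style) step
        q = (1.0 - alpha) * q + alpha * target      # move q toward the noisy target
        q = np.maximum(q, 1e-3 / n_clusters)        # keep every cluster reachable
        q /= q.sum()
    return radiance
```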
Monte Carlo Denoising via Auxiliary Feature Guided Self-Attention
Abstract: While self-attention has been successfully applied in a variety of natural language processing and computer vision tasks, its application in Monte Carlo (MC) image denoising has not yet been well explored. This paper presents a self-attention-based deep learning network for MC denoising, motivated by the observation that self-attention is essentially non-local means filtering in the embedding space, which makes it inherently well suited to the denoising task. In particular, we modify the standard self-attention mechanism to an auxiliary feature guided self-attention that considers the by-products (e.g., auxiliary feature buffers) of the MC rendering process. As a critical prerequisite to fully exploiting the performance of self-attention, we design a multi-scale feature extraction stage, which provides a rich set of raw features for the subsequent self-attention module. Since self-attention incurs high computational complexity, we describe several ways to accelerate it. Ablation experiments validate the necessity and effectiveness of the above design choices. Comparison experiments show that the proposed self-attention based MC denoising method outperforms the current state-of-the-art methods.
Author(s)/Presenter(s):
Jiaqi Yu, South China University of Technology, China
Yongwei Nie, South China University of Technology, China
Chengjiang Long, JD Finance America Corporation, United States of America
Wenju Xu, OPPO US Research Center, InnoPeak Technology Inc, United States of America
Qing Zhang, Sun Yat-sen University, China
Guiqing Li, South China University of Technology, China
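The snippet below sketches the "auxiliary feature guided" flavor of self-attention the abstract describes: queries and keys are projected from G-buffer features while values come from the noisy radiance, so the attention weights reflect feature similarity in the spirit of non-local means. Random projection matrices stand in for the learned layers, and the multi-scale feature extraction and acceleration stages are omitted.

```python
import numpy as np

def feature_guided_attention(radiance, features, d_k=32, seed=0):
    """Single-head self-attention over pixels, guided by auxiliary features.

    radiance: (N, 3) noisy per-pixel radiance for N pixels (e.g., one patch).
    features: (N, F) auxiliary G-buffer features (albedo, normal, depth, ...).
    Queries and keys are projected from the auxiliary features and values
    from the noisy radiance, so attention weights depend on feature
    similarity. The random projections below are stand-ins for the learned
    layers of the actual network.
    """
    rng = np.random.default_rng(seed)
    N, F = features.shape
    W_q = rng.standard_normal((F, d_k)) / np.sqrt(F)
    W_k = rng.standard_normal((F, d_k)) / np.sqrt(F)
    W_v = rng.standard_normal((3, 3)) / np.sqrt(3)

    Q = features @ W_q                     # (N, d_k) queries from features
    K = features @ W_k                     # (N, d_k) keys from features
    V = radiance @ W_v                     # (N, 3)  values from noisy radiance

    logits = Q @ K.T / np.sqrt(d_k)        # (N, N) feature-similarity scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)      # softmax over all pixels in the patch
    return A @ V                           # each output pixel is a weighted blend
```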
Path Graphs: Iterative Path Space Filtering
Abstract: To render higher quality images from the samples generated by path tracing with a very low sample count, we propose a novel path-space filtering approach that processes a fixed collection of paths to refine and improve radiance estimates throughout the scene. Our method operates on a path graph consisting of the union of the traced paths with additional neighbor edges inserted between spatially nearby vertices. The approach refines the initial noisy radiance estimates via an aggregation operator, which effectively treats direct and indirect radiance estimates on neighboring path vertices as independent sampling techniques and combines them using well-chosen weights. We also introduce a propagation operator to forward the refined estimates along the paths to successive bounces. We apply the aggregation and propagation operations to the graph iteratively, progressively refining the radiance estimates, converging to fixed-point radiance estimates with lower variance than the original ones. Our approach is lightweight, in the sense that it can be easily plugged into any standard path tracer and neural final image denoiser. Furthermore, it is independent of scene complexity, as the graph size only depends on image resolution and average path depth. We demonstrate that our technique leads to realistic rendering results starting from as low as 1 path per pixel, even in complex indoor scenes dominated by multi-bounce indirect illumination.
Author(s)/Presenter(s):
Xi Deng, Cornell University, United States of America
Milos Hasan, Adobe Research, United States of America
Zexiang Xu, Adobe Research, United States of America
Nathan Carr, Adobe Research, United States of America
Steve Marschner, Cornell University, United States of America
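As a rough illustration of the aggregation and propagation operators, the toy iteration below blends each path vertex's outgoing-radiance estimate with its spatial neighbors (aggregation) and then carries the refined estimates one bounce toward the camera via per-segment throughputs (propagation), repeating a fixed number of times. The data layout and the uniform neighbor weights are assumptions, not the paper's variance-aware weighting.

```python
import numpy as np

def refine_path_graph(direct, indirect, throughput, prev, neighbors, n_iters=8):
    """Toy fixed-point iteration in the spirit of aggregation and propagation.

    direct[v], indirect[v]: RGB radiance estimates at path vertex v.
    throughput[v]:          BSDF * cosine / pdf factor carrying outgoing radiance
                            at v back to its predecessor prev[v] on the same path
                            (prev[v] == -1 marks camera-adjacent vertices).
    neighbors[v]:           indices of spatially nearby vertices in the path graph.
    The uniform aggregation weights used here are a simplification; the paper
    derives per-neighbor weights by treating neighbor estimates as independent
    sampling techniques and combining them accordingly.
    """
    n_vertices = len(direct)
    outgoing = direct + indirect                        # initial outgoing radiance
    for _ in range(n_iters):
        # Aggregation: blend each vertex's estimate with its spatial neighbors.
        aggregated = np.array([
            np.mean([outgoing[n] for n in [v] + list(neighbors[v])], axis=0)
            for v in range(n_vertices)
        ])
        # Propagation: forward the refined estimates one bounce toward the camera.
        indirect_new = np.zeros_like(indirect)
        for v in range(n_vertices):
            if prev[v] >= 0:
                indirect_new[prev[v]] += throughput[v] * aggregated[v]
        outgoing = direct + indirect_new
    return outgoing
```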