A Dataset for Training and Testing Rendering Methods
Abstract
Thorough physically-based rendering is a computationally taxing and time-consuming process. High-quality rendering requires significant memory and time to calculate the trajectory and color contribution of every ray that goes into a pixel, especially as scenes become more complex and more computation is required for clean renders. Each ray used to calculate the color of a pixel requires a large number of calculations to accurately represent its red-green-blue value, taking into account not only the material of the object the ray hits but also the surrounding objects and lighting. To make this process tractable, Monte Carlo rendering was developed as an algorithm that flexibly and realistically renders an image from a three-dimensional scene. It is now a common rendering method, but when it is pushed for speed it leaves significant flaws in the image, because random sampling is used to decide which rays are traced and to approximate the pixel values. These flaws, which resemble TV static, are called noise. To keep this faster use of the algorithm while continuing to produce high-quality renders, denoising algorithms were developed to take the noisy images produced by Monte Carlo rendering and scrub the noise away, recreating a clean version of the render. To be worthwhile, these denoising algorithms must avoid consuming so much memory and time that they offset the savings of low-sample rendering. This is difficult: the accuracy of a render originally came from careful, thorough calculations for each pixel, so these algorithms must instead fill in the gaps using only the approximations present in the noisy image. As a result, denoising algorithms have evolved to use increasingly elaborate neural networks to overcome the accuracy and clarity problems of earlier methods. These neural networks, though essential to faster and cheaper rendering, require extensive training from currently limited datasets. This research aims to expand the pool of data available for testing denoising algorithms on renders created from three-dimensional scenes, and to evaluate the new data's effectiveness in training current denoising algorithms in conjunction with existing data. The accuracy of the final tests was measured using the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR), both commonly used to objectively evaluate the difference between the reference image and the output of the algorithm. The new data, when used in combination with the existing datasets, was found to improve the results of these algorithms. This supports the idea that larger and, more importantly, more diverse datasets with distinct characteristics are beneficial for building increasingly effective denoising algorithms. This will only become more true as renderers grow more efficient and methods for expressing varied real-world visual phenomena become more accurate. With those improvements, more robust denoising neural networks will be necessary to create professional-quality renders. An extended dataset will allow neural networks to be trained as accurately as possible, so they can quickly and accurately create renders suitable for high-stakes products, such as final frames of films at animation studios.
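As background for the noise discussed above, the following is a minimal, illustrative sketch (not the renderer used in this work) of how a Monte Carlo renderer estimates a pixel's color by averaging randomly sampled rays; `trace_ray` and `generate_camera_ray` are hypothetical stand-ins for a real path tracer. With a low sample count, the averaged estimate varies noticeably from pixel to pixel, which is exactly the static-like noise a denoiser is asked to remove.

```python
import numpy as np

def estimate_pixel(x, y, samples_per_pixel, trace_ray, generate_camera_ray, rng=None):
    """Estimate one pixel's RGB value by averaging randomly jittered ray samples.

    `trace_ray` and `generate_camera_ray` are hypothetical callbacks standing in
    for a full path tracer; with few samples the mean is a noisy estimate.
    """
    rng = rng or np.random.default_rng()
    color = np.zeros(3)
    for _ in range(samples_per_pixel):
        # Jitter the sample position inside the pixel bounds.
        u, v = x + rng.random(), y + rng.random()
        ray = generate_camera_ray(u, v)
        # Each traced ray returns an RGB radiance estimate for that path.
        color += trace_ray(ray)
    # The Monte Carlo estimate is the mean of the per-sample radiance values.
    return color / samples_per_pixel
```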
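The evaluation metrics named in the abstract are standard image-quality measures. A minimal sketch of computing MSE and PSNR between a reference render and a denoised output, assuming both are floating-point arrays with pixel values in [0, 1], might look like:

```python
import numpy as np

def mse(reference, output):
    """Mean squared error between a reference render and a denoised output."""
    reference = np.asarray(reference, dtype=np.float64)
    output = np.asarray(output, dtype=np.float64)
    return np.mean((reference - output) ** 2)

def psnr(reference, output, max_value=1.0):
    """Peak signal-to-noise ratio in decibels; higher means closer to the reference.

    `max_value` is the maximum possible pixel value (1.0 for normalized floats,
    255 for 8-bit images).
    """
    error = mse(reference, output)
    if error == 0:
        return float("inf")  # Images are identical.
    return 10.0 * np.log10((max_value ** 2) / error)
```

Lower MSE and higher PSNR indicate that the denoised output is closer to the reference render.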
Keywords
Physically-Based Rendering, Computer Graphics