bob1029 · a day ago
> With a small light source even a small change in position on the surface has big effects on the light’s visibility – it quickly becomes fully visible or fully occluded. On the other hand, with a big light source that transition is much smoother – the distance on the floor surface between a completely exposed and completely invisible light source is much larger.

This part of the demo illustrates the point-vs-area-light issue really well. In designing practical 3D scenes and selecting tools, we would often prefer 2D area or 3D volumetric lights over point lights. Difficult problems like hard shadows and hotspots in reflection probes are resolved almost for free if we can afford these options. Unfortunately, in many realtime scenarios you cannot get high-quality area or volumetric lighting without resorting to baked lightmaps (static objects only; long iteration delays) or nasty things like temporal antialiasing.
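A minimal sketch of the geometry behind the quoted observation (my own illustration, not from the demo; function name and numbers are made up): by similar triangles, the width of the soft-shadow transition region on the floor scales linearly with the size of the light, which is why a point light collapses to a hard edge.

```python
# Assumes a simple planar occluder between an area light and a floor, with
# distances measured along the light direction in consistent (arbitrary) units.

def penumbra_width(light_size: float, d_light_to_occluder: float,
                   d_occluder_to_floor: float) -> float:
    """Width of the partial-visibility (penumbra) band cast on the floor.

    A point light (light_size == 0) gives width 0, i.e. a hard shadow edge;
    a larger light widens the region where it is only partially visible.
    """
    return light_size * d_occluder_to_floor / d_light_to_occluder

# A light 10x wider produces a penumbra 10x wider for the same geometry.
print(penumbra_width(0.1, 2.0, 1.0))  # 0.05 -> nearly hard edge
print(penumbra_width(1.0, 2.0, 1.0))  # 0.5  -> visibly soft transition
```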
chaboud · 11 hours ago
Having come from graphics in the '90s, I find that practical high-performance answers typically involve fakery in both primary surface shading and shadow calculation. I've pulled tricks like "object-pre-pufficiation" (low-frequency model manifold encapsulation, then following the same bones for deformation) mixed with normal recording in shadow layers (for realtime work on old mobile hardware), but these days so much can be done with sampling and proper ray tracing that the old tricks are more novelty than necessity. It still pays to fake it, though.
cubefox · a day ago
There is a solution called Radiance Cascades [1] which doesn't require a denoiser for rendering real-time shadows from volumetric lights. Unfortunately the approach is relatively slow, so denoising-based solutions are still more efficient (though also expensive) in terms of the quality/performance tradeoff. One issue with current ReSTIR path tracing is that the algorithm relies on white (random) noise, which contains low-frequency (large-scale) energy and therefore produces blotchy, boiling artifacts at low sample counts. Ideally an algorithm would use some form of spatio-temporal blue noise with exponential decay, so that it only takes evenly distributed, high-frequency samples. But that's still an open research problem.
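A hedged toy comparison of why sample distribution matters at low sample counts (this is not ReSTIR or blue noise itself; the occluder shape, sample counts, and function names are my own, and a stratified jittered grid is used only as a stand-in that shares blue noise's lack of low-frequency clumping):

```python
# Estimate the visible fraction of a unit-square area light partially blocked
# by a disk-shaped occluder, using (a) white-noise samples and (b) stratified
# jittered samples, then compare the estimator's standard deviation.
import numpy as np

rng = np.random.default_rng(0)

def visible(u, v):
    # Toy occluder: a disk of radius 0.3 centered on the light is blocked.
    return ((u - 0.5) ** 2 + (v - 0.5) ** 2 > 0.3 ** 2).astype(float)

def estimate_white(n):
    u, v = rng.random(n), rng.random(n)
    return visible(u, v).mean()

def estimate_stratified(n):
    # n must be a perfect square; one jittered sample per grid cell.
    k = int(np.sqrt(n))
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    u = (i.ravel() + rng.random(k * k)) / k
    v = (j.ravel() + rng.random(k * k)) / k
    return visible(u, v).mean()

trials = 2000
n = 16  # a "low sample count", as in real-time path tracing
err_white = np.std([estimate_white(n) for _ in range(trials)])
err_strat = np.std([estimate_stratified(n) for _ in range(trials)])
print(f"white noise std:       {err_white:.4f}")
print(f"stratified/jitter std: {err_strat:.4f}")  # markedly lower
```

The white-noise estimator's error shows up in an image as the large-scale blotches mentioned above, whereas well-distributed samples push the remaining error into high frequencies that are much easier to filter temporally.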