Conversation
Thank you for the pull request, @danielzhong! ✅ We can confirm we have a CLA on file for you.
This is very promising, but the noise is severe. The additional TAA definitely helps, but it has to be configured so aggressively that any camera movement causes ghosting far beyond the line of acceptability, which heavily impacts the user experience. I think this could only work if we reduced the noise to roughly half its current level and made the feature opt-in, and the TAA can't introduce ghosting bad enough to interfere with accurately moving the camera. Realistically, the only way I see that being possible at the moment is a deep learning denoiser. I have seen a couple of experimental ones built on TensorFlow.js in the past, but I don't think any of them went far beyond brief experiments. Maybe @lilleyse or @donmccurdy have some ideas.
The current TAA implementation is not a motion-vector-based TAA. It is essentially simple temporal accumulation that blends the current frame with the previous one, so ghosting is expected with this approach. A full motion-vector-based TAA implementation would likely provide improved visual quality. Update:
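For context, the accumulation described above amounts to an exponential moving average of frames. A minimal sketch (hypothetical helper names, not the PR's actual code):

```javascript
// Frame accumulation without motion vectors: each new frame is blended into
// the running history with a fixed weight. A small `blendWeight` suppresses
// noise but smears moving content, which is the ghosting described above.
function accumulate(history, current, blendWeight) {
  // history, current: flat arrays of color values in [0, 1]
  return history.map((h, i) => h * (1 - blendWeight) + current[i] * blendWeight);
}

// A noisy sample is pulled only slightly toward the new frame:
const blended = accumulate([0.5, 0.5, 0.5], [0.9, 0.1, 0.5], 0.1);
// blended ≈ [0.54, 0.46, 0.5]
```

A motion-vector-based TAA would instead reproject `history` along per-pixel motion before blending, so moving content lines up with its previous position rather than smearing.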
Yeah, I did look closely and saw that you weren't using motion vectors in your TAA implementation, but I didn't mention it because I expected that introducing them would lessen its effectiveness against the noise. If I have time at the end of the day today, I may look closer at this and see if there's something we can do.
Description
(Video attachment: Sandcastle_._.CesiumJS.-.Google.Chrome.2026-02-11.18-10-48.mp4)
This PR experiments with stochastic transparency for Gaussian splats (inspired by Spark) and adds Temporal Anti-Aliasing (TAA) to stabilize the result.
The idea: instead of requiring fully precise transparent compositing for every overlapping splat, stochastic rendering treats alpha as probabilistic coverage. Each fragment uses a hash-based accept/discard test, so the image becomes an approximation that converges visually over space and time.
Why this can be faster:
Issue number and link
https://github.com/iTwin/platform-engineering/issues/368
Testing plan
Please host the LOD GS data, then run the following code in a local build:
Author checklist
- I am listed in CONTRIBUTORS.md
- I have updated CHANGES.md with a short summary of my change