Currently, all particle components / emitters are updated (simulated) at the start of the frame, which means even off-screen particles are always simulated.
For the particle system to access a few camera properties, it depends on a hacky mechanism where the forward renderer sets `_activeCamera`. Note that this works as expected for a single camera only.
There is a comment about this in the particle emitter suggesting that, because no camera is available during the first update (before the forward renderer sets `_activeCamera`), an incorrect shader might be compiled initially. This is fixed later when the camera changes.
The emitter calls `material.getShaderVariant()` without any parameters, so we don't have access to camera rendering settings (gamma, tone mapping) and a workaround is used.
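To illustrate the missing-camera problem, here is a minimal sketch of deriving a shader variant key from explicit camera render settings instead of a globally stashed `_activeCamera`. The names (`CameraRenderParams`, `variantKeyFor`) are illustrative assumptions, not engine API:

```javascript
// Hypothetical: camera render settings passed explicitly to variant selection.
class CameraRenderParams {
    constructor(gammaCorrection, toneMapping) {
        this.gammaCorrection = gammaCorrection; // e.g. 'srgb' or 'none'
        this.toneMapping = toneMapping;         // e.g. 'aces' or 'linear'
    }
}

// Build a cache key the material could use to compile the correct shader
// for the camera that is about to render the particles.
function variantKeyFor(params) {
    if (!params) {
        // No camera yet (first update before the renderer runs): fall back
        // to defaults and expect a recompile once the real camera is known.
        return 'gamma=none|tonemap=linear|provisional';
    }
    return `gamma=${params.gammaCorrection}|tonemap=${params.toneMapping}`;
}
```

With explicit parameters, the provisional compile on the first update becomes a visible, testable fallback rather than a hidden side effect of renderer ordering.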
Ideally:
This should work similarly to skinned/morphed meshes and splats, where the forward renderer executes culling first and the expensive update only takes place for visible visuals. This should at least be the case for procedural particles, where the bounds can be estimated.
The user might need an option to keep simulating off-screen emitters, or to disable their simulation entirely.
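The cull-then-update flow described above can be sketched as follows. This is a rough illustration under stated assumptions: the `Emitter` class, the 1D interval test standing in for real frustum culling, and the `simulateOffscreen` flag are all hypothetical, not existing engine API:

```javascript
// Hypothetical emitter with an estimated bound and an opt-in flag to
// keep simulating even when culled.
class Emitter {
    constructor(bounds) {
        this.bounds = bounds;           // estimated extent, e.g. { min, max }
        this.simulateOffscreen = false; // user option: simulate even if culled
        this.updated = false;
    }
    simulate(dt) {
        // expensive per-particle work would go here
        this.updated = true;
    }
}

// Stand-in for frustum culling: 1D interval overlap test.
function isVisible(bounds, cameraBox) {
    return bounds.min <= cameraBox.max && bounds.max >= cameraBox.min;
}

// Culling runs first; the expensive simulation only runs for visible
// emitters (or those explicitly opted in to off-screen simulation).
function updateParticles(emitters, cameraBox, dt) {
    for (const e of emitters) {
        if (e.simulateOffscreen || isVisible(e.bounds, cameraBox)) {
            e.simulate(dt);
        }
    }
}
```

Note the design choice: visibility gating is per-emitter, so an off-screen emitter whose behavior must stay deterministic (e.g. a looping effect the player will return to) can opt in via the flag.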
This issue reminds me a bit of gaussian splats which also have both global (shared) state and per-view state.
In the engine, meshes and materials are assumed to be identical in all camera views (except for the shader pass and matrices), and it would be nice if we had a formal way of creating and updating arbitrary per-view state.
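One possible shape for such a mechanism, sketched under assumptions (the class names and the lazy per-camera map are illustrative, not engine API): shared state lives on the resource, while per-view state is created lazily per camera and refreshed only for cameras that actually render it.

```javascript
// Hypothetical per-view state, e.g. a camera-dependent sort order for splats.
class PerViewState {
    constructor() {
        this.sortedForCamera = null;
    }
}

class SplatLikeResource {
    constructor() {
        this.sharedData = { positions: [] };  // global (shared) state
        this._viewStates = new Map();         // camera -> PerViewState
    }

    // Lazily create per-view state the first time a camera renders this.
    viewState(camera) {
        let state = this._viewStates.get(camera);
        if (!state) {
            state = new PerViewState();
            this._viewStates.set(camera, state);
        }
        return state;
    }

    // Called once per camera per frame, only for cameras that see this.
    updateForView(camera) {
        // e.g. depth-sort splats for this camera's viewpoint
        this.viewState(camera).sortedForCamera = camera;
    }
}
```

Keying the map on the camera object itself keeps the shared data single-instanced while letting each view carry its own derived state, which is the split both splats and the particle camera-settings problem seem to want.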