VertexData Cache #30

john-chapman opened this issue Jun 7, 2018 · 2 comments
@john-chapman (Owner)

  • Reduce draw cost for high-order primitives by caching vertex data (+ primitive type).
  • Each cache maps to an ID.
  • Expose this system to the user, e.g. BeginCache(_id), EndCache(), DrawCache(_id) (but with better names); see the API sketch below.
  • Cached vertex positions are transformed by the current draw state as they are copied into the final buffer.
  • Transform other properties of the cached data (size/color) as well.
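
A rough sketch of what this public API could look like, assuming Im3d's Id type for the cache identifier (the function names and signatures are illustrative placeholders, not part of the current Im3d API):

// Hypothetical declarations only -- intended to show the shape of the proposed cache API.
namespace Im3d {

void BeginCache(Id _id);  // subsequent Im3d calls record vertex data into the cache for _id
void EndCache();          // stop recording and store the vertex data + primitive type
void DrawCache(Id _id);   // copy the cached vertex data into the current frame's draw lists

} // namespace Im3d
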
@ChemistAion
I was wondering if there have been any updates or progress on this since it was opened?

@john-chapman (Owner, Author)

I've not really looked into this since the issue was created, but I think it would be quite simple to implement an initial version. Updated thoughts:

The main goal here would be to provide an API allowing the user to avoid the overhead of making lots of Im3d calls. User code would look like this:

if (Im3d::BeginCache("BigScene")) // If false, we skip Im3d calls below and use the cached vertex data directly.
{
    // ... lots of Im3d calls here.

    Im3d::EndCache(); // Cache the vertex data we just created.
}

Internally, cached vertex data would follow the structure of m_vertexData (one list per layer, per primitive type, times two for sorted/unsorted). This way, the cache can be copied straight into m_vertexData so that sorting/layering still works. There would also be an InvalidateCache(id) function. A rough sketch of this storage follows.
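
A minimal sketch of the cache storage, assuming Im3d's VertexData, Id and Mat4 types; the container layout, names and the global map are assumptions for illustration, not the actual Im3d internals:

#include <unordered_map>
#include <vector>
#include "im3d.h"

// Illustrative only: one vertex list per (layer, primitive type, sorted/unsorted),
// mirroring m_vertexData so a cached list can be appended directly to the matching draw list.
struct VertexCache
{
    std::vector<std::vector<Im3d::VertexData>> m_vertexLists;
    Im3d::Mat4 m_transform; // top of the transform stack at EndCache() (see the transform caveat below)
};

// Map from user-supplied cache ID to cached data.
static std::unordered_map<Im3d::Id, VertexCache> g_caches;

// Discard a cache so it gets rebuilt on the next BeginCache(_id).
void InvalidateCache(Im3d::Id _id)
{
    g_caches.erase(_id);
}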

The approach outlined above has a couple of caveats:

Transform modification:

  • Could store the top of the transform stack per cache; if it's different when we go to copy the cached data, we can re-transform the data at that point (see the sketch after this list).
  • That would only support a single global transform modification per cache. Anything more complicated requires invalidating the cached data.
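
A hedged sketch of the re-transform step at copy time, reusing the hypothetical VertexCache from above and assuming the matrix helpers in im3d_math.h (Inverse(), Mat4 * Vec4 operators):

// Illustrative only: re-apply a changed transform when copying cached vertices out.
void CopyCacheVertices(const VertexCache& _cache, const Im3d::Mat4& _currentTransform, std::vector<Im3d::VertexData>& out_)
{
    // Map positions from the cached transform's space into the current transform's space.
    Im3d::Mat4 delta = _currentTransform * Im3d::Inverse(_cache.m_transform);

    for (const auto& list : _cache.m_vertexLists)
    {
        for (Im3d::VertexData v : list) // copy by value; the position is modified below
        {
            // xyz = position, w = size (per Im3d's packed VertexData layout).
            Im3d::Vec4 p = delta * Im3d::Vec4(v.m_positionSize.x, v.m_positionSize.y, v.m_positionSize.z, 1.0f);
            v.m_positionSize = Im3d::Vec4(p.x, p.y, p.z, v.m_positionSize.w);
            out_.push_back(v);
        }
    }
}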

Nested caches:

  • Nested caches won't invalidate properly, as the nested cache data will effectively be duplicated in the enclosing cache.
  • Could potentially be solved by storing a list of nested cache IDs per cache; invalidating a cache would then also invalidate any caches that reference it (see the sketch below).
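
A rough sketch of that dependency tracking, extending the hypothetical VertexCache above with a list of nested cache IDs and replacing the simple InvalidateCache() from the earlier sketch (again, illustrative names only):

#include <algorithm>

// Assumed extra member on VertexCache:
//   std::vector<Im3d::Id> m_nestedCaches; // caches whose data was recorded into this one

void InvalidateCache(Im3d::Id _id)
{
    if (g_caches.erase(_id) == 0)
    {
        return; // nothing cached under this ID; also terminates the recursion
    }

    // Any cache that contains a copy of _id's data is now stale, so invalidate it too.
    std::vector<Im3d::Id> referrers;
    for (const auto& kv : g_caches)
    {
        const auto& nested = kv.second.m_nestedCaches;
        if (std::find(nested.begin(), nested.end(), _id) != nested.end())
        {
            referrers.push_back(kv.first);
        }
    }
    for (Im3d::Id id : referrers)
    {
        InvalidateCache(id); // cascades up through enclosing caches
    }
}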
