Add cli BVH file generation, sample GLTF BVH extension #367
The example gist looks good to me; maybe just a few things there I should streamline in the glTF-Transform TypeScript definitions. My only other comment would be that, to support quantized or normalized attributes and meshopt compression, the code constructing the position BufferAttribute should be mapped with something like this:

```ts
function dequantizeAttribute(attribute: Accessor): Float32Array {
  const srcArray = attribute.getArray()!;
  if (attribute.getComponentSize() >= 4) return srcArray as Float32Array;

  const dstArray = new Float32Array(srcArray.length);
  for (let i = 0, il = attribute.getCount(), el = [] as number[]; i < il; i++) {
    // get/setElement handle quantization automatically, as we swap the src/dst arrays.
    el = attribute.getElement(i, el);
    attribute.setArray(dstArray).setElement(i, el).setArray(srcArray);
  }
  return dstArray;
}
```
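For reference, here is a minimal self-contained sketch (independent of glTF-Transform's `Accessor` API) of the decoding rule that `getElement`/`setElement` apply under the hood for a normalized signed 16-bit attribute. Per the glTF 2.0 specification, a normalized `SHORT` component `c` decodes to `max(c / 32767, -1)`:

```typescript
// Sketch of glTF's normalized-attribute decoding for signed 16-bit data.
// The clamp to -1 is needed because -32768 / 32767 slightly undershoots -1.
function decodeNormalizedShort(src: Int16Array): Float32Array {
  const dst = new Float32Array(src.length);
  for (let i = 0; i < src.length; i++) {
    dst[i] = Math.max(src[i] / 32767, -1);
  }
  return dst;
}

// Example: quantized unit-range components back to floats.
const quantized = new Int16Array([0, 16384, 32767, -32768]);
const floats = decodeNormalizedShort(quantized);
// floats ≈ [0, 0.5, 1, -1]
```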
If you're referring to this line:

```ts
geom.setAttribute('position', new Float32BufferAttribute(primitive.getAttribute("POSITION").getArray(), 3));
```

It might be superfluous at this point. I don't think there should be any reason to transform the position buffer to a float array for BVH construction. And raycasting generally doesn't work with normalized attributes in three.js, so if that's a scenario that could happen it would probably be best to just throw an error in this case. So I think meshopt, interleaved, and quantized attributes should be easily supported.

Regarding gltf-transform, am I correct in understanding that external extension plugins for the cli are not supported at the moment? Thinking through this a bit more, I think relying on the gltf-transform cli directly rather than creating and maintaining a separate one would be best. It looks like donmccurdy/glTF-Transform#85 provides some ideas on how that could work. I imagine that once that works the three-mesh-bvh instructions would be to install
Ouch that's news to me... any quantized or meshopt-compressed glTF models written by gltf-transform will have normalized positions, we should fix something there I think. gltfpack doesn't normalize positions as far as I know.
I think there are two ways we could do this:
Yeah, I've been waiting to see these things come up in real life before raising them. So far I haven't seen issues reported related to raycasting against normalized attributes. Like you said, from what I've seen gltfpack doesn't mark the attributes it quantizes as normalized -- possibly for this kind of reason? Interleaved index buffers are another case that this library doesn't support, but I haven't seen models in the wild, or reports of issues, that use that.
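Since raycasting against normalized attributes isn't supported, a cheap guard before BVH construction could fail fast instead of silently producing wrong results. A minimal sketch (the function name and the minimal attribute shape here are hypothetical, not part of the three-mesh-bvh API):

```typescript
// Hypothetical pre-flight check before building a BVH: reject normalized
// position attributes, since raycasting assumes plain float positions.
interface PositionAttribute {
  normalized: boolean;
  itemSize: number;
  array: ArrayLike<number>;
}

function assertBVHCompatible(position: PositionAttribute): void {
  if (position.normalized) {
    throw new Error(
      'normalized position attributes are not supported; ' +
      'dequantize the geometry before computing the BVH.'
    );
  }
  if (position.itemSize !== 3) {
    throw new Error('position attribute must have itemSize 3.');
  }
}
```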
I'm personally not a huge fan of global bashrc-style files or magic home directories, if only because it can then be difficult to get things set up consistently in other environments or to understand why something is working a certain way. I feel the same about environment variables -- they just feel a bit opaque and untraceable. But I know I may be in the minority on that. It's very possible I've just shot myself in the foot too many times 😁. Have you considered something like a special "plugin" command that takes a node-resolvable path to a file that exports a plugin with options:
Or if the plugin were just specified in a local file:
I'm not familiar enough with writing gltf-transform plugins / extensions, though, to say how the options should be passed into the custom command programmatically.
My reason for doing normalized positions in gltf-transform was partly aesthetic – if you extract a quantized mesh from a glTF scene (removing its scaling factors) it fits nicely in a unit sphere centered at the origin, as opposed to having some arbitrarily huge scale. I could change that if it turned out to be a problem, but I haven't gotten reports of problems yet either.
glTF doesn't allow interleaved index buffers (end of Section 3.6.1.1) – I'm not sure if WebGL does, but this sounds like a totally reasonable thing to skip or throw an error on.
Yeah, I've been hoping there was some convention (opaque or otherwise) among npm packages that support plugin ecosystems. Those reasons are exactly why I'm not keen to invent my own. 😅 Another idea might be to check
Could you say more about what you'd want this CLI to do? Correct me if I'm wrong, but I think there are two components here. First, we want to be able to load a GLB, compute the BVH, and write it back to the file. Second, we want any further processing on the file with gltf-transform to preserve that BVH data. So we want to register a new command (e.g. `bvh`). We could do that with a global `--plugin` flag:

```
# Print extended usage (including `bvh` command).
gltf-transform --help --plugin three-mesh-bvh

# Compute BVH.
gltf-transform bvh input.glb output.glb --split-strategy SAH --plugin three-mesh-bvh

# Compress.
gltf-transform meshopt output.glb output.glb --level medium --plugin three-mesh-bvh
```
Okay, I see what you're saying now -- one of the purposes of having the BVH plugin always available is to ensure that even after geometry transformations the BVH extension data is kept up to date if it's there. I suppose in that case the BVH construction parameters will have to be stored in the extension so it can be rerun when needed, but that's probably good practice anyway.

I think in my mind I always expected it to be the user's responsibility to run the BVH construction step once every other mesh operation has been finished, in part because it can be a particularly long-running operation. If you generate a BVH first and then run center, quantize, weld, etc., then you'll incur the potentially lengthy operation on each subsequent command. And without the ability to tell whether the geometry has changed between runs, it will always have to rerun regardless of the operation. That's probably a problem specific to something like this BVH plugin, though, and is maybe just something to make sure the user is aware of.

Right now, what happens if an unknown extension is encountered? Is it stripped? Or is it retained in its current state? I.e., if a glTF has a BVH generated in an environment that is configured with the BVH plugin and is then processed in an environment without it, what will happen? Will a warning be logged if an unknown extension is encountered?
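As a rough sketch of what storing the construction parameters in the extension might look like (the extension name and field layout here are hypothetical, loosely mirroring three-mesh-bvh's `MeshBVH` options such as `strategy`, `maxDepth`, and `maxLeafTris`):

```json
{
  "extensions": {
    "EXT_mesh_bvh": {
      "strategy": "SAH",
      "maxDepth": 40,
      "maxLeafTris": 10,
      "bufferView": 0
    }
  }
}
```

Storing the parameters alongside the serialized tree would let a plugin both detect whether a BVH is present and regenerate it with the same settings after a geometry-modifying command.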
I think requiring a
I think it's fair to expect that, yeah. I've been keeping the representation of an extension's data separate from the logic that constructs the data (see the …).

I am curious what happens if you do Draco or Meshopt compression after computing the BVH? Either of those could change the vertex order, and Draco could change the vertex count – does that invalidate the BVH? Could be a tricky edge case if so. Currently, I recommend that users do Meshopt or Draco compression last when using the CLI. Otherwise you're decompressing and recompressing, and that's slow and lossy. Scripting the pipeline (as opposed to using the CLI) doesn't run into this problem because nothing gets written to disk until the end.
Prints a warning and skips it.
We could start with
Right. If I'm creating my own extension for the BVH, though, I can dictate that a proper format requires storing certain settings that three-mesh-bvh needs for tracking the BVH's qualities and for reconstruction. I don't imagine other extensions cache otherwise-derivable data the way we're talking about here, which is what seems to make this case a bit unique.
This is trickier. My initial reaction is to throw an error if an operation or extension that's incompatible is present, though this does mean having to know the effect of any given operation and extension. The "reorder" command would cause a similar conflict. #294 would afford the ability to store a separate indirect buffer that gets sorted for the BVH representation rather than the index buffer. This would let the index buffer remain in some arbitrary order if needed. It requires more memory but would allow for the BVH (and extension) to be compatible with a sorted index for render order benefits, compression, or other user needs.
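To illustrate the indirect-buffer idea, here is a minimal self-contained sketch (the type and function names are hypothetical, not three-mesh-bvh's actual internals): the BVH sorts an indirect array of triangle ids instead of the index buffer, so each leaf can still reference a contiguous range while the index buffer keeps its original order:

```typescript
// Sketch of an indirect buffer: leaves reference [offset, count] ranges
// of `indirect` (BVH-sorted triangle ids), while the index buffer is
// left untouched for render order, compression, or other user needs.
interface LeafNode {
  offset: number;
  count: number;
}

function trianglesForLeaf(
  leaf: LeafNode,
  indirect: Uint32Array, // BVH-sorted triangle ids
  index: Uint32Array     // original, unmodified index buffer
): Uint32Array {
  const out = new Uint32Array(leaf.count * 3);
  for (let i = 0; i < leaf.count; i++) {
    const tri = indirect[leaf.offset + i];
    out[i * 3 + 0] = index[tri * 3 + 0];
    out[i * 3 + 1] = index[tri * 3 + 1];
    out[i * 3 + 2] = index[tri * 3 + 2];
  }
  return out;
}

// Example: three triangles; the BVH visits triangle 2 first, then 0, then 1.
const index = new Uint32Array([0, 1, 2, 3, 4, 5, 6, 7, 8]);
const indirect = new Uint32Array([2, 0, 1]);
const tris = trianglesForLeaf({ offset: 0, count: 2 }, indirect, index);
// tris → [6, 7, 8, 0, 1, 2]
```

The extra `indirect` array is the memory cost mentioned above; the payoff is that reordering commands and the BVH extension no longer fight over the index buffer.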
By "skips" do you mean it's removed from the file?
This sounds good to me!
To confirm –

Even the second could be a challenge with Draco. We'd have to compress the data first, decompress it in memory, compute the BVH from that, and write the BVH back into the file with the original compressed data. I don't really have a solution for that in glTF-Transform at the moment. Meshopt is more forgiving; in a script this is totally fine as long as the

```ts
import { NodeIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS, MeshoptCompression } from '@gltf-transform/extensions';
import { reorder, quantize } from '@gltf-transform/functions';
import { MeshBVH, computeBVH } from 'three-mesh-bvh';
import { MeshoptDecoder, MeshoptEncoder } from 'meshoptimizer';

await MeshoptDecoder.ready;
await MeshoptEncoder.ready;

const io = new NodeIO()
  .registerExtensions([...KHRONOS_EXTENSIONS, MeshBVH])
  .registerDependencies({ 'meshopt.encoder': MeshoptEncoder });

const document = io.read('input.glb');

document.createExtension(MeshoptCompression)
  .setRequired(true)
  .setEncoderOptions({ method: MeshoptCompression.EncoderMethod.QUANTIZE });

await document.transform(
  reorder({ encoder: MeshoptEncoder }),
  quantize(),
  computeBVH(),
);

io.write('output.glb', document);
```

For CLI use I think we'd need to add a `--no-reorder` flag:

```
gltf-transform reorder input.glb tmp.glb
gltf-transform quantize tmp.glb tmp.glb
gltf-transform bvh tmp.glb tmp.glb --plugin three-mesh-bvh
gltf-transform meshopt tmp.glb output.glb --plugin three-mesh-bvh --no-reorder
```

Could make this easier if the CLI had an interactive mode that let you select all the processing you need and do it in one pass, but maybe later. 😅
Ah yeah, sorry, it's omitted. Most extensions contain references to things (binary data, etc.), and glTF-Transform can't preserve the extension if it doesn't know what those references are.
Correct. It normally reorders the index buffer to lay vertices out as they relate to BVH leaf nodes. But with the indirect buffer the indices would remain the same, though of course they still need to be in a known order so the leaf nodes can point to a range of triangles.

I think, at least as a first pass, it's okay if the user is responsible for making sure that the commands are run with compatible settings, which ultimately means being aware of the impact of different processes on the model. It would be possible to relatively quickly "validate" the BVH structure as a final pass to ensure nothing broke it between generation and running other commands, so the user could be informed, but it sounds like that would still be difficult with Draco.
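One cheap form of the validation pass mentioned above could check that every vertex referenced by a leaf still falls inside that leaf's stored bounds. A minimal sketch under assumed data shapes (the `Leaf` layout and function name are hypothetical, not three-mesh-bvh's serialized format):

```typescript
// Sketch of a post-pass BVH sanity check: if any referenced vertex has
// escaped its leaf's AABB, a later command likely reordered or
// requantized the geometry and the BVH should be rebuilt.
interface Leaf {
  min: number[];   // AABB min, xyz
  max: number[];   // AABB max, xyz
  tris: number[];  // triangle ids contained in this leaf
}

function leavesStillValid(
  positions: Float32Array, // xyz triples
  index: Uint32Array,
  leaves: Leaf[],
  epsilon = 1e-6
): boolean {
  for (const leaf of leaves) {
    for (const tri of leaf.tris) {
      for (let c = 0; c < 3; c++) {
        const vi = index[tri * 3 + c];
        for (let axis = 0; axis < 3; axis++) {
          const v = positions[vi * 3 + axis];
          if (v < leaf.min[axis] - epsilon || v > leaf.max[axis] + epsilon) {
            return false;
          }
        }
      }
    }
  }
  return true;
}
```

This catches vertex reordering and large quantization shifts, but as noted it wouldn't help with Draco, where the vertex count itself can change.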
This makes sense to me. At least then the user is informed and will know that a plugin may have to be used to properly include the extension.
For BVHs that take a long time to generate it could be best to generate them offline, download them, and apply them to the appropriate geometry.
Use Cases: