Ongoing design goals #6
Oh, kind of "attribute forwarding", I can see that being useful! :) In terms of API, it might be a minor issue that by the time the output mesh is
Since we want (2E), I feel that counting only loose edges would be confusing. Let me propose (2F): Add an integer property This is equivalent to stating the number of loose edges, since Without (2F), the host has to overallocate I imagine it could work this way:
In the end, it's not that big of a deal; for me the reasons to have (2F) are mostly:
We could make setting
I tried looking at how Blender's native mesh representation looks, and it's a different story than the edge proposals for Open Mesh Effects so far; it looks like faces have distinct vertices, but shared edges. This makes "face" edges harder to handle, and to have edge attributes, we would need to define the order of edges and which vertices they connect. This makes me wary of (2F) and of bringing edge attributes into the spec at all, until the use case is clearer. Or I might be missing something obvious, I'm not well versed in 3D graphics implementations...
Indeed, we'll have to remember to sort this out.
Added a note in the doc about attribute forwarding: https://github.com/eliemichel/OpenMeshEffect/commit/44c3fc623832c2f347198d1c3f03d8a5a38c7753
Annoying indeed. :/ Maybe we need to check out how other software handles this, like Maya's API (its C++ API is quite good IIRC).
I looked into some of the APIs, here's what I gathered: Maya
3DS Max
Cinema4D
Universal Scene Description format
It seems to me that edge representation is motivated by efficient mesh traversal, not just storing per-edge data. Personally I quite like the current point/vertex/face representation (à la Houdini or USD), I think it's easy to work with and agnostic w.r.t. plugin/host implementation details. Committing to a particular edge representation seems to add complexity and increase impedance mismatch when converting to/from OFX mesh. The Maya model doesn't look bad, but I'd rather just generate faces/vertices and not have to worry about edges and any deduplication that may be required. Regarding edge attributes themselves, it would be handy that "if the effect preserves topology, make it preserve any edge attributes as well" (this can be a host implementation detail), however I don't know of a particular use case for them in an effect which couldn't be solved with vertex/face attributes. (In the special case of loose edges, we have "loose edge attributes" already in the form of face attributes.) As for the other proposals, can we add (2B) and (2C) to the specification? :) I'm preparing a patch for the Blender host to fix loose edge issues I've encountered with MfxVTK (eg. Blender->OFX doesn't pass loose edges currently) and this would make sense to implement at the same time. I think it's independent of the edge representation issue above (ie. (2B) and (2C) make sense in the current spec as well as in a hypothetical future spec with explicit edges separate from faces).
I have officially added (2B) and (2C) in bef660d
Regarding (2D), it must not be an action. It is an action to check whether an effect with a given valuation of its parameters is the identity, but for deformers we need to know whether the effect is intrinsically a deformer, whichever values its parameters take (deformers for only some parameter valuations are handled by attribute forwarding). So this must actually be a property of an effect. I suggest (edited the initial message accordingly)
Great! :) With (2B) and (2C) in the spec, I will prepare a patch for the Blender host to support them, as well as address any remaining issues with loose edge meshes, if that's okay? (I've tried the latest binaries of OFX Blender and MfxVTK and it seems that it "just works" now, but I want to check in a debug build as well.)

Ah, I see. Yes, the deformation should be declared early, at describe time. (2D)

After these optimization opportunities and edge handling, another big one is (B) attributes. These are some of my thoughts:
I was thinking about making some attribute names conventional. This is inspired by Houdini, which tries to be as agnostic as your dataviz tools are w.r.t. attributes ( In Blender, these are called "layers" but it is quite similar. Layers that have a specific meaning like UVs are accessible twice IIRC: once as agnostic float2 layers, and once flagged as UVs. General purpose layers are not advertised anywhere in the UI but accessible for instance in Python through the BMesh module. So far the Blender implementation does not use them and only looks at specific UVs, because I discovered this only lately actually. I've started working on an add-on to visualize them in the viewport, by the way. In the end I think I like your idea of tagging attributes, so adding a
I've added
Yes, having conventional attributes like
There's a question of what the effect is supposed to "see":
So far, I feel that the design leads more into (A). The use case for (B) is non-deformer effects - for deformers, attribute forwarding may be done automatically by the host (with the plugin optionally adding/overriding non-geometry attributes), but for non-deformers, creating output attributes has to be done by the effect. Some examples:
In these cases, the effect can copy/interpolate attributes (eg. Blender Decimate does it), but it would be tedious/impossible to declare all these attributes explicitly (I believe they should only be declared when they are changing the way the effect works, not when you're essentially passing them through). We probably need the tagging/"attribute subtypes" idea for this to be feasible (eg. for decimation, you can average RGB values but not material IDs).

C++ API sketch, variant (A) with deformer effect (Blender's Cast modifier), all non-geometry attributes of

CastEffect::Describe() {
auto input_mesh_main = AddInput(kOfxMeshMainInput);
auto input_mesh_control = AddInput("ControlObject").Label("Control object");
auto output_mesh = AddInput(kOfxMeshMainOutput);
input_mesh_main.AddVertexAttribute(kOfxMeshAttribVertexWeight, // name
1, // num of components
MfxAttributeType::Float,
MfxAttributeSubtype::Weight) // 0.0 <= x <= 1.0
.Label("Influence");
// possibly "sweetened" even more:
// input_mesh_main.AddVertexWeightAttribute(kOfxMeshAttribVertexWeight).Label("Influence");
// regular parameters
AddParam("Factor", 1.0).Range(0.1, 100.0);
IsDeformer(true); // declare the deformer property
}
CastEffect::Cook() {
auto input_mesh_main = GetInput(kOfxMeshMainInput).GetMesh();
auto input_mesh_control = GetInput("ControlObject").GetMesh();
auto attrib_influence = input_mesh_main.GetVertexAttribute(kOfxMeshAttribVertexWeight);
// ...
}

C++ API sketch, variant (B) with decimate effect, no automatic forwarding by host:

DecimateEffect::Describe() {
AddInput(kOfxMeshMainInput);
AddInput(kOfxMeshMainOutput);
AddParam("TargetReduction", 0.5).Range(0.0, 1.0);
}
DecimateEffect::Cook() {
auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();
// ... decimate ...
// Allocate all attributes for custom passthrough.
// Possibly the following block could be a pre-defined method in the C++ helper,
// eg. output_mesh.AddAttributes(input_mesh)
for (int i = 1; i < input_mesh.GetPointAttributeCount(); i++) {
// ^ to skip geometry attributes, we could define that they are always the first
MfxAttributeProps attribute_props;
auto input_attribute = input_mesh.GetPointAttribute(i); // new functionality
input_attribute.FetchProperties(attribute_props);
attribute_props.isOwned = true;
output_mesh.AddAttribute(attribute_props); // just sugar
}
// repeat for vertex/face/mesh attributes
output_mesh.Allocate(...);
// fill attributes
output_mesh.Release();
input_mesh.Release();
}

Looking at the code, it seems that (B) does not conflict with (A) at all; the effect is allowed to access its defined attributes by names it used to define them ( A drawback to (B) is that it forces the host to convert more stuff into the OFX world, so that the effect can see it. With reusing host buffers, the cost of doing so could be effectively zero, though, and for deformer effects we could omit the extra attributes and leave them on auto-forward. With regards to the Blender implementation, I was looking through it and also saw the useful generic attributes which are nowhere to be seen in the GUI. An add-on to show them would be useful :) The implementation details are not clear to me, but I get the impression that attributes/layers are supposed to be consistent for all meshes of a given object, ie. it's the object that defines the layers (or the number of layer slots)? See the "Generate Data Layers" button of the Data Transfer modifier. It has also happened to me that the CustomData buffer was NULL, even though the layer was supposed to be there based on the layer count.
For instance, an "inflate" effect (moving vertices along their normal) would typically use the standard attribute for normals by default ('N' in Houdini) but still leave the possibility in (advanced) parameters to read from a different attribute. The string field to input the custom attribute name would typically feature a helper dropdown listing the currently available attributes, but would not fail if the attribute does not exist (just use a default value, even though the result may be irrelevant). We could have "rename attribute" effects to handle special cases (Houdini has them as well). Anyway, I like your "slot" idea, it feels user friendly, but this could be up to the (Blender) implementation.
We are again facing an ease-of-use versus efficiency question. To be consistent, I believe we should adopt a similar solution as for constant face counts, for instance: easy use by default, and flags to turn on extra optimizations.
In the "inflate" effect scenario this does not work because the user may change the name of the attribute to query. So I suggest the following (breaking) change: (F1) we can add an action in between Describe and Cook, for which the instance parameters are set up and the available input attributes are listed, and the effect must tell which of the attributes it will need. If this action is not implemented, all attributes are exposed to the cook action.
Yes, we are reaching something very interesting here. I think that saying "interpolate like this all remaining point attributes" is a job for the C++ plugin support API; the base C API only needs to provide access to the list of available attributes and their data. And I guess you think the same since you suggested sketches in C++. ^^ I mean I think there is no need to flag these "pass-through" attributes differently at the C level, since in the end the effect needs to read them in the same way. For your example (A), I think it should be called

std::vector<std::pair<int,float>> DecimateEffect::GetPointInfluencers(int pointIndex, int outputIndex) {
std::vector<std::pair<int,float>> influencers;
influencers.push_back(std::make_pair(42, 0.3f));
influencers.push_back(std::make_pair(43, 0.6f));
influencers.push_back(std::make_pair(58, 0.1f));
return influencers;
}

I don't like the term "influencer" at all, but the idea is to tell that a point (PS: I edited your previous message, but don't worry, it was only to add C++ syntax highlighting to code blocks)
The way I imagine it, it would work even in this case -- there is a layer of abstraction between attributes in the host application and attributes the effect receives (the "slot" idea):

InflateEffect::Describe() {
auto input_mesh = AddInput(kOfxMeshMainInput); // perhaps: AddMainInput();
auto output_mesh = AddInput(kOfxMeshMainOutput); // perhaps: AddMainOutput();
input_mesh.RequestVertexNormalsAttribute();
// ie:
// input_mesh.RequestAttribute(kOfxMeshAttribVertex, // location
// 3, // components
// kOfxMeshAttribVertexNormals, // name
// MfxAttributeType::Float, // type
// MfxAttributeSubtype::Normals); // subtype
// regular parameters
AddParam("Offset", 1.0).Range(-100.0, 100.0);
IsDeformer(true); // declare the deformer property
}
InflateEffect::Cook() {
auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh(); // perhaps: GetMainInput().GetMesh();
auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh(); // perhaps: GetMainOutput().GetMesh();
auto vertex_normals = input_mesh.GetVertexAttribute(kOfxMeshAttribVertexNormals);
// perhaps: input_mesh.GetVertexNormalsAttribute();
// ...
}

Depending on host capabilities, this would show a drop-down menu in the effect UI, to select which vertex vector attribute to pass as

An alternative approach would be to pass native host attribute names as ordinary string parameters and let the effect look them up at cooking time -- while possible in the current API, I would consider this an anti-pattern (it requires (F1) to avoid passing in all attributes; it leaks host attribute names; it makes it difficult to tell if the effect is ready to be cooked, whether the attribute names are mandatory, refer to the correct attribute type, etc.).
Ideally, I would prefer to shield the effects from native host names of things, as that could encourage relying on host implementation details and make plugins less portable between hosts. With more conventional attributes (UVs, normals, vertex colors, vertex weights) and the ability to have custom attributes, I think the API would be expressive enough. The only use-case I can think of for listing not-explicitly-declared attributes is for passing them through, as with non-deformer effects. (In this case, I would let the effect see attribute attachment and type, but erase its name, so that the feature doesn't get used to get around attribute declaration.) In this spirit, I would default to passing the basic implicit attributes (

DeformEffect::Describe() {
auto input_mesh = AddInput(kOfxMeshMainInput);
auto output_mesh = AddInput(kOfxMeshMainOutput);
input_mesh.RequestAllAttributes(); // note: this is a special use-case, it doesn't replace RequestAttribute()
// regular parameters
AddParam("Ratio", 0.5).Range(0.0, 1.0);
}
DeformEffect::Cook() {
auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();
// ... decimate ...
for (int i = 1; i < input_mesh.GetPointAttributeCount(); i++) {
auto input_attribute = input_mesh.GetPointAttribute(i);
// ...
}
// ...
}

In terms of optimization, there is one use-case supported by (F1) and not this, namely excluding some of the basic attributes (
The usual case is optimized by the effect itself only asking for the attributes it needs, and the case of non-deformer passthrough doesn't seem to benefit from (F1): either the effect needs all attributes to transform them itself, or it somehow communicates the transformation to the host, in which case it can skip

There is one thing about attributes (and mesh inputs, too) that we didn't mention so far, which is whether they are mandatory or optional. For parameters, it doesn't really matter since they all must provide defaults, but with attributes/mesh inputs, things are different. I can think of:
If there were a way to declare which inputs are mandatory or not, hosts could present a better UI to the user and skip attempting to cook an effect that is not ready (needlessly converting stuff from the host environment to the OFX environment). Regarding conventional attributes, I would propose to add the following (F2):
None of these would be "requested" by default, unlike
Yes, this looks useful :) I can imagine that any output point/vertex/face attribute would typically be a linear combination of at most a few input elements. For an instancing effect, this could even be just copying from input attributes (though these attributes may not be from the main input mesh).
If I refer to the way I (and others) use attributes in Houdini, besides some standardized ones, the attribute name is a convention with myself (as a creator) because the attribute was generated at a previous stage. I am not assuming that the host will expose its internal naming: available attributes are either standard ones or user-created ones from previous effects in the tree.
Actually I just started to think about an issue we may encounter: let's say we have three sequential effects

Good points about mandatory or optional.
This should be a property of the mechanism to request inputs, related to inputs.
What should be done in this case?
A property of mesh inputs. What is the difference between "optional" and "not mandatory"?
I am really not a fan of forcing the color to be an
But then isn't the subtype enough actually?
Good points :)
I understand that, it's very much how one works with VTK/ParaView ^^ My point was that it would be beneficial if the "function signature" of an effect is decoupled from the "runtime" attribute names, like in the C++ language:

// effect side
struct MyEffectInputMesh { float *OfxMeshAttribVertexUV; };
struct MyEffectOutputMesh { float *OfxMeshAttribVertexUV; };
void my_effect(MyEffectInputMesh mainInput, MyEffectOutputMesh mainOutput) { ... }
// host side
float *uv0, *uv1;
float *my_custom_uv_map;
my_effect( { uv0 } , { uv0 } );
my_effect( { my_custom_uv_map } , { uv1 } );
// everything works despite no variables actually being named "OfxMeshAttribVertexUV"

For input attributes, it has to work that way since the effect has to name them at describe time, before it gets the actual mesh. With output attributes, there is some flexibility since they are not declared at describe time, but only created later at cooking time. The host can presumably keep the attribute names from the effect (if they are custom) or rename them according to the host application's convention (if they use the conventional OFX names). I think this area is where we'll run into host limitations, so I wouldn't specify this too strictly, something like:
I agree, it should be driven by the host, I would propose the following:
It can be easy for the host to know that an attribute will be needed later (when the consumer is another OFX effect, as in your example), but pretty difficult to prove an attribute will not be needed (its consumer may be other non-OFX effect, a shader, or even file export). I imagine a more practical way to skip unnecessary processing would be to have a switch in the host UI where the user can override which attributes should/should not be passed to the effect.
At the API level, I would only provide a way to specify whether a requested attribute is mandatory or not. The rest is up to the host - whether the attribute can be made user-selectable or not, and whether it should have a pre-selected value or not. (In a hypothetical host, all attributes could be user-selectable, even normals, point coordinates etc. - I understand that this is the case in Houdini, but other hosts may be more limited.)
No difference? ^^ Either the (mesh / attribute) is "optional" ie. it's okay to cook without it, or it's "mandatory" and the host should skip cooking until user selects the input (the cooking would presumably fail with
This ties into the larger question of what to do when the attribute location/datatype/components don't match, which seems important for usability, and I'd love to support this as well. As you note, it seems that it's really the subtype that says "what" the attribute is, and the rest could be left up to the effect, with the host doing the conversion behind the scenes. Ie., for attribute
Some combinations may be disallowed by the host, but many things should "just work", though only a few combinations will hit the fast path in the host (match the underlying buffer). I'm not sure how it should behave with output attributes (if you output float color, but the host uses uint8 - should it narrow down float to uint8, or keep the float color in a "hidden" attribute that may be otherwise inaccessible?). With that being said, instead of (F2) I would propose the following (F3):
From the C++ helper point of view, it could be quite ergonomic (not sure about the names, or if it should look like template parameters, but you get the idea):

InflateEffect::Describe() {
auto input_mesh = AddInput(kOfxMeshMainInput);
auto output_mesh = AddInput(kOfxMeshMainOutput);
input_mesh.RequestVertexNormalsAttribute();
input_mesh.RequestFloatVertexColorsAttribute();
}
InflateEffect::Cook() {
auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();
auto input_normals = input_mesh.GetVertexNormalsAttribute();
auto input_colors = input_mesh.GetFloatVertexColorsAttribute();
output_mesh.AddPointColorAttribute();
}
I am working on integrating these ideas into the C API. In the design of the original Image Effect API, In some cases, we can still imagine calling this in the

// Request the kOfxMeshPropTransformMatrix property to be filled by the host
propSetInt(inputProperties, kOfxInputPropRequireTransformMatrix, 0, 1)

where (BTW how does your API handle the case where e.g. several normal attributes are needed, should one call A logical thing to do would be to introduce a new function to the My concern is that this requires the The only drawback to my eyes is that it makes it weird that

edit: My bad, there is the same indirection in Image Effects:

edit2: The more I think about "subtype" the more I think it should have a different name, because e.g. "UV" is not a subtype of "float" (the "type" in our current naming convention) but rather a subtype of "float2" (the combination of type+componentCount). I am thinking about calling it semantics, like in HLSL.
A lot of news with 89d428f

Semantics

It introduces Semantics:

Attribute Request

It also introduces

OfxStatus (*inputRequestAttribute)(OfxMeshInputHandle input,
const char *attachment,
const char *name,
int componentCount,
const char *type,
const char *semantic,
int mandatory);

Breaking changes

As well as two breaking changes:
All of this has been ported to the examples, host code and plugin support, as well as to the Blender implementation in a6445ef (only a first base, it does not actually ensure that requested attributes will be present). The CppPluginSupport implementation of the attribute request is the bare minimum so far, but we'll add nicer utility functions on top of it.

edit: FYI the CppPluginSupport is now documented here: https://openmesheffect.org/Sdk/reference.html I called it "SDK" as it seemed more appropriate.
Exciting news!
From the C++ SDK point of view, it prevents the user from shooting themselves in the foot (calling
Good call. The OpenFX API has a lot of properties IMO, and semantics should be easy to find and use.
I was thinking about having the name as a default parameter, with the option to override it if you want multiple attributes:

RequestVertexNormalsAttribute(const char *name=kOfxMeshAttribNormals, bool mandatory=true) { // the "conventional" name for normals attribute
return RequestAttribute(kOfxMeshAttribVertex, name, 3, kOfxMeshAttribTypeFloat, kOfxMeshAttribSemanticNormal, mandatory); // 3 components for a normal vector
}

C++ API documentation is a good thing to have, I'll keep that in mind and write docstrings if I do something there. I've updated MfxVTK to use the new API: ...and I can report that it works with the latest nightly branch and C++ SDK. As the I have at least three effects that will test the attribute system and host interaction:
In terms of the attribute interface, there isn't yet a way to iterate over attributes using
Correct, I realized this when reading it again for documentation. It could even make sense to split some MfxEffectDef from the MfxEffect, but we'd lose the ability to define an effect just by subclassing one thing, so I'd rather keep it this way for the sake of ease of use and check for misuses with additional code when building in debug mode.
Ok, makes sense!
Yes, it's a todo. Good examples indeed. I've seen you made nice documentation on Read the Docs, btw :)
This started as a follow up of #27 of the Blender Host, from which a lot of relevant insights and ideas emerged.
Already integrated:
- kOfxMeshPropNoEdge: boolean property turned on by default, that one must turn off when building a mesh with loose edges.
- kOfxMeshPropConstantFaceCount: property type to meshes to skip the use of the kOfxMeshAttribFaceCounts buffer when it is not -1.
- kOfxMeshEffectPropIsDeformation: boolean property turned off by default that one can turn on to speed up processing when the effect does not affect connectivity.
- kOfxMeshAttribPropSemantic: to attributes in order to flag their usage (UV, weight group, etc).

Summary of the proposals:
Unsolved issues:
Rejected:
A solution for (B) would indeed take place during the Describe action. And this is needed beyond just connectivity: an effect must be able to tell whether it needs UVs or not, or this or that specially named attribute, etc. It would enable sending only the relevant attributes, and also let the host's UI tell the user about important missing attributes. I imagine this as a set of kOfxInputProp* maybe.

@tkarabela Regarding your issue (A), there's this little-advertised fact: it is possible already to point the output at the same buffer as the input, for unchanged attributes, and in particular for connectivity information when the effect is only a modifier.
About the "knowing the exact edge count in mixed cases" problem, do you believe that iterating once over the faceCounts attribute prior to allocating the mesh introduces too much overhead? I think that's how many plugin writers would fill the loose edge count property anyway.