Conversation

@skewballfox
Contributor

When making custom element types, not every type will have all the properties currently required of elements; for example, complex numbers don't have a linear ordering. Beyond ordering alone, making groups of ops optional also lowers the initial work needed to add a new data type, or to make a new variant of an existing data type.

Checklist

  • Confirmed that cargo run-checks command has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Changes

  • I separated ElementComparison (which I may rename to ElementOrdering) out of Element
  • I added a new trait to Element for equality checks
  • I split out all methods from Numeric that require ordering into an Orderable trait
    • Right now, mainly due to the one-hot functions, I have made Numeric a supertrait of Orderable, i.e. for an element tensor to be orderable, it has to implement Numeric. I think I could either rework one-hot to not require E to be orderable, or make one-hot its own thing that is only available when both traits are implemented.
  • I think I need to make some changes to how LibTorch is structured so it is closer to the API of the other backends. I was able to get this to build by disabling torch altogether, though I still need to check whether the tests pass.

Testing

Confirmed the project builds; I still need to run the actual checks.

@skewballfox changed the title from Refac/comparison2 to Make ElementComparison optional for dtypes on Dec 27, 2025
@codecov

codecov bot commented Jan 2, 2026

Codecov Report

❌ Patch coverage is 73.02452% with 99 lines in your changes missing coverage. Please review.
✅ Project coverage is 69.01%. Comparing base (439a26c) to head (1e08156).
⚠️ Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
crates/burn-tch/src/ops/int_tensor.rs 0.00% 46 Missing ⚠️
crates/burn-tch/src/ops/tensor.rs 0.00% 20 Missing ⚠️
crates/burn-backend/src/tensor/ops/int.rs 65.95% 16 Missing ⚠️
crates/burn-backend/src/tensor/ops/float.rs 90.78% 7 Missing ⚠️
crates/burn-backend/src/data/compare.rs 33.33% 4 Missing ⚠️
crates/burn-tch/src/ops/bool_tensor.rs 0.00% 3 Missing ⚠️
crates/burn-tch/src/ops/module.rs 0.00% 3 Missing ⚠️

❌ Your patch check has failed because the patch coverage (73.02%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.
❌ Your project check has failed because the head coverage (69.01%) is below the target coverage (80.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4255      +/-   ##
==========================================
- Coverage   69.03%   69.01%   -0.03%     
==========================================
  Files        1409     1410       +1     
  Lines      165880   165941      +61     
==========================================
+ Hits       114520   114524       +4     
- Misses      51360    51417      +57     


@laggui (Member) left a comment

Splitting Orderable from Numeric makes sense!

Backend implementations will still require float/int ops with orderable methods (e.g., float_greater), but those can be left unimplemented, and for custom tensor kinds the API can be restricted so as not to expose the high-level tensor methods.

Not sure why the changes to LibTorch elements were necessary for these changes though 🤔 We could accept different int elem types if we want to extend support, but it's unclear why that's needed for this PR specifically.

And maybe Ordered could be a better name instead of Orderable?

  type FloatTensorPrimitive: TensorMetadata + 'static;
  /// Default float element type.
- type FloatElem: Element;
+ type FloatElem: Element + ElementComparison;
@laggui (Member)

IIRC, ElementComparison was added specifically for the sorting ops, which have a default CPU implementation that operates on TensorData elements. We could likely remove the stricter ElementComparison bound and only require ElementEquality, except for the data sorting implementation:

/// Compare two elements
fn compare<E: ElementComparison>(a: &E, b: &E, descending: bool) -> Ordering {
if descending { b.cmp(a) } else { a.cmp(b) }
}

Would have to check, but I believe everything else should still compile and work.

@skewballfox (Contributor, Author)

IIRC, ElementComparison was added specifically for the sorting ops, which have a default CPU implementation that operates on TensorData elements. We could likely remove the stricter ElementComparison bound and only require ElementEquality, except for the data sorting implementation

Right now, {Float,Int}TensorOps bundles methods that rely on both Orderable and Numeric, hence the added constraint. How would removing the bound work? Would we just remove the ordering ops from the {Float,Int}TensorOps traits and leave them to what's defined under Orderable and the TensorData sort impl? Or would we take a similar approach to what was done for Numeric and define float/int ordering traits implemented directly on the end element types (just moving the existing methods to a new home)? Or were you thinking of another approach entirely?

@skewballfox
Contributor Author

skewballfox commented Jan 2, 2026

Splitting Orderable from Numeric makes sense!

What should be done about the one-hot methods that need both?

Backend implementations will still require float/int ops with orderable methods (e.g., float_greater), but those can be left unimplemented, and for custom tensor kinds the API can be restricted so as not to expose the high-level tensor methods.

Sort of. With the first approach I took, you would have issues converting from element types without ordering to element types with ordering. I left ordering off of bool tensors mainly to test that conversion still worked.

Not sure why the changes to LibTorch elements were necessary for these changes though 🤔 We could accept different int elem types if we want to extend support, but it's unclear why that's needed for this PR specifically.

I can remove the constraint for int (since ElementComparison is already implemented for i64), but I had to add a type so that other element types didn't have the additional constraint applied to floats.

And maybe Ordered could be a better name instead of Orderable?

sure

