[Cpp API Compatibility] add storage API #77176

base: develop
Conversation
Your PR was submitted successfully. Thank you for contributing to this open source project!
Pull request overview
This PR adds storage-related API compatibility for the C++ API layer, introducing LibTorch-compatible Storage and DataPtr classes along with integration into TensorBase.
Key changes:
- Implements c10::Storage, a class providing memory allocation management with sharing, aliasing, and ownership semantics
- Adds c10::DataPtr, a wrapper for managing memory pointers with device information and custom deleters
- Extends TensorBase with storage access methods including storage(), storage_offset(), has_storage(), and is_alias_of() (see the usage sketch below)
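As a rough illustration of how these pieces are meant to fit together, here is a hypothetical usage sketch; the tensor construction is elided, and only the methods listed above are taken from the PR:

```cpp
// Hypothetical usage sketch of the new compat API surface.
void inspect(const at::TensorBase& a, const at::TensorBase& b) {
  if (a.has_storage()) {
    c10::Storage s = a.storage();         // LibTorch-style handle to the allocation
    size_t bytes = s.nbytes();            // size of the underlying allocation
    int64_t offset = a.storage_offset();  // element offset into that storage
    (void)bytes;
    (void)offset;
  }
  bool shares_memory = a.is_alias_of(b);  // true when both tensors share an allocation
  (void)shares_memory;
}
```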
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| paddle/phi/api/include/compat/c10/core/Storage.h | New Storage class implementation with memory management APIs, trait specializations for ownership semantics |
| paddle/phi/api/include/compat/c10/core/Allocator.h | New DataPtr class for wrapping raw pointers with device and deleter information |
| paddle/phi/api/include/compat/ATen/core/TensorBase.h | Adds storage-related methods to TensorBase for accessing underlying memory and checking aliasing |
| test/cpp/compat/compat_basic_test.cc | Comprehensive test suite for Storage APIs covering construction, sharing, aliasing, traits, and edge cases |
    return Storage(
        std::dynamic_pointer_cast<phi::DenseTensor>(tensor_.impl())->Holder());
Copilot AI (Jan 3, 2026):
The storage() method returns a Storage by value constructed from the tensor's Holder. However, it doesn't check if the tensor is defined or if the dynamic_pointer_cast succeeds. If the tensor is not a DenseTensor or is undefined, this will result in undefined behavior or a crash when trying to access a nullptr. Add validation before accessing the DenseTensor.
Suggested change:
    if (!tensor_.defined()) {
      // Undefined tensor: return an empty Storage
      return Storage();
    }
    auto impl = tensor_.impl();
    if (!impl) {
      // No underlying implementation: return an empty Storage
      return Storage();
    }
    auto dense_tensor =
        std::dynamic_pointer_cast<phi::DenseTensor>(impl);
    if (!dense_tensor) {
      // Not backed by a DenseTensor: return an empty Storage
      return Storage();
    }
    auto holder = dense_tensor->Holder();
    if (!holder) {
      // No allocation holder: return an empty Storage
      return Storage();
    }
    return Storage(holder);
  int64_t storage_offset() const {
    // Paddle DenseTensor stores offset in meta_.offset (in bytes)
    // We need to convert to element offset
    auto dense_tensor =
        std::dynamic_pointer_cast<phi::DenseTensor>(tensor_.impl());
    if (dense_tensor) {
      size_t byte_offset = dense_tensor->meta().offset;
      size_t element_size = SizeOf(tensor_.dtype());
      return element_size > 0 ? static_cast<int64_t>(byte_offset / element_size)
                              : 0;
    }
    return 0;
  }
Copilot AI (Jan 3, 2026):
The storage_offset() method doesn't validate if the dynamic_pointer_cast to DenseTensor succeeds. If the tensor's impl is not a DenseTensor, the method will crash when trying to access dense_tensor->meta(). Add a check and return 0 if the cast fails.
  explicit Storage(size_t size_bytes, phi::Allocator* allocator = nullptr) {
    if (allocator) {
      allocation_ =
          std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
      allocator_ = allocator;
    } else {
      allocation_ = nullptr;
      allocator_ = nullptr;
    }
  }
Copilot AI (Jan 3, 2026):
The constructor allocates memory but doesn't check if allocation succeeds. When allocator is provided but allocation fails, allocation_ will be set to nullptr which is correct, but allocator_ remains set. This creates an inconsistent state where allocator_ is non-null but allocation_ is null. Consider checking the allocation result and either throwing an exception or keeping allocator_ as nullptr if allocation fails.
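A minimal sketch of the suggested fix, assuming phi::Allocator::Allocate signals failure by returning a null allocation rather than throwing (if it throws, the members are never assigned and no change is needed):

```cpp
// Sketch only: defer recording the allocator until the allocation succeeds,
// so allocation_ and allocator_ can never get out of sync.
explicit Storage(size_t size_bytes, phi::Allocator* allocator = nullptr) {
  if (allocator) {
    auto alloc =
        std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
    if (alloc) {
      allocation_ = std::move(alloc);
      allocator_ = allocator;
    }
    // On failure both members stay null, leaving the Storage in the same
    // state as a default-constructed (empty) one.
  }
}
```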
  // Check if memory ranges overlap
  const void* ptr0 = storage0.data();
  const void* ptr1 = storage1.data();
  size_t size0 = storage0.nbytes();
  size_t size1 = storage1.nbytes();

  if (ptr0 == nullptr || ptr1 == nullptr || size0 == 0 || size1 == 0) {
    return false;
  }

  const char* start0 = static_cast<const char*>(ptr0);
  const char* end0 = start0 + size0;
  const char* start1 = static_cast<const char*>(ptr1);
  const char* end1 = start1 + size1;

  // Check for overlap
  return !(end0 <= start1 || end1 <= start0);
Copilot AI (Jan 3, 2026):
The overlap detection logic in isSharedStorageAlias is incorrect. The condition !(end0 <= start1 || end1 <= start0) checks for overlap, but this doesn't necessarily mean the storages are "aliases" in the LibTorch sense. In LibTorch, storage aliasing typically means they share the same underlying allocation object, not just that memory regions overlap. Two completely separate allocations could theoretically have overlapping addresses in different address spaces. The function should only check if storage0.allocation() == storage1.allocation() or compare the actual allocation objects.
Suggested change:
  // In LibTorch semantics, storages alias if they share the same underlying
  // allocation object, not merely if their memory ranges overlap.
  auto alloc0 = storage0.allocation();
  auto alloc1 = storage1.allocation();
  // If either storage has no allocation, they cannot alias.
  if (!alloc0 || !alloc1) {
    return false;
  }
  // Two storages alias if and only if they refer to the same allocation object.
  return alloc0 == alloc1;
struct Storage {
 public:
  // Tag types for constructor disambiguation (LibTorch compatible)
  struct use_byte_size_t {};
  struct unsafe_borrow_t {
    unsafe_borrow_t() = default;
  };

  // Default constructor
  Storage() = default;

  // Copy constructor
  Storage(const Storage& other)
      : allocation_(other.allocation_),
        allocator_(other.allocator_),
        resizable_(other.resizable_) {}

  // Copy assignment operator
  Storage& operator=(const Storage& other) {
    if (this != &other) {
      allocation_ = other.allocation_;
      allocator_ = other.allocator_;
      resizable_ = other.resizable_;
    }
    return *this;
  }

  // Move constructor
  Storage(Storage&& other) noexcept
      : allocation_(std::move(other.allocation_)),
        allocator_(other.allocator_),
        resizable_(other.resizable_) {
    other.allocator_ = nullptr;
    other.resizable_ = false;
  }

  // Move assignment operator
  Storage& operator=(Storage&& other) noexcept {
    if (this != &other) {
      allocation_ = std::move(other.allocation_);
      allocator_ = other.allocator_;
      resizable_ = other.resizable_;
      other.allocator_ = nullptr;
      other.resizable_ = false;
    }
    return *this;
  }

  // Constructor with allocation and optional storage properties
  Storage(std::shared_ptr<phi::Allocation> alloc,
          std::unique_ptr<phi::StorageProperties> props = nullptr)
      : allocation_(std::move(alloc)) {}

  // Constructor with size and allocator (LibTorch compatible)
  explicit Storage(size_t size_bytes, phi::Allocator* allocator = nullptr) {
    if (allocator) {
      allocation_ =
          std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
      allocator_ = allocator;
    } else {
      allocation_ = nullptr;
      allocator_ = nullptr;
    }
  }

  // LibTorch compatible constructor with use_byte_size_t tag
  Storage(use_byte_size_t /*use_byte_size*/,
          size_t size_bytes,
          phi::Allocator* allocator = nullptr,
          bool resizable = false)
      : allocator_(allocator), resizable_(resizable) {
    if (allocator) {
      allocation_ =
          std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
    } else {
      allocation_ = nullptr;
    }
  }

  // LibTorch compatible constructor with pre-allocated memory
  Storage(use_byte_size_t /*use_byte_size*/,
          size_t size_bytes,
          std::shared_ptr<phi::Allocation> data_ptr,
          phi::Allocator* allocator = nullptr,
          bool resizable = false)
      : allocation_(std::move(data_ptr)),
        allocator_(allocator),
        resizable_(resizable) {}

 protected:
  // Unsafe borrow constructor (for MaybeOwnedTraits)
  explicit Storage(unsafe_borrow_t, const Storage& rhs)
      : allocation_(rhs.allocation_),
        allocator_(rhs.allocator_),
        resizable_(rhs.resizable_) {}

  // Forward declare template and make specialization a friend
  template <typename T>
  friend struct MaybeOwnedTraits;

 public:
  // Check if storage is valid (has allocation)
  bool valid() const { return allocation_ != nullptr; }

  // Boolean conversion operator (LibTorch compatible)
  explicit operator bool() const { return allocation_ != nullptr; }

  // Get the number of bytes in the storage
  size_t nbytes() const { return allocation_ ? allocation_->size() : 0; }

  // Set the number of bytes (for resizable storage)
  void set_nbytes(size_t size_bytes) {
    if (resizable_ && allocator_) {
      allocation_ =
          std::shared_ptr<phi::Allocation>(allocator_->Allocate(size_bytes));
    }
  }

  // Check if storage is resizable
  bool resizable() const { return resizable_; }

  // Get mutable data pointer
  void* mutable_data() const {
    return allocation_ ? allocation_->ptr() : nullptr;
  }

  // Get const data pointer
  const void* data() const {
    return allocation_ ? allocation_->ptr() : nullptr;
  }

  // Get the underlying allocation as DataPtr (LibTorch compatible: data_ptr())
  DataPtr data_ptr() const { return DataPtr(allocation_); }

  // Get the underlying allocation as mutable DataPtr reference
  DataPtr mutable_data_ptr() const { return DataPtr(allocation_); }

  // Get the underlying allocation
  std::shared_ptr<phi::Allocation> allocation() const { return allocation_; }

  // Get the allocator
  phi::Allocator* allocator() const { return allocator_; }

  // Get the device/place type
  phi::AllocationType device_type() const {
    return allocation_ ? allocation_->place().GetType()
                       : phi::AllocationType::CPU;
  }

  // Get the device/place
  phi::Place device() const {
    return allocation_ ? allocation_->place() : phi::Place();
  }

  // Check if this storage is unique (use_count == 1)
  bool unique() const { return allocation_.use_count() == 1; }

  // Get the reference count
  size_t use_count() const { return allocation_.use_count(); }

  // Check if this storage is an alias of another
  bool is_alias_of(const Storage& other) const {
    if (!allocation_ || !other.allocation_) {
      return false;
    }
    // Check if they share the same allocation or overlapping memory
    return allocation_ == other.allocation_ ||
           isSharedStorageAlias(*this, other);
  }

  // Unsafe release of the underlying allocation (for advanced usage)
  phi::Allocation* unsafeReleaseAllocation() {
    auto* ptr = allocation_.get();
    allocation_.reset();
    return ptr;
  }

  // Unsafe get of the underlying allocation pointer
  phi::Allocation* unsafeGetAllocation() const noexcept {
    return allocation_.get();
  }

  // Set data pointer (swap and return old) - accepts DataPtr
  DataPtr set_data_ptr(DataPtr&& new_data_ptr) {
    DataPtr old_data_ptr(allocation_);
    allocation_ = new_data_ptr.allocation();
    return old_data_ptr;
  }

  // Set data pointer (swap and return old) - accepts shared_ptr
  std::shared_ptr<phi::Allocation> set_data_ptr(
      std::shared_ptr<phi::Allocation> data_ptr) {
    std::swap(allocation_, data_ptr);
    return data_ptr;
  }

  // Set data pointer (no swap) - accepts DataPtr
  void set_data_ptr_noswap(DataPtr&& new_data_ptr) {
    allocation_ = new_data_ptr.allocation();
  }

  // Set data pointer (no swap) - accepts shared_ptr
  void set_data_ptr_noswap(std::shared_ptr<phi::Allocation> data_ptr) {
    allocation_ = std::move(data_ptr);
  }

 private:
  std::shared_ptr<phi::Allocation> allocation_;
  phi::Allocator* allocator_ = nullptr;
  bool resizable_ = false;
};
Copilot AI (Jan 3, 2026):
Missing documentation for the Storage class and its key methods. The class represents a critical low-level API for memory management but lacks comments explaining its purpose, usage patterns, and the semantics of operations like borrowing, aliasing, and ownership transfer. Add comprehensive documentation for the class and its public methods.
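One possible shape for that documentation, sketched below; the wording is illustrative and not part of the PR:

```cpp
/// Storage: a LibTorch-compatible (c10::Storage-like) handle to a
/// reference-counted phi::Allocation.
///
/// Ownership: copies share the underlying allocation (shared_ptr semantics);
/// moves transfer it and leave the source Storage empty.
/// Aliasing: is_alias_of() reports whether two Storage objects refer to the
/// same allocation (the current implementation also treats overlapping memory
/// ranges as aliasing; see the review discussion above).
/// Borrowing: the protected unsafe_borrow_t constructor exists so that
/// MaybeOwnedTraits<Storage> can create a non-owning borrowed view.
struct Storage { /* ... members as above ... */ };
```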
// DataPtr class compatible with LibTorch's c10::DataPtr
// Wraps a pointer with associated device and deleter
class DataPtr {
 public:
  DataPtr() : ptr_(nullptr), device_(phi::CPUPlace()) {}

  explicit DataPtr(void* data, phi::Place device = phi::CPUPlace())
      : ptr_(data), device_(device) {}

  DataPtr(void* data,
          void* ctx,
          DeleterFnPtr ctx_deleter,
          phi::Place device = phi::CPUPlace())
      : ptr_(data), ctx_(ctx), deleter_(ctx_deleter), device_(device) {}

  // Construct from phi::Allocation
  explicit DataPtr(const std::shared_ptr<phi::Allocation>& alloc)
      : ptr_(alloc ? alloc->ptr() : nullptr),
        device_(alloc ? alloc->place() : phi::CPUPlace()),
        allocation_(alloc) {}

  DataPtr(const DataPtr&) = default;
  DataPtr& operator=(const DataPtr&) = default;
  DataPtr(DataPtr&&) = default;
  DataPtr& operator=(DataPtr&&) = default;

  void* get() const { return ptr_; }

  void* operator->() const { return ptr_; }

  explicit operator bool() const { return ptr_ != nullptr; }

  phi::Place device() const { return device_; }

  DeleterFnPtr get_deleter() const { return deleter_; }

  void* get_context() const { return ctx_; }

  void clear() {
    ptr_ = nullptr;
    ctx_ = nullptr;
    deleter_ = nullptr;
    allocation_.reset();
  }

  // Get the underlying allocation (if available)
  std::shared_ptr<phi::Allocation> allocation() const { return allocation_; }

 private:
  void* ptr_ = nullptr;
  void* ctx_ = nullptr;
  DeleterFnPtr deleter_ = nullptr;
  phi::Place device_;
  std::shared_ptr<phi::Allocation> allocation_;
};

inline bool operator==(const DataPtr& dp, std::nullptr_t) noexcept {
  return !dp;
}

inline bool operator==(std::nullptr_t, const DataPtr& dp) noexcept {
  return !dp;
}

inline bool operator!=(const DataPtr& dp, std::nullptr_t) noexcept {
  return static_cast<bool>(dp);
}

inline bool operator!=(std::nullptr_t, const DataPtr& dp) noexcept {
  return static_cast<bool>(dp);
}

}  // namespace c10

namespace at {
using DataPtr = c10::DataPtr;
}  // namespace at
Copilot AI (Jan 3, 2026):
Missing test coverage for the DataPtr class. The new DataPtr class in Allocator.h has no dedicated test coverage. Tests should verify construction with different parameters, the clear() method, operator bool(), comparison operators, and the allocation() method. Consider adding tests for DataPtr functionality.
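A minimal sketch of what such coverage could look like, assuming GoogleTest is already used by the compat test suite and that the header is reachable as c10/core/Allocator.h through the compat include path (the include path and test names are assumptions):

```cpp
#include <gtest/gtest.h>

#include "c10/core/Allocator.h"  // assumed compat include path

TEST(DataPtrTest, DefaultConstructedIsNull) {
  c10::DataPtr dp;
  EXPECT_FALSE(static_cast<bool>(dp));
  EXPECT_TRUE(dp == nullptr);
  EXPECT_EQ(dp.get(), nullptr);
  EXPECT_EQ(dp.get_deleter(), nullptr);
  EXPECT_EQ(dp.allocation(), nullptr);
}

TEST(DataPtrTest, WrapsRawPointerAndClears) {
  int value = 42;
  c10::DataPtr dp(&value, phi::CPUPlace());
  EXPECT_TRUE(static_cast<bool>(dp));
  EXPECT_TRUE(dp != nullptr);
  EXPECT_EQ(dp.get(), &value);
  EXPECT_EQ(dp.get_context(), nullptr);

  dp.clear();
  EXPECT_FALSE(static_cast<bool>(dp));
  EXPECT_EQ(dp.get(), nullptr);
}
```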
  // LibTorch compatible constructor with use_byte_size_t tag
  Storage(use_byte_size_t /*use_byte_size*/,
          size_t size_bytes,
          phi::Allocator* allocator = nullptr,
          bool resizable = false)
      : allocator_(allocator), resizable_(resizable) {
    if (allocator) {
      allocation_ =
          std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
    } else {
      allocation_ = nullptr;
    }
  }
Copilot AI (Jan 3, 2026):
The same issue exists here: if allocator is provided but allocation fails, allocator_ and resizable_ will be set while allocation_ is nullptr, creating an inconsistent state. Consider validating the allocation result.
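Under the same assumption about Allocate(), one way to keep all three members consistent for this constructor; whether a resizable-but-unallocated Storage should still record its allocator is a design decision left to the authors:

```cpp
// Sketch only: set allocator_ and resizable_ together with allocation_.
Storage(use_byte_size_t /*use_byte_size*/,
        size_t size_bytes,
        phi::Allocator* allocator = nullptr,
        bool resizable = false) {
  if (allocator) {
    auto alloc =
        std::shared_ptr<phi::Allocation>(allocator->Allocate(size_bytes));
    if (alloc) {
      allocation_ = std::move(alloc);
      allocator_ = allocator;
      resizable_ = resizable;
    }
  }
}
```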
  // Check if this storage is an alias of another
  bool is_alias_of(const Storage& other) const {
    if (!allocation_ || !other.allocation_) {
      return false;
    }
    // Check if they share the same allocation or overlapping memory
    return allocation_ == other.allocation_ ||
           isSharedStorageAlias(*this, other);
Copilot AI (Jan 3, 2026):
The is_alias_of method has circular logic. It calls isSharedStorageAlias(*this, other) as part of its check, but isSharedStorageAlias performs overlap detection based on memory ranges. However, the first condition allocation_ == other.allocation_ already covers the case where storages share the same allocation. The isSharedStorageAlias call is redundant here because if allocations are different but memory overlaps, they shouldn't be considered aliases in the typical sense. Consider removing the redundant call or clarifying the intended alias semantics.
Suggested change:
  // Check if this storage is an alias of another (same underlying allocation)
  bool is_alias_of(const Storage& other) const {
    if (!allocation_ || !other.allocation_) {
      return false;
    }
    // Two storages are aliases if they share the same allocation object
    return allocation_ == other.allocation_;
        std::dynamic_pointer_cast<phi::DenseTensor>(tensor_.impl())->Holder());
  }

  bool is_alias_of(const at::TensorBase& other) const {
Copilot AI (Jan 3, 2026):
The is_alias_of method doesn't check if the tensors are defined before accessing their storage. If either tensor is undefined, calling storage() will likely crash. Add validation to check if both tensors are defined before comparing their storage allocations.
Suggested change:
  bool is_alias_of(const at::TensorBase& other) const {
    if (!this->defined() || !other.defined()) {
      // Undefined tensors cannot be aliases of each other.
      return false;
    }
  int64_t storage_offset() const {
    // Paddle DenseTensor stores offset in meta_.offset (in bytes)
    // We need to convert to element offset
    auto dense_tensor =
        std::dynamic_pointer_cast<phi::DenseTensor>(tensor_.impl());
    if (dense_tensor) {
      size_t byte_offset = dense_tensor->meta().offset;
      size_t element_size = SizeOf(tensor_.dtype());
      return element_size > 0 ? static_cast<int64_t>(byte_offset / element_size)
                              : 0;
    }
    return 0;
  }

  c10::SymInt sym_storage_offset() const {
    return c10::SymInt(storage_offset());
  }

  bool has_storage() const { return tensor_.defined(); }

  const Storage storage() const {
    return Storage(
        std::dynamic_pointer_cast<phi::DenseTensor>(tensor_.impl())->Holder());
  }

  bool is_alias_of(const at::TensorBase& other) const {
    return this->storage().allocation() == other.storage().allocation();
  }
Copilot AI (Jan 3, 2026):
Missing test coverage for TensorBase storage-related methods. The new methods storage_offset(), sym_storage_offset(), has_storage(), storage(), and is_alias_of() added to TensorBase lack dedicated test coverage that validates edge cases like undefined tensors, non-DenseTensor implementations, and error conditions. Consider adding specific tests for these methods.
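An illustrative sketch of such an edge-case test, assuming at::TensorBase can be default-constructed as an undefined tensor in this compat layer (if not, construction would have to go through whatever factory the layer provides):

```cpp
#include <gtest/gtest.h>

TEST(TensorBaseStorageTest, UndefinedTensorEdgeCases) {
  at::TensorBase undefined;  // assumption: default construction yields an undefined tensor
  EXPECT_FALSE(undefined.has_storage());
  // storage_offset() should fall back to 0 when there is no DenseTensor
  // implementation behind the tensor, rather than crashing.
  EXPECT_EQ(undefined.storage_offset(), 0);
  // Note: storage() and is_alias_of() are deliberately not exercised here,
  // since the review comments above point out that they currently assume a
  // defined, DenseTensor-backed tensor.
}
```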
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##   develop   #77176   +/- ##
===========================================
  Coverage    ?   100.00%
===========================================
  Files       ?         2
  Lines       ?        97
  Branches    ?         0
===========================================
  Hits        ?        97
  Misses      ?         0
  Partials    ?         0
/re-run all-failed
Review comment: This unit test is getting rather large. Please split it up in a follow-up, otherwise compile times will start to suffer.
Author reply: OK.
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
Review comment: Following paddle/phi/api/include/compat/ATen/core/TensorAccessor.h, please add this notice:

// #The file has been adapted from pytorch project
// #Licensed under BSD-style license -
// https://github.com/pytorch/pytorch/blob/main/LICENSE

Author reply: Added.
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
Review comment: Same as the other comment: please add the notice here as well.

Author reply: Added.
Review comment: The test could simply be named c10_storage_test, since it already lives under the compat directory.

Author reply: OK.
/re-run all-failed
1 similar comment
/re-run all-failed
PR Category
Environment Adaptation
PR Types
New features
Description
Add storage-related APIs.