[API Compatibility No.361] Param alias for lerp using decorator -part #77170
base: develop
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open-source project!
Codecov Report

❌ Your patch status has failed because the patch coverage (66.66%) is below the target coverage (90.00%). You can increase the patch coverage or adjust the target coverage.

    @@           Coverage Diff            @@
    ##           develop   #77170   +/-  ##
    =========================================
      Coverage         ?   66.66%
    =========================================
      Files            ?        1
      Lines            ?        6
      Branches         ?        0
    =========================================
      Hits             ?        4
      Misses           ?        2
      Partials         ?        0
/re-run all-failed
zhwesky2010 left a comment:
LGTM
python/paddle/tensor/math.py (Outdated)

    if in_dynamic_or_pir_mode():
        return _C_ops.lerp(x, y, weight)
    if out is None:
        return _C_ops.lerp(x, y, weight)
Why not adopt the sinking approach here (i.e., push this logic down into the op itself)?
python/paddle/tensor/math.py (Outdated)

    if in_dynamic_or_pir_mode():
        return _C_ops.lerp(x, y, weight)
    if out is None:
        return _C_ops.lerp(x, y, weight)
This looks like it could directly use the sinking approach. Please take another look.
Force-pushed from 8b99dde to 6a4886a (Compare)
    weight = paddle.full(shape=[], fill_value=weight, dtype=x.dtype)

    if in_dynamic_or_pir_mode():
        return _C_ops.lerp(x, y, weight)
If out is not None, it is not handled here. Actually, no handling is needed here, because _C_ops itself handles whether out is None.
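The point being made, that the underlying op can own the None check so the Python wrapper does not have to branch, can be illustrated with a plain-Python stand-in. `_c_lerp` below is a hypothetical mock of what `_C_ops.lerp` does, not Paddle's actual implementation:

```python
def _c_lerp(x, y, weight, out=None):
    # Stand-in for the C++ op: it decides internally what to do
    # when out is None, so callers can always forward out.
    result = x + weight * (y - x)
    if out is None:
        return result
    out[0] = result  # write into the caller-provided buffer
    return out


def lerp(x, y, weight, out=None):
    # The Python layer forwards out unconditionally; no branch needed.
    return _c_lerp(x, y, weight, out=out)
```

With this pattern, `lerp(0.0, 10.0, 0.5)` and `lerp(0.0, 10.0, 0.5, out=buf)` both go through the same single call site.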
    if in_dynamic_or_pir_mode():
        return _C_ops.lerp(x, y, weight)
    if out is None:
        return _C_ops.lerp(x, y, weight, out=out)
Take a look at ops.yaml and check whether this can be sunk by declaring the weight in the form Scalar(double) weight.
I don't quite understand; could you explain a bit?
> I don't quite understand; could you explain a bit?
The main issue with sinking is that this weight can be passed as either a float or a Tensor. Paddle's internal type Scalar can accept a Tensor as well as arbitrary scalar types such as float/int/double, so the op parameter could be changed to the Scalar type. You can try this; for reference, allclose is implemented with Scalar. However, the kernel's parameter type would also need to change, and compatibility issues cannot be ruled out. If that runs into problems, stick with the decorator.
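The Scalar-based sinking suggested above would amount to changing how weight is declared in the lerp entry of ops.yaml. The fragment below is a hypothetical sketch modeled on the allclose pattern the reviewer mentions; the actual entry and field names in Paddle's ops.yaml may differ, and as noted, the kernel and infer_meta signatures would need matching changes:

```yaml
# Hypothetical sketch, not the actual ops.yaml entry:
- op : lerp
  # before: args : (Tensor x, Tensor y, Tensor weight)
  args : (Tensor x, Tensor y, Scalar(double) weight)
  output : Tensor(out)
  # the lerp kernel and its infer_meta function would need their
  # weight parameter changed to the Scalar type accordingly
```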
PR Category
User Experience
PR Types
New features
Description
Add a param alias for paddle.lerp using a decorator.
Add the out param.
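The decorator approach this PR describes can be sketched in plain Python as follows. `param_alias` and the alias mapping are hypothetical names for illustration (using torch's `end` as an alias for paddle's `y`), not Paddle's actual implementation:

```python
import functools


def param_alias(**aliases):
    """Hypothetical decorator: remap alias keyword names to canonical ones."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for alias, canonical in aliases.items():
                if alias in kwargs:
                    if canonical in kwargs:
                        raise TypeError(
                            f"got values for both '{alias}' and '{canonical}'"
                        )
                    kwargs[canonical] = kwargs.pop(alias)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@param_alias(end="y")  # accept 'end' as an alias for 'y'
def lerp(x, y, weight):
    # Linear interpolation on plain floats, standing in for paddle.lerp.
    return x + weight * (y - x)
```

Both `lerp(0.0, y=10.0, weight=0.5)` and `lerp(0.0, end=10.0, weight=0.5)` then reach the same function body, and passing both names at once raises a TypeError.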