feat[gpu]: jit kernels #4920
Conversation
Signed-off-by: Joe Isaacs <[email protected]>
# Conflicts:
#	vortex-gpu/src/lib.rs
Deploying vortex-bench with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | `916f93e` |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://c62bdc1f.vortex-93b.pages.dev |
| Branch Preview URL: | https://ji-jit-bp-for.vortex-93b.pages.dev |
Codecov Report: ✅ All modified and coverable lines are covered by tests.
```rust
PType::I64 => "long long",
PType::F32 => "float",
PType::F64 => "double",
PType::F16 => todo!(),
```
This `todo` made me go down a rabbit hole: https://stackoverflow.com/questions/32735292/can-anyone-provide-sample-code-demonstrating-the-use-of-16-bit-floating-point-in. You need an include, and then you can use `half`, but you really want to use `half2`, i.e. pack two halves into 32 bits.
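A minimal Rust sketch of the point, using a hypothetical stand-in `PType` and `cuda_type_name` helper rather than the PR's actual code: F16 would map to CUDA's `__half`, which is only usable if the generated source includes `cuda_fp16.h`.

```rust
// Hypothetical stand-in for Vortex's PType, just for this sketch.
enum PType {
    I64,
    F32,
    F64,
    F16,
}

// CUDA's __half (and the packed __half2) are declared in <cuda_fp16.h>,
// so the generated kernel source needs that include before these names
// are usable.
fn cuda_type_name(ptype: &PType) -> &'static str {
    match ptype {
        PType::I64 => "long long",
        PType::F32 => "float",
        PType::F64 => "double",
        // Scalar half works, but for throughput you'd rather operate on
        // __half2, packing two halves into 32 bits.
        PType::F16 => "__half",
    }
}

fn main() {
    assert_eq!(cuda_type_name(&PType::F16), "__half");
}
```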
Not quite sure how continuation-passing style would look different here. You are nesting the code-emitting statements recursively at each step. You could instead have a function that, when called, emits the code rather than emitting it directly, but I can't quite assess in the abstract whether that's easier... maybe? See the sketch below.
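For what it's worth, a minimal sketch of the deferred-emission variant being described, with invented names rather than the PR's emitter API: each node returns a closure that appends its code only when called, so composition nests continuations instead of directly appending strings.

```rust
// A continuation that, when invoked, appends its code to the output buffer.
type Emit = Box<dyn FnOnce(&mut String)>;

// Wrap a single statement as a deferred emission.
fn stmt(code: &'static str) -> Emit {
    Box::new(move |out| out.push_str(code))
}

// Composition nests continuations instead of nesting emitted strings:
// nothing is written until the outermost continuation is finally called.
fn seq(first: Emit, second: Emit) -> Emit {
    Box::new(move |out| {
        first(out);
        second(out);
    })
}

fn main() {
    let kernel = seq(stmt("int a = in[i];\n"), stmt("out[i] = a + 3;\n"));
    let mut src = String::new();
    kernel(&mut src); // emission happens here, not at construction time
    print!("{src}");
}
```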
Add support for JIT-compiling bitpacking, FoR, and ALP kernels to CUDA.
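To make that concrete, a hypothetical sketch (invented names, not the PR's API) of generating CUDA source for a FoR decode kernel at runtime, where decoding adds the frame-of-reference value back to each element:

```rust
// Hypothetical sketch of JIT kernel-source generation for FoR decode:
// out[i] = in[i] + reference. A real pipeline would hand the resulting
// string to a runtime compiler such as NVRTC; none of these names are
// the PR's actual API.
fn for_decode_kernel_src(cuda_ty: &str, reference: i64) -> String {
    format!(
        r#"extern "C" __global__ void for_decode(const {ty}* in, {ty}* out, int n) {{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + ({ty}){reference};
}}
"#,
        ty = cuda_ty,
    )
}

fn main() {
    // Print the generated CUDA source for a 64-bit integer column
    // with a frame-of-reference of 100.
    print!("{}", for_decode_kernel_src("long long", 100));
}
```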