onnxruntime v1.19.2 #128
Conversation
…nda-forge-pinning 2024.09.04.00.11.56
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (
…nda-forge-pinning 2024.09.06.13.38.35
The file seems there, but maybe just not found:
Only the CUDA builds are failing now, and I'm a little puzzled by this. The CUDA 11 builds fail with a very similar but not identical error.
cc: @jakirkham any ideas?
Just bumping this for visibility. I could imagine that it is a pretty straightforward fix, but I don't have the hardware to debug this easily. Any help is much appreciated!
Unfortunately, even looking at the CUDA 12 builds I can't figure it out on Linux.
I think jakirkham was AFK last week, so maybe we can ping him again next week?
Just going through some findings; the diff for v1.19.2 shows that we still might need to disable installing requirements on Windows.
Oh, it seems like you fixed pip for Windows, great!
@cbourjau just to confirm, you don't need a CUDA-enabled machine on Linux, just Linux + Docker. I give up for today; it just doesn't seem like their build scripts changed all that much, and the previous build is found fine.
I'm afraid I won't have much time to spend on this in the next couple of weeks. Do you think it may be reasonable to temporarily disable the CUDA builds and release the CPU-only 1.19.2 packages, @hmaarrfk?
Is this really what you want? Generally speaking, CUDA is a great enabling technology for ML. Do you want to field the slew of questions that will come from CUDA users updating to 1.19.2 with CPU-only support? I would love it if instead you:
Generally this is what I would suggest for others that have "incomplete" packages. However, I do understand that CUDA is a lot of work, but the performance loss is so great that it is almost worse to have an onnxruntime package without CUDA support. I am, however, unable to help for the next few weeks, so I can just as easily limit our version of onnxruntime to 1.18 (on my own private channel).
Yeah, was on PTO for a bit. Noticed that upstream made both a 1.18.2 and a 1.19.2 release around the same time. Given this is on 1.18.1, would it be worth trying 1.18.2? It might be a smaller step with fewer changes, and it might give us the opportunity to fix a few issues before jumping to the 1.19.x series.
Part of the sad, sad thing that made me sad is that even rerendering failed on Windows:
It's looking for Please see this CI job with snippet below:
Compare this to where it finds
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR ( Here's what I've got... For recipe/meta.yaml:
@traversaro you'll be added on the next rerender. I'm trying to cut the CUDA architectures to get it to work on the CIs, but we are now at 80, 86, 90, which is likely too few even for my taste. Are you able to build out the Windows matrix?
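As a point of reference, the architecture list mentioned above typically ends up as a semicolon-separated value handed to CMake. A hedged sketch of what the relevant build-script excerpt might look like (`CMAKE_CUDA_ARCHITECTURES` is the standard CMake variable; whether this feedstock's build script forwards it exactly this way is an assumption):

```shell
# Hypothetical build-script excerpt: restrict the CUDA fat binary to
# SM 8.0, 8.6, and 9.0 so the compile fits within CI time limits.
cmake -S . -B build \
  -DCMAKE_CUDA_ARCHITECTURES="80;86;90" \
  -Donnxruntime_USE_CUDA=ON
cmake --build build --parallel
```

Each extra architecture roughly multiplies device-code compile time, which is why trimming this list is the usual first lever when CI jobs hit time limits.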
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (
@conda-forge-admin please rerender
I can look into that, but it will probably take me a few days. If this is blocking, perhaps we can skip Windows for the time being? By skipping the whole Windows build (instead of just CUDA) we avoid the problem of people updating and ending up with CPU-only onnxruntime.
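For what it's worth, a minimal sketch of what the temporary skip could look like in recipe/meta.yaml, using conda-forge's preprocessing selectors (the exact selector and the `cuda_compiler_version` variant key are assumptions about how this recipe gates its CUDA builds):

```yaml
build:
  # Hypothetical stop-gap: skip every Windows build for this release.
  skip: true  # [win]
  # Or, to drop only the Windows CUDA variants and keep CPU-only Windows:
  # skip: true  # [win and cuda_compiler_version != "None"]
```

Either selector would need a rerender afterwards so the CI matrix actually reflects the skipped variants.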
I wonder if we can agree on a smaller build matrix for CFEP03. Historically that hasn't been favored due to confusion. I, for example, only use Python 3.10, and I'm looking into the "even versions" of Python due to the more frequent releases. I would be fine with Python 3.10 and 3.12 only.
I would like to ensure that CFEP03 is like: just "run a single command", come back "in 6, 12, 24, 48, or 72 hours, it doesn't really matter", and upload the jobs. I'm trying to see if we can implement a "megabuild" strategy.
…nda-forge-pinning 2024.10.08.15.30.26
Force-pushed from 57a4846 to e0e7e82
@conda-forge-admin please rerender
…nda-forge-pinning 2024.10.14.14.02.19
Well. Very nice. Everything seems borked.
…nda-forge-pinning 2024.10.15.19.09.52
Seems like the bot has trouble rerendering.
Windows logs for CUDA 12.0 packages. Let me know if you are all OK with this.
Sorry about that, the thumbs-up didn't trigger a notification. Will upload.
My bad, I am well aware of this, and I did not provide a comment. Thanks a lot for handling the Windows builds here.
It's OK, just explaining the delay.
Closes #131
Closes #133
It is very likely that the current package version for this feedstock is out of date.
Checklist before merging this PR:
- license_file is packaged

Information about this PR:
- To enable automerge, open a PR with "@conda-forge-admin, please add bot automerge" in the title and merge the resulting PR. This command will add our bot automerge feature to your feedstock.
- Add the bot-rerun label to this PR to have the bot close it and schedule another one. If you do not have permissions to add this label, you can use the phrase "@conda-forge-admin, please rerun bot" in a PR comment to have the conda-forge-admin add it for you.

Pending Dependency Version Updates
Here is a list of all the pending dependency version updates for this repo. Please double check all dependencies before merging.
This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by - please use this URL for debugging.