OnnxRuntime.TrainingSession -> provide model as byte array #17907
-
Hey, I have a hopefully trivial question: is there a way for the ONNX TrainingSession (in C#) to take the model as a byte[] instead of a file path? Have a nice day
-
Loading from a buffer is available in the C/C++ API, but it has not yet been exposed in the C# API. It should be easy to add, though I don't have a timeline for when we can get to it. If you like, you could contribute it to the project. Otherwise, I will try to get to it before the ONNX Runtime 1.17 release (scheduled for December).
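For illustration, here is a sketch of what the requested C# usage might look like. Note that this overload did not exist at the time of this discussion; the buffer-taking `TrainingSession` constructor shown here is an assumption about a future API, not the actual one. Only the path-based constructor and `CheckpointState.LoadCheckpoint` were available in the C# training API.

```csharp
using System.IO;
using Microsoft.ML.OnnxRuntime;

// Read the training artifacts into memory instead of passing file paths.
byte[] trainingModel  = File.ReadAllBytes("training_model.onnx");
byte[] evalModel      = File.ReadAllBytes("eval_model.onnx");
byte[] optimizerModel = File.ReadAllBytes("optimizer_model.onnx");

var state = CheckpointState.LoadCheckpoint("checkpoint");

// HYPOTHETICAL overload: buffers instead of paths. This signature is an
// assumption; the existing C# API only accepts file-path strings here.
using var session = new TrainingSession(state, trainingModel, evalModel, optimizerModel);
```

Loading from buffers is useful when the models are embedded as resources or fetched over the network, so they never need to touch the file system.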
-
Yes, it will take in a pretrained ONNX model. We expect the input models to adhere to some constraints. As long as the input training artifacts are generated using the artifact generation utility in
-
Hi @baijumeswani, just checking in to see whether you think this feature could land soon-ish, ideally released in the next couple of months. On top of that, how feasible do you think it would be to have an overload of ExportModelForInferencing that writes to a byte array instead of to a file?
@phineasng We'll work on exposing a way to load the model from a byte array in C# this quarter. I think that should be quick to do.
As for writing the inference model to a byte array, this might take some time since it will involve reworking how we create the inference model from the trained model. But I will try to get to this as well.