Replies: 3 comments
-
Yes, good idea. Here is the code for saving: Line 815 in 532ac4c, and for restoring: Line 870 in 532ac4c. Feel free to submit a PR.
-
Okay, great! I've been looking at the code, and I am wondering what the best way is to incorporate ONNX into your design. Each backend has a different interface to ONNX; for example, PyTorch exports an ONNX model via `torch.onnx.export`.
-
I am not sure what the best way is, as I am not familiar with ONNX. What you describe sounds good, i.e., we can implement the code for each backend separately.
-
I'm interested in exporting a DeepXDE model to ONNX. This is pretty straightforward with some of the backends. For example, with PyTorch it is as simple as
`torch.onnx.export(model, model_inputs, "model_saved_file.onnx", ...)`.
That's the gist of the command, but it also needs the model inputs and other arguments for protobuf (please see the documentation: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html).
Since DeepXDE supports multiple backends, I figured that the ability to transfer a model from one backend to another would be useful in general (for example, for performance analysis across backends). Looking at DeepXDE briefly, adding the ability to talk to ONNX in the DeepXDE layer on top of the backends doesn't seem difficult.
Any interest?