
Modify RunONNXModel.py for pip install #2857

Open · wants to merge 20 commits into main

Conversation

chentong319 (Collaborator)

The goal is to create a Python package so that we can run inference from a Python script, the way onnxruntime does. Since RunONNXModel.py already provides most of the necessary functionality, I added the extra support in that script. The pip-package-related code will be checked in later. I tested with a local pip install. We can upload the package later.

Major changes:

  1. Add a function interface that calls the main in RunONNXModel.py with parameters similar to onnxruntime.
  2. Accept a list of arrays as input.
  3. Return the outputs.

Example:

import numpy as np
import onnxmlirrun

a = np.zeros((3,4,5), dtype=np.float32)
b = a + 4
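# Inference using a pre-compiled shared library (no .onnx file needed).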
r = onnxmlirrun.RunONNXModel.onnxmlirrun(compiled_so="/Users/chentong/Projects/onnx-mlir/build/test_add.so", inputs=[a,b])
print(r)
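# Inference directly from the .onnx model instead of a pre-compiled library.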
r = onnxmlirrun.RunONNXModel.onnxmlirrun(onnx_model="/Users/chentong/Projects/onnx-mlir/build/test_add.onnx", inputs=[a,b])
print(r)
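Presumably the ORT interface in question is onnxruntime.InferenceSession and its run method. A minimal sketch of the equivalent call is below for comparison; the model path and the input names "a" and "b" are assumptions for illustration, not taken from this PR:

import numpy as np
import onnxruntime as ort

a = np.zeros((3, 4, 5), dtype=np.float32)
b = a + 4
# onnxruntime loads the model into a session, then feeds inputs as a dict
# keyed by the model's input names; this PR takes a positional list instead.
sess = ort.InferenceSession("test_add.onnx")
r = sess.run(None, {"a": a, "b": b})
print(r)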

We may choose different names for the functions and the package. Comments are welcome.

@AlexandreEichenberger (Collaborator)

@chentong319 Can you add a reference to the ORT interface that this PR is imitating?

In your first example

r = onnxmlirrun.RunONNXModel.onnxmlirrun(compiled_so="/Users/chentong/Projects/onnx-mlir/build/test_add.so", inputs=[a,b])

was the onnx_model parameter omitted on purpose? Just so I can understand the interface better.

@chentong319 (Collaborator, Author)

> @chentong319 Can you add a reference to the ORT interface that this PR is imitating?
>
> In your first example
>
> r = onnxmlirrun.RunONNXModel.onnxmlirrun(compiled_so="/Users/chentong/Projects/onnx-mlir/build/test_add.so", inputs=[a,b])
>
> was the onnx_model parameter omitted on purpose? Just so I can understand the interface better.

Yes, onnx_model is not needed if the compiled .so is provided. By the way, the two onnxmlirrun.RunONNXModel.onnxmlirrun calls are two independent ways to do inference.

@AlexandreEichenberger (Collaborator)

Got it, so we can provide an onnx file or a pre-compiled binary. Smart, thanks.

if onnx_model:
    args.model = onnx_model
if compiled_so:
    args.load_so = compiled_so
Collaborator
Should we check if both onnx_model and compiled_so are not given?

Collaborator
Makes sense, and maybe if none are given, raise an error?
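A minimal sketch of the suggested check, following the snippet above (the exact error type is an assumption, not something decided in this PR):

if not onnx_model and not compiled_so:
    # Neither an .onnx model nor a compiled .so was supplied.
    raise ValueError("Provide either onnx_model or compiled_so.")
if onnx_model:
    args.model = onnx_model
if compiled_so:
    args.load_so = compiled_so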

@tungld (Collaborator) commented Jul 1, 2024

> onnxmlirrun.RunONNXModel.onnxmlirrun

My two cents here. For the package name (the first onnxmlirrun), just onnxmlir is enough, to avoid typing two Rs.
I would also remove RunONNXModel from the call path, but that does not seem straightforward unless we reorganize RunONNXModel.py.

Anyway, we have several Python utilities; perhaps it's time to reorganize them into a single Python package.
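For illustration only, the call could then shrink to something like the following; the package name onnxmlir and the function name run are hypothetical and not decided in this PR:

import onnxmlir

# Hypothetical shorter entry point after reorganizing the utilities.
r = onnxmlir.run(onnx_model="test_add.onnx", inputs=[a, b])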
