
How to set the opset in a model generated by to_model_proto? #1559

@vincentme

Description


Behavior

The opset auto-generated in the model by to_model_proto is not consistent with the opset used in the individual subfunctions decorated with @script. The former is opset_import: ["this" : 1, "" : 21], while the latter is, say, 20, so onnx.checker complains.

I tried to set the opset in @script and in to_model_proto(), but neither seems to work.

Code to reproduce

from onnxscript import opset20 as op
from onnxscript import FLOAT
from onnxscript import script
from onnx import TensorProto

@script()
def to_float32(x):
    return op.Cast(x, to=TensorProto.FLOAT)

@script()
def self_norm(x):
    x_max = op.ReduceMax(x)
    x_min = op.ReduceMin(x)
    y = (x - x_min) / (x_max - x_min)

    return y, x_max, x_min

@script()
def algorithm(img_in: FLOAT['row', 'col']) -> tuple[FLOAT['row', 'col'], FLOAT[1, 1], FLOAT[1, 1]]:
    img_out, img_in_max, img_in_min = self_norm(to_float32(img_in))
    return img_out, img_in_max, img_in_min


if __name__ == '__main__':
    import numpy as np
    import onnx
    
    img_in_np = (np.random.random_sample((3, 4))*100).astype(np.uint16)
    
    img_out, img_in_max, img_in_min = algorithm(img_in_np)
    
    model = algorithm.to_model_proto()
    model = onnx.shape_inference.infer_shapes(model)
    print(onnx.printer.to_text(model))
    onnx.checker.check_model(model)

Error message

ValidationError: Opset import for domain  in function op Castis not compatible with the version imported by model. FunctionOp imports version 20 whereas model imports version 21


    Labels

    bug (Something isn't working), contribution welcome (We welcome code contributions for this)
