
[Yolact][ONNX Frontend][Partitioning Issue] Yolact - Instance segmentation model compilation issue #86

Open
abdulazizm opened this issue Jan 20, 2022 · 1 comment

abdulazizm commented Jan 20, 2022

Related to #85

I exported the Yolact model to ONNX and tried compiling it with the ONNX frontend:

onnx_model = onnx.load('yolact.onnx')
onnx.checker.check_model(onnx_model)

With mod = relay.transform.DynamicToStatic()(mod) applied just before mod = partition_for_vitis_ai(mod, params, dpu=target), compilation fails with:

File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/pyxir-0.3.2-py3.6-linux-x86_64.egg/pyxir/graph/layer/xlayer_factory.py", line 43, in factory_func
    d = register_func(attrs, in_xlayers)
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/pyxir-0.3.2-py3.6-linux-x86_64.egg/pyxir/graph/ops/l1_basic_nn.py", line 630, in sub
    shape = TensorShape(get_numpy_broadcasted_shape(lX.shapes[:], rX.shapes[:]))
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/pyxir-0.3.2-py3.6-linux-x86_64.egg/pyxir/shapes/tools.py", line 39, in get_numpy_broadcasted_shape
    " {} and {}".format(shape_a, shape_b))
ValueError: Invalid shapes for broadcasted additions: [-1, 18225, 81] and [1, 19248, 1]
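For reference, the failing shape check follows NumPy broadcasting rules: dimensions are compared right-to-left and each pair must be equal or 1. In the shapes above, axis 1 is 18225 vs 19248, which satisfies neither rule. A minimal reproduction with plain NumPy (using 1 in place of the dynamic -1 batch dimension):

```python
import numpy as np

# The shapes from the error: [-1, 18225, 81] vs [1, 19248, 1].
# Axis 1 (18225 vs 19248) is neither equal nor 1, so broadcasting fails.
try:
    np.broadcast_shapes((1, 18225, 81), (1, 19248, 1))
except ValueError as e:
    print("broadcast failed:", e)

# A pair that does broadcast: 19248 == 19248, and 81 vs 1.
print(np.broadcast_shapes((1, 19248, 81), (1, 19248, 1)))  # → (1, 19248, 81)
```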

FYI: without mod = relay.transform.DynamicToStatic()(mod) before mod = partition_for_vitis_ai(mod, params, dpu=target), a different error occurs:

  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/pyxir-0.3.2-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 437, in <listcomp>
    relay_shape = TensorShape([int(s.value) for s in list(ty.shape)])
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tvm-0.8.dev1859+g627e92e7c-py3.6-linux-x86_64.egg/tvm/runtime/object.py", line 67, in __getattr__
    raise AttributeError("%s has no attribute %s" % (str(type(self)), name))
AttributeError: tir.Any object has no attribute value
During handling of the above exception, another exception occurred:

AttributeError: <class 'tvm.tir.expr.Any'> has no attribute value
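The root cause here is that after Relay type inference the graph still contains dynamic dimensions, represented as tvm.tir.Any objects that have no .value attribute, so the unconditional int(s.value) in relay_l0_expr_and_others.py raises. A pure-Python sketch of the kind of guard that avoids this (the stand-in classes below are hypothetical illustrations, not the TVM API):

```python
class AnyDim:
    """Stand-in for tvm.tir.Any: a dynamic dimension with no .value."""
    pass

class IntDim:
    """Stand-in for a static dimension (like tvm.tir.IntImm)."""
    def __init__(self, value):
        self.value = value

def to_static_shape(shape):
    # Map static dims to ints and dynamic dims to -1, instead of calling
    # int(s.value) unconditionally (which raises AttributeError on Any).
    return [int(s.value) if hasattr(s, "value") else -1 for s in shape]

print(to_static_shape([IntDim(1), AnyDim(), IntDim(81)]))  # → [1, -1, 81]
```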

With the PyTorch frontend, compilation proceeds up to the 3rd step, but with the ONNX frontend it gets stuck at the 1st step:

  • RELAY IR TO PYXIR
  • LAYOUT TRANSFORMATION PASS
  • GRAPH IMPORTED FROM RELAY

jtuyls commented Feb 21, 2022

@abdulazizm, I put a fix for the yolact model in https://github.com/Xilinx/pyxir/tree/fix-yolact and used the following script. Could you verify whether this works for you? The performance isn't great yet, as the bilinear upsample layers prevent a large part of the model from being offloaded to the DPU. I am verifying whether these can be offloaded to the DPU as well.

import os
import sys
import numpy as np
import cv2
import time
from typing import List
from pathlib import Path
from PIL import Image

import onnx

import pyxir
# Register the DPU target; must match dpu_target below (DPUCAHX8H = Alveo U50/U50LV).
import pyxir.contrib.target.DPUCAHX8H

import tvm
from tvm import contrib
import tvm.relay as relay
from tvm.relay import transform
from tvm.contrib import utils, graph_executor as graph_runtime
from tvm.contrib.target import vitis_ai
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai

import logging
logging.basicConfig()
logger = logging.getLogger('pyxir')
# logger.setLevel(logging.DEBUG)
logger.setLevel(logging.INFO)


input_name  = 'input.1'
input_shape = (1, 3, 550, 550)
shape_dict  = {input_name:input_shape}
dpu_target  = "DPUCAHX8H-u50lv"
tvm_target  = 'llvm'
lib_kwargs  = {}


model_path = "yolact_resnet50.onnx"
onnx_model = onnx.load(model_path)
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
mod = relay.transform.InferType()(mod)

# import pdb; pdb.set_trace()

mod = partition_for_vitis_ai(mod, params, dpu=dpu_target)

# import pdb; pdb.set_trace()

export_rt_mod_file = os.path.join(os.getcwd(), 'vitis_ai.rtmod')
build_options = {
    'dpu': dpu_target,
    'export_runtime_module': export_rt_mod_file
}
with tvm.transform.PassContext(opt_level=3, config={'relay.ext.vitis_ai.options': build_options}):
    lib = relay.build(mod, tvm_target, params=params)

rt_mod = graph_runtime.GraphModule(lib["default"](tvm.cpu()))

## QUANTIZATION ##

def transform_image(image):
    # Mean/std normalization (ImageNet statistics), then HWC -> NCHW with batch dim.
    image = np.array(image) - np.array([123., 117., 104.])
    image /= np.array([58.395, 57.12, 57.375])
    image = image.transpose((2, 0, 1))
    image = image[np.newaxis, :]
    return image

def inputs_func(img_files: List[str]):
    inputs = []
    for img_path in img_files:
        img = Image.open(img_path)
        img = img.convert('RGB')
        img = img.resize(input_shape[2:])
        inputs.append(transform_image(img))
    return inputs

px_quant_size = int(os.environ.get('PX_QUANT_SIZE', 128))

print("Start OTF Quantization on first {} images".format(px_quant_size))
QUANT_DIR = "./data"
quant_files = [os.path.join(QUANT_DIR, f) for f in os.listdir(QUANT_DIR)
               if f.endswith(('JPEG', 'jpg', 'png'))][:px_quant_size]
quant_images = inputs_func(quant_files)
print('Loaded {} inputs successfully.'.format(len(quant_images)))

# Running the first px_quant_size inputs triggers on-the-fly quantization.
for i in range(px_quant_size):
    rt_mod.set_input(input_name, quant_images[i])
    rt_mod.run()

lib.export_library('tvm_dpu_cpu.so')

