
Add responsive inference support in bundle #8134

@Nic-Ma

Description

Hi @ericspod , @KumoLiu ,

Recently I have received more and more feature requests to run bundles for real-time inference in MONAI Label, MONAI Deploy, NVIDIA NIMs, etc.
There are two main blockers to supporting this:

  1. Our current inference examples are designed for batch inference, for example:
    https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/configs/inference.json
    We use lazy instantiation for all the components in the config and pre-define the whole datalist in the config.
    But for real-time inference, we should instantiate all the Python components defined in the config once, keep the model idle on the GPU, and then wait for input data requests.
    Our current design can't change the config content once it is instantiated, because we make a deep copy during parsing:
    https://github.com/Project-MONAI/MONAI/blob/dev/monai/bundle/config_parser.py#L347
    I made a very hacky workaround to replace the input data; it works, but it is obviously not general enough for all bundles:
ConfigWorkflow.parser.ref_resolver.items["dataset"].config["data"][0] = input_data
  2. [Optional] Third-party applications usually have their own input and output pipelines, so they need to remove or replace the LoadImage and SaveImage transforms in the bundle config. We only have MERGE_KEY and are missing a delete key:
    https://github.com/Project-MONAI/MONAI/blob/dev/monai/bundle/utils.py#L252
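To illustrate blocker 1, here is a minimal plain-Python sketch (with a made-up config dict, not the real bundle schema) of why a deep copy taken at parse time makes later edits to the user-facing config invisible to the already-resolved components:

```python
# Minimal sketch of the deep-copy-on-parse behavior; the config content
# below is hypothetical, only the copy semantics matter.
from copy import deepcopy

raw_config = {"dataset": {"data": ["/path/old_image.nii.gz"]}}

# Parsing keeps its own deep copy of the config, as config_parser.py does:
parsed = deepcopy(raw_config)

# Mutating the original config after parsing has no effect on the parsed copy,
# so a new input filename never reaches the instantiated dataset:
raw_config["dataset"]["data"][0] = "/path/new_image.nii.gz"
assert parsed["dataset"]["data"][0] == "/path/old_image.nii.gz"
```

This is why the workaround above has to reach into the resolver's internal copy directly instead of editing the config it originally loaded.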
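For blocker 2, a delete key could work like the sketch below. Note that the `merge_with_delete` function and the `"-"` prefix marker are assumptions for illustration, not an existing MONAI API; only the merge behavior (MERGE_KEY) exists today:

```python
# Hypothetical sketch of a config override that supports deletion as well as
# merging. The "-" prefix as a delete marker is an assumption, not MONAI API.
def merge_with_delete(base: dict, overrides: dict) -> dict:
    """Apply overrides to base; keys prefixed with '-' delete the target key."""
    result = dict(base)
    for key, value in overrides.items():
        if key.startswith("-"):
            # Delete marker: drop the component, e.g. a LoadImage/SaveImage entry.
            result.pop(key[1:], None)
        else:
            result[key] = value
    return result

bundle_config = {"load": "LoadImaged", "save": "SaveImaged", "network": "UNet"}
# A third-party app removes the I/O transforms to use its own pipelines:
updated = merge_with_delete(bundle_config, {"-load": None, "-save": None})
print(updated)  # only the "network" entry remains
```

With such a marker, third-party applications could strip the bundled I/O transforms through the normal override mechanism instead of editing the config by hand.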

Could you please help investigate this problem so we can design an ideal solution together?
It could be an important feature for MONAI 1.5.

Thanks in advance.
