Recently, I have been getting more and more feature requests to run bundles for real-time inference in MONAI Label, MONAI Deploy, NVIDIA NIMs, etc.
There are two main blockers to supporting it:
- Our current inference examples are for batch inference, for example:
  https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/configs/inference.json
  We lazily instantiate all the components in the config and pre-define the whole datalist in the config.
  But for real-time inference, we should instantiate all the Python components defined in the config, keep the model idle on the GPU, and then wait for input data requests.
  Our current design can't change the config content once instantiated, because we do a `deep copy` during parsing:
  https://github.com/Project-MONAI/MONAI/blob/dev/monai/bundle/config_parser.py#L347
  I made a very hacky way to replace the input data; it works but is obviously not general for all bundles (see the first sketch after this list):
  `ConfigWorkflow.parser.ref_resolver.items["dataset"].config["data"][0] = input_data`
- [Optional] 3rd-party applications usually have their own `input` and `output` pipelines, so they need to remove or replace the `LoadImage` and `SaveImage` transforms in the bundle config. We only have `MERGE_KEY` and are missing a `delete` key (see the second sketch after this list):
  https://github.com/Project-MONAI/MONAI/blob/dev/monai/bundle/utils.py#L252
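For reference, here is a minimal sketch of how the hacky replacement above is used today. It assumes a spleen_ct_segmentation-style bundle whose `dataset` component holds its inputs in a `data` list; the config path and the request payload are placeholders:

```python
from monai.bundle import ConfigWorkflow

# instantiate the bundle workflow once so the model can stay loaded on the GPU
workflow = ConfigWorkflow(
    workflow_type="infer",
    config_file="models/spleen_ct_segmentation/configs/inference.json",  # placeholder path
)
workflow.initialize()

# hacky: overwrite the raw config of the already-parsed "dataset" item directly,
# because edits to parser.config are ignored after the deep copy during parsing
input_data = {"image": "/incoming/case_001.nii.gz"}  # placeholder request payload
workflow.parser.ref_resolver.items["dataset"].config["data"][0] = input_data

workflow.run()
workflow.finalize()
```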
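And a sketch of what the [Optional] point means in practice today: without a `delete` key, a 3rd-party app has to replace the whole preprocessing section just to drop `LoadImaged`. The component id `preprocessing`, the remaining transforms, and their arguments are assumptions based on the spleen bundle layout, not a fixed API:

```python
from monai.bundle import ConfigParser

parser = ConfigParser()
parser.read_config("models/spleen_ct_segmentation/configs/inference.json")  # placeholder path

# no "delete" override exists, so the whole "preprocessing" section is replaced,
# re-listing every transform except LoadImaged (the app feeds in-memory images
# from its own input pipeline); transform arguments here are illustrative only
parser["preprocessing"] = {
    "_target_": "Compose",
    "transforms": [
        {"_target_": "EnsureChannelFirstd", "keys": "image"},
        {"_target_": "Orientationd", "keys": "image", "axcodes": "RAS"},
        {"_target_": "ScaleIntensityRanged", "keys": "image",
         "a_min": -57, "a_max": 164, "b_min": 0.0, "b_max": 1.0, "clip": True},
    ],
}
preprocessing = parser.get_parsed_content("preprocessing")
```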
Could you please help investigate this problem so we can work out an ideal solution together?
It could be an important feature for MONAI 1.5.
Thanks in advance.