Intel GPU support via target_device parameter #74
Conversation
We should add some testing in our CI for the target_device support.
Something like this: https://github.com/triton-inference-server/server/blob/main/qa/L0_libtorch_inference_mode/test.sh
We could call it L0_openvino_target_device?
-  LOG_IF_ERROR(
-      ReadParameter(params, "CPU_EXTENSION_PATH", &(cpu_ext_path)),
-      "error when reading parameters");
+  ReadParameter(params, "CPU_EXTENSION_PATH", &(cpu_ext_path));
What is the reason for removing the error checking on reading and parsing the parameters, here and everywhere?
That is not related to the GPU support, but those error messages were misleading. The error suggests that something is wrong with the setup, while it is perfectly fine to omit those parameters.
Suggestion: update ReadParameter to take an argument indicating whether a missing value is OK, or to provide a default value:
RETURN_IF_ERROR(ReadParameter(params, "OPTIONAL_KEY", &(cpu_ext_path), /*default_value=*/"foo"));
That way the intent is a little clearer.
Note: suggestion only.
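
One way that overload could look, assuming the backend reads parameters out of a `TritonJson::Value` the way other Triton backends do (the signature and the `Find`/`MemberAsString` usage below are a sketch, not the backend's actual code):

TRITONSERVER_Error*
ReadParameter(
    triton::common::TritonJson::Value& params, const std::string& key,
    std::string* value, const std::string& default_value)
{
  triton::common::TritonJson::Value json_value;
  if (params.Find(key.c_str(), &json_value)) {
    // The key is present, so a parse failure is still a real error.
    RETURN_IF_ERROR(json_value.MemberAsString("string_value", value));
  } else {
    // The key is absent: fall back to the default instead of logging an error.
    *value = default_value;
  }
  return nullptr;  // success
}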
@@ -88,6 +98,7 @@ to skip the dynamic batch sizes in backend.
* `ENABLE_BATCH_PADDING`: By default an error will be generated if the backend receives a request with a batch size less than the max_batch_size specified in the configuration. This error can be avoided at a cost of performance by specifying the `ENABLE_BATCH_PADDING` parameter as `YES`.
* `RESHAPE_IO_LAYERS`: By setting this parameter as `YES`, the IO layers are reshaped to the dimensions provided in the model configuration. By default, the dimensions in the model are used.
* `TARGET_DEVICE`: Choose the OpenVINO device for running the inference. It could be CPU (default), GPU or any of the virtual devices like AUTO, MULTI, HETERO. Note: using an Intel GPU is possible only if `--device /dev/dri` is passed to the container, and it is supported only on Linux with the x86_64 arch.
Does OpenVINO support models whose computation is spread across CPU and GPU cores?
Yes, that is possible by using the virtual target device MULTI, which load-balances requests across devices, or the HETERO target device, which spreads a single inference across devices.
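
For illustration only, such a setup might look like this in the model config with the `TARGET_DEVICE` parameter proposed in this PR (the device string follows standard OpenVINO notation; treat this as a sketch, not a tested configuration):

parameters: {
  key: "TARGET_DEVICE"
  value: { string_value: "HETERO:GPU,CPU" }
}

With `HETERO:GPU,CPU`, OpenVINO runs the parts of the graph that the GPU supports on the GPU and falls back to the CPU for the rest; `MULTI:GPU,CPU` would instead load-balance whole inference requests across both devices.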
  {
-   if (Kind() != TRITONSERVER_INSTANCEGROUPKIND_CPU) {
+   if ((Kind() != TRITONSERVER_INSTANCEGROUPKIND_CPU) &&
I believe this check needs to be updated. From what I see, we support the following kinds (see the sketch after this list):
- CPU
- GPU
- AUTO: if GPU cores are available and the model supports GPU deployment, use the GPU; otherwise use the CPU.
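
A hedged sketch of the extended condition; the kind constants are real Triton enums from tritonserver.h, while the error message and surrounding function are assumed:

if ((Kind() != TRITONSERVER_INSTANCEGROUPKIND_CPU) &&
    (Kind() != TRITONSERVER_INSTANCEGROUPKIND_GPU) &&
    (Kind() != TRITONSERVER_INSTANCEGROUPKIND_AUTO)) {
  // Any other kind (e.g. KIND_MODEL) is rejected.
  return TRITONSERVER_ErrorNew(
      TRITONSERVER_ERROR_INVALID_ARG,
      "openvino backend supports only KIND_CPU, KIND_GPU and KIND_AUTO "
      "instance groups");
}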
(Comment on the same `TARGET_DEVICE` README hunk quoted above.)
I don't think it is a good idea to introduce a model-level TARGET_DEVICE parameter in the model config. Triton allows you to have multiple model instances, and each instance can specify which device to use for inference.
So they can have model A, which can specify something like below:
instance_group [
{
count: 1
kind: KIND_GPU
},
{
count: 1
kind: KIND_CPU
}
]
You can rely on the information from the TRITONBACKEND_ModelInstanceKind API call to determine the kind of the instance.
If the kind is CPU and the model is not yet loaded (within model_state_), load the model on CPU and use it.
If the kind is GPU and the model is not yet loaded (within model_state_), load the model on GPU and use it.
This allows sharing the model across the model instances; see the sketch below.
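
A minimal sketch of that flow inside instance initialization. The `TRITONBACKEND_ModelInstanceKind` call is the real Triton core API; `GetOrCompileModel` is a hypothetical helper on the shared model state:

// Ask Triton which kind this instance was configured with.
TRITONSERVER_InstanceGroupKind kind;
RETURN_IF_ERROR(TRITONBACKEND_ModelInstanceKind(instance, &kind));

// Map the instance kind onto an OpenVINO device string.
const std::string device =
    (kind == TRITONSERVER_INSTANCEGROUPKIND_GPU) ? "GPU" : "CPU";

// Hypothetical helper: compiles the network for `device` on first use and
// returns the cached compiled model afterwards, so KIND_CPU and KIND_GPU
// instances share a single ModelState.
auto compiled_model = model_state_->GetOrCompileModel(device);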
Even if the use case of loading two separate kinds of model instances is not required, I would still prefer using the TRITONBACKEND_ModelInstanceKind API instead of another config parameter.
In OpenVINO, the target device offers more options than just the selection between CPU and GPU. It can also be a virtual device like MULTI, HETERO, AUTO or BATCH, and it can include extra options like a priority list, e.g. AUTO:GPU,CPU. My understanding is that setting the kind to GPU validates whether a CUDA GPU is present, so that could not be used with other types of devices like an iGPU. I couldn't find a way to use KIND as the target device without changes outside of the openvino backend code.
> In OpenVINO, the target device offers more options than just the selection between CPU and GPU. [...]
@tanmayv25 do you think we can move forward? Do you need more input from @dtrawins?
@ryanloney - proposal in the works - will update
We are in the middle of discussing how to best support new hardware platforms. Directing this to @nnshah1.
I think we can update the model configuration to accommodate these fields. It would help keep the device specification in a single place. The benefit is that in the future we might get other backends targeting these devices besides the current OpenVINO one.
Functional tests are under preparation in another PR. They should be easy to integrate into the CI.
pending discussion