Fix some of the errors I've encountered while following tutorials #26

Status: Open · wants to merge 3 commits into base `main`
2 changes: 1 addition & 1 deletion Conceptual_Guide/Part_1-model_deployment/client.py
@@ -207,6 +207,6 @@ def recognition_postprocessing(scores: np.ndarray) -> str:
)

# Process response from recognition model
-final_text = recognition_postprocessing(recognition_response.as_numpy("308"))
+final_text = recognition_postprocessing(recognition_response.as_numpy("307"))

print(final_text)
Empty file.
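A note on the `"308"` → `"307"` renames throughout this PR: PyTorch's ONNX exporter auto-numbers unnamed graph tensors, so the recognition model's output name can differ between PyTorch versions. A quick way to confirm the real name before editing configs (a sketch, assuming the `str.onnx` file exported in the tutorial):

```python
# Print the exported graph's input and output tensor names.
import onnx

model = onnx.load("str.onnx")
print([i.name for i in model.graph.input])   # e.g. ['input.1']
print([o.name for o in model.graph.output])  # e.g. ['307']
```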
@@ -36,7 +36,7 @@ input [
]
output [
  {
-    name: "308"
+    name: "307"
    data_type: TYPE_FP32
    dims: [ 1, 26, 37 ]
  }
@@ -110,7 +110,7 @@ model.load_state_dict(state)

# Create ONNX file by tracing model
trace_input = torch.randn(1, 1, 32, 100)
-torch.onnx.export(model, trace_input, "str.onnx", verbose=True, dynamic_axes={'input.1':[0],'308':[0]})
+torch.onnx.export(model, trace_input, "str.onnx", verbose=True, dynamic_axes={'input.1':[0],'307':[0]})
```

### Launching the server
@@ -231,7 +231,7 @@ Request concurrency: 16
```
As each request had a batch size of 2 while the maximum batch size of the model was 8, dynamically batching these requests resulted in considerably improved throughput. Another consequence is a reduction in latency, which can be primarily attributed to reduced time spent waiting in the queue: since requests are batched together, multiple requests can be processed in parallel.

-* **Dynamic Batching with multiple model instances**: To set up the Triton Server in this configuration, add `instance_group` in `config.pbtxt` and make sure to include `--gpus=1` and make sure to include `--gpus=1` in the `docker run` command to set up the server. Include `dynamic_batching` per instructions of the previous section in the model configuration. A point to note is that peak GPU utilization on the GPU shot up to 74% (A100 in this case) while just using a single model instance with dynamic batching. Adding one more instance will definitely improve performance but linear perf scaling will not be achieved in this case.
+* **Dynamic Batching with multiple model instances**: To set up the Triton Server in this configuration, add `instance_group` in `config.pbtxt` and make sure to include `--gpus=1` in the `docker run` command to set up the server. Include `dynamic_batching` per instructions of the previous section in the model configuration. A point to note is that peak GPU utilization on the GPU shot up to 74% (A100 in this case) while just using a single model instance with dynamic batching. Adding one more instance will definitely improve performance but linear perf scaling will not be achieved in this case.
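For reference, a minimal sketch of what that combined configuration could look like in `config.pbtxt` (the instance count and queue delay are illustrative assumptions, not the tutorial's exact values):

```
instance_group [
  {
    count: 2
    kind: KIND_GPU
  }
]
dynamic_batching {
  max_queue_delay_microseconds: 100
}
```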

```
# Query
@@ -36,7 +36,7 @@ input [
]
output [
  {
-    name: "308"
+    name: "307"
    data_type: TYPE_FP32
    dims: [ 26, 37 ]
  }
4 changes: 2 additions & 2 deletions Conceptual_Guide/Part_5-Model_Ensembles/README.md
@@ -316,10 +316,10 @@ ensemble_scheduling {
We'll again be launching Triton using Docker containers. This time, we'll start an interactive session within the container instead of directly launching the Triton server.

```bash
-docker run --gpus=all -it --shm-size=256m --rm \
+docker run --gpus=all -it --shm-size=512m --rm \
   -p8000:8000 -p8001:8001 -p8002:8002 \
   -v ${PWD}:/workspace/ -v ${PWD}/model_repository:/models \
-  nvcr.io/nvidia/tritonserver:22.12-py3
+  nvcr.io/nvidia/tritonserver:yy.mm-py3
```
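Here `yy.mm` is a placeholder for the Triton container release tag (for example, `nvcr.io/nvidia/tritonserver:23.04-py3`); substitute whichever release you want to pull from NGC.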

We'll need to install a couple of dependencies for our Python backend scripts.
@@ -100,7 +100,7 @@ ensemble_scheduling {
value: "cropped_images"
}
output_map {
key: "308"
key: "307"
value: "recognition_output"
}
},
@@ -36,7 +36,7 @@ input [
]
output [
  {
-    name: "308"
+    name: "307"
    data_type: TYPE_FP32
    dims: [ 26, 37 ]
  }
@@ -45,5 +45,5 @@
    trace_input,
    model_directory / "model.onnx",
    verbose=True,
-    dynamic_axes={"input.1": [0], "308": [0]},
+    dynamic_axes={"input.1": [0], "307": [0]},
)
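After exporting, it's worth double-checking that the graph's I/O names match what `config.pbtxt` expects and that the batch axis is actually dynamic. A minimal smoke test (a sketch, assuming the `model.onnx` path from the snippet above and a CPU onnxruntime build):

```python
# Run two batch sizes through the exported model; the leading output
# dimension should follow the batch size if the dynamic axis took effect.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in sess.get_inputs()], [o.name for o in sess.get_outputs()])
for batch in (1, 4):
    x = np.random.randn(batch, 1, 32, 100).astype(np.float32)
    (out,) = sess.run(["307"], {"input.1": x})
    print(batch, out.shape)
```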
@@ -40,11 +40,6 @@ output [
name: "last_hidden_state"
data_type: TYPE_FP32
dims: [-1, -1]
},
{
name: "1519"
data_type: TYPE_FP32
dims: [768]
}
]
ensemble_scheduling {
@@ -72,10 +67,6 @@ ensemble_scheduling {
key: "last_hidden_state"
value: "last_hidden_state"
}
output_map {
key: "1519"
value: "1519"
}
}
]
}
8 changes: 4 additions & 4 deletions HuggingFace/python_model_repository/python_vit/1/model.py
@@ -30,22 +30,22 @@

class TritonPythonModel:
    def initialize(self, args):
-        self.feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k').to("cuda")
+        self.feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')#.to("cuda")
        self.model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k").to("cuda")

    def execute(self, requests):
        responses = []
        for request in requests:
            inp = pb_utils.get_input_tensor_by_name(request, "image")
            input_image = np.squeeze(inp.as_numpy()).transpose((2,0,1))
-            inputs = self.feature_extractor(images=input_image, return_tensors="pt")
+            inputs = self.feature_extractor(images=input_image, return_tensors="pt").to("cuda")

            outputs = self.model(**inputs)

            inference_response = pb_utils.InferenceResponse(output_tensors=[
                pb_utils.Tensor(
-                    "label",
-                    outputs.last_hidden_state.numpy()
+                    "last_hidden_state",
+                    outputs.last_hidden_state.detach().cpu().numpy()
                )
            ])
            responses.append(inference_response)
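The `.detach().cpu().numpy()` chain in the change above matters: a CUDA tensor that still carries autograd history cannot be converted to NumPy directly. A standalone illustration (assumes a CUDA device; the shape is just an example matching ViT's hidden states):

```python
import torch

t = torch.randn(1, 197, 768, device="cuda", requires_grad=True)
# t.numpy() would raise here: NumPy needs a CPU tensor with no grad attached.
arr = t.detach().cpu().numpy()  # drop autograd history, copy to host, convert
print(arr.shape)  # (1, 197, 768)
```

Wrapping the model call in `torch.no_grad()` would avoid recording gradients in the first place, which is another common choice for inference code.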
2 changes: 1 addition & 1 deletion Quick_Deploy/ONNX/client.py
@@ -39,7 +39,7 @@ def rn50_preprocess(img_path="img1.jpg"):
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
-    return np.expand_dims(preprocess(img).numpy(),axis=0)
+    return np.expand_dims(preprocess(img).numpy(), axis=0).squeeze()

transformed_img = rn50_preprocess()

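The net effect of this change is that the client now returns an unbatched `[3, 224, 224]` array, which lines up with the `dims` and `reshape` in the new `config.pbtxt` added below. A quick shape check (illustrative values only):

```python
import numpy as np

img = np.zeros((3, 224, 224), dtype=np.float32)  # CHW array from the transforms
batched = np.expand_dims(img, axis=0)            # shape (1, 3, 224, 224)
print(batched.squeeze().shape)                   # (3, 224, 224): what the client now sends
```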
44 changes: 44 additions & 0 deletions Quick_Deploy/ONNX/model_repository/densenet_onnx/config.pbtxt
@@ -0,0 +1,44 @@
# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

name: "densenet_onnx"
platform: "onnxruntime_onnx"
max_batch_size : 0
input [
{
name: "data_0"
data_type: TYPE_FP32
dims: [ 3, 224, 224 ]
reshape { shape: [ 1, 3, 224, 224 ] }
}
]
output [
{
name: "fc6_1"
data_type: TYPE_FP32
dims: [ 1, 1000 ,1, 1]
}
]
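With `max_batch_size: 0`, Triton applies no implicit batching, and the `reshape` property hands the client's `[3, 224, 224]` input to ONNX Runtime as `[1, 3, 224, 224]`. A sketch of a matching client request (the server URL and random input are assumptions for illustration):

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
inp = httpclient.InferInput("data_0", [3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(3, 224, 224).astype(np.float32))
out = httpclient.InferRequestedOutput("fc6_1")
result = client.infer(model_name="densenet_onnx", inputs=[inp], outputs=[out])
print(result.as_numpy("fc6_1").shape)  # expect (1, 1000, 1, 1)
```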