4 files changed, +5 −5 lines changed
```diff
 # Triton Inference Server
 
-**LATEST RELEASE: You are currently on the master branch which tracks
+**LATEST RELEASE: You are currently on the main branch which tracks
 under-development progress towards the next release. The latest
 release of the Triton Inference Server is 2.10.0 and is available on
 branch
```
```diff
@@ -69,7 +69,7 @@ Triton and so does not appear in /opt/tritonserver/backends).
 The first step for any build is to checkout the
 [triton-inference-server/server](https://github.com/triton-inference-server/server)
 repo branch for the release you are interested in building (or the
-master branch to build from the development branch). Then run build.py
+master/main branch to build from the development branch). Then run build.py
 as described below. The build.py script performs these steps when
 building with Docker.
@@ -129,7 +129,7 @@ without Docker.
 The first step for any build is to checkout the
 [triton-inference-server/server](https://github.com/triton-inference-server/server)
 repo branch for the release you are interested in building (or the
-master branch to build from the development branch). Then run build.py
+master/main branch to build from the development branch). Then run build.py
 as described below. The build.py script will perform the following
 steps (note that if you are building with Docker that these same steps
 will be performed during the Docker build within the
```
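The checkout-then-build flow this hunk describes can be sketched as a short script. Everything below is an illustrative assumption, not the project's required invocation: `triton_build_commands` and `run_build` are hypothetical helper names, the flag-free `build.py` call is a placeholder (real builds pass the flags documented in the repo), and the branch defaults to `main` only because that is what this PR renames the development branch to.

```python
import subprocess


def triton_build_commands(branch="main", repo_dir="server"):
    """Return the command sequence the build docs describe:
    clone the server repo, checkout the release branch you want
    (or main for the development branch), then run build.py.
    The bare build.py invocation is a placeholder for a real
    flag set."""
    return [
        ["git", "clone",
         "https://github.com/triton-inference-server/server.git", repo_dir],
        ["git", "-C", repo_dir, "checkout", branch],
        ["python3", f"{repo_dir}/build.py"],
    ]


def run_build(branch="main"):
    # Executes each step in order; kept separate from the command
    # list so the sequence itself can be inspected without network access.
    for cmd in triton_build_commands(branch):
        subprocess.run(cmd, check=True)
```

Splitting the command list from its execution keeps the clone/checkout/build ordering visible and testable even on a machine where the actual build cannot run.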
```diff
@@ -39,7 +39,7 @@ protocols](https://github.com/kubeflow/kfserving/tree/master/docs/predict-api/v2)
 that have been proposed by the [KFServing
 project](https://github.com/kubeflow/kfserving). To fully enable all
 capabilities Triton also implements a number [HTTP/REST and GRPC
-extensions](https://github.com/triton-inference-server/server/tree/master/docs/protocol).
+extensions](https://github.com/triton-inference-server/server/tree/main/docs/protocol).
 to the KFServing inference protocol.
 
 The HTTP/REST and GRPC protcols provide endpoints to check server and
```
````diff
@@ -221,7 +221,7 @@ configuration file.
 When using --strict-model-config=false you can see the model
 configuration that was generated for a model by using the [model
 configuration
-endpoint](https://github.com/triton-inference-server/server/blob/master/docs/protocol/extension_model_configuration.md). The
+endpoint](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_model_configuration.md). The
 easiest way to do this is to use a utility like *curl*:
 
 ```bash
````
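The model-configuration extension linked in this hunk exposes the generated configuration at `v2/models/<model>[/versions/<version>]/config`. A helper sketching that URL shape (`model_config_url` is a hypothetical name, not part of any Triton client library):

```python
def model_config_url(host, model, version=None):
    """URL for Triton's model-configuration extension:
    GET v2/models/<model>[/versions/<version>]/config."""
    path = f"/v2/models/{model}"
    if version is not None:
        path += f"/versions/{version}"
    return f"http://{host}{path}/config"
```

The curl invocation the truncated ```bash block presumably leads into would then take the form `curl localhost:8000/v2/models/<model name>/config`, with the host and port depending on how the server was started.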