ENG-12182: Updates based on peer review feedback
bredamc committed Oct 16, 2024
1 parent 5e424d9 commit af3b1de
Showing 11 changed files with 19 additions and 19 deletions.
@@ -39,7 +39,7 @@ endif::[]
endif::[]

ifdef::cloud-service[]
-* You have installed the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/installing_and_uninstalling_{url-productname-short}/installing-and-deploying-openshift-ai_install#installing-distributed-workload-components_component-install[Installing the distributed workloads components]
+* You have installed the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/installing_and_uninstalling_{url-productname-short}/installing-and-deploying-openshift-ai_install#installing-distributed-workload-components_component-install[Installing the distributed workloads components].
endif::[]


2 changes: 1 addition & 1 deletion modules/configuring-the-codeflare-operator.adoc
@@ -32,7 +32,7 @@ endif::[]
endif::[]

ifdef::cloud-service[]
-* You have installed the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/installing_and_uninstalling_{url-productname-short}/installing-and-deploying-openshift-ai_install#installing-distributed-workload-components_component-install[Installing the distributed workloads components]
+* You have installed the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/installing_and_uninstalling_{url-productname-short}/installing-and-deploying-openshift-ai_install#installing-distributed-workload-components_component-install[Installing the distributed workloads components].
endif::[]


4 changes: 2 additions & 2 deletions modules/configuring-the-training-job.adoc
@@ -17,10 +17,10 @@ ifdef::cloud-service[]
endif::[]

ifndef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

ifndef::upstream[]
@@ -10,10 +10,10 @@ If you do not want to run distributed workloads from notebooks, you can skip thi

.Prerequisites
ifndef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

ifndef::upstream[]
4 changes: 2 additions & 2 deletions modules/installing-the-distributed-workloads-components.adoc
@@ -215,9 +215,9 @@ When the status of the *codeflare-operator-manager-_<pod-id>_*, *kuberay-operato

.Next Step
ifdef::upstream[]
-Configure the distributed workloads feature as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+Configure the distributed workloads feature as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifndef::upstream[]
-Configure the distributed workloads feature as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+Configure the distributed workloads feature as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]

4 changes: 2 additions & 2 deletions modules/monitoring-the-training-job.adoc
@@ -11,10 +11,10 @@ The example training job in this section is based on the IBM and Hugging Face tu
.Prerequisites

ifndef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

ifndef::upstream[]
@@ -29,7 +29,7 @@ endif::[]


.Procedure
-. Configure the disconnected data science cluster to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+. Configure the disconnected data science cluster to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
. In the `ClusterConfiguration` section of the notebook or pipeline, ensure that the `image` value specifies a Ray cluster image that you can access from the disconnected environment:
* Notebooks use the Ray cluster image to create a Ray cluster when running the notebook.
* Pipelines use the Ray cluster image to create a Ray cluster during the pipeline run.
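
The `image` setting described above can be sketched with the CodeFlare SDK's `ClusterConfiguration`. This is a hypothetical configuration fragment, not part of this commit: the registry host, image tag, namespace, and worker count are illustrative assumptions, and parameter names can vary between SDK versions.

[source,python]
----
# Hypothetical sketch: pin the Ray cluster image to a copy hosted in a
# registry that is reachable from the disconnected environment.
# All values below are placeholders, not tested settings.
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",
    namespace="my-project",
    num_workers=2,
    # Mirrored Ray image in the disconnected registry (assumed host and tag):
    image="registry.example.com/mirror/ray:2.35.0-py311",
))
cluster.up()  # creates the Ray cluster pods from the mirrored image
----

When the notebook or pipeline step runs, the Ray cluster pods pull this image instead of a public one, so the image must be mirrored into the registry beforehand.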
@@ -8,10 +8,10 @@ To run a distributed workload from a pipeline, you must first update the pipelin

.Prerequisites
ifndef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]


@@ -11,10 +11,10 @@ In the examples in this procedure, you edit the demo notebooks to provide the re

.Prerequisites
ifndef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

* You can access the following software from your data science cluster:
4 changes: 2 additions & 2 deletions modules/running-the-training-job.adoc
@@ -11,10 +11,10 @@ The example training job in this section is based on the IBM and Hugging Face tu
.Prerequisites

ifndef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

ifndef::upstream[]
4 changes: 2 additions & 2 deletions modules/viewing-kueue-alerts-for-distributed-workloads.adoc
@@ -18,10 +18,10 @@ ifdef::cloud-service[]
endif::[]

ifndef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/managing_openshift_ai/managing_distributed_workloads[Managing distributed workloads].
endif::[]
ifdef::upstream[]
-* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads_cluster-admin[Managing distributed workloads].
+* You can access a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/managing-openshift-ai/#managing_distributed_workloads[Managing distributed workloads].
endif::[]

ifndef::upstream[]
