This repository has been archived by the owner on Jan 31, 2024. It is now read-only.

Conversation

shalberd
Contributor

@shalberd shalberd commented Jul 6, 2023

related to: opendatahub-io/kubeflow#113 (imagePullPolicy)

also related to opendatahub-io/opendatahub-operator#572 (consistent and absolutely stable version of ose-oauth-proxy across all components in terms of sha256 digest)

docker pull registry.redhat.io/openshift4/ose-oauth-proxy:v4.10
v4.10: Pulling from openshift4/ose-oauth-proxy
1df162fae087: Already exists 
d20a374ee8f7: Already exists 
e88a8f7a57fc: Pull complete 
dcbbfecd111e: Pull complete 
Digest: sha256:ab112105ac37352a2a4916a39d6736f5db6ab4c29bad4467de8d613e80e9bb33
Status: Downloaded newer image for registry.redhat.io/openshift4/ose-oauth-proxy:v4.10
registry.redhat.io/openshift4/ose-oauth-proxy:v4.10

Using imagePullPolicy: IfNotPresent ensures that the cached image versions on the nodes are used for the tags of the ODH notebook controller and the Kubeflow notebook controller. Since we are using static tags here and not :latest, imagePullPolicy: IfNotPresent increases the stability of the cluster and reduces image load times.
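
As a minimal sketch of what this means in a controller Deployment (the image name and tag below are illustrative placeholders, not the exact manifest values):

containers:
- name: manager
  # illustrative static, versioned tag (not :latest)
  image: quay.io/opendatahub/odh-notebook-controller:v1.7.0
  # with a static tag, the kubelet reuses the copy already cached on the node
  imagePullPolicy: IfNotPresent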

Description

Changed imagePullPolicy of odh notebook controller manager and kubeflow notebook controller manager to IfNotPresent.
Changed the ODH notebook controller argument for ose-oauth-proxy to the sha256 digest notation equivalent of ose-oauth-proxy:v4.10, i.e. the latest digest. This is done because the content behind the v4.10 tag changes over time, so having a unique digest is important both for disconnected cluster support and for ensuring that the ose-oauth-proxy image cached on the OpenShift nodes is always the same.

Using the sha256 manifest list digest value of v4.10 as of July 6, 2023:

https://catalog.redhat.com/software/containers/openshift4/ose-oauth-proxy/5cdb2133bed8bd5717d5ae64?architecture=amd64&tag=v4.10.0-202306170106.p0.g799d414.assembly.stream&push_date=1688610772000&container-tabs=gti

Static tags are used for the ODH notebook controller (https://github.com/opendatahub-io/odh-manifests/blob/master/odh-notebook-controller/odh-notebook-controller/base/kustomization.yaml#L9) and for the Kubeflow notebook controller (https://github.com/opendatahub-io/odh-manifests/blob/master/odh-notebook-controller/kf-notebook-controller/overlays/openshift/kustomization.yaml#L14), so the node image cache can be used; a sketch of this pinning follows below.
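
A rough sketch of that pinning (image names and tags below are illustrative, the linked kustomization files are authoritative), using the standard kustomize images transformer:

images:
- name: quay.io/opendatahub/odh-notebook-controller
  # illustrative static pin; see the linked kustomization.yaml#L9 for the real value
  newTag: v1.7.0
- name: quay.io/opendatahub/kubeflow-notebook-controller
  # illustrative static pin; see the linked kustomization.yaml#L14 for the real value
  newTag: v1.7.0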

How Has This Been Tested?

The changes were made manually in odh-manifests, tarred and zipped, and used on an on-prem OCP cluster. This led to reduced startup times of the components.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

…ger imagePullPolicy IfNotPresent. Also unique sha256 ref to sidecar image
@openshift-ci openshift-ci bot requested review from atheo89 and harshad16 July 6, 2023 18:09
@openshift-ci

openshift-ci bot commented Jul 6, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign lavlas for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci

openshift-ci bot commented Jul 6, 2023

Hi @shalberd. Thanks for your PR.

I'm waiting for an opendatahub-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@harshad16
Member

/ok-to-test

 command:
 - /manager
-args: ["--oauth-proxy-image", "registry.redhat.io/openshift4/ose-oauth-proxy:v4.10"]
+# latest v4.10 manifest list digest for architecture AMD64. Used instead of the tag format to be compatible with imagePullPolicy: IfNotPresent in ose-oauth sidecars
+args: ["--oauth-proxy-image", "registry.redhat.io/openshift4/ose-oauth-proxy@sha256:ab112105ac37352a2a4916a39d6736f5db6ab4c29bad4467de8d613e80e9bb33"]
Member


Are you sure we need to use the manifest list digest and not the image digest itself?
The image digest shown in the catalog seems to be different.
I observed this on opendatahub-io/kubeflow#113 as well.

Contributor Author

@shalberd shalberd Jul 10, 2023


@atheo89 Yes, the manifest list digest is the digest used for pulls. I had to do some background research on this myself half a year ago. You cannot pull an image by just its image digest; the manifest contains the information on layers, the repository location and so on, and the image digest never equals the manifest list digest. You can pull an image by a tag, e.g. v4.10, and what you get back is what is referred to as the manifest list digest. If you docker inspect an image pulled by tag, you see both the sha256 image ID section and the repo digest section that includes the manifest digest.

See also https://www.opensourcerers.org/2020/11/16/container-images-multi-architecture-manifests-ids-digests-whats-behind/
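
In container-spec terms, with the digest from the docker pull output above, the pullable by-digest reference looks like this:

# the manifest list digest is what the registry reports when the v4.10 tag is pulled,
# so this reference resolves to the same content on every node
image: registry.redhat.io/openshift4/ose-oauth-proxy@sha256:ab112105ac37352a2a4916a39d6736f5db6ab4c29bad4467de8d613e80e9bb33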

Contributor Author


Another link with an explanation saying the same thing:

https://learn.microsoft.com/en-us/azure/container-registry/container-registry-concepts

To address a registry artifact for push and pull operations with Docker or other client tools, combine the fully qualified registry name, repository name (including namespace path if applicable), and an artifact tag or manifest digest.

Manifest digest

Manifests are identified by a unique SHA-256 hash, or manifest digest. Each image or artifact--whether tagged or not--is identified by its digest. The digest value is unique even if the artifact's layer data is identical to that of another artifact. This mechanism is what allows you to repeatedly push identically tagged images to a registry. For example, you can repeatedly push myimage:latest to your registry without error because each image is identified by its unique digest.

Contributor Author

@shalberd shalberd Jul 10, 2023


Maybe @Jooho, for example, can also give input.

I went to https://catalog.redhat.com/software/containers/openshift4/ose-oauth-proxy/5cdb2133bed8bd5717d5ae64?architecture=amd64&tag=v4.10.0-202306170106.p0.g799d414.assembly.stream&push_date=1688610772000&container-tabs=gti, searched for the v4.10 tag and, under the 'Get this image' tab, used the Manifest List Digest as of July 6.

See also the output of docker pull on July 6; the reported digest was identical to the manifest list digest:

docker pull registry.redhat.io/openshift4/ose-oauth-proxy:v4.10
v4.10: Pulling from openshift4/ose-oauth-proxy
1df162fae087: Already exists 
d20a374ee8f7: Already exists 
e88a8f7a57fc: Pull complete 
dcbbfecd111e: Pull complete 
Digest: sha256:ab112105ac37352a2a4916a39d6736f5db6ab4c29bad4467de8d613e80e9bb33
Status: Downloaded newer image for registry.redhat.io/openshift4/ose-oauth-proxy:v4.10
registry.redhat.io/openshift4/ose-oauth-proxy:v4.10

Member


Great, and thank you for providing the references! I am OK with the manifest list digest; I just wanted to confirm which of the two options is more appropriate.

Contributor


While we should be updating the ose-oauth-proxy image at a regular cadence, we should not be hardcoding the image digest into the container command argument of the Deployment object. It should be replaced with an environment variable that can be configured when the kustomize manifest is processed. That would also allow us to set it as a CSV relatedImage or have it modified by the new operator.
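
As a rough sketch of that idea (the variable name and wiring below are hypothetical, not an existing convention in these manifests), the Deployment could take the image from an environment variable and expand it in the args:

containers:
- name: manager
  env:
  # hypothetical variable; could be set via kustomize, a CSV relatedImage, or the new operator
  - name: RELATED_IMAGE_OSE_OAUTH_PROXY
    value: registry.redhat.io/openshift4/ose-oauth-proxy@sha256:ab112105ac37352a2a4916a39d6736f5db6ab4c29bad4467de8d613e80e9bb33
  args:
  # Kubernetes expands $(VAR) references in args from the container's env vars
  - "--oauth-proxy-image=$(RELATED_IMAGE_OSE_OAUTH_PROXY)"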

Contributor Author

@shalberd shalberd Jul 24, 2023


@LaVLaS I kept that out of scope for this PR, since not having the oauth image as a command-line argument of a container is more of a conceptual change that the component makers should make (in the ODH kubeflow GitHub repo as well). The reason I added the digest is to be IfNotPresent-compatible for a regularly changing image behind the v4.10 tag. The same thing could be argued for PR 869. But I now realize that IfNotPresent refers to the main ODH notebook controller container image and not to the ose-oauth container referenced here, so it is admittedly a little misplaced. What do you think about including your suggestion in a separate PR, both here in odh-manifests and in kubeflow? Your point is totally valid, without question. We also have that issue in the kubeflow repo's ODH notebook controller manager. Both @harshad16 and @atheo89 would be good sparring partners for that next PR.

Contributor Author

@shalberd shalberd Jul 24, 2023


Let me remove that particular change (the ose-oauth digest value) from this PR, as it is not directly related. Your suggestion stands, though, and needs to be addressed, agreed. I added a kubeflow issue so it is not forgotten: opendatahub-io/kubeflow#132

@VaishnaviHire
Member

Repo archived
