
feat: drift detection and takeover implementation (1/) #985

Merged
6 commits merged into Azure:main on Jan 4, 2025

Conversation

michaelawyu
Contributor

Description of your changes

This PR adds the implementation of the new work applier with drift detection and takeover experience.

I have:

  • Run make reviewable to ensure this PR is ready for review.

How has this code been tested

  • [x] Unit tests
  • [x] Integration tests (see the RC branch)
  • [x] Manual tests

Special notes for your reviewer

The new drift detection and takeover experience requires significant changes to the work applier (originally, the applier simply applied manifests, with no need to account for drifts, diffs, ownership management, etc.). To avoid the difficulty of maintaining working intermediate states, the new experience is submitted as a new controller that is not currently enabled.

A preview of the full experience is available in the preview PR (and in the preview environment).

Comment on lines +407 to +409
if ownerRefs == nil {
ownerRefs = []metav1.OwnerReference{}
}
Contributor

I don't think you need this; the append func handles the ownerRefs == nil case.
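
For reference, a minimal sketch (not from the PR) of the behavior referenced here: append accepts a nil slice and allocates a new backing array as needed, so the explicit nil check is redundant.

var ownerRefs []metav1.OwnerReference // nil slice
// append works on the nil slice directly; no explicit initialization is needed.
ownerRefs = append(ownerRefs, metav1.OwnerReference{Name: "example-owner"})
// len(ownerRefs) is now 1.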


// prepareExistingManifestCondQIdx returns a map that allows quicker look up of a manifest
// condition given a work resource identifier.
func prepareExistingManifestCondQIdx(existingManifestConditions []fleetv1beta1.ManifestCondition) map[string]int {
Contributor

This function is very similar to the prepareRebuiltManifestCondQIdx function in status.go. Is there a way to unify them and move the unified function to util.go?

Contributor Author

Hi Ryan! Yeah, the two functions are very similar, but at the moment I don't have an elegant enough solution to unify the two.

I could use generics and write something like this:

// prepareManifestCondQIdx returns a map that allows quicker look up of a manifest
// condition given a work resource identifier.
func prepareManifestCondQIdx[T fleetv1beta1.ManifestCondition | *manifestProcessingBundle](items []T) map[string]int {
	manifestQIdx := make(map[string]int)

	for idx := range items {
		item := items[idx]

		var id *fleetv1beta1.WorkResourceIdentifier
		mc, mcOK := any(item).(fleetv1beta1.ManifestCondition)
		b, bOK := any(item).(*manifestProcessingBundle)
		switch {
		case mcOK:
			id = &mc.Identifier
		case bOK:
			id = b.id
		default:
			// This branch is unreachable; the type constraint only admits the two cases above.
			panic("unexpected type is passed in")
		}

		wriStr, err := formatWRIString(id)
		if err != nil {
			continue
		}

		manifestQIdx[wriStr] = idx
	}
	return manifestQIdx
}

But it seems to overcomplicate things 😣 Alternatives include (a) introducing an interface that allows retrieval of identifiers from the ManifestCondition and manifestProcessingBundle types, or (b) extracting the identifiers into a separate slice first; neither might be fully satisfactory (overkill, or too much overhead) 😣 If you have a preference, please let me know.
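
For illustration only, a rough sketch of what alternative (a) might look like; the identifiable interface and the wrapper type below are hypothetical names, not from the PR. The wrapper is needed because methods cannot be added to the API type fleetv1beta1.ManifestCondition from the controller package:

// Hypothetical sketch of alternative (a); all names here are illustrative.
type identifiable interface {
	identifier() *fleetv1beta1.WorkResourceIdentifier
}

// manifestConditionWrapper adapts the API type, since methods cannot be
// defined on fleetv1beta1.ManifestCondition from this package.
type manifestConditionWrapper struct {
	mc *fleetv1beta1.ManifestCondition
}

func (w manifestConditionWrapper) identifier() *fleetv1beta1.WorkResourceIdentifier {
	return &w.mc.Identifier
}

func (b *manifestProcessingBundle) identifier() *fleetv1beta1.WorkResourceIdentifier {
	return b.id
}

// prepareManifestCondQIdx builds the lookup map for any identifiable slice element.
func prepareManifestCondQIdx[T identifiable](items []T) map[string]int {
	manifestQIdx := make(map[string]int)
	for idx := range items {
		wriStr, err := formatWRIString(items[idx].identifier())
		if err != nil {
			continue
		}
		manifestQIdx[wriStr] = idx
	}
	return manifestQIdx
}

The per-element wrapper construction is part of the overhead alluded to above.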

@michaelawyu
Contributor Author

Per offline discussions, I will merge this PR now and open a new one to address the comments. If there's any further concern, please let me know.

@michaelawyu michaelawyu merged commit 43acf41 into Azure:main Jan 4, 2025
12 checks passed