📖 refactor: finalizer in CronJob example #4397

Draft · wants to merge 4 commits into base: master
15 changes: 14 additions & 1 deletion docs/book/src/cronjob-tutorial/testdata/finalizer_example.go
@@ -67,10 +67,23 @@ func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
// then lets add the finalizer and update the object. This is equivalent
// to registering our finalizer.
if !controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
controllerutil.AddFinalizer(cronJob, myFinalizerName)
log.Info("Adding Finalizer for CronJob")
if ok := controllerutil.AddFinalizer(cronJob, myFinalizerName); !ok {
log.Error(err, "Failed to add finalizer into the custom resource")
return ctrl.Result{Requeue: true}, nil
Contributor

Would it not be semantically better to error out here? For example:

return ctrl.Result{}, fmt.Errorf("message")

Member

We have no error here actually, right?
The code should create an error and then return it as you suggested.
@mateusoliveira43 would you like to help us by making this change in the deploy-image plugin?

Contributor

Sure, I will try to make the changes there.
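
Concretely, the suggested change might look roughly like this (a sketch using the names from the surrounding diff and assuming fmt is imported; this is not the code currently in the PR):

if !controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
	log.Info("Adding Finalizer for CronJob")
	if ok := controllerutil.AddFinalizer(cronJob, myFinalizerName); !ok {
		// AddFinalizer reports false when it did not change the object; there is
		// no err value in scope here, so build an error instead of requeueing.
		return ctrl.Result{}, fmt.Errorf("failed to add finalizer to CronJob %s", req.NamespacedName)
	}
	if err := r.Update(ctx, cronJob); err != nil {
		return ctrl.Result{}, err
	}
}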

}

if err := r.Update(ctx, cronJob); err != nil {
return ctrl.Result{}, err
}

// we re-fetch after having updated the CronJob, so that we have the latest
// state of the resource, and avoid the error "the object has been modified,
// please apply your changes to the latest version and try again".
if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
log.Error(err, "unable to fetch CronJob")
Contributor

Would it be better to say "re-fetch" here?

return ctrl.Result{}, client.IgnoreNotFound(err)
Contributor

Would it not be better to always error out here?

Member

Yes, you are right. We need to change this in the deploy-image plugin too.

Contributor Author

May I ask why? If the resource was deleted in the meantime, shouldn't that be ignored?

Member

On top we should have: https://github.com/kubernetes-sigs/kubebuilder/blob/cbc6e383c342f1337ab37ee4aa0755957a01f9c7/testdata/project-v4-with-plugins/internal/controller/busybox_controller.go#L84C2-L99C3

If the resource is deleted, then the reconciliation stops at this point.

What we need to do is fix the tutorials so that they follow the structure of https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v4-with-plugins/internal/controller/busybox_controller.go

The changes here seem incomplete to me if we only address the finalizer.
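
The block being referenced is essentially the fetch-and-stop-on-not-found pattern that also shows up later in this PR's controller changes; roughly (a sketch reusing the names from this PR):

cronJob := &batchv1.CronJob{}
if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
	if apierrors.IsNotFound(err) {
		// The resource was deleted; there is nothing left to do, so stop reconciling.
		return ctrl.Result{}, nil
	}
	// Any other read error is returned so the request gets requeued.
	return ctrl.Result{}, err
}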

}
Member
@camilamacedo86 · Nov 27, 2024

Thank you for the contribution 🥇

Note that we need to change the code under hack/docs so that, when we run make generate-docs, we re-scaffold everything and just add the right code on top, ensuring it is always kept up to date.

Therefore, can you please add the code there and run the command to ensure that everything is fine?

Member

Indeed, it will result in the multi-version and cronjob tutorials being changed when you run make generate-docs.

}
} else {
// The object is being deleted
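
The rest of this branch is collapsed in the diff; in the tutorial it follows the usual finalizer-cleanup pattern, roughly like the sketch below (deleteExternalResources stands in for the tutorial's cleanup helper):

if controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
	// our finalizer is present, so handle any external dependency first
	if err := r.deleteExternalResources(cronJob); err != nil {
		// if the external cleanup fails, return the error so the operation can be retried
		return ctrl.Result{}, err
	}

	// remove our finalizer from the list and update the object
	controllerutil.RemoveFinalizer(cronJob, myFinalizerName)
	if err := r.Update(ctx, cronJob); err != nil {
		return ctrl.Result{}, err
	}
}

// Stop reconciliation as the item is being deleted
return ctrl.Result{}, nil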
@@ -144,9 +144,12 @@ const (

// CronJobStatus defines the observed state of CronJob.
type CronJobStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file

// Represents the observations of a CronJob's current state.
// For further information see: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`

// A list of pointers to currently running jobs.
// +optional
Active []corev1.ObjectReference `json:"active,omitempty"`
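
As an aside, conditions stored in this new field are usually read back with the helpers from k8s.io/apimachinery/pkg/api/meta; a small sketch (typeAvailableCronJob is the constant introduced in the controller changes below):

// Check whether the controller has marked the CronJob as available.
if meta.IsStatusConditionTrue(cronJob.Status.Conditions, typeAvailableCronJob) {
	// the CronJob has been reconciled successfully at least once
}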
@@ -31,6 +31,8 @@ import (
"github.com/robfig/cron"
kbatch "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
ref "k8s.io/client-go/tools/reference"
@@ -66,6 +68,14 @@ type Clock interface {
Now() time.Time
}

/*
Definitions to manage status conditions of the CronJob
*/
const (
typeAvailableCronJob = "AvailableCronJob"
typeDegradedCronJob = "DegradedCronJob"
)

// +kubebuilder:docs-gen:collapse=Clock

/*
@@ -111,15 +121,41 @@ func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct

Many client methods also take variadic options at the end.
*/
var cronJob batchv1.CronJob
if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
log.Error(err, "unable to fetch CronJob")
// we'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
return ctrl.Result{}, client.IgnoreNotFound(err)
cronJob := &batchv1.CronJob{}
if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
if apierrors.IsNotFound(err) {
// we'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
// If the CronJob is not found then it usually means that it was deleted or
// not created. In this way we will stop the reconciliation
log.Info("CronJob resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
}
// Error reading the object - requeue the request.
log.Error(err, "Failed to fetch CronJob")
return ctrl.Result{}, err
}

// Let's just set the status as Unknown when no status is available
if cronJob.Status.Conditions == nil || len(cronJob.Status.Conditions) == 0 {
meta.SetStatusCondition(&cronJob.Status.Conditions, metav1.Condition{Type: typeAvailableCronJob, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
if err := r.Status().Update(ctx, cronJob); err != nil {
log.Error(err, "Failed to update CronJob status")
return ctrl.Result{}, err
}

// Re-fetch the CronJob after updating the status so that we have
// the latest state of the resource on the cluster and avoid raising
// an error should we try to update it again in the following operations
if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
log.Error(err, "Failed to re-fetch CronJob")
return ctrl.Result{}, err
}
}
Member

@lorenzofelletti Yep, that is the idea!!!
Thank you a lot.


// TODO(dev): add finalizer logic
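
Not shown in this diff, but implied by the new condition-type constants: at the end of a successful reconcile the controller would typically mark the CronJob as available, along these lines (a sketch; the reason and message strings are illustrative):

meta.SetStatusCondition(&cronJob.Status.Conditions, metav1.Condition{
	Type:    typeAvailableCronJob,
	Status:  metav1.ConditionTrue,
	Reason:  "Reconciled",
	Message: "CronJob reconciled successfully",
})
if err := r.Status().Update(ctx, cronJob); err != nil {
	log.Error(err, "Failed to update CronJob status")
	return ctrl.Result{}, err
}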

/*
### 2: List all active jobs, and update the status

@@ -255,7 +291,7 @@ func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
The status subresource ignores changes to spec, so it's less likely to conflict
with any other updates, and can have separate permissions.
*/
if err := r.Status().Update(ctx, &cronJob); err != nil {
if err := r.Status().Update(ctx, cronJob); err != nil {
log.Error(err, "unable to update CronJob status")
return ctrl.Result{}, err
}
@@ -396,7 +432,7 @@ func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct

// figure out the next times that we need to create
// jobs at (or anything we missed).
missedRun, nextRun, err := getNextSchedule(&cronJob, r.Now())
missedRun, nextRun, err := getNextSchedule(cronJob, r.Now())
if err != nil {
log.Error(err, "unable to figure out CronJob schedule")
// we don't really care about requeuing until we get an update that
@@ -500,7 +536,7 @@ func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
// +kubebuilder:docs-gen:collapse=constructJobForCronJob

// actually make the job...
job, err := constructJobForCronJob(&cronJob, missedRun)
job, err := constructJobForCronJob(cronJob, missedRun)
Member

Hi @lorenzofelletti,

The way I would do this is as follows (note that this task is not trivial; it needs attention to detail and a lot of IDE-assisted comparison):

Then, you can push a PR against your own fork.
Note that the samples are tested in CI here: https://github.com/kubernetes-sigs/kubebuilder/blob/master/.github/workflows/test-e2e-book.yml

If this workflow passes, then you can use the code changes.
Note that the testdata check will fail because we generate the book sample code by running the commands and injecting the code on top. The whole implementation for that is here: https://github.com/kubernetes-sigs/kubebuilder/tree/master/hack/docs/internal

When we call make generate-docs, the scripts are run, the projects are generated, and the changes are applied afterwards; this ensures the docs are always updated with the latest changes.

But we can do it step by step.

Contributor Author

Hi @camilamacedo86!

Thanks for taking the time to write these instructions. I'll follow them and come back when I have something to share.

if err != nil {
log.Error(err, "unable to construct job from template")
// don't bother requeuing until we get a change to the spec