fix: controller: ensure the workflow reconciles task results properly when failing to receive timely updates from the API server #14026
base: main
Conversation
fix: controller: ensure the workflow reconciles task results properly when failing to receive timely updates from the API server

Error scenario: a pod for a step in a workflow has completed, and its task result is properly created and finalized by its wait container (judging from the wait container's exit status); however, the task result informer in the controller leader has not received any update about it (due to an overloaded API server or etcd).

Currently, the Argo workflow controller doesn't handle the above scenario properly: it marks the workflow node succeeded and shows no artifact outputs (even though they were already uploaded to the repository). We did run into this situation in our production instance (v3.5.8).

It's not easy to reproduce this problem, but a manual fault injection in `workflow/controller/taskresult.go:func (woc *wfOperationCtx) taskResultReconciliation()` can simulate the situation, and with it I did reproduce the issue on release v3.6.2:

```diff
+++ workflow/controller/taskresult.go
@@ -1,7 +1,9 @@
 package controller
 
 import (
+	"os"
 	"reflect"
+	"strings"
 	"time"
 
 	log "github.com/sirupsen/logrus"
@@ -62,6 +64,12 @@ func (woc *wfOperationCtx) taskResultReconciliation() {
 	objs, _ := woc.controller.taskResultInformer.GetIndexer().ByIndex(indexes.WorkflowIndex, woc.wf.Namespace+"/"+woc.wf.Name)
 	woc.log.WithField("numObjs", len(objs)).Info("Task-result reconciliation")
 
+	if strings.Contains(woc.wf.Name, "-xhu-debug-") {
+		if _, err := os.Stat("/tmp/xhu-debug-control"); err != nil {
+			return
+		}
+	}
```

The change is to forcefully mark the workflow as having incomplete TaskResults in `assessNodeStatus`.

This fix doesn't handle the case where a pod failed: there are too many potential failure scenarios (e.g. the wait container might not be able to insert a task result), and a retry is probably needed when there are failures. The loss in that case is probably not as great as for a successful pod.

Signed-off-by: Xiaofan Hu <[email protected]>
Force-pushed from 63cd4c4 to 747f1e6.
Great finding.
A delay in the taskResultInformer will result in the issue you are seeing.
However, if you experience k8s API pressure and the retry inside the informer fails, you will see a much bigger problem: Argo uses the cache from the informer, API-server pressure can leave that cache out of sync, and it will never sync again until you restart the controller.
@shuangkun has more context related to this change, so I will ask him to review instead.
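To illustrate that failure mode concretely, here is a generic client-go sketch (not Argo's actual controller wiring; the pod informer and the 10-second deadline are stand-ins chosen for the demo): a controller gates its work on the informer cache being synced, and if the initial list/watch stalls under API-server/etcd pressure, everything read from that cache is stale until the process is restarted and a fresh list/watch is established.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical wiring: load the local kubeconfig and build a shared
	// informer, the same pattern a controller uses for its caches.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Give the initial list/watch a bounded amount of time; under heavy
	// API-server pressure this may never complete, and anything read from the
	// informer's cache afterwards can be arbitrarily stale.
	deadline := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Second)
		close(deadline)
	}()
	if !cache.WaitForCacheSync(deadline, podInformer.HasSynced) {
		fmt.Println("cache never synced: reads from it are unreliable until the controller restarts")
		return
	}
	fmt.Println("cache synced: lister reads reflect the API server")
}
```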
// this fix doesn't handle the case when a pod failed, there are too many
// potentially failure scenarios (like the wait container might not be able to
// insert a task result). plus, a retry is probably needed when there are
// failures. the loss is probably not as great as a successful one.
We have retries for transient errors in the wait container to handle this:
argo-workflows/workflow/executor/executor.go, lines 852 to 858 at 27a5938:

func (we *WorkflowExecutor) reportResult(ctx context.Context, result wfv1.NodeResult) error {
	return retryutil.OnError(wait.Backoff{
		Duration: time.Second,
		Factor:   2,
		Jitter:   0.1,
		Steps:    5,
		Cap:      30 * time.Second,
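For context on what that backoff amounts to, here is a standalone sketch using `k8s.io/apimachinery/pkg/util/wait` and `k8s.io/client-go/util/retry` (the `retryutil` alias in the snippet above is assumed to point at that same `retry` package): with Duration=1s, Factor=2, Steps=5 and Cap=30s, the call is attempted up to 5 times with sleeps of roughly 1s, 2s, 4s and 8s (plus jitter) in between, i.e. about 15 seconds of retrying before the last error is surfaced.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempts := 0
	err := retry.OnError(
		wait.Backoff{
			Duration: time.Second,
			Factor:   2,
			Jitter:   0.1,
			Steps:    5,
			Cap:      30 * time.Second,
		},
		// Treat every error as retriable for this demo; a real caller would
		// use a predicate that matches transient API errors only.
		func(error) bool { return true },
		func() error {
			attempts++
			fmt.Printf("attempt %d\n", attempts)
			return errors.New("simulated transient error while reporting the result")
		},
	)
	// Once the attempts are exhausted, the last error is returned.
	fmt.Println("gave up:", err)
}
```

Note that this retry sits on the executor's write path; the scenario in this PR is on the read path, where the result was written successfully but the controller's informer had not seen it yet.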
Yep, but in a way, a controller restart could help us resolve the issue when things go terribly wrong.
There seems to be no related open issue, so I'm just submitting a PR, but it's related to #12537.
Motivation
Error scenario: a pod for a step in a workflow has completed, and its task result is properly created and finalized by its wait container (judging from the wait container's exit status); however, the task result informer in the controller leader has not received any update about it (due to an overloaded API server or etcd).
Currently, the Argo workflow controller doesn't handle the above scenario properly: it marks the workflow node succeeded and shows no artifact outputs (even though they were already uploaded to the repository).
We did run into this situation in our production instance (v3.5.8).
It's not easy to reproduce this problem, but a manual fault injection in `workflow/controller/taskresult.go:func (woc *wfOperationCtx) taskResultReconciliation()` can simulate the situation (see the diff in the commit message above), and with it I did reproduce the issue on release v3.6.2.
Modifications
The change is to forcefully mark the workflow as having incomplete TaskResults in `assessNodeStatus`.
This fix doesn't handle the case where a pod failed: there are too many potential failure scenarios (e.g. the wait container might not be able to insert a task result), and a retry is probably needed when there are failures. The loss in that case is probably not as great as for a successful pod.
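Not the actual patch, just a minimal sketch of the idea with hypothetical names (`nodeInfo` and `taskResultComplete` do not exist in the codebase): if the wait container's exit status says the task result was finalized but the controller's informer has not delivered the corresponding object yet, the node's results should be treated as incomplete so the workflow is not marked finished without its outputs.

```go
package main

import "fmt"

// nodeInfo is illustrative only; the real change lives in the controller's
// assessNodeStatus / task-result reconciliation code.
type nodeInfo struct {
	podSucceeded           bool // the step's pod exited successfully
	waitContainerFinalized bool // wait container's exit status indicates the task result was written
	informerSawTaskResult  bool // the controller's task-result informer has actually observed it
}

// taskResultComplete reports whether the node's outputs can safely be treated
// as final. When the result was finalized but the informer has not delivered
// it yet (overloaded API server or etcd), returning false keeps the workflow
// from completing with missing artifact outputs.
func taskResultComplete(n nodeInfo) bool {
	if n.podSucceeded && n.waitContainerFinalized && !n.informerSawTaskResult {
		return false // keep reconciling until the informer catches up
	}
	return true
}

func main() {
	n := nodeInfo{podSucceeded: true, waitContainerFinalized: true, informerSawTaskResult: false}
	fmt.Println("complete:", taskResultComplete(n)) // prints "complete: false"
}
```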
Verification
It's covered by the updated test case, and my manual verification details are included below (using the fault injection above).
Test workflow to verify the fix:
With the manual fault injection, without the fix:
With the manual fault injection, with the fix: