feat: implement cancelAllPipelines for CI providers #205
base: main
Conversation
Walkthrough
Adds pipeline cancellation APIs and implementations across CI providers (Azure, GitHub Actions, GitLab, Jenkins, Tekton) and a new KubeClient patch method, replaces the legacy cancelAllInitialPipelines with cancelAllPipelines(options?) returning a detailed CancelResult, updates tests, and removes an integration test file.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Test as Test/Client
participant CI as CI Provider
participant Svc as Provider Service
participant API as External CI API
Test->>CI: cancelAllPipelines(options?)
activate CI
CI->>CI: normalizeOptions()
CI->>CI: fetchAllPipelines()/fetchAllBuilds()
CI->>CI: filterPipelines()
CI->>CI: chunkArray()+for each batch
loop batch
CI->>Svc: cancel(pipelineId)
activate Svc
Svc->>API: HTTP PATCH/POST cancel endpoint
alt success
API-->>Svc: 200/204
Svc-->>CI: success
else mapped error (404/403/409/400)
API-->>Svc: error
Svc-->>CI: throw mapped error
end
deactivate Svc
end
CI->>CI: aggregate CancelResult (total,cancelled,failed,skipped,details,errors)
CI-->>Test: CancelResult
deactivate CI
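For orientation, here is a hedged TypeScript sketch of the option and result shapes this flow exchanges. The field names are taken from the summary and the review comments below; the exact definitions live in ciInterface.ts, and anything beyond those names (such as the unknown[] element types) is an assumption rather than the project's actual code.

```typescript
// Sketch of the cancellation contract; the real definitions live in ciInterface.ts.
interface CancelPipelineOptions {
  excludePatterns?: string[]; // pipelines matching these patterns are skipped
  includeCompleted?: boolean; // also consider already-finished runs
  eventType?: string;         // e.g. 'push' or 'pull_request'
  branch?: string;
  concurrency?: number;       // batch size used when cancelling in parallel
  dryRun?: boolean;           // report what would be cancelled without calling the API
}

interface CancelResult {
  total: number;      // pipelines considered
  cancelled: number;  // successfully cancelled
  failed: number;     // cancellation attempts that errored
  skipped: number;    // skipped due to filters or already completed
  details: unknown[]; // per-pipeline cancel details
  errors: unknown[];  // collected provider errors
}

// Typical call from a test, following the diagram above (illustrative only):
// const result = await ci.cancelAllPipelines({ dryRun: true, concurrency: 5 });
```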
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Provide extra attention to testing reliability (ensure unit/mocked integration coverage for batch cancellations and dry-run paths), security (validate any token/credential usage in cancellation helpers), and maintainability (consolidate shared cancellation utilities across providers where possible).
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
Force-pushed from dbbbf5f to f8c1055 (Compare)
Actionable comments posted: 14
🧹 Nitpick comments (2)
src/api/github/services/github-actions.service.ts (2)
604-608: Add initial logging for consistency with service patterns.
Other methods in this service log at the start of operations (e.g., lines 259, 314, 382) to aid debugging. Adding similar logging here improves observability.
Apply this diff:
  async cancelWorkflowRun(
    owner: string,
    repo: string,
    runId: number
  ): Promise<void> {
+   console.log(`Cancelling workflow run #${runId} for ${owner}/${repo}`);
+
    try {
634-653: Consider adding fallback error status check for robustness.
Other error-handling blocks in this file check both error.status and error.response?.status (see lines 219, 278, 334, 408). Adding the same fallback pattern here ensures consistent error handling across different error shapes from the Octokit library.
Apply this diff to the error status checks:
      // Handle specific error cases
-     if (error.status === 404) {
+     if (error.status === 404 || (error.response && error.response.status === 404)) {
        throw new GithubNotFoundError(
          'workflow run',
          `${runId} in repository ${owner}/${repo}`
        );
      }
-     if (error.status === 403) {
+     if (error.status === 403 || (error.response && error.response.status === 403)) {
        throw new GithubApiError(
          `Insufficient permissions to cancel workflow run ${runId}`,
          error.status
        );
      }
-     if (error.status === 409) {
+     if (error.status === 409 || (error.response && error.response.status === 409)) {
        throw new GithubApiError(
          `Workflow run ${runId} cannot be cancelled (already completed or not cancellable)`,
          error.status
        );
      }
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
src/api/azure/services/azure-pipeline.service.ts (1 hunks)
src/api/github/services/github-actions.service.ts (1 hunks)
src/api/jenkins/services/jenkins-build.service.ts (1 hunks)
src/api/ocp/kubeClient.ts (1 hunks)
src/api/tekton/services/tekton-pipelinerun.service.ts (1 hunks)
src/api/tekton/tekton.client.ts (1 hunks)
src/rhtap/core/integration/ci/baseCI.ts (2 hunks)
src/rhtap/core/integration/ci/ciInterface.ts (2 hunks)
src/rhtap/core/integration/ci/providers/azureCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/githubActionsCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/gitlabCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/jenkinsCI.ts (3 hunks)
src/rhtap/core/integration/ci/providers/tektonCI.ts (3 hunks)
src/utils/test/common.ts (0 hunks)
tests/api/git/github.test.ts (0 hunks)
tests/tssc/full_workflow.test.ts (1 hunks)
💤 Files with no reviewable changes (2)
- tests/api/git/github.test.ts
- src/utils/test/common.ts
🧰 Additional context used
📓 Path-based instructions (6)
src/api/**
⚙️ CodeRabbit configuration file
Focus on error handling, retry mechanisms, proper async/await usage, API client patterns, authentication security, and comprehensive logging for debugging. Ensure all API calls have appropriate timeouts and circuit breaker patterns.
Files:
src/api/github/services/github-actions.service.ts
src/api/azure/services/azure-pipeline.service.ts
src/api/ocp/kubeClient.ts
src/api/tekton/services/tekton-pipelinerun.service.ts
src/api/tekton/tekton.client.ts
src/api/jenkins/services/jenkins-build.service.ts
tests/**/*.test.ts
📄 CodeRabbit inference engine (README.md)
Test files should use the .test.ts naming convention
Files:
tests/tssc/full_workflow.test.ts
tests/**
⚙️ CodeRabbit configuration file
Focus on test coverage, maintainability, proper test structure, test isolation, mock usage, async test patterns, and test stability. Ensure tests are deterministic and properly clean up after themselves.
Files:
tests/tssc/full_workflow.test.ts
**/*.test.ts
⚙️ CodeRabbit configuration file
Ensure comprehensive test coverage, proper assertions, test isolation, async pattern correctness, mock usage, and test maintainability. Review for potential flaky tests and race conditions.
Files:
tests/tssc/full_workflow.test.ts
src/rhtap/core/**
⚙️ CodeRabbit configuration file
Review component architecture, interface design, abstraction patterns, extensibility, integration reliability, and proper type definitions. Ensure the factory and strategy patterns are implemented correctly.
Files:
src/rhtap/core/integration/ci/baseCI.ts
src/rhtap/core/integration/ci/ciInterface.ts
src/rhtap/core/integration/ci/providers/githubActionsCI.ts
src/rhtap/core/integration/ci/providers/tektonCI.ts
src/rhtap/core/integration/ci/providers/jenkinsCI.ts
src/rhtap/core/integration/ci/providers/azureCI.ts
src/rhtap/core/integration/ci/providers/gitlabCI.ts
src/rhtap/core/integration/**
⚙️ CodeRabbit configuration file
Focus on integration reliability, error handling, factory pattern implementation, provider abstraction, configuration management, and testing of external dependencies. Validate connection pooling and resource management.
Files:
src/rhtap/core/integration/ci/baseCI.ts
src/rhtap/core/integration/ci/ciInterface.ts
src/rhtap/core/integration/ci/providers/githubActionsCI.ts
src/rhtap/core/integration/ci/providers/tektonCI.ts
src/rhtap/core/integration/ci/providers/jenkinsCI.ts
src/rhtap/core/integration/ci/providers/azureCI.ts
src/rhtap/core/integration/ci/providers/gitlabCI.ts
🧠 Learnings (1)
📚 Learning: 2025-08-19T06:42:15.469Z
Learnt from: jkopriva
PR: redhat-appstudio/tssc-test#114
File: tests/templates/import-templates.test.ts:138-168
Timestamp: 2025-08-19T06:42:15.469Z
Learning: In the TSSC E2E testing framework, test cleanup (like deleting Git repositories and catalog entities) is not handled via afterAll blocks in individual test files, following the pattern established in the full workflow suite.
Applied to files:
tests/tssc/full_workflow.test.ts
🧬 Code graph analysis (8)
src/api/github/services/github-actions.service.ts (1)
src/api/github/errors/github.errors.ts (2)
GithubNotFoundError (22-27), GithubApiError (65-70)
src/rhtap/core/integration/ci/baseCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (2)
CancelPipelineOptions (29-66), CancelResult (71-101)
src/rhtap/core/integration/ci/providers/githubActionsCI.ts (2)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/github/types/github.types.ts (1)
WorkflowRun(49-65)
src/rhtap/core/integration/ci/providers/tektonCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/rhtap/core/integration/ci/providers/jenkinsCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (5)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), PipelineStatus (225-225), CancelError (146-171)
src/rhtap/core/integration/ci/providers/azureCI.ts (2)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/azure/types/azure.types.ts (1)
AzureBuild(66-83)
src/rhtap/core/integration/ci/providers/gitlabCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/jenkins/services/jenkins-build.service.ts (2)
src/api/jenkins/utils/jenkins.utils.ts (1)
JenkinsPathBuilder (7-52)
src/api/jenkins/errors/jenkins.errors.ts (1)
JenkinsBuildNotFoundError(51-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Red Hat Konflux / tssc-test-on-pull-request
🔇 Additional comments (1)
src/api/azure/services/azure-pipeline.service.ts (1)
171-204: Timeout configuration is properly implemented and safeguards this method.
The BaseHttpClient sets a sensible default timeout of 30 seconds (config.timeout ?? 30000), which is automatically applied to all HTTP requests, including the PATCH call in cancelBuild. This prevents hangs on slow or unresponsive endpoints and satisfies the coding guidelines for API clients. The timeout is configured at the HTTP client level, making it a reliable, framework-wide protection rather than something needing per-method configuration.
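As a point of reference, here is a minimal hedged sketch of that client-level default pattern. The class, constructor shape, and field names below are assumptions for illustration, not the project's actual BaseHttpClient:

```typescript
import axios, { AxiosInstance } from 'axios';

// Hypothetical illustration of a client-level default timeout (not the real BaseHttpClient).
interface HttpClientConfig {
  baseURL: string;
  timeout?: number; // milliseconds
}

class ExampleHttpClient {
  protected readonly axiosInstance: AxiosInstance;

  constructor(config: HttpClientConfig) {
    this.axiosInstance = axios.create({
      baseURL: config.baseURL,
      timeout: config.timeout ?? 30000, // default applied to every request, including PATCH
    });
  }

  async patch<T>(path: string, body: unknown): Promise<T> {
    const response = await this.axiosInstance.patch<T>(path, body);
    return response.data;
  }
}
```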
await retry(
  async () => {
    await this.client.patch(
      `${this.project}/_apis/build/builds/${buildId}?${this.getApiVersionParam()}`,
      { status: 'cancelling' }
    );
  },
  {
    retries: 3,
    minTimeout: 1000,
    maxTimeout: 5000,
    onRetry: (error: Error, attempt: number) => {
      console.log(`[Azure] Retry ${attempt}/3 - Cancelling build ${buildId}: ${error.message}`);
    },
  }
);
Non-retryable errors should not trigger retries.
The retry mechanism will attempt 3 retries for all errors, including HTTP 400, 403, and 404 responses. These status codes indicate permanent failures (bad request, forbidden, not found) that won't succeed on retry, wasting time and resources.
Refactor to check the error status and bail early for non-retryable errors:
await retry(
async () => {
- await this.client.patch(
+ try {
+ await this.client.patch(
`${this.project}/_apis/build/builds/${buildId}?${this.getApiVersionParam()}`,
{ status: 'cancelling' }
);
+ } catch (error: any) {
+ // Don't retry on permanent failures
+ if (error.response?.status && [400, 403, 404].includes(error.response.status)) {
+ throw new retry.AbortError(error);
+ }
+ throw error;
+ }
},
{
    retries: 3,
Committable suggestion skipped: line range outside the PR's diff.
} catch (error: any) {
  // Handle specific error cases
  if (error.response?.status === 404) {
    throw new Error(`Build ${buildId} not found`);
  }
  if (error.response?.status === 403) {
    throw new Error(`Insufficient permissions to cancel build ${buildId}`);
  }
  if (error.response?.status === 400) {
    throw new Error(`Build ${buildId} cannot be cancelled (already completed or not cancellable)`);
  }
  throw new Error(`Failed to cancel build ${buildId}: ${error.message}`);
Use safer error type checking and avoid any.
The catch block uses error: any and accesses error.response?.status without verifying the error is an axios error. If the error comes from network failure, timeout, or retry exhaustion, error.response may be undefined, leading to incorrect error messages or undefined behavior.
Apply this diff to use proper type guards:
- } catch (error: any) {
+ } catch (error: unknown) {
// Handle specific error cases
- if (error.response?.status === 404) {
+ if (axios.isAxiosError(error) && error.response?.status === 404) {
throw new Error(`Build ${buildId} not found`);
}
- if (error.response?.status === 403) {
+ if (axios.isAxiosError(error) && error.response?.status === 403) {
throw new Error(`Insufficient permissions to cancel build ${buildId}`);
}
- if (error.response?.status === 400) {
+ if (axios.isAxiosError(error) && error.response?.status === 400) {
throw new Error(`Build ${buildId} cannot be cancelled (already completed or not cancellable)`);
}
- throw new Error(`Failed to cancel build ${buildId}: ${error.message}`);
+ const message = error instanceof Error ? error.message : String(error);
+ throw new Error(`Failed to cancel build ${buildId}: ${message}`);
  }
🤖 Prompt for AI Agents
In src/api/azure/services/azure-pipeline.service.ts around lines 191 to 202, the
catch block types the error as any and directly accesses error.response?.status;
change the catch parameter to error: unknown and use a proper type guard (e.g.,
axios.isAxiosError(error) or check (error as AxiosError)?.response) before
reading response.status, handle cases where response is undefined
(network/timeout) by throwing an appropriate message using error?.message or
JSON.stringify(error) as fallback, and avoid blindly casting to any — ensure
non-Axios errors fall through to a generic failure error with the original
message included.
await retry(
  async () => {
    await this.octokit.actions.cancelWorkflowRun({
      owner,
      repo,
      run_id: runId,
    });
  },
  {
    retries: this.maxRetries,
    minTimeout: this.minTimeout,
    maxTimeout: this.maxTimeout,
    factor: this.factor,
    onRetry: (error: Error, attempt: number) => {
      console.log(
        `[GitHub Actions] Retry ${attempt}/${this.maxRetries} - Cancelling workflow run ${runId}: ${error.message}`
      );
    },
  }
);

console.log(`[GitHub Actions] Successfully cancelled workflow run ${runId}`);
} catch (error: any) {
  // Handle specific error cases
  if (error.status === 404) {
    throw new GithubNotFoundError(
      'workflow run',
      `${runId} in repository ${owner}/${repo}`
    );
  }

  if (error.status === 403) {
    throw new GithubApiError(
      `Insufficient permissions to cancel workflow run ${runId}`,
      error.status
    );
  }

  if (error.status === 409) {
    throw new GithubApiError(
      `Workflow run ${runId} cannot be cancelled (already completed or not cancellable)`,
      error.status
    );
  }

  // Re-throw with more context
  throw new GithubApiError(
    `Failed to cancel workflow run ${runId}: ${error.message}`,
    error.status || 500,
    error
  );
}
Use bail() to exit retry early for non-retryable errors.
The current implementation retries all errors, including 404 (not found), 403 (permissions), and 409 (already completed). These errors are non-retryable and should use the bail() function to exit early, as demonstrated in other methods (see lines 221, 280, 338). Retrying these errors wastes time and resources.
Apply this diff:
try {
await retry(
- async () => {
+ async (bail) => {
await this.octokit.rest.actions.cancelWorkflowRun({
owner,
repo,
run_id: runId,
});
+ },
+ {
+ retries: this.maxRetries,
+ minTimeout: this.minTimeout,
+ maxTimeout: this.maxTimeout,
+ factor: this.factor,
+ onRetry: (error: Error, attempt: number) => {
+ console.log(
+ `[GitHub Actions] Retry ${attempt}/${this.maxRetries} - Cancelling workflow run ${runId}: ${error.message}`
+ );
+ },
+ }
+ );
+
+ console.log(`[GitHub Actions] Successfully cancelled workflow run ${runId}`);
+ } catch (error: any) {
+ // Handle specific error cases - bail immediately for non-retryable errors
+ if (error.status === 404) {
+ throw new GithubNotFoundError(
+ 'workflow run',
+ `${runId} in repository ${owner}/${repo}`
+ );
+ }
+
+ if (error.status === 403) {
+ throw new GithubApiError(
+ `Insufficient permissions to cancel workflow run ${runId}`,
+ error.status
+ );
+ }
+
+ if (error.status === 409) {
+ throw new GithubApiError(
+ `Workflow run ${runId} cannot be cancelled (already completed or not cancellable)`,
+ error.status
+ );
+ }
+
+ // Re-throw with more context
+ throw new GithubApiError(
+ `Failed to cancel workflow run ${runId}: ${error.message}`,
+ error.status || 500,
+ error
+ );
+  }
should be:
try {
await retry(
async (bail) => {
- await this.octokit.rest.actions.cancelWorkflowRun({
- owner,
- repo,
- run_id: runId,
- });
+ try {
+ await this.octokit.rest.actions.cancelWorkflowRun({
+ owner,
+ repo,
+ run_id: runId,
+ });
+ } catch (error: any) {
+ // Bail immediately for non-retryable errors
+ if (error.status === 404) {
+ bail(new GithubNotFoundError(
+ 'workflow run',
+ `${runId} in repository ${owner}/${repo}`,
+ error.status
+ ));
+ return;
+ }
+
+ if (error.status === 403) {
+ bail(new GithubApiError(
+ `Insufficient permissions to cancel workflow run ${runId}`,
+ error.status
+ ));
+ return;
+ }
+
+ if (error.status === 409) {
+ bail(new GithubApiError(
+ `Workflow run ${runId} cannot be cancelled (already completed or not cancellable)`,
+ error.status
+ ));
+ return;
+ }
+
+ // Throw for retryable errors
+ throw error;
+ }
},
{
retries: this.maxRetries,
minTimeout: this.minTimeout,
maxTimeout: this.maxTimeout,
factor: this.factor,
onRetry: (error: Error, attempt: number) => {
console.log(
`[GitHub Actions] Retry ${attempt}/${this.maxRetries} - Cancelling workflow run ${runId}: ${error.message}`
);
},
}
);
console.log(`[GitHub Actions] Successfully cancelled workflow run ${runId}`);
} catch (error: any) {
- // Handle specific error cases
- if (error.status === 404) {
- throw new GithubNotFoundError(
- 'workflow run',
- `${runId} in repository ${owner}/${repo}`
- );
- }
-
- if (error.status === 403) {
- throw new GithubApiError(
- `Insufficient permissions to cancel workflow run ${runId}`,
- error.status
- );
- }
-
- if (error.status === 409) {
- throw new GithubApiError(
- `Workflow run ${runId} cannot be cancelled (already completed or not cancellable)`,
- error.status
- );
- }
-
- // Re-throw with more context
+ // Add context to unknown errors
throw new GithubApiError(
`Failed to cancel workflow run ${runId}: ${error.message}`,
error.status || 500,
error
);
  }
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/api/github/services/github-actions.service.ts around lines 610-661, the
retry wrapper currently retries all errors including non-retryable 404/403/409;
modify the async function passed to retry to detect these statuses and call
bail(...) with the appropriate GithubNotFoundError or GithubApiError (matching
the existing error messages) so retry aborts immediately for those cases, and
only let other errors throw to trigger retry; keep the existing outer catch as a
fallback for unexpected errors.
try {
  await retry(
    async () => {
      await this.octokit.actions.cancelWorkflowRun({
Fix the API path to include .rest.
All other methods in this service use this.octokit.rest.actions.* pattern (see lines 93, 100, 265, 321, 389, 517), but this line uses this.octokit.actions.* without the .rest segment. This inconsistency will likely cause a runtime error.
Apply this diff:
- await this.octokit.actions.cancelWorkflowRun({
+ await this.octokit.rest.actions.cancelWorkflowRun({
📝 Committable suggestion
- await this.octokit.actions.cancelWorkflowRun({
+ await this.octokit.rest.actions.cancelWorkflowRun({
🤖 Prompt for AI Agents
In src/api/github/services/github-actions.service.ts around line 612, the call
uses this.octokit.actions.cancelWorkflowRun which is inconsistent with the rest
of the file and will fail at runtime; change it to use
this.octokit.rest.actions.cancelWorkflowRun so it matches the other Octokit REST
calls (keep the same arguments and error handling).
} catch (error) {
  // If build is not found or already stopped, throw specific error
  throw new JenkinsBuildNotFoundError(jobName, buildNumber, folderName);
}
Preserve non-404 failures from stopBuild
Catching every failure and rethrowing JenkinsBuildNotFoundError makes a 403/500 look like a simple “build not found,” which is painful for diagnosing flaky cancellations in our E2E runs. Please only map actual 404s to JenkinsBuildNotFoundError and let other errors bubble so the caller can see the real cause.
- } catch (error) {
- // If build is not found or already stopped, throw specific error
- throw new JenkinsBuildNotFoundError(jobName, buildNumber, folderName);
- }
+ } catch (error) {
+ const status =
+ (error as { statusCode?: number })?.statusCode ??
+ (error as { response?: { status?: number } })?.response?.status;
+
+ if (status === 404) {
+ throw new JenkinsBuildNotFoundError(jobName, buildNumber, folderName);
+ }
+
+ throw error;
+  }
📝 Committable suggestion
- } catch (error) {
-   // If build is not found or already stopped, throw specific error
-   throw new JenkinsBuildNotFoundError(jobName, buildNumber, folderName);
- }
+ } catch (error) {
+   const status =
+     (error as { statusCode?: number })?.statusCode ??
+     (error as { response?: { status?: number } })?.response?.status;
+
+   if (status === 404) {
+     throw new JenkinsBuildNotFoundError(jobName, buildNumber, folderName);
+   }
+
+   throw error;
+ }
🤖 Prompt for AI Agents
In src/api/jenkins/services/jenkins-build.service.ts around lines 181 to 184,
the catch block unconditionally converts every error from stopBuild into
JenkinsBuildNotFoundError; change it to detect a real 404 and only throw
JenkinsBuildNotFoundError for that case (e.g. check error.response?.status ===
404 or error.status === 404 depending on the HTTP client) and for any other
status or unexpected error rethrow the original error so the caller can observe
the true failure and stack.
private normalizeOptions(
  options?: CancelPipelineOptions
): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
  return {
    excludePatterns: options?.excludePatterns || [],
    includeCompleted: options?.includeCompleted || false,
    eventType: options?.eventType,
    branch: options?.branch,
    concurrency: options?.concurrency || 10,
    dryRun: options?.dryRun || false,
  };
Clamp concurrency to keep cancellation reliable.
If someone passes a negative concurrency, chunkArray(..., options.concurrency) spins forever, hanging the cancel flow and any tests that rely on it. Clamping to at least 1 keeps the new API safe without surprising callers.
- concurrency: options?.concurrency || 10,
+ concurrency: Math.max(1, options?.concurrency ?? 10),
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/gitlabCI.ts around lines 348 to 358,
the normalizeOptions function currently passes negative concurrency through
which can cause chunkArray(..., concurrency) to spin forever; clamp the
concurrency value to a minimum of 1 when building the returned object (e.g., use
Math.max(1, options?.concurrency ?? 10) or similar) so concurrency is never
below 1 while keeping the default of 10 and preserving other fields.
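To make the hang concrete, here is a small hedged sketch; the chunkArray shape is inferred from the review comments, not copied from the PR, and the negative value is a hypothetical caller input:

```typescript
// Assumed shape of the batching helper referenced in the review comments.
function chunkArray<T>(array: T[], size: number): T[][] {
  const chunks: T[][] = [];
  // If size <= 0, i never reaches array.length, so this loop never terminates.
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

// Clamping before batching keeps the step strictly positive:
const requested: number | undefined = -5;         // hypothetical caller-supplied value
const concurrency = Math.max(1, requested ?? 10); // -> 1; undefined would fall back to 10
console.log(chunkArray(['a', 'b', 'c'], concurrency)); // [['a'], ['b'], ['c']]
```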
private normalizeOptions(
  options?: CancelPipelineOptions
): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
  return {
    excludePatterns: options?.excludePatterns || [],
    includeCompleted: options?.includeCompleted || false,
    eventType: options?.eventType,
    branch: options?.branch,
    concurrency: options?.concurrency || 10,
    dryRun: options?.dryRun || false,
  };
Prevent negative concurrency from stalling cancellations.
Here too, a negative concurrency propagates into chunkArray, resulting in a non-terminating loop and stuck cleanup. Let’s clamp to at least 1 so Jenkins-based runs don’t hang.
- concurrency: options?.concurrency || 10,
+ concurrency: Math.max(1, options?.concurrency ?? 10),
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/jenkinsCI.ts around lines 643 to 653,
normalizeOptions currently passes a possibly negative concurrency through to
chunkArray which can cause an infinite loop; clamp the computed concurrency to a
minimum of 1 (e.g. compute the defaulted value then apply Math.max(1, ...)) so
concurrency is never less than 1 before returning the options.
// 4. Apply filters
const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);

console.log(`[Tekton] ${pipelineRunsToCancel.length} PipelineRuns match filters`);
console.log(`[Tekton] ${allPipelineRuns.length - pipelineRunsToCancel.length} PipelineRuns filtered out`);

// 5. Cancel PipelineRuns in batches
await this.cancelPipelineRunsInBatches(pipelineRunsToCancel, opts, result);
Update skipped counter to match filtered PipelineRuns.
Just like the GitHub provider, we never increment result.skipped for runs dropped by filters. That misleads our cancellation telemetry and test assertions. Capture the delta right after filtering:
- const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
+ const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
+ result.skipped += allPipelineRuns.length - pipelineRunsToCancel.length;
📝 Committable suggestion
  // 4. Apply filters
  const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
+ result.skipped += allPipelineRuns.length - pipelineRunsToCancel.length;
  console.log(`[Tekton] ${pipelineRunsToCancel.length} PipelineRuns match filters`);
  console.log(`[Tekton] ${allPipelineRuns.length - pipelineRunsToCancel.length} PipelineRuns filtered out`);
  // 5. Cancel PipelineRuns in batches
  await this.cancelPipelineRunsInBatches(pipelineRunsToCancel, opts, result);
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/tektonCI.ts around lines 447 to 455,
after applying filters you fail to increment result.skipped for the PipelineRuns
dropped by filtering; compute the number filtered (allPipelineRuns.length -
pipelineRunsToCancel.length) and add that delta to result.skipped immediately
after filtering (before logging/cancelling) so telemetry/tests reflect the
skipped count correctly.
private matchesEventType(pipelineRun: any, eventType: EventType): boolean {
  const tektonEventType = pipelineRun.metadata?.labels?.['pipelinesascode.tekton.dev/event-type'];

  if (!tektonEventType) {
    return true; // If no event type label, allow all
  }

  switch (eventType) {
    case EventType.PUSH:
      return tektonEventType === 'push';
    case EventType.PULL_REQUEST:
      return tektonEventType === 'pull_request';
    default:
      return false;
  }
}
Honor explicit event-type filters.
When callers request a specific EventType, we must not cancel runs whose Tekton label is missing; returning true here silently widens the scope and breaks targeted clean-up tests. Please treat the absence of pipelinesascode.tekton.dev/event-type as a mismatch (skip) unless includeCompleted or other options explicitly override it.
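A hedged sketch of what that stricter check could look like. It mirrors the label key and EventType values quoted above, but the standalone function form and the allowMissingEventType override flag are illustrative assumptions, not part of the PR:

```typescript
// Sketch only: treat a missing Pipelines-as-Code label as a mismatch unless explicitly allowed.
enum EventType { PUSH = 'push', PULL_REQUEST = 'pull_request' }

function matchesEventType(
  pipelineRun: { metadata?: { labels?: Record<string, string> } },
  eventType: EventType,
  allowMissingEventType = false // hypothetical override flag
): boolean {
  const tektonEventType =
    pipelineRun.metadata?.labels?.['pipelinesascode.tekton.dev/event-type'];

  if (!tektonEventType) {
    // An explicit eventType filter should not silently match unlabeled runs.
    return allowMissingEventType;
  }

  switch (eventType) {
    case EventType.PUSH:
      return tektonEventType === 'push';
    case EventType.PULL_REQUEST:
      return tektonEventType === 'pull_request';
    default:
      return false;
  }
}
```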
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/tektonCI.ts around lines 608-623, the
matchesEventType function currently treats a missing
pipelinesascode.tekton.dev/event-type label as a match (returning true); change
this so a missing label is treated as a mismatch (return false) so explicit
EventType filters are honored. If there is a legitimate case to allow missing
labels, add an explicit override flag (e.g. includeCompleted or
allowMissingEventType) to the function signature and use it to allow the old
behavior only when explicitly set; update call sites and tests accordingly.
// It is possible to trigger multiple pipelines when a new component is created and make some changes
// to the both source and gitops repos. These pipelines are not needed for the test and should be cancelled.
await ci.cancelAllPipelines();
console.log('All pipelines have been cancelled!');
console.log('Component creation is complete!');
Assert cancelAllPipelines result for reliability.
We’re swallowing the CancelResult, so a failed cancellation would still let this workflow test pass, masking real regressions in the new cancellation path. Let’s assert the failure count so the suite stays trustworthy.
- await ci.cancelAllPipelines();
- console.log('All pipelines have been cancelled!');
+ const cancelResult = await ci.cancelAllPipelines();
+ expect(cancelResult.failed).toBe(0);
+ console.log(`All pipelines cancelled: ${cancelResult.cancelled}/${cancelResult.total}`);
console.log('Component creation is complete!');🤖 Prompt for AI Agents
In tests/tssc/full_workflow.test.ts around lines 71 to 75, the call to await
ci.cancelAllPipelines() ignores its CancelResult so cancellation failures are
swallowed; capture the returned result into a variable and add an assertion that
result.failed === 0 (or equivalent test assertion) so the test fails when any
pipeline cancellation did not succeed, optionally including the result in the
assertion message for debugging.
Assisted-by: Claude Code
Force-pushed from f8c1055 to e934eec (Compare)
Actionable comments posted: 5
♻️ Duplicate comments (8)
src/api/ocp/kubeClient.ts (1)
268-297: Set merge-patch Content-Type on patchNamespacedCustomObject to avoid 415s
For CRDs like Tekton PipelineRuns/TaskRuns, patchNamespacedCustomObject typically requires Content-Type: application/merge-patch+json. Without this, the API server commonly returns 415, meaning your new patchResource (and thus cancelAllPipelines) will silently fail at runtime and make the E2E cancellation path unreliable.
Please set the header so these patches are accepted consistently:
 public async patchResource<T>(options: K8sApiOptions, patchData: any): Promise<T> {
 @@
-    const response = await this.customApi.patchNamespacedCustomObject({
-      group: options.group,
-      version: options.version,
-      namespace: options.namespace,
-      plural: options.plural,
-      name: options.name,
-      body: patchData,
-    });
+    const response = await this.customApi.patchNamespacedCustomObject({
+      group: options.group,
+      version: options.version,
+      namespace: options.namespace,
+      plural: options.plural,
+      name: options.name,
+      body: patchData,
+      headers: {
+        'Content-Type': 'application/merge-patch+json',
+      },
+    });
     return response as T;
This is important for the reliability of the new cancellation APIs in CI/E2E scenarios.
src/api/azure/services/azure-pipeline.service.ts (1)
171-203: Abort retries on permanent HTTP errors and harden error typing in cancelBuild.
Right now 400/403/404 responses will be retried up to 3 times and the catch block relies on error: any with unguarded error.response/error.message. For CI/E2E test stability and faster failure feedback, this should (a) stop retrying on clearly non-retryable codes and (b) use safer error typing and guards.
You can address both in one pass like this:
 public async cancelBuild(buildId: number): Promise<void> {
   try {
     await retry(
-      async () => {
-        await this.client.patch(
-          `${this.project}/_apis/build/builds/${buildId}?${this.getApiVersionParam()}`,
-          { status: 'cancelling' }
-        );
-      },
+      async () => {
+        try {
+          await this.client.patch(
+            `${this.project}/_apis/build/builds/${buildId}?${this.getApiVersionParam()}`,
+            { status: 'cancelling' }
+          );
+        } catch (error: unknown) {
+          if (axios.isAxiosError(error)) {
+            const status = error.response?.status;
+            if (status && [400, 403, 404].includes(status)) {
+              // Permanent failures – abort further retries
+              throw new retry.AbortError(error);
+            }
+          }
+          throw error;
+        }
+      },
       {
         retries: 3,
         minTimeout: 1000,
         maxTimeout: 5000,
         onRetry: (error: Error, attempt: number) => {
           console.log(`[Azure] Retry ${attempt}/3 - Cancelling build ${buildId}: ${error.message}`);
         },
       }
     );
     console.log(`[Azure] Successfully cancelled build ${buildId}`);
-  } catch (error: any) {
-    // Handle specific error cases
-    if (error.response?.status === 404) {
-      throw new Error(`Build ${buildId} not found`);
-    }
-    if (error.response?.status === 403) {
-      throw new Error(`Insufficient permissions to cancel build ${buildId}`);
-    }
-    if (error.response?.status === 400) {
-      throw new Error(`Build ${buildId} cannot be cancelled (already completed or not cancellable)`);
-    }
-    throw new Error(`Failed to cancel build ${buildId}: ${error.message}`);
+  } catch (error: unknown) {
+    if (axios.isAxiosError(error)) {
+      const status = error.response?.status;
+      if (status === 404) {
+        throw new Error(`Build ${buildId} not found`);
+      }
+      if (status === 403) {
+        throw new Error(`Insufficient permissions to cancel build ${buildId}`);
+      }
+      if (status === 400) {
+        throw new Error(
+          `Build ${buildId} cannot be cancelled (already completed or not cancellable)`
+        );
+      }
+    }
+
+    const message = error instanceof Error ? error.message : String(error);
+    throw new Error(`Failed to cancel build ${buildId}: ${message}`);
   }
 }
This reduces wasted retry time for doomed cancellations (good for faster, more deterministic E2E runs) and makes the error handling more robust and maintainable.
Please double-check the async-retry AbortError behavior and axios.isAxiosError usage against the versions used in this repo to ensure the typings and runtime behavior match expectations.
src/rhtap/core/integration/ci/providers/githubActionsCI.ts (2)
400-405: Account for filtered-out workflow runs in result.skipped to honor the CancelResult contract.
After filtering, you log how many runs were filtered out but never add them to result.skipped, even though the interface comment says "skipped (due to filters or already completed)":
const workflowRunsToCancel = this.filterWorkflowRuns(allWorkflowRuns, opts);
console.log(`[GitHubActions] ${workflowRunsToCancel.length} workflow runs match filters`);
console.log(
  `[GitHubActions] ${allWorkflowRuns.length - workflowRunsToCancel.length} workflow runs filtered out`,
);
For reliable test assertions and consistent telemetry, bump skipped here:
- const workflowRunsToCancel = this.filterWorkflowRuns(allWorkflowRuns, opts);
+ const workflowRunsToCancel = this.filterWorkflowRuns(allWorkflowRuns, opts);
+ result.skipped += allWorkflowRuns.length - workflowRunsToCancel.length;
This keeps per-run details focused on attempted cancellations while still reflecting how many pipelines were avoided due to filters.
444-483: Paginate getWorkflowRuns so "cancel all" really covers all pipelines.
fetchAllWorkflowRuns currently grabs only the first page (up to 100 runs) from each repo:
const responseSource = await this.githubClient.actions.getWorkflowRuns(..., { per_page: 100 });
...
const responseGitops = await this.githubClient.actions.getWorkflowRuns(..., { per_page: 100 });
Busy repos will have far more than 100 historical runs, so older ones will never be seen or cancelled. That weakens cleanup guarantees and can leave residue affecting long-running E2E environments.
Consider paginating until no more pages are returned (or following the GitHub pagination mechanism exposed by your GithubClient), accumulating and tagging each page's workflow_runs into allWorkflowRuns before filtering/cancelling; a sketch of such a loop follows below.
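For illustration only, a hedged sketch of page-by-page fetching with plain Octokit. The paginate.iterator call and listWorkflowRunsForRepo endpoint are standard Octokit APIs, but the helper name, the owner/repo parameters, and the repoName tagging field are placeholders rather than the project's actual GithubClient surface:

```typescript
import { Octokit } from '@octokit/rest';

// Hypothetical helper: collect every workflow run for a repo, page by page.
async function fetchAllWorkflowRuns(octokit: Octokit, owner: string, repo: string) {
  const allRuns: Array<{ id: number; repoName: string }> = [];

  // paginate.iterator keeps requesting pages until GitHub returns no more results.
  for await (const page of octokit.paginate.iterator(
    octokit.rest.actions.listWorkflowRunsForRepo,
    { owner, repo, per_page: 100 }
  )) {
    for (const run of page.data) {
      // Tag each run with its source repo so later filtering/cancellation knows the origin.
      allRuns.push({ id: run.id, repoName: repo });
    }
  }

  return allRuns;
}
```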
src/rhtap/core/integration/ci/providers/azureCI.ts (1)
526-537: Clamp concurrency to avoid hangs in Azure cancellation batches.
As with the other providers, normalizeOptions passes options?.concurrency || 10 directly into chunkArray, so a negative concurrency produces a non-terminating loop in the batch splitter. That can wedge Azure-backed E2E runs.
Recommend clamping:
-  private normalizeOptions(
-    options?: CancelPipelineOptions
-  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
-    return {
+  private normalizeOptions(
+    options?: CancelPipelineOptions
+  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> &
+    Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
+    const concurrency = Math.max(1, options?.concurrency ?? 10);
+    return {
       excludePatterns: options?.excludePatterns || [],
       includeCompleted: options?.includeCompleted || false,
       eventType: options?.eventType,
       branch: options?.branch,
-      concurrency: options?.concurrency || 10,
+      concurrency,
       dryRun: options?.dryRun || false,
     };
   }
src/rhtap/core/integration/ci/providers/gitlabCI.ts (1)
348-359: Guard concurrency in GitLab normalizeOptions to prevent non-terminating loops.
normalizeOptions currently lets negative concurrency values flow into chunkArray, which uses that as the step in its for loop and can hang cancellation runs:
const batches = this.chunkArray(pipelines, options.concurrency);
Use a safe minimum:
-  private normalizeOptions(
-    options?: CancelPipelineOptions
-  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
-    return {
+  private normalizeOptions(
+    options?: CancelPipelineOptions
+  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> &
+    Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
+    const concurrency = Math.max(1, options?.concurrency ?? 10);
+    return {
       excludePatterns: options?.excludePatterns || [],
       includeCompleted: options?.includeCompleted || false,
       eventType: options?.eventType,
       branch: options?.branch,
-      concurrency: options?.concurrency || 10,
+      concurrency,
       dryRun: options?.dryRun || false,
     };
   }
src/rhtap/core/integration/ci/providers/jenkinsCI.ts (1)
643-653: Clamp Jenkins concurrency to keep batch cancellation from stalling.
normalizeOptions currently allows negative concurrency to reach chunkArray, which makes the for (i += size) loop never terminate and can wedge Jenkins-based test runs:
const batches = this.chunkArray(builds, options.concurrency);
Suggest:
-  private normalizeOptions(
-    options?: CancelPipelineOptions
-  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
-    return {
+  private normalizeOptions(
+    options?: CancelPipelineOptions
+  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> &
+    Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
+    const concurrency = Math.max(1, options?.concurrency ?? 10);
+    return {
       excludePatterns: options?.excludePatterns || [],
       includeCompleted: options?.includeCompleted || false,
       eventType: options?.eventType,
       branch: options?.branch,
-      concurrency: options?.concurrency || 10,
+      concurrency,
       dryRun: options?.dryRun || false,
     };
   }
src/rhtap/core/integration/ci/providers/tektonCI.ts (1)
418-487: Tighten Tekton cancellation options: concurrency clamp, skipped accounting, and event-type semantics.
In this cancellation entrypoint there are three issues that affect reliability and observability:
1. Concurrency can be negative, stalling batches
normalizeOptions passes options?.concurrency || 10 directly into chunkArray, so a negative concurrency will cause the batch loop never to terminate. Clamp it:
-  private normalizeOptions(
-    options?: CancelPipelineOptions
-  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
-    return {
+  private normalizeOptions(
+    options?: CancelPipelineOptions
+  ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> &
+    Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
+    const concurrency = Math.max(1, options?.concurrency ?? 10);
+    return {
       excludePatterns: options?.excludePatterns || [],
       includeCompleted: options?.includeCompleted || false,
       eventType: options?.eventType,
       branch: options?.branch,
-      concurrency: options?.concurrency || 10,
+      concurrency,
       dryRun: options?.dryRun || false,
     };
   }
2. Filtered PipelineRuns aren't reflected in result.skipped (previously flagged)
After filtering you only log the counts:
const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
console.log(`[Tekton] ${pipelineRunsToCancel.length} PipelineRuns match filters`);
console.log(
  `[Tekton] ${allPipelineRuns.length - pipelineRunsToCancel.length} PipelineRuns filtered out`,
);
but result.skipped stays unchanged. To align with CancelResult's definition:
- const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
+ const pipelineRunsToCancel = this.filterPipelineRuns(allPipelineRuns, opts);
+ result.skipped += allPipelineRuns.length - pipelineRunsToCancel.length;
3. Missing event-type labels still pass explicit filters (previously flagged)
matchesEventType currently treats the absence of pipelinesascode.tekton.dev/event-type as a match:
if (!tektonEventType) {
  return true; // If no event type label, allow all
}
When callers set options.eventType, this widens the cancellation scope unexpectedly and can break targeted cleanup tests. Consider instead treating "no label" as a mismatch when an explicit eventType filter is present, so only labeled runs are included unless you introduce an explicit override flag.
🧹 Nitpick comments (2)
src/api/ocp/kubeClient.ts (1)
268-297: Align patch error handling with listResources for better retryability
Right now any failure in patchResource is logged and rethrown as a generic Error, losing signal like 429/rate-limit vs hard failures. For the new cancelAllPipelines flow, distinguishing transient errors would help upstream callers implement retries/circuit breaking and reduce flaky E2E runs.
Consider mirroring the pattern from listResources:
- Inspect the error object/HTTP status.
- For 429/TooManyRequests, throw a specific error (or a typed error) so callers can decide to retry.
- For non-transient errors, keep logging but preserve as much of the original error as possible (e.g., rethrow error if it is already an Error instead of always wrapping it).
This will make pipeline cancellation more robust under API server load without changing the public API; a hedged sketch of this triage follows below.
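A minimal sketch of that error triage, assuming the Kubernetes client surfaces an HTTP status code on the thrown error. The error shape, the RateLimitError class, and the wrapper function are assumptions for illustration, not the project's kubeClient code:

```typescript
// Hypothetical typed error so callers can choose to retry on rate limiting.
class RateLimitError extends Error {}

async function patchWithTriage(doPatch: () => Promise<unknown>): Promise<unknown> {
  try {
    return await doPatch();
  } catch (error: unknown) {
    // Assumed error shape: many Kubernetes clients expose the HTTP code as `statusCode` or `code`.
    const status =
      (error as { statusCode?: number })?.statusCode ??
      (error as { code?: number })?.code;

    if (status === 429) {
      // Transient: signal retryability to the caller instead of wrapping it away.
      throw new RateLimitError('API server is rate limiting patch requests; retry later');
    }

    console.error('Patch failed:', error);
    // Preserve the original error when it already is an Error.
    throw error instanceof Error ? error : new Error(String(error));
  }
}
```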
src/rhtap/core/integration/ci/ciInterface.ts (1)
68-101: Cancellation contract is solid; ensure providers fully honor skipped semantics.
The new CancelPipelineOptions / CancelResult / PipelineCancelDetail / CancelError types and the cancelAllPipelines(...) interface provide a clean, testable contract across CI providers. In particular, the doc for CancelResult.skipped ("skipped (due to filters or already completed)") is important for E2E assertions.
Right now, several provider implementations (GitHub, Azure, GitLab, Jenkins, Tekton) only increment skipped for dry-run paths and not for pipelines dropped by filters or because they're already completed. When you wire up provider-level tests against this interface, it would be good to verify that skipped is incremented consistently according to this definition so cross-provider cleanup behavior is predictable.
Also applies to: 198-223
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
src/api/azure/services/azure-pipeline.service.ts (1 hunks)
src/api/github/services/github-actions.service.ts (1 hunks)
src/api/jenkins/services/jenkins-build.service.ts (1 hunks)
src/api/ocp/kubeClient.ts (1 hunks)
src/api/tekton/services/tekton-pipelinerun.service.ts (1 hunks)
src/api/tekton/tekton.client.ts (1 hunks)
src/rhtap/core/integration/ci/baseCI.ts (2 hunks)
src/rhtap/core/integration/ci/ciInterface.ts (2 hunks)
src/rhtap/core/integration/ci/providers/azureCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/githubActionsCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/gitlabCI.ts (2 hunks)
src/rhtap/core/integration/ci/providers/jenkinsCI.ts (3 hunks)
src/rhtap/core/integration/ci/providers/tektonCI.ts (3 hunks)
src/utils/test/common.ts (0 hunks)
tests/api/git/github.test.ts (0 hunks)
tests/tssc/full_workflow.test.ts (1 hunks)
💤 Files with no reviewable changes (2)
- src/utils/test/common.ts
- tests/api/git/github.test.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- src/api/github/services/github-actions.service.ts
- src/api/tekton/tekton.client.ts
- tests/tssc/full_workflow.test.ts
- src/api/jenkins/services/jenkins-build.service.ts
🧰 Additional context used
📓 Path-based instructions (3)
src/api/**
⚙️ CodeRabbit configuration file
Focus on error handling, retry mechanisms, proper async/await usage, API client patterns, authentication security, and comprehensive logging for debugging. Ensure all API calls have appropriate timeouts and circuit breaker patterns.
Files:
src/api/azure/services/azure-pipeline.service.ts
src/api/ocp/kubeClient.ts
src/api/tekton/services/tekton-pipelinerun.service.ts
src/rhtap/core/**
⚙️ CodeRabbit configuration file
Review component architecture, interface design, abstraction patterns, extensibility, integration reliability, and proper type definitions. Ensure the factory and strategy patterns are implemented correctly.
Files:
src/rhtap/core/integration/ci/baseCI.ts
src/rhtap/core/integration/ci/providers/gitlabCI.ts
src/rhtap/core/integration/ci/providers/tektonCI.ts
src/rhtap/core/integration/ci/ciInterface.ts
src/rhtap/core/integration/ci/providers/azureCI.ts
src/rhtap/core/integration/ci/providers/githubActionsCI.ts
src/rhtap/core/integration/ci/providers/jenkinsCI.ts
src/rhtap/core/integration/**
⚙️ CodeRabbit configuration file
Focus on integration reliability, error handling, factory pattern implementation, provider abstraction, configuration management, and testing of external dependencies. Validate connection pooling and resource management.
Files:
src/rhtap/core/integration/ci/baseCI.ts
src/rhtap/core/integration/ci/providers/gitlabCI.ts
src/rhtap/core/integration/ci/providers/tektonCI.ts
src/rhtap/core/integration/ci/ciInterface.ts
src/rhtap/core/integration/ci/providers/azureCI.ts
src/rhtap/core/integration/ci/providers/githubActionsCI.ts
src/rhtap/core/integration/ci/providers/jenkinsCI.ts
🧬 Code graph analysis (6)
src/rhtap/core/integration/ci/baseCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (2)
CancelPipelineOptions (29-66), CancelResult (71-101)
src/rhtap/core/integration/ci/providers/gitlabCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/rhtap/core/integration/ci/providers/tektonCI.ts (2)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/tekton/tekton.client.ts (1)
pipelineRuns(52-54)
src/rhtap/core/integration/ci/providers/azureCI.ts (2)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/azure/types/azure.types.ts (1)
AzureBuild(66-83)
src/rhtap/core/integration/ci/providers/githubActionsCI.ts (2)
src/rhtap/core/integration/ci/ciInterface.ts (4)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), CancelError (146-171)
src/api/github/types/github.types.ts (1)
WorkflowRun(49-65)
src/rhtap/core/integration/ci/providers/jenkinsCI.ts (1)
src/rhtap/core/integration/ci/ciInterface.ts (5)
CancelPipelineOptions (29-66), CancelResult (71-101), PipelineCancelDetail (106-141), PipelineStatus (225-225), CancelError (146-171)
🔇 Additional comments (2)
src/rhtap/core/integration/ci/baseCI.ts (1)
129-138: Base abstraction for cancellation looks consistent and testable.
The new cancelAllPipelines(options?: CancelPipelineOptions): Promise<CancelResult> matches the CI interface and gives a clear, uniform contract for providers to implement and test against. No issues from a correctness or maintainability standpoint.
src/rhtap/core/integration/ci/providers/tektonCI.ts (1)
656-726: Tekton single-run cancellation wiring looks good, subject to underlying status value.
cancelSinglePipelineRun and cancelPipelineRunViaAPI are wired cleanly into TektonClient.cancelPipelineRun, update CancelResult totals correctly for dry-run, success, and failure, and capture provider errors into CancelError. From a maintainability and testing perspective, this arrangement is solid.
The main caveat is ensuring the underlying TektonPipelineRunService.cancelPipelineRun uses the correct spec.status value so these calls actually cancel runs (see comment on tekton-pipelinerun.service.ts).
/**
 * Cancel a running PipelineRun by patching its spec
 */
public async cancelPipelineRun(namespace: string, name: string): Promise<void> {
  try {
    const options = this.kubeClient.createApiOptions(
      this.API_GROUP,
      this.API_VERSION,
      this.PIPELINE_RUNS_PLURAL,
      namespace,
      { name }
    );

    // Patch the PipelineRun to set status to cancelled
    const patchData = {
      spec: {
        status: 'PipelineRunCancelled'
      }
    };

    await this.kubeClient.patchResource(options, patchData);

    console.log(`Successfully cancelled PipelineRun: ${name} in namespace: ${namespace}`);
  } catch (error: unknown) {
    const errorMessage = (error as Error).message;
    console.error(`Failed to cancel PipelineRun ${name}: ${errorMessage}`);
    throw new Error(`Failed to cancel PipelineRun ${name}: ${errorMessage}`);
  }
🧩 Analysis chain
Verify Tekton spec.status value; current 'PipelineRunCancelled' is likely wrong.
The cancel path patches:
const patchData = {
spec: {
status: 'PipelineRunCancelled',
},
};
Tekton controllers generally look for specific spec.status values (commonly Cancelled / related variants). Using a non-standard string here would silently fail to cancel runs while logging success, which undermines test reliability and CI cleanup.
I recommend verifying the exact spec.status value expected by your Tekton version and updating this constant accordingly, e.g.:
- spec: {
- status: 'PipelineRunCancelled'
- }
+ spec: {
+ status: 'Cancelled'
+ }(or whichever value Tekton officially documents).
Please confirm this against the Tekton PipelineRun cancellation documentation for the cluster version you target and adjust the string if needed so tests actually observe cancelled runs.
I'll verify the Tekton spec.status value and check the codebase for context on the target version and any existing patterns.
Now let me check the codebase for any version indicators, test patterns, or related cancellation logic:
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for Tekton version references or cancellation patterns
fd -e ts -e js -e yaml -e json | xargs grep -l "tekton\|version\|Cancelled" 2>/dev/null | head -20
```

Length of output: 810
🏁 Script executed:
#!/bin/bash
# Look for test files related to tekton pipelinerun service
fd -type f "*.spec.ts|*.test.ts" | grep -i tekton

Length of output: 170
🏁 Script executed:
#!/bin/bash
# Check for any related error handling or logging patterns in the service
grep -r "PipelineRunCancelled\|Cancelled\|spec.*status" src/api/tekton --include="*.ts" -n

Length of output: 98
🏁 Script executed:
#!/bin/bash
# Check package.json or dependency files for Tekton version
fd -e "package.json" -e "package-lock.json" -e "yarn.lock" | xargs grep -i tekton 2>/dev/null

Length of output: 0
Update spec.status to use 'Cancelled' instead of deprecated 'PipelineRunCancelled' for reliable test teardown.
The status 'PipelineRunCancelled' is deprecated and replaced by 'Cancelled', and will be removed in v1. Using the deprecated value here causes silent failures: the patch accepts it, but the Tekton controller won't process it as a cancellation instruction, leaving runs in a stalled state.
For immediate correctness and forward compatibility:
- spec: {
- status: 'PipelineRunCancelled'
- }
+ spec: {
+ status: 'Cancelled'
+ }
This is particularly critical for E2E framework reliability—silent cancellation failures undermine test cleanup and CI stability. Additionally, consider adding unit test coverage for this cancellation path to catch such issues early.
🤖 Prompt for AI Agents
In src/api/tekton/services/tekton-pipelinerun.service.ts around lines 194 to
221, the patch sets spec.status to the deprecated 'PipelineRunCancelled'; change
the patchData to set spec.status to 'Cancelled' instead and update any related
constants/strings; after that, run/add a unit test for the cancelPipelineRun
path to assert that patchResource is called with spec.status 'Cancelled' to
prevent regressions.
// 4. Apply filters
const buildsToCancel = this.filterBuilds(allBuilds, opts);

console.log(`[Azure] ${buildsToCancel.length} builds match filters`);
console.log(`[Azure] ${allBuilds.length - buildsToCancel.length} builds filtered out`);
Update result.skipped to reflect builds removed by filters.
After filtering builds, you log how many were filtered out but do not update result.skipped:
const buildsToCancel = this.filterBuilds(allBuilds, opts);
console.log(`[Azure] ${buildsToCancel.length} builds match filters`);
console.log(`[Azure] ${allBuilds.length - buildsToCancel.length} builds filtered out`);

Given the shared CancelResult contract (“skipped (due to filters or already completed)”), these filtered builds should be counted as skipped:
- const buildsToCancel = this.filterBuilds(allBuilds, opts);
+ const buildsToCancel = this.filterBuilds(allBuilds, opts);
+ result.skipped += allBuilds.length - buildsToCancel.length;

This makes Azure’s behavior line up with other providers and keeps cleanup-related tests honest.
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/azureCI.ts around lines 498 to 503,
after computing buildsToCancel = this.filterBuilds(allBuilds, opts) and logging
counts, update the CancelResult object’s skipped count to include the builds
removed by filters (i.e., increment result.skipped by allBuilds.length -
buildsToCancel.length or set it explicitly) so that filtered-out builds are
reported as skipped per the shared contract; ensure this change happens before
returning result.
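To make the intended accounting concrete, here is a self-contained sketch of the pattern; the `CancelResult` fields match this PR, but `Build` and `applyFilters` are simplified stand-ins for the Azure provider code.

```typescript
// Simplified stand-ins: only the skipped accounting is the point here.
interface CancelResult {
  total: number;
  cancelled: number;
  failed: number;
  skipped: number;
}

interface Build {
  id: number;
  status: 'inProgress' | 'completed';
}

function applyFilters(allBuilds: Build[], result: CancelResult, includeCompleted: boolean): Build[] {
  const buildsToCancel = allBuilds.filter(b => includeCompleted || b.status !== 'completed');
  // Everything removed by the filters is reported as skipped, per the shared contract.
  result.skipped += allBuilds.length - buildsToCancel.length;
  return buildsToCancel;
}

// Example: one in-progress and one completed build -> skipped ends up as 1.
const result: CancelResult = { total: 2, cancelled: 0, failed: 0, skipped: 0 };
const toCancel = applyFilters(
  [
    { id: 1, status: 'inProgress' },
    { id: 2, status: 'completed' },
  ],
  result,
  false
);
console.log(toCancel.length, result.skipped); // 1 1
```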
private normalizeOptions(
  options?: CancelPipelineOptions
): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
  return {
    excludePatterns: options?.excludePatterns || [],
    includeCompleted: options?.includeCompleted || false,
    eventType: options?.eventType,
    branch: options?.branch,
    concurrency: options?.concurrency || 10,
    dryRun: options?.dryRun || false,
  };
}
Clamp concurrency to avoid infinite loops in batching.
normalizeOptions currently passes options?.concurrency || 10 straight into chunkArray, which means a negative concurrency yields a negative step in:
for (let i = 0; i < array.length; i += size) {
chunks.push(array.slice(i, i + size));
}

This will never terminate and will hang cancellations (and any tests depending on them).
Guard and clamp the value, e.g.:
- private normalizeOptions(
- options?: CancelPipelineOptions
- ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
- return {
+ private normalizeOptions(
+ options?: CancelPipelineOptions
+ ): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> &
+ Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
+ const concurrency = Math.max(1, options?.concurrency ?? 10);
+ return {
excludePatterns: options?.excludePatterns || [],
includeCompleted: options?.includeCompleted || false,
eventType: options?.eventType,
branch: options?.branch,
- concurrency: options?.concurrency || 10,
+ concurrency,
dryRun: options?.dryRun || false,
};
}

Also applies to: 709-715
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/githubActionsCI.ts around lines
428-439 (and similarly at 709-715), normalizeOptions currently uses
options?.concurrency || 10 which can allow zero or negative values and cause
infinite loops when used as a chunk size; update normalizeOptions to validate
and clamp concurrency to a sensible positive range (e.g., ensure at least 1 and
optionally cap to a max like 1000), so replace the raw value with a
sanitizedNumber = Math.max(1, Math.min(max, Number(options?.concurrency) || 10))
and return that sanitizedNumber for concurrency to guarantee chunking/loops
always progress.
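A small sketch of why the clamp matters: `chunkArray` mirrors the loop quoted above, while `normalizeConcurrency` is a hypothetical helper implementing the suggested guard.

```typescript
// chunkArray mirrors the loop quoted above: with a negative size, i never advances.
function chunkArray<T>(array: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical guard: clamp to at least 1 and cap at a sane maximum before chunking.
function normalizeConcurrency(concurrency?: number, max = 1000): number {
  return Math.max(1, Math.min(max, Number(concurrency) || 10));
}

const builds = [1, 2, 3, 4, 5];
// chunkArray(builds, -5) would loop forever; clamping first keeps batching finite.
console.log(chunkArray(builds, normalizeConcurrency(-5)));        // [[1], [2], [3], [4], [5]]
console.log(chunkArray(builds, normalizeConcurrency(undefined))); // [[1, 2, 3, 4, 5]]
```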
// 4. Apply filters
const pipelinesToCancel = this.filterPipelines(allPipelines, opts);

console.log(`[GitLabCI] ${pipelinesToCancel.length} pipelines match filters`);
console.log(`[GitLabCI] ${allPipelines.length - pipelinesToCancel.length} pipelines filtered out`);
Count filtered GitLab pipelines in result.skipped for accurate reporting.
You compute and log how many pipelines are filtered out, but result.skipped remains unaffected:
const pipelinesToCancel = this.filterPipelines(allPipelines, opts);
console.log(`[GitLabCI] ${pipelinesToCancel.length} pipelines match filters`);
console.log(
`[GitLabCI] ${allPipelines.length - pipelinesToCancel.length} pipelines filtered out`,
);

To match the documented semantics of CancelResult.skipped and keep E2E expectations consistent:
- const pipelinesToCancel = this.filterPipelines(allPipelines, opts);
+ const pipelinesToCancel = this.filterPipelines(allPipelines, opts);
+ result.skipped += allPipelines.length - pipelinesToCancel.length;

🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/gitlabCI.ts around lines 320 to 325,
you log how many pipelines were filtered out but never record them in the
CancelResult, leaving result.skipped unchanged; compute the filteredOut
pipelines as the difference between allPipelines and pipelinesToCancel and
assign them to result.skipped (or map to IDs if CancelResult.skipped expects
IDs/count), so the result reflects the actual skipped pipelines and keeps E2E
expectations consistent.
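As a usage-level illustration, a hypothetical call-site sketch showing how the corrected count surfaces in the returned result; the option names follow this PR, while the inline interfaces are stand-ins for the real GitLabCI types.

```typescript
// Hypothetical call-site sketch; `ci` stands in for a GitLabCI instance from this PR.
interface CancelResultLike {
  total: number;
  cancelled: number;
  failed: number;
  skipped: number;
}

interface CancelCapable {
  cancelAllPipelines(options?: {
    includeCompleted?: boolean;
    dryRun?: boolean;
  }): Promise<CancelResultLike>;
}

async function cleanupPipelines(ci: CancelCapable): Promise<CancelResultLike> {
  const result = await ci.cancelAllPipelines({ includeCompleted: false, dryRun: true });
  // With the fix, pipelines removed by the filters show up in `skipped`
  // instead of silently disappearing from the totals.
  console.log(`total=${result.total} cancelled=${result.cancelled} skipped=${result.skipped}`);
  return result;
}
```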
// 4. Apply filters
const buildsToCancel = this.filterBuilds(allBuilds, opts);

console.log(`[Jenkins] ${buildsToCancel.length} builds match filters`);
console.log(`[Jenkins] ${allBuilds.length - buildsToCancel.length} builds filtered out`);

// 5. Cancel builds in batches
await this.cancelBuildsInBatches(buildsToCancel, opts, result);

// 6. Log summary
console.log(`[Jenkins] Cancellation complete:`, {
  total: result.total,
  cancelled: result.cancelled,
  failed: result.failed,
  skipped: result.skipped,
});

} catch (error: any) {
  console.error(`[Jenkins] Error in cancelAllPipelines: ${error.message}`);
  throw new Error(`Failed to cancel pipelines: ${error.message}`);
}

return result;
}

/**
 * Normalize options with defaults
 */
private normalizeOptions(
  options?: CancelPipelineOptions
): Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'> {
  return {
    excludePatterns: options?.excludePatterns || [],
    includeCompleted: options?.includeCompleted || false,
    eventType: options?.eventType,
    branch: options?.branch,
    concurrency: options?.concurrency || 10,
    dryRun: options?.dryRun || false,
  };
}

/**
 * Fetch all builds from Jenkins API (both source and gitops jobs)
 */
private async fetchAllBuilds(): Promise<any[]> {
  try {
    const allBuilds: any[] = [];
    const folderName = this.componentName;

    // Fetch builds from source job
    const sourceJobName = this.componentName;
    try {
      const sourceBuilds = await this.jenkinsClient.builds.getRunningBuilds(sourceJobName, folderName);

      // Tag builds with their job name for later cancellation
      const taggedSourceBuilds = (sourceBuilds || []).map(build => ({
        ...build,
        _jobName: sourceJobName
      }));
      allBuilds.push(...taggedSourceBuilds);
    } catch (sourceError: any) {
      console.log(`[Jenkins] Source job ${sourceJobName} not found or no builds: ${sourceError.message}`);
    }

    // Fetch builds from gitops job
    const gitopsJobName = `${this.componentName}-gitops`;
    try {
      const gitopsBuilds = await this.jenkinsClient.builds.getRunningBuilds(gitopsJobName, folderName);

      // Tag builds with their job name for later cancellation
      const taggedGitopsBuilds = (gitopsBuilds || []).map(build => ({
        ...build,
        _jobName: gitopsJobName
      }));
      allBuilds.push(...taggedGitopsBuilds);
    } catch (gitopsError: any) {
      // Gitops job might not exist, log but don't fail
      console.log(`[Jenkins] Gitops job ${gitopsJobName} not found or no builds: ${gitopsError.message}`);
    }

    return allBuilds;

  } catch (error: any) {
    console.error(`[Jenkins] Failed to fetch builds: ${error.message}`);
    throw error;
  }
}

/**
 * Filter builds based on cancellation options
 */
private filterBuilds(
  builds: any[],
  options: Required<Omit<CancelPipelineOptions, 'eventType' | 'branch'>> & Pick<CancelPipelineOptions, 'eventType' | 'branch'>
): any[] {
  return builds.filter(build => {
    // Filter 1: Skip completed builds unless includeCompleted is true
    if (!options.includeCompleted && this.isCompletedStatus(build)) {
      console.log(`[Filter] Skipping completed build ${build.number} (${build.result})`);
      return false;
    }

    // Filter 2: Check exclusion patterns
    if (this.matchesExclusionPattern(build, options.excludePatterns)) {
      console.log(`[Filter] Excluding build ${build.number} by pattern`);
      return false;
    }

    // Filter 3: Filter by event type if specified
    if (options.eventType && !this.matchesEventType(build, options.eventType)) {
      console.log(`[Filter] Skipping build ${build.number} (event type mismatch)`);
      return false;
    }

    // Note: Jenkins builds don't have direct branch information in getRunningBuilds
    // Branch filtering would require fetching full build details, skipping for performance
    if (options.branch) {
      console.log(`[Filter] Branch filtering not supported for Jenkins running builds, ignoring branch filter`);
    }

    return true; // Include this build for cancellation
  });
}
Align Jenkins skipped accounting and event-type filtering with the shared contract.
Two related maintainability/reliability points in this block:
- result.skipped ignores filtered builds
After filtering, you only log filtered builds:
const buildsToCancel = this.filterBuilds(allBuilds, opts);
console.log(`[Jenkins] ${buildsToCancel.length} builds match filters`);
console.log(`[Jenkins] ${allBuilds.length - buildsToCancel.length} builds filtered out`);

but never increment result.skipped. To keep cancel telemetry and tests accurate:
- const buildsToCancel = this.filterBuilds(allBuilds, opts);
+ const buildsToCancel = this.filterBuilds(allBuilds, opts);
+ result.skipped += allBuilds.length - buildsToCancel.length;

- Missing triggerType still passes event-type filters
matchesEventType returns true when build.triggerType is missing, even if options.eventType was explicitly set. That widens the cancellation scope beyond what callers requested and can surprise targeted-cleanup tests. Consider treating “no triggerType” as a mismatch when an explicit eventType filter is present (or at least logging that ambiguity clearly) so behavior is consistent with other providers.
🤖 Prompt for AI Agents
In src/rhtap/core/integration/ci/providers/jenkinsCI.ts around lines 615-737,
the filter/logging code records how many builds were filtered but never
increments result.skipped, and matchesEventType currently treats missing
build.triggerType as a pass which widens filtering when an explicit eventType
was requested; update filterBuilds to increment result.skipped for every build
it excludes (completed, excluded-by-pattern, event-type mismatch) so
telemetry/tests remain accurate, and change event-type logic so that if
options.eventType is provided and build.triggerType is undefined it is treated
as a mismatch (and log a clear message about ambiguous/missing triggerType)
rather than allowing the build to pass.
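For the second point, a minimal sketch of the stricter check; the build shape and method name mirror the comment above, but the exact signatures in jenkinsCI.ts are assumptions.

```typescript
// Stricter event-type check: an explicit eventType filter rejects builds
// whose triggerType is missing instead of letting them through.
interface JenkinsBuildLike {
  number: number;
  triggerType?: string;
}

function matchesEventType(build: JenkinsBuildLike, eventType?: string): boolean {
  if (!eventType) {
    return true; // no filter requested, everything matches
  }
  if (!build.triggerType) {
    console.log(
      `[Filter] Build ${build.number} has no triggerType; treating as mismatch for eventType=${eventType}`
    );
    return false;
  }
  return build.triggerType === eventType;
}

// Only builds whose trigger matches the requested event type pass the filter.
console.log(matchesEventType({ number: 7, triggerType: 'pull_request' }, 'pull_request')); // true
console.log(matchesEventType({ number: 8 }, 'pull_request')); // false (previously true)
```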
Assisted-by: Claude Code
Summary by CodeRabbit
New Features
Refactor
✏️ Tip: You can customize this high-level summary in your review settings.