Audit test pipelines #3053
For the MacOS Daily: it was originally implemented in #2626 using the Orka ephemeral workers, and it superseded #2336. The error is something @elastic/ci-systems might need to help with:
IIUC, the recent upgrade in the CI controllers added host key verification by default. We reported this in the past and it was partially fixed, since we don't see the below error anymore but a new one: the error now happens in a subsequent stage that clones a private repository -- see the above console log. It worked in the past. |
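For context, a common mitigation for host key verification failures in non-interactive CI clones is to configure OpenSSH on the worker to accept a host's key on first connection instead of failing. This is only a sketch, not the fix @elastic/ci-systems applied; the `github.com` hostname is an assumption:

```
# ~/.ssh/config on the ephemeral worker (sketch; hostname is an assumption)
Host github.com
    # Accept the host key on first connect instead of aborting the clone.
    # Requires OpenSSH 7.6+. Pre-pinning the key with ssh-keyscan into
    # known_hosts is a stricter alternative for hardened environments.
    StrictHostKeyChecking accept-new
```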
The Docker images pipeline generates the systemd Docker images used in the e2e tests; we are probably the stakeholders. |
@v1v Thanks, that helps. I'm also trying to figure out what it actually does so that I can figure out who the stakeholders should be. I'm doing a bit of code-diving right now to try to get a sense of that. |
Observability Helm Charts can be removed |
@kuisathaverat Thanks! Regarding the Docker images -- that pipeline hasn't been executed for over a year. Does it still need to exist? |
It is the only way to generate those images; it should be executed whenever they change. These images are for testing installation in a systemd environment. The main changes they can have are bumping the systemd version or the Linux version. |
@cmacknz and @jlind23 Are you tracking any issues for the flakiness in the K8s Autodiscover pipeline? |
There was an original request to test on MacOS. It was initially attempted with AWS MacOS workers, but that was declined for various reasons:
I guess the stakeholder might be @jlind23, as he was the original requester for MacOS in AWS. |
@cachedout this is the issue we will use for the first half of 8.6. @AndersonQ is already assigned to this and will work closely with you to get back to a better place. |
@jlind23 That link seems wrong? :) |
Sorry, this one - elastic/elastic-agent#1174 |
No, it may make sense to follow up with the Observability Cloudnative monitoring team to see if they have interest in fixing these tests faster than the agent team can get to them. They have done the majority of the recent work for autodiscovery features in agent. |
Looping in @gizas. We are trying to stabilize the E2E test suite. Are you aware of the flakiness in the k8s autodiscover tests, and if so, is anybody on your team investigating them? |
I have disabled most of the tests in the Fleet E2E suite while we eval what to do with the remaining: elastic/elastic-agent#1174 (comment) |
Sorry for the delayed answer, @cachedout, @cmacknz. Just checking the K8s Autodiscover pipeline. Can you point me to a failing instance so I can have a look? Indeed, we have provided some fixes in the past. |
Any reason this hasn't been done yet? Seeing it fail on a few PR runs recently and couldn't find the issue to track removing these. |
Hi @joshdover . The issue is this one: https://github.com/elastic/observability-robots/issues/1325 We were considering this blocked until they sorted out the future regarding charts, but TBH it's probably not a big deal if we just pull it out now if it's failing in PRs. LMK what you think. |
Makes sense. I've only seen it fail once recently, but will flag it if it's more of a problem.
|
This is a master tracking issue for auditing which E2E test pipelines need to remain enabled.
Beats CI pipelines
Fleet CI pipelines
1.1 If the pipeline remains and is broken, what is the link to an issue tracking a fix?
1.2 If the pipeline should remain, how is it monitored by the team to ensure that build artifacts are not produced when the tests fail?
Next steps
Proposed pipeline criteria
I am proposing that we remove all pipelines which do not meet any of the following criteria:
Timeline
Related efforts
There is a separate effort to try and reduce the scope of E2E testing back to a point where stability can be maintained, but it is limited to tests for the Agent. That effort can be found here: elastic/elastic-agent#1174