Switch Dockerfile image to wolfi and add pipeline for vulnerability scanning #3063
base: main
Conversation
great start! I added a few comments
- RUN make clean install
- RUN ln -s .venv/bin /app/bin
+ RUN make clean install && ln -s .venv/bin /app/bin
I had this line

RUN apk del make git

so that we could remove now-unnecessary packages from the final images, with the assumption that we do not need make or git at runtime, but we could confirm with someone from @elastic/search-extract-and-transform that it makes sense.
@acrewdson My thought process for omitting that line was:
- since we're going to scan the built image, we'd want to be able to detect vulnerabilities in these packages, even if they're not required at runtime.
- they've been included in the Dockerfile so far, so removing them now could be a "breaking" change for users who already build on top of this. Changing the base image itself is a much more substantial breaking change though, so perhaps that shouldn't be factored in here.
Let me know what you think.
I'm ok with removing make and git. I expect very little to be extended on the image itself; more likely it's the connectors code that gets extended, and we shouldn't need make or git for that.
+1. git is needed at build-time, but at run-time we need neither git nor make.
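For illustration, this is roughly how that cleanup would sit in a single-stage Dockerfile (a sketch only: the base image reference and package names here are assumptions, not this PR's actual contents):

# Sketch: install build tools, build, then drop them so the final
# image ships without make or git. Base image and packages assumed.
FROM cgr.dev/chainguard/wolfi-base
RUN apk add --no-cache python-3.11 py3.11-pip make git
WORKDIR /app
COPY . /app
RUN make clean install && ln -s .venv/bin /app/bin
# build-time-only tools removed, per the discussion above
RUN apk del make git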
.buildkite/dockerfiles-pipeline.yml
# ----
# Dockerfile build and tests on amd64
# ----
- label: "Build amd64 image from Dockerfile"
instead of only building amd64 images, we could consider building a multi-arch image using buildah or drivah, like we are doing here, and that way we could also potentially avoid using these hefty ec2 instances
Thank you for the suggestion about a multi-arch image. Reading in more detail how buildah/drivah multiarch push works: it pulls the already-built amd64/arm64 images from the previous steps and combines them into a common manifest which references both builds, so unfortunately it doesn't save us any build resources.
It means we'd build the image on both architectures just to create a multiarch image that isn't going to be published or used for any other purpose. I don't see the need for this since the images aren't published; what do you think?
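For context, the flow being described is roughly the following (a sketch with placeholder tags, not the exact commands from the referenced pipeline):

# each architecture still needs its own full image build
buildah build --platform linux/amd64 -t connectors:amd64 -f Dockerfile .
buildah build --platform linux/arm64 -t connectors:arm64 -f Dockerfile .
# the multi-arch step only stitches the existing builds into one manifest
buildah manifest create connectors:multiarch
buildah manifest add connectors:multiarch connectors:amd64
buildah manifest add connectors:multiarch connectors:arm64
# buildah manifest push connectors:multiarch docker://<registry>/<repo>:<tag>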
I added pipeline steps to build and test both amd64 and arm64 architectures. Given the above analysis, I don't see a use for a multiarch image step since we aren't publishing that artifact either way; let me know if you have any concerns.
schedules:
  Daily main:
    branch: main
    cronline: '@daily'
wondering if we should consider running this less frequently? I think it would depend on what @elastic/search-extract-and-transform prefer, but the scenario I picture is something like:
- this job runs once a week, or maybe even once every two weeks
- if Trivy identifies any vulnerabilities, a message is sent to a Slack channel, or a GitHub issue is created
let's see what folks think 👍
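For reference, a weekly cadence would be a small change to the schedule block shown above (a sketch; the exact cronline is a placeholder):

schedules:
  Weekly main:
    branch: main
    cronline: '@weekly'   # or e.g. '0 6 * * 1' to pin a specific day and time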
> if Trivy identifies any vulnerabilities, a message is sent to a Slack channel, or a GitHub issue is created
I think these notifications could potentially be a duplicate of existing Snyk issues detected on the official container image. What's the value added by these new notifications?
is Snyk going to scan the images produced by these CI jobs?
I think this PR is doing 3 things, but maybe doesn't need to:
1. switches our Dockerfile to use chainguard (has customer value)
2. adds CI to validate the Dockerfile works (has customer value)
3. adds CI to fail if a vuln is found (internal value, with indirect customer value)
I think (3) could probably be done separately, and reuse whatever machinery we use to scan other artifacts, especially since (1) and (2) will significantly cut down on false-positives that we'd have if we tried to do (3) on its own today.
The CI jobs in this PR do not push images to the docker registry; instead they rely on Trivy for vulnerability scanning, which runs within the context of the pipeline.
The alternative approach is to push the images to an internal namespace on the docker registry and request that these be added to Snyk. This would result in duplicate reports though, as we're already scanning docker.elastic.co/integrations/elastic-connectors (built from the Dockerfile.wolfi image) with Snyk, which is technically the exact same base OS build.
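If we later want the scan step itself to fail the build on findings, Trivy supports that directly; a sketch (the severity threshold and archive name are assumptions, not what this PR configures):

# exit non-zero when HIGH/CRITICAL vulnerabilities are found in the image tarball
trivy image --input .artifacts/image-amd64.tar.gz --severity HIGH,CRITICAL --exit-code 1 --quiet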
.buildkite/dockerfiles-pipeline.yml
command: |-
  mkdir -p .artifacts
  buildkite-agent artifact download '.artifacts/*.tar.gz*' .artifacts/ --step build_dockerfile_image_amd64
  trivy --version
  env | grep TRIVY
  find .artifacts -type f -name '*.tar.gz*' -exec trivy image --quiet --input {} \;
nice 👍
I'm thinking it could also be nice to add one additional stage to this pipeline, if it's not too difficult:
a smoke-test/acceptance-test that verifies that connectors actually starts up in the resultant image, just as a way of gaining confidence that the built image does what we expect it to do
Good point. This pipeline includes the same test step that is used for the production docker images built from Dockerfile.wolfi here, so that the testing approach is aligned across both built images.
The pipeline step test_extensible_dockerfile_image_amd64 runs .buildkite/publish/test-docker.sh, which performs basic sanity checks and verifies the output of elastic-ingest --version here.
Since adding functional tests at the connector level is a more involved task which could benefit both the production and the "extensible" image, I'd suggest we create a dedicated issue for that. What do you think?
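For reference, that smoke test boils down to something like this (archive name and image tag are placeholders; the real checks live in .buildkite/publish/test-docker.sh):

# load the artifact produced by the build step and verify the entry point runs
docker load < .artifacts/image-amd64.tar.gz
docker run --rm elastic-connectors:test elastic-ingest --version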
Since in this PR we are building a Dockerfile on a different base image (wolfi-base): for reference, here is a list of packages which, among others (like bash and curl), are available in the wolfi python-3.11 image used for the production image builds. Do we want this list of packages added to the wolfi-base image? Do connectors require some, or all, of them? @elastic/search-extract-and-transform
I'm not sure if we need the packages mentioned; we really only need dependencies for 3.11 python +
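Assuming the bare minimum really is Python 3.11 plus pip (the comment above is truncated, so this is a guess), the install line stays small (package names as published in Wolfi's repositories):

# minimal runtime package set on wolfi-base; names assumed
RUN apk add --no-cache python-3.11 py3.11-pip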
force-pushed from 84d8123 to cdfd17b
Thanks, then I'll stick with the bare minimum packages and we can iterate if needed. I added both amd64 and arm64 image builds, mainly for the purpose of smoke testing.
Do we need some sort of release note for the new base image? It could be a "breaking" change for those users who build their images on top of our Dockerfiles.
force-pushed from 25f6298 to a1b50bd
Update re. the packages included with the base image vs the python one that we do docker image builds on: @oli-g suggested to check whether installing
Just to check - is this image publicly available, i.e. can it be pulled without registry authentication?
Yes, we've confirmed that this is a publicly available image that can be downloaded by unauthenticated (not logged in to any registry) docker instances.
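A quick way to re-verify that from any machine (the image reference shown is Chainguard's public wolfi-base; the PR may point at a different registry):

# pull with no registry credentials configured; success confirms anonymous access
docker logout cgr.dev || true
docker pull cgr.dev/chainguard/wolfi-base:latest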
We can mention it in our release notes indeed. I don't expect a lot of customers to build custom docker images, and likely it's just going to work for them, but it would not hurt to mention that we've updated our docker images recently to make them more secure, with potential incompatible changes that customers will have to fix.
Thank you, I added a release note in the issue description.
https://github.com/elastic/search-developer-productivity/issues/3547
Description
The goal of this PR is to switch the extensible Dockerfile to a wolfi-base image and to add buildkite pipelines that build, test, and scan the resulting docker images using trivy.
This is based on @acrewdson's draft. A notification method for vulnerability reports from Trivy is TBD.
Checklists
Pre-Review Checklist
Changes Requiring Extra Attention
Release Note
The extensible Dockerfile provided in the connectors repo has been updated to use a different base image for improved security. Users who build custom docker images based on this Dockerfile may need to review their configuration for compatibility with the new base image.