Add SPDX format support for SBOM #608
Conversation
I thought I would eventually integrate the changes into the new subcommand implemented here: #593, but it hadn't been merged yet when I started working on this one.
Force-pushed from 5e78923 to e64cec4
Force-pushed from e64cec4 to 4a10a9d
typo in commit message and PR: s/SDPX/SPDX/
Force-pushed from e08cfd6 to 0f308c0
Thanks for all the reviews and comments. I also wanted to add SPDX support to the merge-sboms subcommand, but I noticed that the tests don't really check the merged output. Or am I missing something?
Mostly LGTM, a couple of minor changes are needed. Edit: you would also need to make sure the UTs pass.
Force-pushed from 50b901b to 3d0817c
@joejstuart I applied the changes I was talking about yesterday. This is how your sample component would look in SPDX now: SPDXPackage(
SPDXID="SPDXRef-Package-archive.zip-None-965cfdf16e9275d1b3b562dee596de0474cdc751ba4c30715cfc3934fab3b300",
name="archive.zip",
versionInfo=None,
externalRefs=[
SPDXPackageExternalRefPackageManagerPURL(
referenceLocator="pkg:generic/archive.zip?checksum=sha256:386428a82f37345fa24b74068e0e79f4c1f2ff38d4f5c106ea14de4a2926e584&download_url=https://github.com/cachito-testing/cachi2-generic/archive/refs/tags/v2.0.0.zip",
referenceType="purl",
referenceCategory="PACKAGE-MANAGER",
),
],
annotations=[
SPDXPackageAnnotation(
annotator="Tool: cachi2:jsonencoded",
annotationDate="2021-07-01T00:00:00Z",
annotationType="OTHER",
comment='{"name": "cachi2:found_by", "value": "cachi2"}',
),
],
downloadLocation="NOASSERTION",
)
/ok-to-test
nitpick: typo in commit message s/Co-authered-by/Co-authored-by/
This LGTM!
There are some outstanding comments about the SPDX content, but I believe this is more than enough for our first iteration.
pAnnotation = partial(SPDXPackageAnnotation, **args)

# noqa for trivial helpers.
mkcomm = lambda p: json.dumps(dict(name=f"{p.name}", value=f"{p.value}"))  # noqa: E731
Tiny nitpick that should be addressed as a follow up (if ever): if we're ok with defining these lambdas, shouldn't we just configure flake8 to ignore E731?
At some point, definitely yes. Currently only the new code in one SPDX-related commit relies on named helper lambdas, so I suggest waiting until the approach is actually adopted.
as a follow up (if ever)
I vote for a follow-up :). I don't object to named lambdas in our code, but there was apparently a reason this rule was added to flake8. Okay, there's no obligation for us to be 100% conformant with flake8 rules (that would be too harsh), but in that case we should disable the given rule globally in the mentioned follow-up, not just in a single module. I'm not a fan of such practices; they just hinder readability and clarity.
To the best of my understanding, the reason behind this rule is the clarity of the traceback when something fails inside a lambda versus inside a named function, i.e. to ensure that the ability to dynamically bind function objects to names is not abused. All I do with named lambdas is lift them out of comprehensions to appease black (which would otherwise make me go multiline with the comprehensions, which, in my opinion, is way worse).
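For anyone following along, here is a minimal sketch of the trade-off being discussed; the Prop tuple and the sample data are made up for illustration, and only mkcomm and the E731/noqa pattern come from the diff above:

import json
from typing import NamedTuple

class Prop(NamedTuple):
    name: str
    value: str

props = [Prop("cachi2:found_by", "cachi2")]

# Named lambda, as in the diff: flake8 flags the assignment itself as E731.
mkcomm = lambda p: json.dumps({"name": p.name, "value": p.value})  # noqa: E731
comments = [mkcomm(p) for p in props]

# Equivalent named function: no noqa needed, and a failure inside it shows
# "mkcomm_fn" in the traceback instead of "<lambda>".
def mkcomm_fn(p: Prop) -> str:
    return json.dumps({"name": p.name, "value": p.value})

comments_fn = [mkcomm_fn(p) for p in props]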
Force-pushed from 3d0817c to ee64f54
Thank you. When you know what it will look like, can you please post it? I'm working on a rule to verify the package sources. https://issues.redhat.com/browse/EC-1022
@joejstuart here is how it is supposed to look: #608 (comment)
I completely missed that. Thanks!
/ok-to-test
Consider this a "shallow" ack: the results meet what I expect for our Konflux needs, but I didn't do a thorough review.
Force-pushed from ee64f54 to 05545df
Having merge_outputs in global utils can result in import loops in rather unexpected places. Moving it to a different scope to prevent those from appearing. Signed-off-by: Alexey Ovchinnikov <[email protected]>
`first_for` and `partition_by` helpers are introduced to make code easier to follow. Signed-off-by: Alexey Ovchinnikov <[email protected]>
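The exact signatures of these helpers aren't shown in this thread, so the following is only a guessed sketch of what `first_for` and `partition_by` could look like, not the actual implementation:

from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def first_for(predicate: Callable[[T], bool], items: Iterable[T]) -> Optional[T]:
    # Return the first item satisfying the predicate, or None if there is none.
    return next((item for item in items if predicate(item)), None)

def partition_by(predicate: Callable[[T], bool], items: Iterable[T]) -> tuple[list[T], list[T]]:
    # Split items into (matching, non-matching) lists, preserving order.
    matching: list[T] = []
    rest: list[T] = []
    for item in items:
        (matching if predicate(item) else rest).append(item)
    return matching, rest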
This will remove the risk of circular dependencies when SPDXSbom is introduced. Signed-off-by: Alexey Ovchinnikov <[email protected]>
Adding models of SPDX elements to introduce proper SPDX support in a later commit. Signed-off-by: Alexey Ovchinnikov <[email protected]>
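Judging by the field names visible in the SPDXPackage example earlier in this thread, the new models presumably look roughly like the pydantic sketch below; the exact types, defaults, and validation are assumptions on my part:

from typing import Optional
import pydantic

class SPDXPackageExternalRefPackageManagerPURL(pydantic.BaseModel):
    referenceLocator: str
    referenceType: str = "purl"
    referenceCategory: str = "PACKAGE-MANAGER"

class SPDXPackageAnnotation(pydantic.BaseModel):
    annotator: str
    annotationDate: str
    annotationType: str
    comment: str

class SPDXPackage(pydantic.BaseModel):
    SPDXID: str
    name: str
    versionInfo: Optional[str] = None
    externalRefs: list[SPDXPackageExternalRefPackageManagerPURL] = []
    annotations: list[SPDXPackageAnnotation] = []
    downloadLocation: str = "NOASSERTION"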
Force-pushed from 05545df to 614cea1
Force-pushed from 614cea1 to 3ba7a8b
Force-pushed from e6fbd8b to f3480b8
Added SPDX format support for SBOM
Support for the SPDX format was added to the fetch-deps command and also to merge_syft_sboms. No changes were made to the individual package managers, which keep generating components that are then converted to CycloneDX format; an SPDX SBOM can be obtained by calling Sbom.to_spdx(). A new switch, sbom-type, was added to merge_syft_sboms so the user can choose which output format should be generated; the default is cyclonedx. Once all tooling is ready to consume SPDX SBOMs, the cutoff changes in this repository can be started. SPDXRef-DocumentRoot-File- includes all SPDX packages and is set to be described by SPDXRef-DOCUMENT; this way of generating SPDX is closer to the way syft generates it.
Signed-off-by: Jindrich Luza <[email protected]>
Signed-off-by: Alexey Ovchinnikov <[email protected]>
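To make the last point concrete, here is a hand-written illustration of the relationship layout described above (the identifiers are placeholders, not actual generated output): the document DESCRIBES the root package, and the root package CONTAINS each resolved package.

relationships = [
    {
        "spdxElementId": "SPDXRef-DOCUMENT",
        "relationshipType": "DESCRIBES",
        "relatedSpdxElement": "SPDXRef-DocumentRoot-File-",
    },
    {
        "spdxElementId": "SPDXRef-DocumentRoot-File-",
        "relationshipType": "CONTAINS",
        "relatedSpdxElement": "SPDXRef-Package-<name>-<version>-<checksum>",
    },
]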
Force-pushed from f3480b8 to 53e3a53
Here is the new policy for this: enterprise-contract/ec-policies#1270
Support for the SPDX format was added to the fetch-deps command and also to merge_syft_sboms.
No changes were made to the individual package managers generating components, which are then converted to CycloneDX format. An SPDX SBOM can be obtained by calling Sbom.to_spdx().
A new switch, sbom-type, was added to merge_syft_sboms so the user can choose which output format should be generated; the default is cyclonedx. Once all tooling is ready to consume SPDX SBOMs, the cutoff changes in this repository can be started.
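For illustration, assuming the models live in cachi2.core.models.sbom and that Component accepts name and purl keyword arguments (both assumptions on my part; only Sbom.to_spdx() and the CycloneDX default are stated in this PR), the intended flow would read roughly like:

# Sketch only: the import path and Component fields are assumptions;
# Sbom.to_spdx() is the conversion this PR adds.
from cachi2.core.models.sbom import Component, Sbom

sbom = Sbom(
    components=[
        Component(
            name="archive.zip",
            purl="pkg:generic/archive.zip?download_url=https://example.org/archive.zip",
        )
    ]
)

spdx_sbom = sbom.to_spdx()  # same content, rendered as an SPDX document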
Maintainers will complete the following section
Note: if the contribution is external (not from an organization member), the CI
pipeline will not run automatically. After verifying that the CI is safe to run:
/ok-to-test
(as is the standard for Pipelines as Code)