
Targetconfig crd #716

Draft · wants to merge 43 commits into main
Conversation

@aerosouund (Contributor) commented Jan 24, 2025

  • Crd definition
  • Crd storage
  • Informer initialization
  • Testing
  • Makefile cleanup

@@ -203,6 +206,27 @@ func (r *Resolver) CustomIDGenerators() map[string]result.IDGenerator {
return generators
}

func (r *Resolver) AddTargetConfigEventHandlers() {
Member:

You should move this into a controller/client struct. The resolver is only meant to manage dependencies, not this kind of business logic.
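The suggested split can be sketched as a dedicated controller that owns the event handlers, with the resolver only constructing it. All names here are illustrative, not the project's actual API:

```go
package main

import "fmt"

// TargetConfig stands in for the v1alpha1.TargetConfig CRD object.
type TargetConfig struct {
	Name string
}

// TargetConfigController reacts to CRD events. The resolver would only
// build this struct and wire its dependencies; the business logic lives here.
type TargetConfigController struct {
	targets map[string]TargetConfig
}

func NewTargetConfigController() *TargetConfigController {
	return &TargetConfigController{targets: map[string]TargetConfig{}}
}

// OnAdd would be registered as the informer's AddFunc.
func (c *TargetConfigController) OnAdd(tc TargetConfig) {
	c.targets[tc.Name] = tc
}

// OnDelete would be registered as the informer's DeleteFunc.
func (c *TargetConfigController) OnDelete(tc TargetConfig) {
	delete(c.targets, tc.Name)
}

func main() {
	c := NewTargetConfigController()
	c.OnAdd(TargetConfig{Name: "slack-notifier"})
	fmt.Println(len(c.targets)) // 1
	c.OnDelete(TargetConfig{Name: "slack-notifier"})
	fmt.Println(len(c.targets)) // 0
}
```

In the real code the controller would register these methods via the informer's `AddEventHandler`; the resolver then stays a pure dependency container.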

Contributor Author:

You are right, this is me still testing things out. The "Informer initialization" task in the description is there to mark exactly this problem.

@@ -40,6 +40,9 @@ func (k *k8sPolicyReportClient) Stop() {
func (k *k8sPolicyReportClient) Sync(stopper chan struct{}) error {
factory := metadatainformer.NewSharedInformerFactory(k.metaClient, 15*time.Minute)

// tcInformer := tcinformer.NewSharedInformerFactory(tcv1alpha1.New(&rest.RESTClient{}), time.Second)
Member:

I assume you wanted to try things out; don't forget to clean everything up again.

mx *sync.Mutex
clients []Client
targets map[string]*Target
crdTargets map[string]*Target
Member:

Is there a reason to separate CRD targets from config targets?

Contributor Author:

I wanted to first write out all the flows for storing CRD targets separately from the existing config, and merge them once everything is figured out. But no, they can both be stored in the same map, since the map key is just a random UUID.

@fjogeleit (Member):

The test make target is missing. Please add it back and ensure all tests pass.

@fjogeleit linked an issue Jan 26, 2025 that may be closed by this pull request
cmd/run.go Outdated
} else {
k8sConfig, err = rest.InClusterConfig()
}
k8sConfig, err := clientcmd.BuildConfigFromFlags("", "/Users/ammaryasser/Downloads/ips")
Member:

cleanup

// +optional
GCS GCSOptions `json:"gcs,omitempty"`

TargetType string `json:"targetType,omitempty"`
Member:

I don't think this is needed. You could make S3 etc. pointers and check which property is set. You can add a schema validation so that only one of these fields can be set, not multiple.

func (f *TargetFactory) CreateSingleClient(tc *v1alpha1.TargetConfig) (*target.Target, error) {
var t *target.Target
switch tc.Spec.TargetType {
case "s3":
Member:

As mentioned, I would make S3 etc. pointers and check whether they are nil.

@aerosouund (Contributor Author) commented Jan 30, 2025:

Wouldn't this enforce priority on some level? If the code goes "if s3 isn't nil... else if slack isn't nil..." then, for a config that sets both (even though that is invalid), s3 takes precedence over the rest. With this field I can treat them all the same and only look at the field named in TargetType. It also simplifies the validation logic: if a TargetConfig is of type s3 and the s3 field is nil, it is invalid, rather than "if none of the fields are set, the config is invalid".

Member:

You can enforce on the schema that only one field is set, so the API server will not allow multiple types to be set.
(https://github.com/kyverno/kyverno/blob/main/api/kyverno/v1/common_types.go#L99)

Maybe @JimBugwadia or @eddycharly has an opinion on TargetType. I think it should not be more cumbersome than needed. If the user configures s3 and also has to set targetType, that might confuse people.
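The kind of schema rule referenced above can be sketched as a CEL validation on the spec. The exact rule text and field list are assumptions, not taken from this PR:

```yaml
# Hypothetical x-kubernetes-validations fragment for the TargetConfig spec:
# the API server rejects specs where the number of set target fields is not one.
x-kubernetes-validations:
  - rule: "[has(self.s3), has(self.slack), has(self.gcs)].exists_one(x, x)"
    message: "exactly one target type must be configured"
```

With such a rule in place, TargetType becomes redundant: the set field alone identifies the target type.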

@coveralls commented Feb 6, 2025

Pull Request Test Coverage Report for Build 13371519744

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 96 of 506 (18.97%) changed or added relevant lines in 14 files are covered.
  • 12 unchanged lines in 4 files lost coverage.
  • Overall coverage decreased (-4.0%) to 76.851%

| File (Changes Missing Coverage) | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| pkg/listener/scope_results.go | 12 | 13 | 92.31% |
| pkg/cache/redis.go | 0 | 3 | 0.0% |
| pkg/listener/new_result.go | 3 | 8 | 37.5% |
| pkg/target/client.go | 1 | 10 | 10.0% |
| pkg/target/collection.go | 3 | 13 | 23.08% |
| pkg/cache/memory.go | 0 | 16 | 0.0% |
| pkg/config/resolver.go | 19 | 55 | 34.55% |
| pkg/target/factory/factory.go | 31 | 77 | 40.26% |
| pkg/targetconfig/tc.go | 0 | 66 | 0.0% |
| hack/main.go | 0 | 218 | 0.0% |
| File (Coverage Reduction) | New Missed Lines | % |
|---|---|---|
| pkg/kubernetes/policy_report_client.go | 1 | 83.08% |
| pkg/listener/send_result.go | 1 | 89.47% |
| pkg/target/discord/discord.go | 4 | 88.89% |
| pkg/listener/new_result.go | 6 | 55.45% |
Totals Coverage Status

  • Change from base Build 12974201379: -4.0%
  • Covered Lines: 6424
  • Relevant Lines: 8359

💛 - Coveralls

Signed-off-by: aerosouund <[email protected]>
@fjogeleit (Member):

The TargetConfig "slack-notifier" is invalid: 
* spec.slack.certificate: Required value
* spec.slack.channel: Required value
* spec.slack.headers: Required value
* spec.slack.skipTLS: Required value

These fields are optional and should not be required. Please also check the other targets to ensure that all optional values are configured correctly.

@fjogeleit (Member):

Slack notifications for a new resource are published twice.

@fjogeleit (Member):

The following scenario did not work:

  • Start Policy Reporter
  • Apply new Slack TargetConfiguration
  • Apply new Policy
  • (x) Get Pushes for new violation of existing resources (got no pushes at all)

Expected:

  • Get pushes with new violations only

TargetConfig:

kind: TargetConfig
apiVersion: wgpolicyk8s.io/v1alpha1
metadata:
  name: slack-notifier
spec:
  targetType: slack
  slack:
    webhook: https://hooks.slack.com/...
    certificate: ""
    channel: "kyverno"
    headers: {}
    skipTLS: false
  skipExistingOnStartup: true


Successfully merging this pull request may close these issues.

Feature: Allow Reconfiguration when config file changes
3 participants