
Fix: handle skip rule processing in anyPattern field #16

Open
wants to merge 1 commit into base: develop
Conversation


@ansalamdaniel ansalamdaniel commented Oct 27, 2022

Signed-off-by: ansalamdaniel [email protected]

Explanation

A global anchor used with anyPattern fails when none of the patterns match. A global anchor normally causes the rule to be skipped when its condition is not met, but when it is used inside anyPattern it produces a failure message instead, because the skip error generated by the anchor is not handled in the anyPattern validation function.
This PR fixes the behavior of global anchors used with the anyPattern field.

Related issue

Closes #4221

Milestone of this PR

What type of PR is this

/kind bug

Proposed Changes

Handle the skip error generated by global anchors in the anyPattern field, as sketched below.
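
A minimal Go sketch of the idea (the names errSkip, matchPattern, and validateAnyPattern are hypothetical; this is not the actual Kyverno source): when every entry in anyPattern yields a skip error from a global anchor, the rule should be reported as skipped rather than failed.

package main

import (
	"errors"
	"fmt"
)

// errSkip models the skip error a global anchor produces when its
// condition is not met (hypothetical; Kyverno's real error type differs).
var errSkip = errors.New("rule skipped: global anchor condition not met")

// matchPattern stands in for per-pattern validation. In this sketch it
// always skips, mimicking a ClusterIP Service checked against patterns
// anchored on <(type): LoadBalancer.
func matchPattern(resource, pattern map[string]any) error {
	return errSkip
}

// validateAnyPattern passes if any pattern matches, and treats
// "all patterns skipped" as a skip instead of a failure.
func validateAnyPattern(resource map[string]any, patterns []map[string]any) error {
	skipped := 0
	for _, p := range patterns {
		err := matchPattern(resource, p)
		if err == nil {
			return nil // one pattern matched: pass
		}
		if errors.Is(err, errSkip) {
			skipped++ // before the fix, skips fell through to failure
		}
	}
	if skipped == len(patterns) {
		return errSkip // every pattern skipped: skip the rule
	}
	return errors.New("no pattern matched")
}

func main() {
	resource := map[string]any{"spec": map[string]any{"type": "ClusterIP"}}
	patterns := []map[string]any{{}, {}}
	fmt.Println(validateAnyPattern(resource, patterns))
}

With the stubbed matchPattern, this prints the skip message instead of "no pattern matched", mirroring the expected Skip result for /Service/service-clusterip-pass in the tests below.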

Proof Manifests

The test files attached below are taken from a Slack discussion. Before this fix, the last test should be skipped, but it fails instead.

Kyverno test output (Before fix)

$ go run ./cmd/cli/kubectl-kyverno/main.go test ./test_policy

Executing validate-service-loadbalancer-test...
applying 1 policy to 7 resources... 

│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│
│ # │ POLICY                        │ RULE                        │ RESOURCE                         │ RESULT │
│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│
│ 1 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-pass   │ Pass   │
│ 2 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-2-pass │ Pass   │
│ 3 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-pass     │ Pass   │
│ 4 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-2-pass   │ Pass   │
│ 5 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-fail     │ Pass   │
│ 6 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-fail   │ Pass   │
│ 7 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-clusterip-pass  │ Pass   │
│ 8 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-clusterip-pass  │ Fail   │
│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│

Test Summary: 7 tests passed and 1 tests failed

Aggregated Failed Test Cases : 
│───│───────────────────────────────│───────────────────────────│─────────────────────────────────│────────│
│ # │ POLICY                        │ RULE                      │ RESOURCE                        │ RESULT │
│───│───────────────────────────────│───────────────────────────│─────────────────────────────────│────────│
│ 8 │ validate-service-loadbalancer │ check-loadbalancer-public │ /Service/service-clusterip-pass │ Fail   │
│───│───────────────────────────────│───────────────────────────│─────────────────────────────────│────────│
exit status 1

policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-service-loadbalancer
spec:
  validationFailureAction: enforce
  rules:
  - name: check-loadbalancer-internal
    match:
      resources:
        kinds:
        - Service
    validate:
      message: "Service of type 'LoadBalancer' is internal and does not explicitly define network security. To set the LB to internal, use annotation 'service.beta.kubernetes.io/aws-load-balancer-internal' with value 'true' or '0.0.0.0/0' "
      pattern:
        spec:
          <(type): LoadBalancer
        metadata: # If an annotation of type service.beta.kubernetes.io/aws-load-balancer-internal exists, then it must be either of those values https://kyverno.io/docs/writing-policies/validate/#anchors-and-child-elements-conditional-and-equality
          annotations:
            =(service.beta.kubernetes.io/aws-load-balancer-internal): "0.0.0.0/0|true"
  - name: check-loadbalancer-public
    match:
      resources:
        kinds:
        - Service
    validate:
      message: "Service of type 'LoadBalancer' is public and does not explicitly define network security. To use a public LB you must supply either spec[loadBalancerSourceRanges] or the 'service.beta.kubernetes.io/aws-load-balancer-security-groups' annotation."
      anyPattern:
      - spec:
          <(type): LoadBalancer
        metadata:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-security-groups: "?*"
      - spec:
          <(type): LoadBalancer
          loadBalancerSourceRanges: "*"

resources.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-internal-pass
spec:
  type: LoadBalancer
  clusterIP: 100.69.148.11
  loadBalancerSourceRanges:
  - 3.224.166.65/32
  - 3.210.193.151/32
  - 3.226.136.65/32
  - 10.0.0.0/8
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-internal-2-pass
spec:
  type: LoadBalancer
  clusterIP: 100.69.148.11
  loadBalancerSourceRanges:
  - 3.224.166.65/32
  - 3.210.193.151/32
  - 3.226.136.65/32
  - 10.0.0.0/8
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: app-my-release
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-public-pass
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 3.224.166.65/32
  - 3.210.193.151/32
  - 3.226.136.65/32
  ports:
  - name: http
    nodePort: 31207
    port: 80
    protocol: TCP
    targetPort: 3000
  - name: https
    nodePort: 30400
    port: 443
    protocol: TCP
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-01d98fabb057f8842
  labels:
    app.kubernetes.io/instance: app-hello-node-test
  name: service-public-2-pass
spec:
  type: LoadBalancer
  clusterIP: 100.67.234.84
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: hello-node-public-test.[REDACTED]
    meta.helm.sh/release-name: app-hello-node-test
    meta.helm.sh/release-namespace: app-hello-node-test
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=test,ops_environment=test,ops_costcenter=220,service=hello-node,kubernetesManager=true,region=us-east-1,elb_name=public
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-public-fail
spec:
  type: LoadBalancer
  clusterIP: 100.67.234.84
  clusterIPs:
  - 100.67.234.84
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 31207
    port: 80
    protocol: TCP
    targetPort: 3000
  - name: https
    nodePort: 30400
    port: 443
    protocol: TCP
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "anything"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-internal-fail
spec:
  type: LoadBalancer
  clusterIP: 10.96.0.2
  ports:
  - name: http
    nodePort: 31207
    port: 80
    protocol: TCP
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: service-clusterip-pass
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

test.yaml

name: validate-service-loadbalancer-test
policies:
  - policy.yaml
resources:
  - resources.yaml
results:
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-internal
    resource: service-internal-pass
    kind: Service
    result: pass
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-internal
    resource: service-internal-2-pass
    kind: Service
    result: pass
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-public
    resource: service-public-pass
    kind: Service
    result: pass
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-public
    resource: service-public-2-pass
    kind: Service
    result: pass
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-public
    resource: service-public-fail
    kind: Service
    result: fail
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-internal
    resource: service-internal-fail
    kind: Service
    result: fail
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-internal
    resource: service-clusterip-pass
    kind: Service
    result: skip
  - policy: validate-service-loadbalancer
    rule: check-loadbalancer-public
    resource: service-clusterip-pass
    kind: Service
    result: skip

Kyverno test output (After fix)

$ go run ./cmd/cli/kubectl-kyverno/main.go test ./test_policy

Executing validate-service-loadbalancer-test...
applying 1 policy to 7 resources... 

│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│
│ # │ POLICY                        │ RULE                        │ RESOURCE                         │ RESULT │
│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│
│ 1 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-pass   │ Pass   │
│ 2 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-2-pass │ Pass   │
│ 3 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-pass     │ Pass   │
│ 4 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-2-pass   │ Pass   │
│ 5 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-public-fail     │ Pass   │
│ 6 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-internal-fail   │ Pass   │
│ 7 │ validate-service-loadbalancer │ check-loadbalancer-internal │ /Service/service-clusterip-pass  │ Pass   │
│ 8 │ validate-service-loadbalancer │ check-loadbalancer-public   │ /Service/service-clusterip-pass  │ Pass   │
│───│───────────────────────────────│─────────────────────────────│──────────────────────────────────│────────│

Test Summary: 8 tests passed and 0 tests failed

Checklist

  • I have read the contributing guidelines.
  • I have read the PR documentation guide and followed the process including adding proof manifests to this PR.
  • This is a bug fix and I have added unit tests that prove my fix is effective.
  • This is a feature and I have added CLI tests that are applicable.
  • My PR needs to be cherry-picked to a specific release branch which is .
  • My PR contains new or altered behavior to Kyverno and
    • CLI support should be added and my PR doesn't contain that functionality.
    • I have added or changed the documentation myself in an existing PR and the link is:
    • I have raised an issue in kyverno/website to track the documentation update and the link is:

Further Comments

@sandeshlmore sandeshlmore left a comment

lgtm
