
Add support for JSON-formatted logging #7083

Merged

Conversation

zarvd
Contributor

@zarvd zarvd commented Sep 17, 2024

What type of PR is this?

/kind cleanup

What this PR does / why we need it:

It adds a CLI flag --log-format to cloud-controller-manager, which supports a new logging format, JSON. When --log-format=json is set, both the klog and slog loggers print log messages as JSON. Structured logging is easier to query and filter.

NOTE

  1. The verbosity of klog.V(N) is attached to the log line as a key-value pair named v, as in {"level":"info","msg":"foo","v":5}. Logs with level=error always have a verbosity of 0.
  2. slog is used as the klog backend only when the JSON format is enabled.
  3. Plain text remains the default log format when the flag is not set.

Sample Output

{"level":"INFO","source":{"function":"sigs.k8s.io/cloud-provider-azure/cmd/cloud-controller-manager/app.startControllers","file":"/go/src/sigs.k8s.io/cloud-provider-azure/cmd/cloud-controller-manager/app/controllermanager.go","line":447},"msg":"startControllers: starting shared informers","v":2}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).Run","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":305},"msg":"Starting reflector *v1.EndpointSlice (21h2m54.929090295s) from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":341},"msg":"Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).Run","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":305},"msg":"Starting reflector *v1.Service (30s) from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":341},"msg":"Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).Run","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":305},"msg":"Starting reflector *v1.Node (1m40s) from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":341},"msg":"Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:160","v":3}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":368},"msg":"Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:160","v":2}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":368},"msg":"Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160","v":2}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/reflector.go","line":368},"msg":"Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160","v":2}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for service","v":0}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/service.(*Controller).syncNodes","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/service/controller.go","line":737},"msg":"Syncing backends for all LB services.","v":2}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/service.(*Controller).updateLoadBalancerHosts","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/service/controller.go","line":816},"msg":"Running updateLoadBalancerHosts(len(services)==0, workers==1)","v":4}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/service.(*Controller).updateLoadBalancerHosts","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/service/controller.go","line":832},"msg":"Finished updateLoadBalancerHosts","v":4}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/service.(*Controller).syncNodes","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/service/controller.go","line":741},"msg":"Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes","v":2}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/service.(*Controller).syncNodes.func1","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/service/controller.go","line":733},"msg":"It took 0.000240699 seconds to finish syncNodes","v":4}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for route","v":0}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for node","v":0}
{"level":"INFO","source":{"function":"sigs.k8s.io/cloud-provider-azure/pkg/nodeipam/ipam.(*rangeAllocator).Run","file":"/go/src/sigs.k8s.io/cloud-provider-azure/pkg/nodeipam/ipam/range_allocator.go","line":177},"msg":"Starting range CIDR allocator","v":0}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for cidrallocator","v":0}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for cidrallocator","v":0}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/route.(*RouteController).reconcile","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/route/route_controller.go","line":214},"msg":"action for Node \"aks-nodes-26761298-vmss000000\" with CIDR \"10.244.0.0/24\": \"keep\"","v":0}
{"level":"INFO","source":{"function":"k8s.io/cloud-provider/controllers/route.(*RouteController).updateNetworkingCondition","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/cloud-provider/controllers/route/route_controller.go","line":376},"msg":"set node aks-nodes-26761298-vmss000000 with NodeNetworkUnavailable=false was canceled because it is already set","v":2}
{"level":"INFO","source":{"function":"k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1","file":"/go/src/sigs.k8s.io/cloud-provider-azure/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go","line":293},"msg":"successfully renewed lease kube-system/cloud-controller-manager","v":5}

Which issue(s) this PR fixes:

part of #1575

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 17, 2024
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 17, 2024
@k8s-ci-robot
Contributor

Hi @zarvd. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 17, 2024
@feiskyer
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 17, 2024
// utilflag.InitFlags() (by removing its pflag.Parse() call). For now, we have to set the
// normalize func and add the go flag set by hand.
// utilflag.InitFlags()
logs.InitLogs()
Member


Would it impact the existing klog logic when the JSON format is disabled?

Contributor Author


No, I have moved the logic from logs.InitLogs() to log.Setup(). It should not impact the default klog logger.

log.SetOutput(logs.KlogWriter{})
log.SetFlags(0)
klog.StartFlushDaemon(opts.FlushFrequency)

@coveralls

Coverage Status

coverage: 77.08% (-0.2%) from 77.28% when pulling bd1057d on zarvd:build/log/enable=json-format into 22b2b75 on kubernetes-sigs:master.

@zarvd zarvd force-pushed the build/log/enable=json-format branch 2 times, most recently from 6bac80a to 49e8501 Compare September 18, 2024 23:37
@feiskyer
Member

/retest

@feiskyer
Member

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 19, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: feiskyer, zarvd

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 19, 2024
@zarvd zarvd force-pushed the build/log/enable=json-format branch from 49e8501 to 3c07f6e Compare September 19, 2024 09:22
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 19, 2024
@zarvd
Contributor Author

zarvd commented Sep 19, 2024

/retest

1 similar comment
@zarvd
Contributor Author

zarvd commented Sep 19, 2024

/retest

@zarvd zarvd force-pushed the build/log/enable=json-format branch from 3c07f6e to fc77dff Compare September 19, 2024 22:39
@feiskyer
Member

/retest
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 20, 2024
@zarvd zarvd force-pushed the build/log/enable=json-format branch from fc77dff to ee08bc6 Compare September 20, 2024 06:24
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 20, 2024
@zarvd zarvd force-pushed the build/log/enable=json-format branch from ee08bc6 to c4d1621 Compare September 20, 2024 11:25
@zarvd
Contributor Author

zarvd commented Sep 21, 2024

/retest

@MartinForReal
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 21, 2024
@k8s-ci-robot k8s-ci-robot merged commit 826b2c2 into kubernetes-sigs:master Sep 21, 2024
18 checks passed
@zarvd zarvd deleted the build/log/enable=json-format branch September 21, 2024 22:45