Undetected container/k8s information leads to alert fatigue caused by k8s_containers macro #3257
Comments
Thanks @aberezovski. For other rules and other events, is the container/k8s information populated?
Hi @incertum, regarding your question above: for other rules and events I observed random behaviour where only the `container.id` value is shown and all other attribute values for the container, pod, and k8s namespace are missing. See the other issue I opened for that: #3256
Thanks for opening the new issue; it's best to discuss it there. I tagged some other maintainers.
Known imperfection. There are other open issues about it. Since we make API calls, there can be lookup-time delays, and we still need to improve the container engine in general. It is on my plate for the Falco 0.39.0 dev cycle.
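In the meantime, one possible stopgap is to append a clause to the allowlist macro from a custom rules file so that events with missing container metadata stop firing the rule. This is only a sketch, assuming the `override` syntax available since Falco 0.36; note that it also hides genuine detections for any event whose metadata lookup failed, so it is a trade-off, not a fix:

```yaml
# Custom rules file (e.g. falco_rules.local.yaml) -- a hypothetical stopgap.
# Appends an extra clause to the upstream k8s_containers allowlist macro so
# that events whose container metadata was never resolved (i.e. an empty
# container.image.repository) no longer match "not k8s_containers".
# WARNING: this also suppresses real detections for such events.
- macro: k8s_containers
  condition: or container.image.repository = ""
  override:
    condition: append
```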
I am also seeing the same message (with different IPs, ports, and timestamps but with the same user_uid) when I use […].

EDIT: I forgot to mention that my belief is that these log lines are generated by Falco itself, i.e. when Falco tries to get the metadata from the K8s API server to enrich the system calls.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/falcosecurity/community. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/falcosecurity/community. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Provide feedback via https://github.com/falcosecurity/community. /close
@poiana: Closing this issue.
Hello Falco team,
While evaluating Falco on AKS Kubernetes clusters, my team and I observed continuous alert generation triggered by the rule `Contact K8S API Server From Container`. The huge volume of generated alerts appears to be false positives, caused by the macro `k8s_containers` evaluating incorrectly due to a missing value for the attribute `container.image.repository`.

Describe the bug
Once least-privileged Falco was deployed to an Azure AKS cluster, the rule `Contact K8S API Server From Container` started generating alerts every second. The generated alerts do not provide any container/k8s information, so no tracing could be performed. The only valuable information is the k8s pod's internal IP address that initiated the connection, but that was not enough, and it gave the Falco engine nothing it could use to avoid triggering the alert.
How to reproduce it

Deploy Falco to an AKS cluster (see Environment below) and watch for alerts triggered by the rule `Contact K8S API Server From Container`. Those alerts are generated each second, and their output contains `"container.id": ""` with the other container/k8s fields empty as well.
Expected behaviour

All container/k8s information, including `container.id`, has to be properly detected; as a result, the macro `k8s_containers` should evaluate successfully and no false positive alerts should be generated.

Evidence
Environment
Falco deployed on AKS cluster using Falco Helm Chart version 4.4.2.
Falco version:
Falco version: 0.38.0 (x86_64)
System info:
Linux falco-8b7db 5.15.0-1064-azure #73-Ubuntu SMP Tue Apr 30 14:24:24 UTC 2024 x86_64 GNU/Linux
Deployed to the k8s cluster as a DaemonSet using Helm Chart version 4.4.2 with a custom values YAML file.
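For reference, a minimal sketch of what such a values file might contain. The key names are assumptions based on the 4.x chart layout (`driver.kind`, `collectors.kubernetes.enabled`, `falco.json_output`) and may differ from the file actually used here; verify against `helm show values falcosecurity/falco` before use:

```yaml
# Hypothetical custom values for the falcosecurity/falco chart (~4.4.x).
driver:
  kind: modern_ebpf        # least-privileged eBPF driver, no kernel module
falco:
  json_output: true        # emit alerts as JSON for easier downstream parsing
collectors:
  kubernetes:
    enabled: true          # enable k8s-metacollector / k8smeta enrichment
```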