[Bug]: Memory leak on all providers #538
This patch fixes the propagation of context cancellation through the call stack. It prevents channel and goroutine leaks originating in the [terraform provider][provider_code].

Fixes: crossplane-contrib#538

[provider_code]: https://github.com/hashicorp/terraform-provider-google/blob/1d1a50adf64af60815b7a08ffc5e9d3e856d2e9c/google/transport/batcher.go#L117-L123

Signed-off-by: Maxime Vidori <[email protected]>
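Not part of the original report: below is a minimal, self-contained Go sketch of the kind of pattern such a patch addresses, assuming the leak comes from code waiting on a response channel without honouring context cancellation. The names (`doWork`, `waitForResult`) are illustrative and not taken from the provider or Terraform code.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// doWork simulates a worker (e.g. a request batcher goroutine) that delivers
// its result on respCh. It also selects on ctx.Done() so it does not block on
// the send forever, and therefore leak, if the consumer has already given up.
func doWork(ctx context.Context, respCh chan<- string) {
	time.Sleep(100 * time.Millisecond) // pretend work
	select {
	case respCh <- "done":
	case <-ctx.Done(): // consumer gone: exit instead of leaking
	}
}

// waitForResult shows the consumer side of the fix: it selects on ctx.Done()
// as well as on the response channel, so a cancelled or timed-out context
// unblocks the caller instead of leaving both sides stuck.
func waitForResult(ctx context.Context, respCh <-chan string) (string, error) {
	select {
	case resp := <-respCh:
		return resp, nil
	case <-ctx.Done():
		return "", fmt.Errorf("giving up on batched request: %w", ctx.Err())
	}
}

func main() {
	// The timeout expires before the work finishes, so both goroutines
	// return promptly instead of leaking the channel and the worker.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	respCh := make(chan string)
	go doWork(ctx, respCh)

	if _, err := waitForResult(ctx, respCh); errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("request abandoned:", err)
	}
}
```

Without the `ctx.Done()` branches, a caller that gives up leaves the worker parked on an unbuffered channel send forever, which is exactly the slow accumulation of goroutines and channels that shows up as unbounded memory growth.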
@IxDay, thanks for your discovery and beautiful report 🙏 I wanted to note here that other providers are likely to be affected as well, since the underlying Terraform providers may function similarly (links to relevant lines):
This is probably a (well-observed) generalisation of the issue I noticed with the pubsub provider, noted here. I suspect that addressing it at this level would also resolve the issues I've experienced.
See: #539 (comment)
I would like to follow up on my comment above. In a discussion with @ulucinar and @turkenf, we suspected that provider-upjet-azure and provider-upjet-azuread might not be experiencing memory leaks, because of differences in the underlying Terraform providers. We didn't test this, but I wouldn't be surprised if there were no memory leaks in those providers.
Is there an existing issue for this?
Affected Resource(s)
All providers
Resource MRs required to reproduce the bug
No response
Steps to Reproduce
What happened?
Memory kept growing until the pod was restarted or OOM-killed; see the graphs below:
(Memory usage graphs for the Cloudplatform, Redis, and Storage modules; screenshots not reproduced here.)
Most of the drops are restarts; the curve is steeper for Cloudplatform because it is the most heavily used in our setup. The behavior is consistent across all the modules.
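Not from the issue itself, but one generic way to confirm that such growth is a goroutine leak is to expose Go's pprof endpoint in a test build and watch the goroutine profile over time; the address below is arbitrary.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Visit http://localhost:6060/debug/pprof/goroutine?debug=1 at two points
	// in time; a steadily growing number of goroutines parked on the same
	// channel send is the signature of this kind of leak.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```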
Relevant Error Output Snippet
No response
Crossplane Version
1.14.5
Provider Version
1.1.0
Kubernetes Version
No response
Kubernetes Distribution
No response
Additional Info
Since we have identified the culprit, I will not go into too much detail here. We will publish a PR with the fix we deployed in order to discuss what should be done.