Plans with a very large number of cloudflare_record resources take too much time to complete
#4887
Closed
Labels
kind/bug
needs-triage
triage/debug-log-attached
Confirmation
Terraform and Cloudflare provider version
Terraform 1.5.7 (we're limited to the last OSS version) / OpenTofu 1.6.2
Provider version 4.39 (because of #4280), but also tried 4.50 with similar results.
Affected resource(s)
cloudflare_record
Terraform configuration files
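The original configuration files were not attached to this report. As an illustration only, a minimal hypothetical configuration of the kind described here might look like the sketch below; the provider version constraint, zone ID, record names, and addresses are placeholders, not taken from the real deployment.

```hcl
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.39"
    }
  }
}

# Placeholder zone ID; the real deployment manages a zone with ~5000 records.
locals {
  zone_id = "0123456789abcdef0123456789abcdef"
}

resource "cloudflare_record" "host_0001" {
  zone_id = local.zone_id
  name    = "host-0001"
  type    = "A"
  value   = "192.0.2.1"
  ttl     = 300
}

resource "cloudflare_record" "host_0002" {
  zone_id = local.zone_id
  name    = "host-0002"
  type    = "A"
  value   = "192.0.2.2"
  ttl     = 300
}

# ...and so on for several thousand further records.
```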
Link to debug output
https://gist.github.com/jficz/a0c393bef69720d882dbec8bacba32c2
Panic output
No response
Expected output
Much faster plan execution.
Actual output
A full import of ~5000 records takes about an hour to execute.
Updating just a single record takes about half that time.
Steps to reproduce
Additional factoids
A debug log for just a refresh of ~3500 records with very few changes is 19 MB. The attached debug log covers only two imported records and one change, but the information in it is otherwise the same as in the large one (minus the other ~3500 records).
Due to API rate limiting, zones with thousands of DNS records take a very long time to refresh and apply, which makes the provider highly impractical for large deployments.
Such long runs cause issues with CI and block resources and bandwidth for a very long time, even for small changes.
The API provides batch operation endpoints for both record listing and record changes.
It would be great if the provider used these endpoints for refresh and update instead of iterating through the records one by one. That would likely speed up operations noticeably even for small deployments, and would be a several-orders-of-magnitude improvement for large deployments.
`-refresh=false` is a possible workaround for some use cases but introduces other problems like configuration drift.
We in fact use a `for_each` loop to generate the resources (sketched below), but the example above causes the same issues.
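For reference, a hedged sketch of the `for_each` pattern mentioned above, reusing `local.zone_id` from the earlier sketch; the variable name and data shape are illustrative and not taken from the real configuration.

```hcl
# Hypothetical input: map of record name => IPv4 address,
# several thousand entries in the real deployment.
variable "a_records" {
  type = map(string)
}

resource "cloudflare_record" "generated" {
  for_each = var.a_records

  zone_id = local.zone_id
  name    = each.key
  type    = "A"
  value   = each.value
  ttl     = 300
}
```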
References
No response