Terraform inconsistency with Redpanda vs Confluent #7859
-
The partition count and replication factor part needs investigating. Redpanda's API behavior aside, it is kind of peculiar that this terraform code is reporting something like…
-
Good point on cleanup.policy from John.
-
Yup.

Edit: backport to v22.2.x PR here: #7488; link to the commit which made the cut for the 22.2.8 tag: https://github.com/redpanda-data/redpanda/blob/279d483252f0ba959ded2b52d6d010a8a7580cbf/src/v/kafka/server/handlers/describe_configs.cc
-
Thanks all for the colour on this, and for the replies. Appreciated.
-
@mattgodbolt let me know if we should move this to a discussion or keep it open as an issue.
-
I think this is probably cool as a discussion; I mentioned the whole issue to @patrickangeles in a call and he said to file an issue to start things moving along :) It's not hurting me (other than the …). Thanks!
-
Got ya, moving to a discussion then. Yeah, it will be in the next release.
-
Looks like some of these things are fixed! Thanks! However, looking at remote replication/tiered storage, we hit similar problems:

```hcl
resource "kafka_topic" "tradewinds_eval_rtsdk_raw" {
  provider           = kafka.redpanda_eval_prod
  name               = "tradewinds_eval_rtsdk_raw"
  replication_factor = 3
  partitions         = 1

  config = {
    "retention.bytes" = -1
    "cleanup.policy"  = "delete" # we don't want anything deleted, but if we don't specify it, terraform complains
    # "redpanda.remote.write" = true # maybe not needed as it's a cluster default? terraform complains
    "redpanda.remote.read" = false
  }
}
```

Two issues in this example:
1. We have to specify `cleanup.policy` explicitly even though we don't want anything deleted; if we leave it out, terraform complains.
2. terraform complains about `redpanda.remote.write`, which is maybe not needed anyway as it's a cluster default…

…and if I let it try:
Separately, if I try to configure a read replica on a different cluster, it "works" the first time, but every subsequent terraform run complains about the config:

```hcl
resource "kafka_topic" "tradewinds_eval_rtsdk_raw_replica" {
  provider           = kafka.redpanda_eval_replica
  name               = kafka_topic.tradewinds_eval_rtsdk_raw.name
  replication_factor = 3
  partitions         = 1

  config = {
    "retention.bytes" = -1
    "cleanup.policy"  = "delete" # we don't want anything deleted, but if we don't specify it, terraform complains
    "redpanda.remote.readreplica" = "aq-redpanda-ts" # this works the first time, but terraform wants to reapply it every time
  }

  depends_on = [kafka_topic.tradewinds_eval_rtsdk_raw]
}
```

On this last point, even …
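One blunt mitigation I can imagine (a sketch only; I haven't verified it against this provider) is to have Terraform ignore drift on the `config` map entirely via the standard `lifecycle` meta-argument, at the cost of also hiding genuine config changes:

```hcl
resource "kafka_topic" "tradewinds_eval_rtsdk_raw_replica" {
  provider           = kafka.redpanda_eval_replica
  name               = kafka_topic.tradewinds_eval_rtsdk_raw.name
  replication_factor = 3
  partitions         = 1

  config = {
    "retention.bytes"             = -1
    "cleanup.policy"              = "delete"
    "redpanda.remote.readreplica" = "aq-redpanda-ts"
  }

  lifecycle {
    # Blunt instrument: suppresses the perpetual re-apply caused by the
    # echoed-back config, but also stops Terraform noticing real drift.
    ignore_changes = [config]
  }

  depends_on = [kafka_topic.tradewinds_eval_rtsdk_raw]
}
```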
-
Version & Environment

Redpanda version (use `rpk version`): 22.2.8
OS: Ubuntu 22.04 Linux
Docker: n/a
Kubernetes: n/a
terraform: 0.13.6
mongey-kafka: 0.5.1
What went wrong?

When creating topics with `mongey-kafka` in terraform, there are certain required parameters that get "echoed back" as config key-value pairs, which makes them appear to need to be re-terraformed every time.

What should have happened instead?

Ideally the Confluent-compatible expression should apply cleanly and remain so, or else produce some kind of useful error/response. I appreciate this is a third-party tool, but I think it's a result of RP "echoing" back parameters (or not) that it accepted during configuration.
How to reproduce the issue?

Run `terraform apply` with a config like the sketch below; note that it creates OK the first time.
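A representative config (topic name and provider alias are illustrative, modelled on the examples earlier in the thread):

```hcl
resource "kafka_topic" "example" {
  provider           = kafka.redpanda_eval_prod
  name               = "example"
  replication_factor = 3
  partitions         = 1

  config = {
    "retention.bytes" = -1
  }
}
```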
Then try to `terraform apply` a second time (which should be a no-op). Note how the `config` block is echoing the default cleanup policy, and also replicating the partition counts and replication factors, as strings. Our workaround is to duplicate the required partition and replication configuration and explicitly add the default cleanup policy, as sketched below. This workaround is not necessary when terraforming a Confluent Kafka broker setup.
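Something like the following (the exact key names Redpanda echoes back are my best guess here, not confirmed):

```hcl
resource "kafka_topic" "example" {
  provider           = kafka.redpanda_eval_prod
  name               = "example"
  replication_factor = 3
  partitions         = 1

  config = {
    "retention.bytes" = -1
    # Explicitly pin the values Redpanda echoes back so the second
    # apply stays a no-op. Key names below are assumed, not confirmed:
    "cleanup.policy"     = "delete"
    "partition_count"    = "1" # echoed back as a string
    "replication_factor" = "3" # echoed back as a string
  }
}
```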