
I want to import existing KMS keys instead of creating new ones #1740

Closed
LCaparelli opened this issue Apr 25, 2023 · 8 comments
Labels
enhancement New feature or request

Comments


LCaparelli commented Apr 25, 2023

What problem are you facing?

We have a composition that manages RDS Aurora clusters. In it, besides the DBCluster, we manage a KMS key. This key is used to encrypt the DB's data.

On creation, this works great. The Key is created and correctly associated with the DBCluster.

We always set the Key's .spec.deletionPolicy to Orphan, because we want previous snapshots taken with that Key to continue to be available long after the RDS cluster is gone.

However, if .spec.deletionPolicy of the DBCluster is set to Orphan and we delete the claim from Kubernetes and then re-create it, the DBCluster gets imported (as expected). What's not so expected is that another Key is created. The DBCluster continues to use the previous Key, so this new Key isn't being used at all.

From the research I performed, it's not possible to change the KMS key of an existing DBCluster.

Additionally, I'm guessing the existing key isn't imported because its identifier is random, meaning the provider has no way of finding the existing key in order to import it.

These lingering additional keys are a problem, since "Each AWS KMS key that you create in AWS KMS costs $1/month (prorated hourly)", not to mention that cloud organization becomes kind of a mess.

How could Crossplane help solve your problem?

I'm thinking of two approaches:

  1. Somehow provide the resource on which the key is being used in the Key's .spec.forProvider. The provider should check if that resource already has a Key bound to it, and import that key if it does. If there is no bound key, a new one should be created.
  2. Use Aliases to import existing Keys; this way we no longer have a random ID and can deterministically check whether a given key already exists. I'm imagining something like an Alias selector/ref setup, which the provider can use to determine if the Key already exists by fetching it through the Alias. If no Alias is provided on the Key, it works as it does now, creating a new key in a "best-effort" kind of behavior (see the hypothetical sketch below).
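
Purely to illustrate option 2, a hypothetical manifest could look something like the sketch below. The aliasRef field is a proposal and does not exist today, and the kms.aws.crossplane.io group/version is assumed:

```yaml
# Hypothetical sketch only: aliasRef is a proposed field, not an existing one.
apiVersion: kms.aws.crossplane.io/v1alpha1
kind: Key
metadata:
  name: rds-encryption-key
spec:
  deletionPolicy: Orphan
  forProvider:
    region: us-east-1
    description: Encryption key for the Aurora cluster
    # Proposed behavior: if the referenced Alias resolves to an existing key,
    # the provider imports that key; otherwise it creates a new one.
    aliasRef:
      name: rds-encryption-key-alias
```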

Looking forward to hearing alternatives, as to be honest I think both of these sound pretty complex. It would be much easier if we could just search for the Key using a name instead of a non-deterministic string/ID.

LCaparelli added the enhancement label Apr 25, 2023
@LCaparelli
Contributor Author

I feel like option 2 is better for handling cases where the same KMS key is shared by multiple resources or where multiple Aliases point to the same key.


github-actions bot commented Aug 5, 2023

Crossplane does not currently have enough maintainers to address every issue and pull request. This issue has been automatically marked as stale because it has had no activity in the last 90 days. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.

github-actions bot added the stale label Aug 5, 2023
@LCaparelli
Contributor Author

/fresh

github-actions bot removed the stale label Aug 8, 2023

MisterMX commented Sep 4, 2023

Is this issue about DBCluster handling KMS keys? The spec.forProvider.kmsKeyId property should be late initialized when reconciling a DBCluster that already exists. See also https://doc.crds.dev/github.com/crossplane/provider-aws/rds.aws.crossplane.io/DBCluster/[email protected]#spec-forProvider-kmsKeyID.

Regarding the import of existing KMS keys as a new Key resource: If resource IDs are generated by the API then you have to provide the crossplane.io/external-name annotation. Otherwise the controller will assume you want to create a new key.
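
A minimal sketch of such an import, assuming the community provider-aws kms.aws.crossplane.io group (the key ID below is just a placeholder for the real AWS-generated ID):

```yaml
apiVersion: kms.aws.crossplane.io/v1alpha1
kind: Key
metadata:
  name: imported-rds-key
  annotations:
    # The external name tells the controller to adopt this existing key
    # instead of creating a new one.
    crossplane.io/external-name: 1234abcd-12ab-34cd-56ef-1234567890ab
spec:
  deletionPolicy: Orphan
  forProvider:
    region: us-east-1
```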

@LCaparelli
Contributor Author

> Is this issue about DBCluster handling KMS keys?

Yes, that's correct.

> The spec.forProvider.kmsKeyId property should be late initialized when reconciling a DBCluster that already exists.

Who should perform this late initialization? The provider itself, right?

Keep in mind we're using a selector, not an ID, to reference the Key, because the key might not exist yet. The KMSKey itself is part of the composition that also provisions the DBCluster.

Assuming the provider should populate spec.forProvider.kmsKeyId, the main problem here is that it populates it with a new Key ID, not the one that is actually being used by the DBCluster in AWS. This means that:

  • stuff on Kubernetes references a new Key ID which was created when re-creating orphaned resources
  • stuff on AWS references the old Key ID, since nothing was ever deleted due to the deletion policy

> Regarding the import of existing KMS keys as a new Key resource: If resource IDs are generated by the API then you have to provide the crossplane.io/external-name annotation. Otherwise the controller will assume you want to create a new key.

Ok, but that doesn't seem to play very well with GitOps. If that's something we're aiming for here, I think this needs addressing.

Picture this use case, where the actor is the developer team, using the composition provided by the platform team:

  1. Create a claim, which eventually provisions a new DBCluster and a new KMSKey
  2. Accidentally delete the claim, which should be ok since the deletion policy for all resources is Orphan
  3. The GitOps tool re-creates the claim (think of ArgoCD, for example, syncing resources back according to what's in Git)

This leads to the Provider creating a new KMS key, which is unwanted and wastes money.

In order to fix this, according to what you're saying, the KMS Key resource should be recreated with an annotation providing the ID of the existing Key. But how could this be achieved, given that:

  1. it will be automatically recreated via GitOps automation
  2. more importantly, the managed resource will be rendered by a composition, which does not know the ID of the existing KMS key


MisterMX commented Sep 4, 2023

This is, unfortunately, the way AWS works with KMS keys: you either provide the ID of an existing key or you get a new one with a random ID assigned. It works the same for other AWS APIs, such as VPCs and subnets. It's not very nice indeed.

I don't think there is anything we can do in the provider to work around this efficiently. The provider is, by design, only a simple delegate that mirrors the AWS API into K8s.

However, I think you can solve your problem at a higher level by moving the key and the DBCluster into two separate compositions / XRDs. Since the lifetime of your key does not seem to be coupled to that of your DB (hence the orphan deletion), it makes sense to do that anyway. This approach would also fit the GitOps workflow.
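
As a rough sketch of that split (field names such as kmsKeyIDSelector are assumptions and should be checked against the provider's DBCluster schema), the DBCluster composition would only reference a Key owned by a separate KMS composition:

```yaml
apiVersion: rds.aws.crossplane.io/v1alpha1
kind: DBCluster
metadata:
  name: my-aurora-cluster
spec:
  forProvider:
    region: us-east-1
    engine: aurora-postgresql
    # Resolve the key ID from a Key managed by the separate KMS composition/XRD.
    kmsKeyIDSelector:
      matchLabels:
        example.org/database: my-aurora-cluster
```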

Does this help you?

@LCaparelli
Contributor Author

Thank you @MisterMX, I understand the desire to keep things simple here at the provider level.

There are other places where this exact mirroring architecture, combined with the (also by design) simplicity of Crossplane compositions, caused us some annoyances, such as arrays which would more naturally be defined as maps (e.g. tags). Hopefully composition functions will assist with this, but that comes with its own trade-offs, plus maintainability and availability concerns for us.

We really want to keep things in a single "package" that provides and manages everything necessary for a Database. I don't think it would be good from a UX perspective to make users first provision the KMS key, and only then provision the DBCluster (as I believe would be necessary if we split it all into 2 separate compositions).

We will probably end up fixing this some other way, but we'll keep this suggestion close to heart. Maybe some recurring cleanup job, I don't know.


go-faustino commented Dec 15, 2023

For anyone having to deal with this: we were able to import an existing Key by using the Observe management policy on a KMS Alias to get the KMS Key ID, which can then be patched between resources.

The observe composition is something like this:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  labels:
    crossplane.io/xrd: xgetkmss.foo
    provider: default.getkms.foo
  name: xgetkms.foo
spec:
  compositeTypeRef:
    apiVersion: foo/v1alpha1
    kind: XGetKMS
  resources:
  - name: observeKmsAlias
    base:
      apiVersion: kms.aws.upbound.io/v1beta1
      kind: Alias
      spec:
        deletionPolicy: Orphan
        forProvider:
          region: my-region
        managementPolicies:
        - 'Observe'
    readinessChecks:
    - type: None
    patches:
    - type: FromCompositeFieldPath
      fromFieldPath: spec.myAlias
      toFieldPath: metadata.annotations[crossplane.io/external-name]
    - type: ToCompositeFieldPath
      fromFieldPath: status.conditions[0].status
      toFieldPath: status.kmsKeyId
      transforms:
      - type: match
        match:
          patterns:
            - type: literal
              literal: "False"
              result: ""
          fallbackTo: Input
    - type: ToCompositeFieldPath
      fromFieldPath: status.atProvider.targetKeyId
      toFieldPath: status.kmsKeyId
```

Then, on the KMS Key resource, the patch would be something like:

```yaml
    - type: FromCompositeFieldPath
      fromFieldPath: status.kmsKeyId
      toFieldPath: metadata.annotations[crossplane.io/external-name]
      policy:
        fromFieldPath: Required
```
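
For completeness, a hypothetical composite matching the composition above might look like this (the foo/v1alpha1 group and spec.myAlias field come from the example; the other names are made up):

```yaml
apiVersion: foo/v1alpha1
kind: XGetKMS
metadata:
  name: get-rds-kms
spec:
  # Alias of the existing key; the composition copies it into the Alias's
  # crossplane.io/external-name annotation, and the resolved key ID ends up
  # in status.kmsKeyId for other resources to patch from.
  myAlias: alias/my-existing-rds-key
```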
