diff --git a/.markdownlinkcheck.json b/.markdownlinkcheck.json
index 353ac60c8a1..ec7c81160f6 100644
--- a/.markdownlinkcheck.json
+++ b/.markdownlinkcheck.json
@@ -1,7 +1,7 @@
 {
   "ignorePatterns": [
     { "pattern": "^https://calendar.google.com/calendar" },
-    { "pattern": "^../reference/" }
+    { "pattern": "^\\.\\.?/" }
   ],
   "httpHeaders": [{
     "comment": "Workaround as suggested here: https://github.com/tcort/markdown-link-check/issues/201",
diff --git a/docs/book/src/managed/adopting-clusters.md b/docs/book/src/managed/adopting-clusters.md
index b5463fa271b..04aa9e0542a 100644
--- a/docs/book/src/managed/adopting-clusters.md
+++ b/docs/book/src/managed/adopting-clusters.md
@@ -2,7 +2,6 @@
 ### Option 1: Using the new AzureASOManaged API
 
-
 The [AzureASOManagedControlPlane and related APIs](./asomanagedcluster.md) support adoption as a first-class
 use case. Going forward, this method is likely to be easier, more reliable, include more features, and better
 supported for adopting AKS clusters than Option 2 below.
 
@@ -15,10 +14,10 @@ and AzureASOManagedMachinePools. The
 [`asoctl import azure-resource`](https://azure.github.io/azure-service-operator/tools/asoctl/#import-azure-resource) command
 can help generate the required YAML.
 
-Caveats:
-- The `asoctl import azure-resource` command has at least [one known
-  bug](https://github.com/Azure/azure-service-operator/issues/3805) requiring the YAML it generates to be
-  edited before it can be applied to a cluster.
+This method can also be used to [migrate](./asomanagedcluster.md#migrating-existing-clusters-to-azureasomanagedcontrolplane) from AzureManagedControlPlane and its associated APIs.
+
+#### Caveats
+
 - CAPZ currently only records the ASO resources in the CAPZ resources' `spec.resources` that it needs to
   function, which include the ManagedCluster, its ResourceGroup, and associated ManagedClustersAgentPools.
   Other resources owned by the ManagedCluster like Kubernetes extensions or Fleet memberships are not
@@ -29,6 +28,8 @@
 - Adopting existing clusters created with the GA AzureManagedControlPlane API to the experimental API with
   this method is theoretically possible, but untested. Care should be taken to prevent CAPZ from reconciling
   two different representations of the same underlying Azure resources.
+- This method cannot be used to import existing clusters as a ClusterClass or a topology, only as a standalone
+  Cluster.
 
 ### Option 2: Using the current AzureManagedControlPlane API
diff --git a/docs/book/src/managed/asomanagedcluster.md b/docs/book/src/managed/asomanagedcluster.md
index eb7f36c3a00..350da513926 100644
--- a/docs/book/src/managed/asomanagedcluster.md
+++ b/docs/book/src/managed/asomanagedcluster.md
@@ -89,3 +89,32 @@ spec:
   name: ${CLUSTER_NAME}-user-kubeconfig # NOT ${CLUSTER_NAME}-kubeconfig
   key: value
 ```
+
+### Migrating existing Clusters to AzureASOManagedControlPlane
+
+Existing CAPI Clusters using the AzureManagedControlPlane and associated APIs can be migrated to use the new
+AzureASOManagedControlPlane and its associated APIs. This process relies on CAPZ's ability to
+[adopt](./adopting-clusters.md#option-1-using-the-new-azureasomanaged-api) existing clusters that may not have
+been created by CAPZ, which comes with some [caveats](./adopting-clusters.md#caveats) that should be reviewed
+first.
+
+To migrate one cluster to the ASO-based APIs:
+
+1. Pause the cluster by setting the Cluster's `spec.paused` to `true`.
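+   For example, pausing can be done with a one-line patch. This is a minimal sketch: the Cluster name
+   `my-cluster` and the namespace `default` are assumptions to be replaced with your own.
+
+   ```sh
+   # Set spec.paused=true so the CAPI and CAPZ controllers stop reconciling the old resources.
+   kubectl -n default patch cluster my-cluster --type merge -p '{"spec": {"paused": true}}'
+   ```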
+1. Wait for the cluster to be paused by waiting for the _absence_ of the `clusterctl.cluster.x-k8s.io/block-move`
+   annotation on the AzureManagedControlPlane and its AzureManagedMachinePools. This should happen almost
+   immediately.
+1. Create a new namespace for the new resources so the new ASO definitions do not conflict with the old ones.
+1. [Adopt](./adopting-clusters.md#option-1-using-the-new-azureasomanaged-api) the underlying AKS resources in
+   the new namespace, which creates the new CAPI and CAPZ resources.
+1. Forcefully delete the old Cluster. This is more involved than a normal deletion because CAPI controllers do
+   not reconcile paused resources at all, even while they are being deleted. The underlying Azure resources
+   will not be affected. A consolidated sketch of this sequence is shown after this list.
+   - Delete the Cluster: `kubectl delete cluster <cluster-name> --wait=false`
+   - Delete the cluster infrastructure object: `kubectl delete azuremanagedcluster <name> --wait=false`
+   - Delete the cluster control plane object: `kubectl delete azuremanagedcontrolplane <name> --wait=false`
+   - Delete the machine pools: `kubectl delete machinepool <name> --wait=false`
+   - Delete the machine pool infrastructure resources: `kubectl delete azuremanagedmachinepool <name> --wait=false`
+   - Remove finalizers from the machine pool infrastructure resources: `kubectl patch azuremanagedmachinepool <name> --type merge -p '{"metadata": {"finalizers": null}}'`
+   - Remove finalizers from the machine pools: `kubectl patch machinepool <name> --type merge -p '{"metadata": {"finalizers": null}}'`
+   - Remove finalizers from the cluster control plane object: `kubectl patch azuremanagedcontrolplane <name> --type merge -p '{"metadata": {"finalizers": null}}'`
+   - Note: the cluster infrastructure object should not have any finalizers and should already be deleted.
+   - Remove finalizers from the Cluster: `kubectl patch cluster <cluster-name> --type merge -p '{"metadata": {"finalizers": null}}'`
+   - Verify that the old ASO resources managed by the old Cluster, such as the ResourceGroup and
+     ManagedCluster, have been deleted.
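+
+The forced-deletion sequence above can be scripted. The following is a minimal sketch, not an official tool:
+the Cluster name `my-cluster`, the namespace `default`, and the machine pool names `pool0` and `pool1` are
+assumptions to be replaced with your own, and it assumes the control plane and infrastructure objects share
+the Cluster's name.
+
+```sh
+# Hypothetical names; substitute your own.
+CLUSTER=my-cluster
+NS=default
+POOLS="pool0 pool1"
+
+# Issue non-blocking deletes for the paused resources.
+kubectl -n "$NS" delete cluster "$CLUSTER" --wait=false
+kubectl -n "$NS" delete azuremanagedcluster "$CLUSTER" --wait=false
+kubectl -n "$NS" delete azuremanagedcontrolplane "$CLUSTER" --wait=false
+for p in $POOLS; do
+  kubectl -n "$NS" delete machinepool "$p" --wait=false
+  kubectl -n "$NS" delete azuremanagedmachinepool "$p" --wait=false
+done
+
+# Clear finalizers so the deletes can complete; the paused controllers will not do this themselves.
+for p in $POOLS; do
+  kubectl -n "$NS" patch azuremanagedmachinepool "$p" --type merge -p '{"metadata": {"finalizers": null}}'
+  kubectl -n "$NS" patch machinepool "$p" --type merge -p '{"metadata": {"finalizers": null}}'
+done
+kubectl -n "$NS" patch azuremanagedcontrolplane "$CLUSTER" --type merge -p '{"metadata": {"finalizers": null}}'
+kubectl -n "$NS" patch cluster "$CLUSTER" --type merge -p '{"metadata": {"finalizers": null}}'
+```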