
# FAST deployment clean up

If you want to destroy a previous FAST deployment in your organization, follow the steps below.

Destruction must be done in reverse order, from stage 3 back to stage 0.

## Stage 3 (Project Factory)

```bash
cd $FAST_PWD/03-project-factory/prod/
terraform destroy
```

## Stage 3 (GKE)

Terraform refuses to delete non-empty GCS buckets and BigQuery datasets, so these resources need to be removed from the state manually before running `terraform destroy`.

```bash
cd $FAST_PWD/03-project-factory/prod/

# remove the BigQuery datasets from state, since Terraform refuses to delete them
for x in $(terraform state list | grep google_bigquery_dataset); do
  terraform state rm "$x"
done

terraform destroy
```
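The state-removal loop above simply filters `terraform state list` output by resource type. The following self-contained sketch (the resource addresses are made up for illustration, no Terraform required) shows which addresses that `grep` pattern would match:

```shell
# Stand-in for `terraform state list` output (addresses are illustrative)
state_list() {
  printf '%s\n' \
    'module.log-export.google_bigquery_dataset.dataset[0]' \
    'module.project.google_project.project[0]' \
    'module.log-export.google_storage_bucket.bucket[0]'
}

# Same filter the cleanup loop uses: only the BigQuery dataset matches
for x in $(state_list | grep google_bigquery_dataset); do
  echo "would run: terraform state rm \"$x\""
done
```

Removing an address from state makes Terraform "forget" the resource, so a later `terraform destroy` no longer tries (and fails) to delete it.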

## Stage 2 (Security)

```bash
cd $FAST_PWD/02-security/
terraform destroy
```

## Stage 2 (Networking)

```bash
cd $FAST_PWD/02-networking-XXX/
terraform destroy
```

A minor glitch can surface when running `terraform destroy`: the service project attachments to the Shared VPCs may not get destroyed, even though the relevant API call succeeds. We are investigating the issue; in the meantime, if the destroy fails, manually remove the attachments in the Cloud Console or via the `gcloud beta compute shared-vpc associated-projects remove` command, then relaunch `terraform destroy`.
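As a sketch of the manual workaround, the command below detaches one service project from its Shared VPC host. The project ids are placeholders, and the `echo` keeps this from making any API call; drop it to run the command for real:

```shell
HOST_PROJECT=prod-net-spoke-0     # placeholder: your Shared VPC host project
SERVICE_PROJECT=prod-svc-example  # placeholder: the attached service project

echo gcloud beta compute shared-vpc associated-projects remove "$SERVICE_PROJECT" \
  --host-project "$HOST_PROJECT"
```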

## Stage 1 (Resource Management)

Stage 1 is a little more complicated because its GCS buckets contain your Terraform state files. By default, Terraform refuses to delete non-empty buckets, which is good protection for your state but makes destruction a bit harder. Use the commands below to remove the buckets from the state, then run `terraform destroy`.

```bash
cd $FAST_PWD/01-resman/

# remove the buckets from state, since Terraform refuses to delete them
for x in $(terraform state list | grep google_storage_bucket.bucket); do
  terraform state rm "$x"
done

terraform destroy
```

## Stage 0 (Bootstrap)

Warning: follow these steps carefully, as you will be modifying your own permissions. Make sure you can grant yourself the Organization Administrator role again; otherwise you will not be able to finish the destruction process and will most likely get locked out of your organization.

Just like before, we manually remove several resources (GCS buckets and BigQuery datasets). Note that `terraform destroy` will fail: this is expected, just continue with the rest of the steps.

```bash
cd $FAST_PWD/00-bootstrap/

# remove the provider config to execute without SA impersonation
rm 00-bootstrap-providers.tf

# migrate to local state
terraform init -migrate-state

# remove the GCS buckets and BigQuery datasets from state manually
for x in $(terraform state list | grep google_storage_bucket.bucket); do
  terraform state rm "$x"
done

for x in $(terraform state list | grep google_bigquery_dataset); do
  terraform state rm "$x"
done

terraform destroy
```
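For context on the `rm` and `init -migrate-state` steps: the generated providers file configures a GCS backend with service account impersonation, so removing it and re-initializing moves the state into a local `terraform.tfstate` that your own credentials can operate on. A rough sketch of what such a file contains (bucket and service account names are purely illustrative):

```hcl
terraform {
  backend "gcs" {
    # names below are illustrative, yours depend on your prefix and org
    bucket                      = "myprefix-prod-iac-core-bootstrap"
    impersonate_service_account = "myprefix-bootstrap-0@myprefix-prod-iac-core-0.iam.gserviceaccount.com"
  }
}
```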

When the destroy fails, continue with the steps below. Again, make sure the user you are running these steps as has the Organization Administrator role, as we will remove the permissions granted to the organization-admins group.

```bash
# Add the Organization Administrator role to $FAST_BU in the Cloud Console,
# then run the commands below to grant yourself the permissions needed
# to finish the destruction
export FAST_DESTROY_ROLES="roles/billing.admin roles/logging.admin \
  roles/iam.organizationRoleAdmin roles/resourcemanager.projectDeleter \
  roles/resourcemanager.folderAdmin roles/owner"

export FAST_BU=$(gcloud config list --format 'value(core.account)')

# find your org id
gcloud organizations list --filter display_name:[part of your domain]

# set your org id
export FAST_ORG_ID=XXXX

for role in $FAST_DESTROY_ROLES; do
  gcloud organizations add-iam-policy-binding $FAST_ORG_ID \
    --member user:$FAST_BU --role $role
done

terraform destroy
rm -i terraform.tfstate*
```
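The grant loop relies on shell word splitting of the space-separated `FAST_DESTROY_ROLES` list. This self-contained sketch (placeholder org id and user, no API calls, the `echo` only prints each command) shows exactly what would run for each role:

```shell
FAST_DESTROY_ROLES="roles/billing.admin roles/logging.admin \
  roles/iam.organizationRoleAdmin roles/resourcemanager.projectDeleter \
  roles/resourcemanager.folderAdmin roles/owner"
FAST_ORG_ID=123456789012   # placeholder org id
FAST_BU=admin@example.com  # placeholder user

# unquoted expansion splits the list into six individual roles
for role in $FAST_DESTROY_ROLES; do
  echo "gcloud organizations add-iam-policy-binding $FAST_ORG_ID --member user:$FAST_BU --role $role"
done
```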

If you want to deploy FAST stages again, make sure to:

- Change the `prefix` variable, to allow the deployment of resources that need globally unique names (e.g., projects).
- Change the `custom_roles` variable, to allow recently deleted custom roles to be created again.
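As a sketch, a redeployment would override these variables via `terraform.tfvars` or similar. The values below are illustrative; check the stage's `variables.tf` for the exact names and types:

```hcl
# terraform.tfvars (illustrative values)
prefix = "fast2"   # must differ from the destroyed deployment's prefix
# custom_roles = ...   # rename roles here while recently deleted ones are still in their grace period
```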