2-environments

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in the Google Cloud security foundations guide. The following table lists the parts of the guide.

0-bootstrap Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD Pipeline for foundations code in subsequent stages.
1-org Sets up top level shared folders, monitoring and networking projects, and organization-level logging, and sets baseline security settings through organizational policy.
2-environments (this file) Sets up development, non-production, and production environments within the Google Cloud organization that you've created.
3-networks-dual-svpc Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
3-networks-hub-and-spoke Sets up base and restricted shared VPCs with all the default configuration found in step 3-networks-dual-svpc, but with an architecture based on the Hub and Spoke network model. It also sets up the global DNS hub.
4-projects Sets up a folder structure, projects, and application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra Deploys Service Catalog Pipeline and Custom Artifacts Pipeline.

For an overview of the architecture and the parts, see the terraform-google-enterprise-genai README.

Purpose

The purpose of this step is to set up development, non-production, and production environments within the Google Cloud organization that you've created.

Prerequisites

  1. 0-bootstrap executed successfully.
  2. 1-org executed successfully.
  3. Cloud Identity / Google Workspace group for monitoring admins.
  4. Membership in the monitoring admins group for the user running Terraform.

Troubleshooting

Please refer to troubleshooting if you run into issues during this step.

Assured Workloads

To enable Assured Workloads in the production folder, edit the main.tf file and update assured_workload_configuration.enable to true.

See the env_baseline module README.md file for additional information on the values that can be configured for the Workload.

Assured Workloads is a paid service. FedRAMP Moderate workloads can be deployed at no additional charge beyond Google Cloud product and service usage. For other compliance regimes, see Assured Workloads pricing.

If you enable Assured Workloads and later want to delete the workload, you must first manually delete the resources under it. Use the Google Cloud console to identify the resources to be deleted.
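
After the production environment is applied, you can optionally verify the workload from the CLI. A minimal sketch (YOUR_ORGANIZATION_ID and the location are placeholders to replace; the exact location depends on your workload configuration):

    gcloud assured workloads list --organization=YOUR_ORGANIZATION_ID --location=us-central1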

Usage

Note: If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is needed for Linux but causes problems for macOS.
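
For example, the copy command in the steps below would change as follows (a sketch; on macOS the trailing /. copies the directory contents rather than the directory itself):

    # Linux (GNU coreutils):
    cp -RT ../terraform-google-enterprise-genai/2-environments/ .
    # macOS (BSD cp has no -T flag):
    cp -R ../terraform-google-enterprise-genai/2-environments/. .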

Deploying with Cloud Build

  1. Clone the gcp-environments repo based on the Terraform output from the 0-bootstrap step. Clone the repo at the same level as the terraform-google-enterprise-genai folder; the following instructions assume this layout. Run terraform output cloudbuild_project_id in the 0-bootstrap folder to get the Cloud Build Project ID.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="terraform-google-enterprise-genai/0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    gcloud source repos clone gcp-environments --project=${CLOUD_BUILD_PROJECT_ID}
  2. Navigate into the repo, switch to a non-main branch, and copy the contents of the foundation to the new repo. All subsequent steps assume you are running them from the gcp-environments directory. If you run them from another directory, adjust your copy paths accordingly.

    cd gcp-environments
    git checkout -b plan
    
    cp -RT ../terraform-google-enterprise-genai/2-environments/ .
    cp ../terraform-google-enterprise-genai/build/cloudbuild-tf-* .
    cp ../terraform-google-enterprise-genai/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename terraform.example.tfvars to terraform.tfvars.

    mv terraform.example.tfvars terraform.tfvars
  4. Update the file with values from your environment and bootstrap (you can re-run terraform output in the 0-bootstrap directory to find these values). See any of the envs folder README.md files for additional information on the values in the terraform.tfvars file.

    export backend_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i "s/REMOTE_STATE_BUCKET/${backend_bucket}/" terraform.tfvars
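    # Note for macOS users: BSD sed requires a (possibly empty) suffix argument after -i:
    # sed -i "" "s/REMOTE_STATE_BUCKET/${backend_bucket}/" terraform.tfvars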
  5. Commit changes.

    git add .
    git commit -m 'Initialize environments repo'
  6. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply.

    git push --set-upstream origin plan
  7. Review the plan output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

  8. Merge changes to the development branch. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply.

    git checkout -b development
    git push origin development
  9. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

  10. Merge changes to the non-production branch. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b non-production
    git push origin non-production
  11. Merge changes to the production branch. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b production
    git push origin production
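
If you prefer the CLI over the console links above, you can list recent builds after each push. A minimal sketch (assuming your Cloud Build triggers run in the region chosen during 0-bootstrap; replace DEFAULT_REGION accordingly):

    gcloud builds list --project=${CLOUD_BUILD_PROJECT_ID} --region=DEFAULT_REGION --limit=5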

N.B. Read this before continuing further

A logging project will be created in every environment (development, non-production, production) when running this code. Each project contains a storage bucket used for project logging within its respective environment, and the cloud-storage-analytics@google.com group needs permissions on that bucket. Because the foundation applies more restrictive security measures, a domain restriction organization policy constraint is enforced; this constraint prevents the cloud-storage-analytics@google.com group from being added to any IAM policy. For this Terraform code to execute without error, manual intervention is required.

You must disable the constraint, assign the permission on the bucket, and then enforce the constraint again. The steps below present two options (Option 1 and Option 2); execute only one of them.

The first and recommended option is to make the changes with the gcloud CLI, as described in Option 1.

Option 2 is an alternative to the gcloud CLI that relies on the Google Cloud Console.

Option 1: Use the gcloud CLI to disable/enable the organization policy constraint

You will repeat this procedure for each environment (development, non-production, and production).

development environment configuration
  1. Set the variable below to the path of your gcp-environments repository.

    export GCP_ENVIRONMENTS_PATH=INSERT_YOUR_PATH_HERE

    Make sure your git is checked out to the development branch by running git checkout development on GCP_ENVIRONMENTS_PATH.

    (cd $GCP_ENVIRONMENTS_PATH && git checkout development && ./tf-wrapper.sh init development)
  2. Retrieve the bucket name and project id from terraform outputs.

    export ENV_LOG_BUCKET_NAME=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/development" output -raw env_log_bucket_name)
    export ENV_LOG_PROJECT_ID=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/development" output -raw env_log_project_id)
  3. Validate the variable values.

    echo env_log_project_id=$ENV_LOG_PROJECT_ID
    echo env_log_bucket_name=$ENV_LOG_BUCKET_NAME
  4. Reset your org policy for the logging project by running the following command.

    gcloud org-policies reset iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
  5. Assign the roles/storage.objectCreator role to the cloud-storage-analytics@google.com group.

    gcloud storage buckets add-iam-policy-binding gs://$ENV_LOG_BUCKET_NAME --member="group:cloud-storage-analytics@google.com" --role="roles/storage.objectCreator"

    Note: you might receive an error saying the binding violates an organization policy. This can happen because of propagation delay after the policy change (typically about 2 minutes, but it can take 7 minutes or longer). If this happens, wait a few minutes and try again, or use the retry sketch at the end of this subsection.

  6. Remove the policy override created in step 4; this makes the project inherit its parent's policies again.

    gcloud org-policies delete iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
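
If the binding in step 5 keeps failing while the policy change propagates, a minimal retry sketch (an illustrative loop, not part of the official procedure):

    # Retry the IAM binding every 60s until the org policy change has propagated.
    until gcloud storage buckets add-iam-policy-binding "gs://${ENV_LOG_BUCKET_NAME}" \
        --member="group:cloud-storage-analytics@google.com" \
        --role="roles/storage.objectCreator"; do
      echo "Policy change not yet propagated; retrying in 60s..."
      sleep 60
    done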
non-production environment configuration
  1. Set the variable below to the path of your gcp-environments repository.

    export GCP_ENVIRONMENTS_PATH=INSERT_YOUR_PATH_HERE

    Make sure your git is checked out to the non-production branch by running git checkout non-production on GCP_ENVIRONMENTS_PATH.

    (cd $GCP_ENVIRONMENTS_PATH && git checkout non-production && ./tf-wrapper.sh init non-production)
  2. Retrieve the bucket name and project id from terraform outputs.

    export ENV_LOG_BUCKET_NAME=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/non-production" output -raw env_log_bucket_name)
    export ENV_LOG_PROJECT_ID=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/non-production" output -raw env_log_project_id)
  3. Validate the variable values.

    echo env_log_project_id=$ENV_LOG_PROJECT_ID
    echo env_log_bucket_name=$ENV_LOG_BUCKET_NAME
  4. Reset your org policy for the logging project by running the following command.

    gcloud org-policies reset iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
  5. Assign the roles/storage.objectCreator role to the cloud-storage-analytics@google.com group.

    gcloud storage buckets add-iam-policy-binding gs://$ENV_LOG_BUCKET_NAME --member="group:cloud-storage-analytics@google.com" --role="roles/storage.objectCreator"

    Note: you might receive an error saying the binding violates an organization policy. This can happen because of propagation delay after the policy change (typically about 2 minutes, but it can take 7 minutes or longer). If this happens, wait a few minutes and try again.

  6. Remove the policy override created in step 4; this makes the project inherit its parent's policies again.

    gcloud org-policies delete iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
production environment configuration
  1. Set the variable below to the path of your gcp-environments repository.

    export GCP_ENVIRONMENTS_PATH=INSERT_YOUR_PATH_HERE

    Make sure your git is checked out to the production branch by running git checkout production on GCP_ENVIRONMENTS_PATH.

    (cd $GCP_ENVIRONMENTS_PATH && git checkout production && ./tf-wrapper.sh init production)
  2. Retrieve the bucket name and project id from terraform outputs.

    export ENV_LOG_BUCKET_NAME=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/production" output -raw env_log_bucket_name)
    export ENV_LOG_PROJECT_ID=$(terraform -chdir="$GCP_ENVIRONMENTS_PATH/envs/production" output -raw env_log_project_id)
  3. Validate the variable values.

    echo env_log_project_id=$ENV_LOG_PROJECT_ID
    echo env_log_bucket_name=$ENV_LOG_BUCKET_NAME
  4. Reset your org policy for the logging project by running the following command.

    gcloud org-policies reset iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
  5. Assign the roles/storage.objectCreator role to the cloud-storage-analytics@google.com group.

    gcloud storage buckets add-iam-policy-binding gs://$ENV_LOG_BUCKET_NAME --member="group:cloud-storage-analytics@google.com" --role="roles/storage.objectCreator"

    Note: you might receive an error saying the binding violates an organization policy. This can happen because of propagation delay after the policy change (typically about 2 minutes, but it can take 7 minutes or longer). If this happens, wait a few minutes and try again.

  6. Remove the policy override created in step 4; this makes the project inherit its parent's policies again.

    gcloud org-policies delete iam.allowedPolicyMemberDomains --project=$ENV_LOG_PROJECT_ID
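
To confirm that the binding is in place, a quick check (a sketch reusing the values exported above):

    gcloud storage buckets get-iam-policy "gs://${ENV_LOG_BUCKET_NAME}" --format=json | grep cloud-storage-analytics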

Option 2: Use the Google Cloud Console to disable/enable the organization policy constraint

Proceed with these steps only if Option 1 is not chosen.

  1. In ml_logging.tf, locate the following lines and uncomment them:

    resource "google_storage_bucket_iam_member" "bucket_logging" {
      bucket = google_storage_bucket.log_bucket.name
      role   = "roles/storage.objectCreator"
      member = "group:cloud-storage-analytics@google.com"
    }
  2. Under IAM & Admin, select Organization Policies. Search for "Domain Restricted Sharing".

    (screenshot: list-policy)

  3. Select 'Manage Policy'. This directs you to the Domain Restricted Sharing Edit Policy page, where the policy is set to 'Inherit parent's policy'. Change this to 'Google-managed default'.

    (screenshot: edit-policy)

  4. Follow the instructions for checking out the development, non-production, and production branches. Once each environment's Terraform code has successfully applied, edit the policy again, select 'Inherit parent's policy', and click SET POLICY.

After making these modifications, you can follow the README.md procedure for the 2-environments step in the foundation. Make sure you change the organization policy back after running the foundation steps.

  1. You can now move to the instructions in the network step. To use the Dual Shared VPC network mode, go to 3-networks-dual-svpc.

Deploying with Jenkins

See 0-bootstrap README-Jenkins.md.

Deploying with GitHub Actions

See 0-bootstrap README-GitHub.md.

Run Terraform locally

  1. The next instructions assume that you are at the same level as the terraform-google-enterprise-genai folder. Change into the 2-environments folder, copy the Terraform wrapper script, and ensure it can be executed.

    cd terraform-google-enterprise-genai/2-environments
    cp ../build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  2. Rename terraform.example.tfvars to terraform.tfvars.

    mv terraform.example.tfvars terraform.tfvars
  3. Update the file with values from your environment and the 0-bootstrap output. See any of the envs folder README.md files for additional information on the values in the terraform.tfvars file.

  4. Use terraform output to get the backend bucket value from 0-bootstrap output.

    export backend_bucket=$(terraform -chdir="../0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./terraform.tfvars
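    # Note for macOS users: BSD sed requires a (possibly empty) suffix argument after -i:
    # sed -i "" "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./terraform.tfvars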

We will now deploy each of our environments (development, non-production, and production) using this script. When you use Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 2-environments step, and only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.
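
A minimal install sketch (this assumes the gcloud CLI was installed via the standalone installer; component management is disabled when gcloud comes from a package manager):

    gcloud components install terraform-tools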

  1. Use terraform output to get the Cloud Build project ID and the environment step Terraform service account from the 0-bootstrap output. The GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable will be set to the Terraform service account's email to enable impersonation.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../0-bootstrap/" output -raw environment_step_terraform_service_account_email)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
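    
    # Optional sanity check (a sketch): confirm impersonation works before planning.
    gcloud auth print-access-token \
      --impersonate-service-account=${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT} >/dev/null && echo "impersonation OK"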
  2. Ensure you disable the organization policy on the development folder before continuing further.

  3. Run init and plan and review output for environment development.

    ./tf-wrapper.sh init development
    ./tf-wrapper.sh plan development
  4. Run validate and check for violations.

    ./tf-wrapper.sh validate development $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  5. Run apply development.

    ./tf-wrapper.sh apply development
  6. Ensure you disable the organization policy on the non-production folder before continuing further.

  7. Run init and plan and review output for environment non-production.

    ./tf-wrapper.sh init non-production
    ./tf-wrapper.sh plan non-production
  8. Run validate and check for violations.

    ./tf-wrapper.sh validate non-production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  9. Run apply non-production.

    ./tf-wrapper.sh apply non-production
  10. Ensure you disable the organization policy on the production folder before continuing further.

  11. Run init and plan and review output for environment production.

    ./tf-wrapper.sh init production
    ./tf-wrapper.sh plan production
  12. Run validate and check for violations.

    ./tf-wrapper.sh validate production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  13. Run apply production.

    ./tf-wrapper.sh apply production

If you received any errors or made any changes to the Terraform config or terraform.tfvars, you must re-run ./tf-wrapper.sh plan <env> before running ./tf-wrapper.sh apply <env>.

Before executing the next stages, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT

cd ../..
  1. You can now move to the instructions in the network step. To use the Dual Shared VPC network mode, go to 3-networks-dual-svpc.