
5-app-infra

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in the Google Cloud security foundations guide. The following table lists the parts of the guide.

0-bootstrap Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD Pipeline for foundations code in subsequent stages.
1-org Sets up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policies.
2-environments Sets up development, non-production, and production environments within the Google Cloud organization that you've created.
3-networks-dual-svpc Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
4-projects Sets up a folder structure, projects, and an application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra (this file) Deploys a project folder structure that expands upon the projects created in 4-projects.

For an overview of the architecture and the parts, see the terraform-google-enterprise-genai README file.

Purpose

Inside the projects folder, the artifact-publish and service-catalog directories contain applications that will be further developed. These directories are Terraform repositories that house the configuration code for their respective applications. For instance, in the projects/artifact-publish directory, you will find code that configures the custom pipeline for the artifact-publish application.

Note: Remember that in step 4-projects, the Service Catalog and Artifacts projects were created under the common folder.

Inside the source_repos folder, the folders artifact-publish and service-catalog are separate Cloud Build repositories, each with its own unique pipeline configured. These are used for building in-house Docker images for your machine learning pipelines, and for publishing Terraform modules that can be deployed through the Service Catalog Google Cloud product.

This repository contains examples of using these modules in notebooks in your interactive (development) environment, as well as deployment modules for your operational (non-production and production) environments.

For the purposes of this demonstration, we assume that you are using Cloud Build or manual deployment.

Prerequisites

  1. 0-bootstrap executed successfully.
  2. 1-org executed successfully.
  3. 2-environments executed successfully.
  4. 3-networks executed successfully.
  5. 4-projects executed successfully.

Troubleshooting

Please refer to troubleshooting if you run into issues during this step.

Usage

Note: If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is needed for Linux but causes problems on macOS.

Deploying with Cloud Build

  1. Ensure you are in a neutral directory outside any other git-related repositories.

  2. Clone the gcp-policies repo based on the Terraform output from the 4-projects step. Clone it at the same level as the terraform-google-enterprise-genai folder; the following instructions assume this layout. Run terraform output cloudbuild_project_id in the 4-projects folder to get the Cloud Build Project ID.

    export INFRA_PIPELINE_PROJECT_ID=$(terraform -chdir="gcp-projects/ml_business_unit/shared/" output -raw cloudbuild_project_id)
    echo ${INFRA_PIPELINE_PROJECT_ID}
    
    gcloud source repos clone gcp-policies gcp-policies-app-infra --project=${INFRA_PIPELINE_PROJECT_ID}

    Note: The gcp-policies repo has the same name as the repo created in step 1-org. To prevent a collision, the previous command clones it into the folder gcp-policies-app-infra.

  3. Navigate into the repo and copy the contents of policy-library into the new repo. All subsequent steps assume you are running them from the gcp-policies-app-infra directory. If you run them from another directory, adjust your copy paths accordingly.

    cd gcp-policies-app-infra/
    git checkout -b main
    
    cp -RT ../terraform-google-enterprise-genai/policy-library/ .
  4. Commit changes and push your main branch to the new repo.

    git add .
    git commit -m 'Initialize policy library repo'
    
    git push --set-upstream origin main
  5. Navigate out of the repo.

    cd ..

Artifacts Application

The purpose of this step is to deploy an Artifact Registry repository for storing custom Docker images, along with a Cloud Build pipeline. At the time of this writing, the pipeline is configured to attach itself to a Cloud Source Repository, and it is responsible for building custom images that can be used in machine learning workflows. If company policy requires that no outside repositories be accessed, custom images let you keep all image access internal.

Since every workflow will have access to these images, the registry is deployed in the common folder and, in keeping with the foundations structure, is listed as shared under this business unit. It only needs to be deployed once.
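
For instance, once the images are published, a workload in any environment could pull one directly from the shared artifacts project. A hypothetical reference might look like the following (the region and repository name here are placeholders for illustration, not outputs from this guide):

    # Hypothetical image reference; the region and repository name are placeholders.
    docker pull us-central1-docker.pkg.dev/${ARTIFACT_PROJECT_ID}/YOUR_REPOSITORY_NAME/tf2-cpu.2-13:0.1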

The pipeline is connected to a Cloud Source Repository with a simple structure:

├── README.md
└── images
    ├── tf2-cpu.2-13:0.1
    │   └── Dockerfile
    └── tf2-gpu.2-13:0.1
        └── Dockerfile

For the purposes of this example, the pipeline is configured to monitor the main branch of this repository.

Each folder under images carries the full name and tag of the image to be built. When a change is pushed to the main branch, the pipeline analyzes which files have changed, builds the affected image, and places it in the artifact repository. For example, if the Dockerfile in the tf2-cpu.2-13:0.1 folder changes, or the folder itself is renamed, the pipeline builds a new image and tags it based on the folder name that houses the Dockerfile.
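
As a hedged illustration of this behavior (not the pipeline's actual build configuration), change-detection logic of roughly this shape would rebuild only the image folders touched by the last push; REGION, PROJECT_ID, and REPO_NAME are placeholders:

    # Sketch: rebuild any image whose folder changed in the last push.
    # REGION, PROJECT_ID, and REPO_NAME are placeholders, not values from this guide.
    for dir in $(git diff --name-only HEAD~1 HEAD -- images/ | cut -d/ -f2 | sort -u); do
      docker build -t "${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_NAME}/${dir}" "images/${dir}"
      docker push "${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_NAME}/${dir}"
    done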

Once pushed, you can access the pipeline's build logs by navigating to the artifacts project created in step 4-projects. Retrieve its project ID with:

    terraform -chdir="gcp-projects/ml_business_unit/shared/" output -raw common_artifacts_project_id
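
If you prefer the CLI over the console, you can also list recent builds in that project; add --region if your builds run in a non-default region:

    export ARTIFACT_PROJECT_ID=$(terraform -chdir="gcp-projects/ml_business_unit/shared/" output -raw common_artifacts_project_id)
    gcloud builds list --project=${ARTIFACT_PROJECT_ID} --limit=5
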
  1. Clone the ml-artifact-publish repo.

    gcloud source repos clone ml-artifact-publish --project=${INFRA_PIPELINE_PROJECT_ID}
  2. Navigate into the repo, change to a non-main branch, and copy the contents of this stage into the new repo. All subsequent steps assume you are running them from the ml-artifact-publish directory. If you run them from another directory, adjust your copy paths accordingly.

    cd ml-artifact-publish/
    git checkout -b plan
    
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/projects/artifact-publish/ .
    cp -R ../terraform-google-enterprise-genai/5-app-infra/modules/ ./modules
    cp ../terraform-google-enterprise-genai/build/cloudbuild-tf-* .
    cp ../terraform-google-enterprise-genai/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename common.auto.example.tfvars to common.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
  4. Update the file with values from your environment and 0-bootstrap. See the machine learning business unit env folder README.md files for additional information on the values in the common.auto.tfvars file.

    export remote_state_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/0-bootstrap/" output -raw projects_gcs_bucket_tfstate)
    echo "remote_state_bucket = ${remote_state_bucket}"
    sed -i "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./common.auto.tfvars
  5. Update backend.tf with your bucket from the infra pipeline output.

    export backend_bucket=$(terraform -chdir="../gcp-projects/ml_business_unit/shared/" output -json state_buckets | jq '."ml-artifact-publish"' --raw-output)
    echo "backend_bucket = ${backend_bucket}"
    
    for i in `find -name 'backend.tf'`; do sed -i "s/UPDATE_APP_INFRA_BUCKET/${backend_bucket}/" $i; done
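
    Optionally, confirm that no placeholder remains (grep exits non-zero when nothing matches):

    grep -r UPDATE_APP_INFRA_BUCKET . || echo "all backend.tf files updated"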
  6. Commit changes.

    git add .
    git commit -m 'Initialize repo'
  7. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply. Review the plan output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_INFRA_PIPELINE_PROJECT_ID

    git push --set-upstream origin plan
  8. Merge changes to production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_INFRA_PIPELINE_PROJECT_ID

    git checkout -b production
    git push origin production
  9. cd out of the ml-artifact-publish repository.

    cd ..
  10. Navigate to the project that was output from ${ARTIFACT_PROJECT_ID} (exported in the next section) in the Google Cloud console to view the first run of images being built.

Configuring Cloud Source Repository of Artifact Application

  1. Grab the Artifact Project ID.

    export ARTIFACT_PROJECT_ID=$(terraform -chdir="gcp-projects/ml_business_unit/shared" output -raw common_artifacts_project_id)
    echo ${ARTIFACT_PROJECT_ID}
  2. Clone the freshly minted Cloud Source Repository that was created for this project.

    gcloud source repos clone publish-artifacts --project=${ARTIFACT_PROJECT_ID}
  3. Enter the repo folder and copy over the artifact files from the 5-app-infra/source_repos/artifact-publish folder.

    cd publish-artifacts
    git checkout -b main
    
    git commit -m "Initialize Repository" --allow-empty
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/source_repos/artifact-publish/ .
  4. Commit changes and push your main branch to the new repo.

    git add .
    git commit -m 'Build Images'
    
    git push --set-upstream origin main
  5. cd out of the publish-artifacts repository.

    cd ..

Service Catalog Pipeline Configuration

This step has two main purposes:

  1. To deploy a pipeline and a bucket linked to a Cloud Source Repository that houses Terraform modules for use in Service Catalog. Although Service Catalog itself must be deployed manually, the modules it serves can still be automated.

  2. To deploy infrastructure for operational environments (i.e., non-production and production).

The reasoning behind using one repository with two deployment methodologies is how closely the interactive (development) and operational environments mirror each other.

The repository has the structure (truncated for brevity):

ml_business_unit
├── development
├── non-production
├── production
modules
├── bucket
│   ├── README.md
│   ├── data.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── provider.tf
│   └── variables.tf
├── composer
│   ├── README.md
│   ├── data.tf
│   ├── iam.roles.tf
│   ├── iam.users.tf
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── provider.tf
│   ├── terraform.tfvars.example
│   ├── variables.tf
│   └── vpc.tf
├── cryptography
│   ├── README.md
│   ├── crypto_key
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── key_ring
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf

Each folder under modules represents a Terraform module. When any of the module folders changes, the pipeline determines which modules have changed since the last push, compresses each one into a tar.gz archive, and places it in a bucket for Service Catalog to access.
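
As a hedged sketch of that packaging step (illustrative only, not the pipeline's actual implementation; BUCKET is a placeholder):

    # Sketch: package each changed module and upload it for Service Catalog to access.
    # BUCKET is a placeholder, not an output from this guide.
    for module in $(git diff --name-only HEAD~1 HEAD -- modules/ | cut -d/ -f2 | sort -u); do
      tar -czf "${module}.tar.gz" -C modules "${module}"
      gsutil cp "${module}.tar.gz" "gs://${BUCKET}/modules/${module}.tar.gz"
    done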

The pipeline listens for changes on the main branch of this repository; those changes trigger the upload of modules to Service Catalog.

The pipeline also listens for changes made to the plan, development, non-production, and production branches, which are used to deploy infrastructure to each project.

  1. Clone the ml-service-catalog repo.

    gcloud source repos clone ml-service-catalog --project=${INFRA_PIPELINE_PROJECT_ID}
  2. Navigate into the repo, change to a non-main branch, and copy the contents of this stage into the new repo. All subsequent steps assume you are running them from the ml-service-catalog directory. If you run them from another directory, adjust your copy paths accordingly.

    cd ml-service-catalog
    git checkout -b plan
    
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/projects/service-catalog/ .
    cp -R ../terraform-google-enterprise-genai/5-app-infra/modules/ ./modules
    cp ../terraform-google-enterprise-genai/build/cloudbuild-tf-* .
    cp ../terraform-google-enterprise-genai/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename common.auto.example.tfvars to common.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
  4. Update the file with values from your environment and 0-bootstrap. See the ml business unit envs folder README.md files for additional information on the values in the common.auto.tfvars file.

    export remote_state_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/0-bootstrap/" output -raw projects_gcs_bucket_tfstate)
    echo "remote_state_bucket = ${remote_state_bucket}"
    sed -i "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./common.auto.tfvars
  5. Update backend.tf with your bucket from the infra pipeline output.

    export backend_bucket=$(terraform -chdir="../gcp-projects/ml_business_unit/shared/" output -json state_buckets | jq '."ml-service-catalog"' --raw-output)
    echo "backend_bucket = ${backend_bucket}"
    
    for i in `find -name 'backend.tf'`; do sed -i "s/UPDATE_APP_INFRA_BUCKET/${backend_bucket}/" $i; done
  6. Update the log_bucket variable with the value of the logs_export_storage_bucket_name.

    terraform -chdir="../gcp-org/envs/shared" init
    export log_bucket=$(terraform -chdir="../gcp-org/envs/shared" output -raw logs_export_storage_bucket_name)
    echo "log_bucket = ${log_bucket}"
    sed -i "s/REPLACE_LOG_BUCKET/${log_bucket}/" ./common.auto.tfvars
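
    You can confirm the substitution landed by inspecting the variable:

    grep log_bucket ./common.auto.tfvars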
  7. Commit changes.

    git add .
    git commit -m 'Initialize repo'
  8. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply. Review the plan output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_INFRA_PIPELINE_PROJECT_ID

    git push --set-upstream origin plan
  9. Merge changes to production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_INFRA_PIPELINE_PROJECT_ID

    git checkout -b production
    git push origin production
  10. cd out of the ml-service-catalog repository.

    cd ..

Configuring Cloud Source Repository of Service Catalog Solutions Pipeline

  1. Grab the Service Catalog Project ID.

    export SERVICE_CATALOG_PROJECT_ID=$(terraform -chdir="gcp-projects/ml_business_unit/shared" output -raw service_catalog_project_id)
    echo ${SERVICE_CATALOG_PROJECT_ID}
  2. Clone the freshly minted Cloud Source Repository that was created for this project.

    gcloud source repos clone service-catalog --project=${SERVICE_CATALOG_PROJECT_ID}
  3. Enter the repo folder and copy over the Service Catalog files from the 5-app-infra/source_repos/service-catalog folder.

    cd service-catalog/
    git checkout -b main
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/source_repos/service-catalog/ .
    git add img
    git commit -m "Add img directory"
  4. Commit changes and push main branch to the new repo.

    git add modules
    git commit -m 'Initialize Service Catalog Build Repo'
    
    git push --set-upstream origin main
  5. cd out of the service-catalog repository.

    cd ..
  6. Navigate to the project that was output from ${SERVICE_CATALOG_PROJECT_ID} in the Google Cloud console to view the first run of the pipeline.

Run Terraform locally

Artifacts Application

  1. Create ml-artifact-publish directory at the same level as terraform-google-enterprise-genai.

    mkdir ml-artifact-publish
  2. Navigate into the directory and copy the contents of this stage into it. All subsequent steps assume you are running them from the ml-artifact-publish directory. If you run them from another directory, adjust your copy paths accordingly.

    cd ml-artifact-publish/
    
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/projects/artifact-publish/ .
    cp -R ../terraform-google-enterprise-genai/5-app-infra/modules/ ./modules
    cp ../terraform-google-enterprise-genai/build/cloudbuild-tf-* .
    cp ../terraform-google-enterprise-genai/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename common.auto.example.tfvars files to common.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
  4. Update the common.auto.tfvars file with values from your environment.

  5. Use terraform output to get the project backend bucket value from 0-bootstrap.

    export remote_state_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/0-bootstrap/" output -raw projects_gcs_bucket_tfstate)
    echo "remote_state_bucket = ${remote_state_bucket}"
    sed -i "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./common.auto.tfvars
  6. Grant the user who will be running ./tf-wrapper.sh the Service Account Token Creator role on the ml Terraform service account.

  7. Provide the user permissions to run Terraform locally by granting the serviceAccountTokenCreator role:

    member="user:$(gcloud auth list --filter="status=ACTIVE" --format="value(account)")"
    echo ${member}
    
    project_id=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -raw cloudbuild_project_id)
    echo ${project_id}
    
    terraform_sa=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json terraform_service_accounts | jq '."ml-artifact-publish"' --raw-output)
    echo ${terraform_sa}
    
    gcloud iam service-accounts add-iam-policy-binding ${terraform_sa} --project ${project_id} --member="${member}" --role="roles/iam.serviceAccountTokenCreator"
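
    Optionally, verify that the binding took effect with a read-only check:

    gcloud iam service-accounts get-iam-policy ${terraform_sa} --project=${project_id}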
  8. Update backend.tf with your bucket from the infra pipeline output.

    export backend_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json state_buckets | jq '."ml-artifact-publish"' --raw-output)
    echo "backend_bucket = ${backend_bucket}"
    
    for i in `find -name 'backend.tf'`; do sed -i "s/UPDATE_APP_INFRA_BUCKET/${backend_bucket}/" $i; done

We will now deploy each of our environments (development/production/non-production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 5-app-infra step. Only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.
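
If you use the gcloud CLI, one way to install the component is shown below; see the linked instructions for alternatives.

    gcloud components install terraform-tools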

  1. Use terraform output to get the Infra Pipeline Project ID from 4-projects output.

    export INFRA_PIPELINE_PROJECT_ID=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -raw cloudbuild_project_id)
    echo ${INFRA_PIPELINE_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json terraform_service_accounts | jq '."ml-artifact-publish"' --raw-output)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  2. Run init and plan and review output for environment shared (common).

    ./tf-wrapper.sh init shared
    ./tf-wrapper.sh plan shared
  3. Run validate and check for violations.

    ./tf-wrapper.sh validate shared $(pwd)/../terraform-google-enterprise-genai/policy-library ${INFRA_PIPELINE_PROJECT_ID}
  4. Run apply shared.

    ./tf-wrapper.sh apply shared

If you received any errors or made any changes to the Terraform config or common.auto.tfvars, you must re-run ./tf-wrapper.sh plan <env> before running ./tf-wrapper.sh apply <env>.

After executing this stage, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

    unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT
  5. cd out of the repository.

    cd ..

Configuring Cloud Source Repository of Artifact Application

  1. The next instructions assume that you are at the same level as the terraform-google-enterprise-genai folder.

  2. Grab the Artifact Project ID.

    export ARTIFACT_PROJECT_ID=$(terraform -chdir="terraform-google-enterprise-genai/4-projects/ml_business_unit/shared" output -raw common_artifacts_project_id)
    echo ${ARTIFACT_PROJECT_ID}
  3. Clone the freshly minted Cloud Source Repository that was created for this project.

    gcloud source repos clone publish-artifacts --project=${ARTIFACT_PROJECT_ID}
  4. Enter the repo folder and copy over the artifact files from the 5-app-infra/source_repos/artifact-publish folder.

    cd publish-artifacts
    git checkout -b main
    
    git commit -m "Initialize Repository" --allow-empty
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/source_repos/artifact-publish/ .
  5. Commit changes and push your main branch to the new repo.

    git add .
    git commit -m 'Build Images'
    
    git push --set-upstream origin main
  6. cd out of the publish-artifacts repository.

    cd ..

Service Catalog Configuration

  1. Create ml-service-catalog directory at the same level as terraform-google-enterprise-genai.

    mkdir ml-service-catalog
  2. Navigate into the directory and copy the contents of this stage into it. All subsequent steps assume you are running them from the ml-service-catalog directory. If you run them from another directory, adjust your copy paths accordingly.

    cd ml-service-catalog
    
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/projects/service-catalog/ .
    cp -R ../terraform-google-enterprise-genai/5-app-infra/modules/ ./modules
    cp ../terraform-google-enterprise-genai/build/cloudbuild-tf-* .
    cp ../terraform-google-enterprise-genai/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename common.auto.example.tfvars to common.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
  4. Update the file with values from your environment and 0-bootstrap. See the ml business unit envs folder README.md files for additional information on the values in the common.auto.tfvars file.

    export remote_state_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/0-bootstrap/" output -raw projects_gcs_bucket_tfstate)
    echo "remote_state_bucket = ${remote_state_bucket}"
    sed -i "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./common.auto.tfvars
  5. Update backend.tf with your bucket from the infra pipeline output.

    export backend_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json state_buckets | jq '."ml-service-catalog"' --raw-output)
    echo "backend_bucket = ${backend_bucket}"
    
    for i in `find -name 'backend.tf'`; do sed -i "s/UPDATE_APP_INFRA_BUCKET/${backend_bucket}/" $i; done
  6. Update the log_bucket variable with the value of the logs_export_storage_bucket_name.

    export log_bucket=$(terraform -chdir="../terraform-google-enterprise-genai/1-org/envs/shared" output -raw logs_export_storage_bucket_name)
    echo "log_bucket = ${log_bucket}"
    sed -i "s/REPLACE_LOG_BUCKET/${log_bucket}/" ./common.auto.tfvars
  7. Provide the user permissions to run Terraform locally by granting the serviceAccountTokenCreator role:

    (cd ../terraform-google-enterprise-genai/4-projects && ./tf-wrapper.sh init shared)
    
    member="user:$(gcloud auth list --filter="status=ACTIVE" --format="value(account)")"
    echo ${member}
    
    project_id=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -raw cloudbuild_project_id)
    echo ${project_id}
    
    terraform_sa=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json terraform_service_accounts | jq '."ml-service-catalog"' --raw-output)
    echo ${terraform_sa}
    
    gcloud iam service-accounts add-iam-policy-binding ${terraform_sa} --project ${project_id} --member="${member}" --role="roles/iam.serviceAccountTokenCreator"

We will now deploy each of our environments (development/production/non-production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 5-app-infra step. Only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.

  1. Use terraform output to get the Infra Pipeline Project ID from 4-projects output.

    export INFRA_PIPELINE_PROJECT_ID=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -raw cloudbuild_project_id)
    echo ${INFRA_PIPELINE_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../terraform-google-enterprise-genai/4-projects/ml_business_unit/shared/" output -json terraform_service_accounts | jq '."ml-service-catalog"' --raw-output)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  2. Run init and plan and review output for environment shared (common).

    ./tf-wrapper.sh init shared
    ./tf-wrapper.sh plan shared
  3. Run validate and check for violations.

    ./tf-wrapper.sh validate shared $(pwd)/../terraform-google-enterprise-genai/policy-library ${INFRA_PIPELINE_PROJECT_ID}
  4. Run apply shared.

    ./tf-wrapper.sh apply shared

If you received any errors or made any changes to the Terraform config or common.auto.tfvars, you must re-run ./tf-wrapper.sh plan <env> before running ./tf-wrapper.sh apply <env>.

After executing this stage, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

    unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT
  5. cd out of the repository.

    cd ..

Configuring Cloud Source Repository of Service Catalog Solutions Pipeline

  1. The next instructions assume that you are at the same level as the terraform-google-enterprise-genai folder.

  2. Grab the Service Catalog Project ID.

    export SERVICE_CATALOG_PROJECT_ID=$(terraform -chdir="terraform-google-enterprise-genai/4-projects/ml_business_unit/shared" output -raw service_catalog_project_id)
    echo ${SERVICE_CATALOG_PROJECT_ID}
  3. Clone the freshly minted Cloud Source Repository that was created for this project.

    gcloud source repos clone service-catalog --project=${SERVICE_CATALOG_PROJECT_ID}
  4. Enter the repo folder and copy over the Service Catalog files from the 5-app-infra/source_repos/service-catalog folder.

    cd service-catalog/
    git checkout -b main
    
    cp -RT ../terraform-google-enterprise-genai/5-app-infra/source_repos/service-catalog/ .
    git add img
    git commit -m "Add img directory"
  5. Commit changes and push main branch to the new repo.

    git add modules
    git commit -m 'Initialize Service Catalog Build Repo'
    
    git push --set-upstream origin main
  6. cd out of the service-catalog repository.

    cd ..
  7. Navigate to the project that was output from ${SERVICE_CATALOG_PROJECT_ID} in the Google Cloud console to view the first run of the pipeline:

    https://console.cloud.google.com/cloud-build/builds;region=us-central1?orgonly=true&project=${SERVICE_CATALOG_PROJECT_ID}&supportedpurview=project