diff --git a/docs/2.0/docs/accountfactory/installation/addingnewrepo.md b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
index 7ea6610b8..c4be0a138 100644
--- a/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
+++ b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md
@@ -1,13 +1,19 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CustomizableValue from '/src/components/CustomizableValue';
+
# Adding Account Factory to a new repository
-To configure Gruntwork Account Factory in a new GitHub repository, the following steps are required (and will be explained in detail below):
+To configure Gruntwork Account Factory in a new GitHub/GitLab repository, the following steps are required (and will be explained in detail below):
+
+1. Create your `infrastructure-live-root`, `infrastructure-live-access-control`, and `infrastructure-catalog` repositories.
+2. Configure authentication for the repositories to ensure that the required access tokens are available.
+
+
-1. Create your `infrastructure-live-root` repository using Gruntwork's GitHub template.
-2. Configure the Gruntwork.io GitHub App to authorize your `infrastructure-live-root` repository, or ensure that the appropriate machine user tokens are set up as repository or organization secrets.
-3. Update the Bootstrap Workflow to configure your AWS settings.
-4. Execute the Bootstrap Workflow in your `infrastructure-live-root` repository to generate pull requests and additional repositories.
+
-## Creating the infrastructure-live-root repository
+<h2>Creating the infrastructure-live-root repository</h2>
Gruntwork provides a pre-configured git repository template that incorporates best practices while allowing for customization.
@@ -19,29 +25,29 @@ The workflow can optionally scaffold the `infrastructure-live-access-control` an
Navigate to the template repository and select **Use this template** -> **Create a new Repository**. Choose your organization as the owner, add a description if desired, set the repository to **private**, and click **Create repository**.
-## Configuring Gruntwork app settings
+<h2>Configuring Gruntwork app settings</h2>
Use the Gruntwork.io GitHub App to [add the repository as an Infra Root repository](/2.0/docs/pipelines/installation/viagithubapp#configuration).
If using the [machine user model](/2.0/docs/pipelines/installation/viamachineusers), ensure the `INFRA_ROOT_WRITE_TOKEN` (and `ORG_REPO_ADMIN_TOKEN` for enterprise customers) is added to the repository as a secret or configured as an organization secret.
-## Updating the Bootstrap Workflow
+<h2>Updating the Bootstrap Workflow</h2>
Return to your `infrastructure-live-root` repository and follow the `README` instructions to update the bootstrap workflow for IaC Foundations. Provide details about your AWS organization, accounts, and default values for new account provisioning.
-## Running the workflow
+<h2>Running the workflow</h2>
Follow the instructions in your `infrastructure-live-root` repository to execute the Bootstrap Workflow. Gruntwork support is available to address any questions that arise. During the workflow execution, you can choose to create the `infrastructure-live-access-control` and `infrastructure-catalog` repositories. These repositories will be created in your GitHub organization using values defined in the workflow configuration.
-### Infrastructure live access control
+<h3>Infrastructure live access control</h3>
This repository is primarily for Enterprise customers but is recommended for all users. When running the Bootstrap Workflow in your `infrastructure-live-root` repository, select the option to "Bootstrap the infrastructure-access-control repository."
-### Infrastructure catalog
+<h3>Infrastructure catalog</h3>
The Bootstrap Workflow also creates an empty `infrastructure-catalog` repository. This repository is used to store Terraform/OpenTofu modules authored by your organization for internal use. During the Bootstrap Workflow execution in your `infrastructure-live-root` repository, select the option to "Bootstrap the infrastructure-catalog repository."
-## Completing instructions in Bootstrap Pull Requests
+<h2>Completing instructions in Bootstrap Pull Requests</h2>
Each of your repositories will contain a Bootstrap Pull Request. Follow the instructions in these Pull Requests to finalize the setup of your IaC repositories.
@@ -50,3 +56,559 @@ Each of your repositories will contain a Bootstrap Pull Request. Follow the inst
The bootstrapping pull requests include pre-configured files, such as a `.mise.toml` file that specifies versions of OpenTofu and Terragrunt. Ensure you review and update these configurations to align with your organization's requirements.
:::
+
+
+
+
+This guide walks you through the process of setting up a new GitLab Project with the Gruntwork Platform. By the end, you'll have a fully configured GitLab CI/CD pipeline that can create new AWS accounts and deploy infrastructure changes automatically.
+
+:::info
+To use Gruntwork Pipelines in an **existing** GitLab repository, see this [guide](/2.0/docs/pipelines/installation/addinggitlabrepo).
+:::
+
+<h2>Prerequisites</h2>
+
+Before you begin, make sure you have:
+
+- Basic familiarity with Git, GitLab, and infrastructure as code concepts
+- Completed the [AWS Landing Zone setup](/2.0/docs/accountfactory/prerequisites/awslandingzone)
+- Programmatic access to the AWS accounts created in the [AWS Landing Zone setup](/2.0/docs/accountfactory/prerequisites/awslandingzone)
+- Completed the [Pipelines Auth setup for GitLab](/2.0/docs/pipelines/installation/viamachineusers#gitlab) and set up a machine user with appropriate PAT tokens
+- Local access to Gruntwork's GitHub repositories, specifically the [architecture catalog](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/)
+
+
+Additional setup for **custom GitLab instances only**
+
+<h3>Fork the Pipelines workflow project</h3>
+
+You must [fork](https://docs.gitlab.com/user/project/repository/forking_workflow/#create-a-fork) Gruntwork's public [Pipelines workflow project](https://gitlab.com/gruntwork-io/pipelines-workflows) into your own GitLab instance.
+This is necessary because Gruntwork Pipelines uses [GitLab CI/CD components](/2.0/docs/pipelines/architecture/ci-workflows), and GitLab requires components to reside within the [same GitLab instance as the project referencing them](https://docs.gitlab.com/ci/components/#use-a-component).
+
+When creating the fork, we recommend configuring it as a public mirror of the original Gruntwork project and ensuring that tags are included.
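+
+If you prefer to seed the fork manually rather than through GitLab's mirroring UI, a rough sketch looks like this (the target project path on your instance is a placeholder, and the empty project must exist there first):
+
+```bash
+# Mirror-clone the public project, then push every branch and tag
+# to an empty project on your own GitLab instance.
+git clone --mirror https://gitlab.com/gruntwork-io/pipelines-workflows.git
+cd pipelines-workflows.git
+git push --mirror git@gitlab.acme.io:gruntwork-io/pipelines-workflows.git
+```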
+
+<h3>Ensure OIDC configuration and JWKS are publicly accessible</h3>
+
+This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step.
+
+1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. an S3 bucket). This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWTs generated by your custom instance.
+2. Note the publicly accessible issuer URL (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps.
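+
+For illustration, a rough sketch of re-hosting the metadata in an S3 bucket (the instance URL and bucket name are placeholders; follow GitLab's guide for the authoritative steps, including pointing the issuer and `jwks_uri` at the public location):
+
+```bash
+# Fetch the discovery document and JWKS from the private instance
+# (run from somewhere that can reach it), then publish them publicly.
+curl -s https://gitlab.internal.acme.io/.well-known/openid-configuration -o openid-configuration
+curl -s https://gitlab.internal.acme.io/oauth/discovery/keys -o jwks.json
+aws s3 cp openid-configuration s3://YOUR_BUCKET_NAME/.well-known/openid-configuration
+aws s3 cp jwks.json s3://YOUR_BUCKET_NAME/oauth/discovery/keys
+```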
+
+
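+In this guide, you will: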
+1. Create a new GitLab project for your `infrastructure-live-root` repository.
+1. Install dependencies.
+1. Configure the variables required to run the infrastructure-live-root boilerplate template.
+1. Create your `infrastructure-live-root` repository contents using Gruntwork's architecture-catalog template.
+1. Apply the account baselines to your AWS accounts.
+
+
+<h2>Create a new infrastructure-live-root</h2>
+
+<h3>Authorize Your GitLab Group with Gruntwork</h3>
+
+To use Gruntwork Pipelines with GitLab, your group needs authorization from Gruntwork. Email your Gruntwork account manager or support@gruntwork.io with:
+
+ ```
+ GitLab group name(s): $$GITLAB_GROUP_NAME$$ (e.g. acme-io)
+ GitLab Issuer URL: $$ISSUER_URL$$ (For most users this is the URL of your GitLab instance, e.g. https://gitlab.acme.io; if your instance is not publicly accessible, this should be a separate, publicly accessible URL per the custom GitLab instance setup above, e.g. https://s3.amazonaws.com/YOUR_BUCKET_NAME/)
+ Organization name: $$ORGANIZATION_NAME$$ (e.g. Acme, Inc.)
+ ```
+
+Continue with the rest of the guide while you await confirmation that your group has been authorized.
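+
+If you are unsure which issuer URL your instance emits, you can read it from the OIDC discovery document; this sketch assumes `curl` and `jq` are installed and uses a placeholder hostname:
+
+```bash
+# The "issuer" field is the value to include in your email to Gruntwork.
+curl -s https://gitlab.acme.io/.well-known/openid-configuration | jq -r '.issuer'
+```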
+
+<h3>Create a new GitLab project</h3>
+
+1. Navigate to the group.
+1. Click the **New Project** button.
+1. Enter a name for the project, e.g. `infrastructure-live-root`.
+1. Click **Create Project**.
+1. Clone the project to your local machine.
+1. Navigate to the project directory.
+1. Create a new branch `bootstrap-repository`.
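+
+For example, steps 5 through 7 might look like this (the clone URL is a placeholder for your own project):
+
+```bash
+git clone git@gitlab.com:acme/infrastructure-live-root.git
+cd infrastructure-live-root
+git switch -c bootstrap-repository
+```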
+
+<h3>Install dependencies</h3>
+
+1. Install [mise](https://mise.jdx.dev/getting-started.html) on your machine.
+1. Activate mise in your shell:
+
+ ```bash
+ # For Bash
+ echo 'eval "$(~/.local/bin/mise activate bash)"' >> ~/.bashrc
+
+ # For Zsh
+ echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc
+
+ # For Fish
+ echo 'mise activate fish | source' >> ~/.config/fish/config.fish
+ ```
+
+1. Add the following to a `.mise.toml` file in the root of your project:
+
+ ```toml title=".mise.toml"
+ [tools]
+ boilerplate = "0.8.1"
+ opentofu = "1.10.0"
+ terragrunt = "0.81.6"
+ awscli = "latest"
+ ```
+
+1. Run `mise install`.
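+
+1. Optionally, verify that the pinned tools are now available, e.g.:
+
+    ```bash
+    mise ls
+    terragrunt --version
+    tofu --version
+    ```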
+
+
+<h3>Bootstrap the repository</h3>
+
+Gruntwork provides a boilerplate [template](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/tree/main/templates/devops-foundations-infrastructure-live-root) that incorporates best practices while allowing for customization. The template is designed to scaffold best-practices Terragrunt configurations. It includes patterns for module defaults, global variables, and account baselines. Additionally, it integrates Gruntwork Pipelines.
+
+<h4>Configure the variables required to run the boilerplate template</h4>
+
+Copy the content below to a `vars.yaml` file in the root of your project and update the placeholder values with your own.
+
+```yaml title="vars.yaml"
+SCMProvider: GitLab
+
+# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name
+# Example: acme/prod
+SCMProviderGroup: $$GITLAB_GROUP_NAME$$
+
+# The GitLab project to use for the infrastructure-live-root repository.
+SCMProviderRepo: infrastructure-live-root
+
+# The name of the project to use for the infrastructure-live-access-control repository.
+AccessControlRepoName: infrastructure-live-access-control
+
+# The name of the project to use for the infrastructure-catalog repository.
+InfraModulesRepoName: infrastructure-catalog
+
+# The base URL of your GitLab group repos. E.g., gitlab.com/
+RepoBaseUrl: $$GITLAB_GROUP_REPO_BASE_URL$$
+
+# The name of the branch to deploy to.
+# Example: main
+DeployBranchName: $$DEPLOY_BRANCH_NAME$$
+
+# The AWS account ID for the management account
+# Example: "123456789012"
+AwsManagementAccountId: $$AWS_MANAGEMENT_ACCOUNT_ID$$
+
+# The AWS account ID for the security account
+# Example: "123456789013"
+AwsSecurityAccountId: $$AWS_SECURITY_ACCOUNT_ID$$
+
+# The AWS account ID for the logs account
+# Example: "123456789014"
+AwsLogsAccountId: $$AWS_LOGS_ACCOUNT_ID$$
+
+# The AWS account ID for the shared account
+# Example: "123456789015"
+AwsSharedAccountId: $$AWS_SHARED_ACCOUNT_ID$$
+
+# The AWS account Email for the logs account
+# Example: logs@acme.com
+AwsLogsAccountEmail: $$AWS_LOGS_ACCOUNT_EMAIL$$
+
+# The AWS account Email for the management account
+# Example: management@acme.com
+AwsManagementAccountEmail: $$AWS_MANAGEMENT_ACCOUNT_EMAIL$$
+
+# The AWS account Email for the security account
+# Example: security@acme.com
+AwsSecurityAccountEmail: $$AWS_SECURITY_ACCOUNT_EMAIL$$
+
+# The AWS account Email for the shared account
+# Example: shared@acme.com
+AwsSharedAccountEmail: $$AWS_SHARED_ACCOUNT_EMAIL$$
+
+# The name prefix to use for creating resources, e.g. the S3 bucket for OpenTofu state files
+# Example: acme
+OrgNamePrefix: $$ORG_NAME_PREFIX$$
+
+# The default region for AWS Resources
+# Example: us-east-1
+DefaultRegion: $$DEFAULT_REGION$$
+
+################################################################################
+# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
+################################################################################
+
+# If you are an enterprise customer, set this to true.
+# IsEnterprise: true
+
+# List of the git repositories to populate for the catalog
+# CatalogRepositories:
+# - github.com/gruntwork-io/terraform-aws-service-catalog
+
+# The AWS partition to use. Options: aws, aws-us-gov, aws-cn
+# AWSPartition: aws
+
+# The name of the IAM role to use for the plan job.
+# PlanIAMRoleName: root-pipelines-plan
+
+# The name of the IAM role to use for the apply job.
+# ApplyIAMRoleName: root-pipelines-apply
+
+# The default tags to apply to all resources.
+# DefaultTags:
+# "{{ .OrgNamePrefix }}:Team": "DevOps"
+
+# The version for terraform-aws-security module to use for OIDC provider and roles provisioning
+# SecurityModulesVersion: v0.75.18
+
+# The URL of the custom SCM provider instance. Set this if you are using a custom instance of GitLab.
+# CustomSCMProviderInstanceURL: https://gitlab.example.io
+
+# The relative path from the host server to the custom pipelines workflow repository. Set this if you are using a custom/forked instance of the pipelines workflow.
+# CustomWorkflowHostRelativePath: pipelines-workflows
+```
+
+<h4>Generate the repository contents</h4>
+
+1. Run the following command, from the root of your project, to generate the `infrastructure-live-root` repository contents:
+
+ ```bash
+ boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-root/?ref=4.0.1" --output-folder . --var-file vars.yaml --non-interactive
+ ```
+
+ This command adds all code required to set up your `infrastructure-live-root` repository.
+1. Remove the boilerplate dependency from the `.mise.toml` file. It is no longer needed.
+
+1. Commit your local changes and push them to the `bootstrap-repository` branch.
+
+ ```bash
+ git add .
+ git commit -m "Bootstrap infrastructure-live-root repository initial commit [skip ci]"
+ git push origin bootstrap-repository
+ ```
+
+    The `[skip ci]` tag in the commit message skips the CI/CD process for now; you will manually apply the infrastructure baselines to your AWS accounts in a later step.
+
+1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand what will be applied to your AWS accounts. The generated files fall under the following categories:
+
+ - GitLab Pipelines workflow file
+ - Gruntwork Pipelines configuration files
+ - Module defaults files for infrastructure code
+ - Account baselines and GitLab OIDC module scaffolding files for your core AWS accounts: management, security, logs and shared.
+
+<h3>Apply the account baselines to your AWS accounts</h3>
+
+You will manually `terragrunt apply` the generated infrastructure baselines to get your accounts bootstrapped **before** merging this content into your main branch.
+
+:::tip
+You can use the AWS SSO portal to obtain the temporary AWS credentials needed for the following steps:
+
+1. Sign in to the portal page and select your preferred account to reveal the roles available to your SSO user.
+1. Navigate to the "Access keys" tab adjacent to the "AWSAdministratorAccess" role.
+1. Copy the "AWS environment variables" provided and paste them into your terminal.
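+
+The pasted snippet looks roughly like this (placeholder values shown):
+
+```bash
+export AWS_ACCESS_KEY_ID="ASIA..."
+export AWS_SECRET_ACCESS_KEY="..."
+export AWS_SESSION_TOKEN="..."
+```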
+:::
+
+
+1. - [ ] Apply infrastructure changes in the **management** account
+
+ 1. - [ ] Obtain AWS CLI Administrator credentials for the management account
+
+ 1. - [ ] Navigate to the management account folder
+
+ ```bash
+ cd management/
+ ```
+
+ 1. - [ ] Using your credentials, run `terragrunt plan`.
+
+ ```bash
+ terragrunt run --all --non-interactive --backend-bootstrap plan
+ ```
+
+ 1. - [ ] After the plan succeeds, apply the changes:
+
+ ```bash
+ terragrunt run --all --non-interactive apply
+ ```
+
+    1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files; the lock files will be committed in the final step of the setup. For example:
+
+ ```bash
+        terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
+ ```
+
+ 1. - [ ] Update Permissions for Account Factory Portfolio
+
+        The account factory pipeline _will fail_ until you grant the pipelines roles (`root-pipelines-plan` and `root-pipelines-apply`) access to the portfolio. This step **must be done after** you provision the pipelines roles in the management account (where Control Tower is set up).
+
+        Access to the portfolio is separate from IAM access; it **must** be granted in the Service Catalog console.
+
+ #### **Steps to grant access**
+
+ To grant access to the Account Factory Portfolio, you **must** be an individual with Service Catalog administrative permissions.
+
+ 1. Log into the management AWS account
+ 1. Go into the Service Catalog console
+        1. Ensure you are in your default region (the Control Tower region)
+ 1. Select the **Portfolios** option in **Administration** from the left side navigation panel
+ 1. Click on the portfolio named **AWS Control Tower Account Factory Portfolio**
+ 1. Select the **Access** tab
+ 1. Click the **Grant access** button
+ 1. In the **Access type** section, leave the default value of **IAM Principal**
+ 1. Select the **Roles** tab in the lower section
+        1. Enter `root-pipelines` into the search bar; there should be two results (`root-pipelines-plan` and `root-pipelines-apply`). Check the box to the left of each role name.
+ 1. Click the **Grant access** button in the lower right hand corner
+
+ 1. - [ ] Increase Account Quota Limit (OPTIONAL)
+
+        Note that DevOps Foundations makes it so convenient to create accounts that you are likely to encounter one of the soft limits AWS imposes on the number of accounts you can create.
+
+ You may need to request a limit increase for the number of accounts you can create in the management account, as the default is currently 10 accounts.
+
+ To request an increase to this limit, search for "Organizations" in the AWS management console [here](https://console.aws.amazon.com/servicequotas/home/dashboard) and request a limit increase to a value that makes sense for your organization.
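+
+        If you prefer the CLI, a rough sketch of the same request (this assumes Service Quotas permissions in the management account; look up the quota code from the first command's output rather than hard-coding it):
+
+        ```bash
+        # List the AWS Organizations quotas and find the account limit...
+        aws service-quotas list-service-quotas --service-code organizations \
+          --query "Quotas[?contains(QuotaName, 'accounts')]"
+        # ...then request an increase using the QuotaCode from the output.
+        aws service-quotas request-service-quota-increase \
+          --service-code organizations --quota-code "$QUOTA_CODE" --desired-value 20
+        ```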
+
+1. - [ ] Apply infrastructure changes in the **logs** account
+
+ 1. - [ ] Obtain AWS CLI Administrator credentials for the logs account
+ 1. - [ ] Navigate to the logs account folder
+
+ ```bash
+ cd ../logs/
+ ```
+
+ 1. - [ ] Using your credentials, run `terragrunt plan`.
+
+ ```bash
+ terragrunt run --all --non-interactive --backend-bootstrap plan
+ ```
+
+ 1. - [ ] After the plan succeeds, apply the changes:
+
+ ```bash
+ terragrunt run --all --non-interactive apply
+ ```
+
+    1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files, e.g.:
+
+ ```bash
+ terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
+ ```
+
+1. - [ ] Apply infrastructure changes in the **security** account
+
+ 1. - [ ] Obtain AWS CLI Administrator credentials for the security account
+ 1. - [ ] Navigate to the security account folder
+
+ ```bash
+ cd ../security/
+ ```
+
+ 1. - [ ] Using your credentials, run `terragrunt plan`.
+
+ ```bash
+ terragrunt run --all --non-interactive --backend-bootstrap plan
+ ```
+
+ 1. - [ ] After the plan succeeds, apply the changes:
+
+ ```bash
+ terragrunt run --all --non-interactive apply
+ ```
+
+    1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files, e.g.:
+
+ ```bash
+ terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
+ ```
+
+1. - [ ] Apply infrastructure changes in the **shared** account
+
+ 1. - [ ] Obtain AWS CLI Administrator credentials for the shared account. You may need to grant your user access to the `AWSAdministratorAccess` permission set in the shared account from the management account's Identity Center Admin console.
+    1. - [ ] Using your credentials, create a service-linked role for Auto Scaling:
+
+ ```bash
+ aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com
+ ```
+
+ 1. - [ ] Navigate to the shared account folder
+
+ ```bash
+ cd ../shared/
+ ```
+
+ 1. - [ ] Using your credentials, run `terragrunt plan`.
+
+ ```bash
+ terragrunt run --all --non-interactive --backend-bootstrap plan
+ ```
+
+ 1. - [ ] After the plan succeeds, apply the changes:
+
+ ```bash
+ terragrunt run --all --non-interactive apply
+ ```
+
+    1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files, e.g.:
+
+ ```bash
+ terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
+ ```
+
+1. - [ ] Commit your local changes and push them to the `bootstrap-repository` branch.
+
+ ```bash
+ cd ..
+ git add .
+ git commit -m "Bootstrap infrastructure-live-root repository final commit [skip ci]"
+ git push origin bootstrap-repository
+ ```
+
+1. - [ ] Merge the open merge request. **Ensure [skip ci] is present in the commit message.**
+
+
+<h2>Create a new infrastructure-live-access-control (optional, required for enterprise customers)</h2>
+
+<h3>Create a new GitLab project</h3>
+
+1. Navigate to the group.
+1. Click the **New Project** button.
+1. Enter the name for the project as `infrastructure-live-access-control`.
+1. Click **Create Project**.
+1. Clone the project to your local machine.
+1. Navigate to the project directory.
+1. Create a new branch `bootstrap-repository`.
+
+<h3>Install dependencies</h3>
+
+Run `mise install boilerplate@0.8.1` to install the boilerplate tool.
+
+<h3>Bootstrap the repository</h3>
+
+<h4>Configure the variables required to run the boilerplate template</h4>
+
+Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed.
+
+```yaml title="vars.yaml"
+SCMProvider: GitLab
+
+# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name
+# Example: acme/prod
+SCMProviderGroup: $$GITLAB_GROUP_NAME$$
+
+# The GitLab project to use for the infrastructure-live-access-control repository.
+InfraLiveAccessControlRepoName: infrastructure-live-access-control
+
+# The name of the branch to deploy to.
+# Example: main
+DeployBranchName: $$DEPLOY_BRANCH_NAME$$
+
+# The name prefix to use for creating resources, e.g. the S3 bucket for OpenTofu state files
+# Example: acme
+OrgNamePrefix: $$ORG_NAME_PREFIX$$
+
+# The default region for AWS Resources
+# Example: us-east-1
+DefaultRegion: $$DEFAULT_REGION$$
+
+################################################################################
+# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
+################################################################################
+
+# The AWS partition to use.
+# AWSPartition: aws
+```
+
+<h4>Generate the repository contents</h4>
+
+1. Run the following command, from the root of your project, to generate the `infrastructure-live-access-control` repository contents:
+
+ ```bash
+ boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-access-control/?ref=4.0.1" --output-folder . --var-file vars.yaml --non-interactive
+ ```
+
+ This command adds all code required to set up your `infrastructure-live-access-control` repository. The generated files fall under the following categories:
+
+ - GitLab Pipelines workflow file
+ - Gruntwork Pipelines configuration files
+ - Module defaults files for GitLab OIDC roles and policies
+
+
+2. Commit your local changes and push them to the `bootstrap-repository` branch.
+
+ ```bash
+ git add .
+ git commit -m "Bootstrap infrastructure-live-access-control repository [skip ci]"
+ git push origin bootstrap-repository
+ ```
+
+    The `[skip ci]` tag skips the CI/CD process here because there is no infrastructure to apply; the repository simply contains the GitLab OIDC role module defaults that enable GitLab OIDC authentication from repositories other than `infrastructure-live-root`.
+
+3. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the GitLab OIDC role module defaults.
+4. Merge the open merge request. **Ensure [skip ci] is present in the commit message.**
+
+<h2>Create a new infrastructure-catalog (optional)</h2>
+
+The `infrastructure-catalog` repository is a collection of modules that can be used to build your infrastructure. It is a great way to share modules with your team and across your organization. Learn more about the [Developer Self-Service](/2.0/docs/overview/concepts/developer-self-service) concept.
+
+<h3>Create a new GitLab project</h3>
+
+1. Navigate to the group.
+1. Click the **New Project** button.
+1. Enter the name for the project as `infrastructure-catalog`.
+1. Click **Create Project**.
+1. Clone the project to your local machine.
+1. Navigate to the project directory.
+1. Create a new branch `bootstrap-repository`.
+
+<h3>Install dependencies</h3>
+
+Run `mise install boilerplate@0.8.1` to install the boilerplate tool.
+
+<h3>Bootstrap the repository</h3>
+
+<h4>Configure the variables required to run the boilerplate template</h4>
+
+Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed.
+
+```yaml title="vars.yaml"
+# The name of the repository to use for the catalog.
+InfraModulesRepoName: infrastructure-catalog
+
+# The version of the Gruntwork Service Catalog to use. https://github.com/gruntwork-io/terraform-aws-service-catalog
+ServiceCatalogVersion: v0.111.2
+
+# The version of the Gruntwork VPC module to use. https://github.com/gruntwork-io/terraform-aws-vpc
+VpcVersion: v0.26.22
+
+# The default region for AWS Resources
+# Example: us-east-1
+DefaultRegion: $$DEFAULT_REGION$$
+
+################################################################################
+# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
+################################################################################
+
+# The base URL of the Organization to use for the catalog.
+# If you are using Gruntwork's RepoCopier tool, this should be the base URL of the repository you are copying from.
+# RepoBaseUrl: github.com/gruntwork-io
+
+# The name prefix to use for the Gruntwork RepoCopier copied repositories.
+# Example: gruntwork-io-
+# GWCopiedReposNamePrefix:
+```
+
+<h4>Generate the repository contents</h4>
+
+1. Run the following command, from the root of your project, to generate the `infrastructure-catalog` repository contents:
+
+ ```bash
+ boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-modules/?ref=4.0.1" --output-folder . --var-file vars.yaml --non-interactive
+ ```
+
+    This command adds the code required to set up your `infrastructure-catalog` repository. The generated files include example modules that you can use in your infrastructure.
+
+1. Commit your local changes and push them to the `bootstrap-repository` branch.
+
+ ```bash
+ git add .
+ git commit -m "Bootstrap infrastructure-catalog repository"
+ git push origin bootstrap-repository
+ ```
+
+1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the example Service Catalog modules.
+1. Merge the open merge request.
+
+
+
+
diff --git a/docs/2.0/docs/accountfactory/installation/index.md b/docs/2.0/docs/accountfactory/installation/index.md
index f67e6fafd..c61d7b467 100644
--- a/docs/2.0/docs/accountfactory/installation/index.md
+++ b/docs/2.0/docs/accountfactory/installation/index.md
@@ -6,16 +6,14 @@ Account Factory is automatically integrated into [new Pipelines root repositorie
By default, Account Factory includes the following components:
-- An HTML form for generating workflow inputs: `.github/workflows/account-factory-inputs.html`
-
-- A workflow for generating new requests: `.github/workflows/account-factory.yml`
-
- A root directory for tracking account requests: `_new-account-requests`
+- A mechanism for generating new account request files: `_new-account-requests/account-.yml`
+
- A YAML file for tracking account names and IDs: `accounts.yml`
For detailed instructions on using these components, refer to the [Vending a New AWS Account Guide](/2.0/docs/accountfactory/guides/vend-aws-account).
## Configuring account factory
-Account Factory is fully operational for vending new accounts without requiring any configuration changes. However, a [comprehensive reference for all configuration options is available here](/2.0/reference/accountfactory/configurations), allowing you to customize values and templates for generating Infrastructure as Code (IaC) for new accounts.
+Account Factory is fully operational for vending new accounts without requiring any configuration changes. However, a [comprehensive reference for all configuration options is available here](/2.0/reference/accountfactory/configurations-as-code), allowing you to customize values and templates for generating Infrastructure as Code (IaC) for new accounts.
diff --git a/docs/2.0/docs/overview/getting-started/index.md b/docs/2.0/docs/overview/getting-started/index.md
index 4d2a3fb34..f8439b1e9 100644
--- a/docs/2.0/docs/overview/getting-started/index.md
+++ b/docs/2.0/docs/overview/getting-started/index.md
@@ -22,12 +22,12 @@ Set up authentication for Pipelines to enable secure automation of infrastructur
### Step 4: Create new Pipelines repositories
- [New GitHub repository](/2.0/docs/pipelines/installation/addingnewrepo)
-- [New GitLab repository](/2.0/docs/pipelines/installation/addingnewgitlabrepo)
+- [New GitLab repository](/2.0/docs/pipelines/installation/addinggitlabrepo)
Alternatively, you can add Pipelines to an existing repository:
- [Existing GitHub repository](/2.0/docs/pipelines/installation/addingexistingrepo)
-- [Existing GitLab repository](/2.0/docs/pipelines/installation/addinggitlabrepo)
+- [Existing GitLab repository](/2.0/docs/pipelines/installation/addingexistinggitlabrepo)
diff --git a/docs/2.0/docs/pipelines/architecture/ci-workflows.md b/docs/2.0/docs/pipelines/architecture/ci-workflows.md
index 65ca5b4d5..587fc80b8 100644
--- a/docs/2.0/docs/pipelines/architecture/ci-workflows.md
+++ b/docs/2.0/docs/pipelines/architecture/ci-workflows.md
@@ -5,8 +5,8 @@ Pipelines integrates with your repositories through GitHub/GitLab Workflows, lev
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
+
+
```yml
jobs:
@@ -15,7 +15,7 @@ jobs:
```
-
+
```yml
include:
@@ -38,8 +38,8 @@ If you [fork the Gruntwork Workflows](https://docs.gruntwork.io/2.0/docs/pipelin
## Workflow dependencies
-
-
+
+
The `pipelines-workflows` repository includes the following reusable workflows:
@@ -70,7 +70,7 @@ If you are using [Gruntwork Account Factory](/2.0/docs/accountfactory/concepts/)
- `pipelines.yml` - Uses `pipelines.yml`.
-
+
Your `.gitlab-ci.yml` file will include the following workflow:
diff --git a/docs/2.0/docs/pipelines/concepts/cloud-auth/aws.mdx b/docs/2.0/docs/pipelines/concepts/cloud-auth/aws.mdx
index 8a4b014fa..66e547a35 100644
--- a/docs/2.0/docs/pipelines/concepts/cloud-auth/aws.mdx
+++ b/docs/2.0/docs/pipelines/concepts/cloud-auth/aws.mdx
@@ -1,7 +1,7 @@
# Authenticating to AWS
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
Pipelines automatically determines which AWS account(s) to authenticate with, and how to authenticate with them, based on the infrastructure changes proposed in your pull request.
@@ -15,7 +15,7 @@ When creating a new AWS account, it is necessary to update the AWS OIDC configur
## How Pipelines knows what AWS principals to authenticate as
-
+
For HCL configurations, account mappings are defined using environments specified in HCL configuration files in the `.gruntwork` directory (you are using these if you see `.hcl` files in your `.gruntwork` directory).
@@ -177,7 +177,7 @@ Critically, the issuer is a URL that is both specified inside the token, and is
Typically the issuer is the hostname of the CI/CD platform, such as `https://gitlab.com`, and thus oidc configuration (and public keys) can be fetched from the publicly available route, `https://gitlab.com/.well-known/openid-configuration` etc.
-If, however, your CI/CD platform is hosted privately, you will need to host the public key and OIDC configuration in a publicly accessible location, such as an S3 bucket, and update the issuer in your CI/CD configuration to point to that location. The diagrams below illustrate both approaches - fetching the keys directly from your CI/CD platform via a public route, or fetching the keys from a public S3 bucket.
+If, however, your CI/CD platform is hosted privately, you will need to host the public key and OIDC configuration in a publicly accessible location, such as an S3 bucket, and update the issuer in your CI/CD configuration to point to that location. The diagrams below illustrate both approaches - fetching the keys directly from your CI/CD platform via a public route, or fetching the keys from a public S3 bucket.
### Publicly Available CI/CD Platforms
@@ -200,7 +200,7 @@ sequenceDiagram
### Non-Publicly Available CI/CD Platforms
-This diagram follows the [recommended approach](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) from GitLab for private CI/CD platform instances. The guidance is to host the public key in a publicly accessible S3 bucket and update the issuer in the CI/CD configuration.
+This diagram follows the [recommended approach](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) from GitLab for private CI/CD platform instances. The guidance is to host the public key in a publicly accessible S3 bucket and update the issuer in the CI/CD configuration.
A common alternative approach to re-hosting the public key and OIDC configuration is to update the application firewalls to specifically allow requests to the `.well-known/openid-configuration` endpoint and the JWKS endpoint from the AWS IdP.
diff --git a/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md b/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md
deleted file mode 100644
index 14a30f97f..000000000
--- a/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md
+++ /dev/null
@@ -1,547 +0,0 @@
-import CustomizableValue from '/src/components/CustomizableValue';
-
-# Creating a New GitLab Project with Pipelines
-
-This guide walks you through the process of setting up a new GitLab Project with the Gruntwork Platform. By the end, you'll have a fully configured GitLab CI/CD pipeline that can create new AWS accounts and deploy infrastructure changes automatically.
-
-:::info
-To use Gruntwork Pipelines in an **existing** GitLab repository, see this [guide](/2.0/docs/pipelines/installation/addinggitlabrepo).
-:::
-
-## Prerequisites
-
-Before you begin, make sure you have:
-
-- Basic familiarity with Git, GitLab, and infrastructure as code concepts
-- Completed the [AWS Landing Zone setup](/2.0/docs/accountfactory/prerequisites/awslandingzone)
-- Have programmatic access to the AWS accounts created in the [AWS Landing Zone setup](/2.0/docs/accountfactory/prerequisites/awslandingzone)
-- Completed the [Pipelines Auth setup for GitLab](/2.0/docs/pipelines/installation/viamachineusers#gitlab) and setup a machine user with appropriate PAT tokens
-- Local access to Gruntwork's GitHub repositories, specifically the [architecture catalog](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/)
-
-
-Additional setup for **custom GitLab instances only**
-
-### Fork the Pipelines workflow project
-
-You must [fork](https://docs.gitlab.com/user/project/repository/forking_workflow/#create-a-fork) Gruntwork's public [Pipelines workflow project](https://gitlab.com/gruntwork-io/pipelines-workflows) into your own GitLab instance.
-This is necessary because Gruntwork Pipelines uses [GitLab CI/CD components](/2.0/docs/pipelines/architecture/ci-workflows), and GitLab requires components to reside within the [same GitLab instance as the project referencing them](https://docs.gitlab.com/ci/components/#use-a-component).
-
-When creating the fork, we recommend configuring it as a public mirror of the original Gruntwork project and ensuring that tags are included.
-
-### Ensure OIDC configuration and JWKS are publicly accessible
-
-This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step.
-
-1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. S3 Bucket). This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWT's generated by your custom instance.
-2. Note the (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps.
-
-
-1. Create a new GitLab project for your `infrastructure-live-root` repository.
-1. Install dependencies.
-1. Configure the variables required to run the infrastructure-live-root boilerplate template.
-1. Create your `infrastructure-live-root` repository contents using Gruntwork's architecture-catalog template.
-1. Apply the account baselines to your AWS accounts.
-
-
-## Create a new infrastructure-live-root
-
-### Authorize Your GitLab Group with Gruntwork
-
-To use Gruntwork Pipelines with GitLab, your group needs authorization from Gruntwork. Email your Gruntwork account manager or support@gruntwork.io with:
-
- ```
- GitLab group name(s): $$GITLAB_GROUP_NAME$$ (e.g. acme-io)
- GitLab Issuer URL: $$ISSUER_URL$$ (For most users this is the URL of your GitLab instance e.g. https://gitlab.acme.io, if your instance is not publicly accessible, this should be a separate URL that is publicly accessible per step 0, e.g. https://s3.amazonaws.com/YOUR_BUCKET_NAME/)
- Organization name: $$ORGANIZATION_NAME$$ (e.g. Acme, Inc.)
- ```
-
-Continue with the rest of the guide while you await confirmation when your group has been authorized.
-
-### Create a new GitLab project
-
-1. Navigate to the group.
-1. Click the **New Project** button.
-1. Enter a name for the project. e.g. infrastructure-live-root
-1. Click **Create Project**.
-1. Clone the project to your local machine.
-1. Navigate to the project directory.
-1. Create a new branch `bootstrap-repository`.
-
-### Install dependencies
-
-1. Install [mise](https://mise.jdx.dev/getting-started.html) on your machine.
-1. Activate mise in your shell:
-
- ```bash
- # For Bash
- echo 'eval "$(~/.local/bin/mise activate bash)"' >> ~/.bashrc
-
- # For Zsh
- echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc
-
- # For Fish
- echo 'mise activate fish | source' >> ~/.config/fish/config.fish
- ```
-
-1. Add the following to a .mise.toml file in the root of your project:
-
- ```toml title=".mise.toml"
- [tools]
- boilerplate = "0.8.1"
- opentofu = "1.10.0"
- terragrunt = "0.81.6"
- awscli = "latest"
- ```
-
-1. Run `mise install`.
-
-
-### Bootstrap the repository
-
-Gruntwork provides a boilerplate [template](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/tree/main/templates/devops-foundations-infrastructure-live-root) that incorporates best practices while allowing for customization. The template is designed to scaffold a best-practices Terragrunt configurations. It includes patterns for module defaults, global variables, and account baselines. Additionally, it integrates Gruntwork Pipelines.
-
-#### Configure the variables required to run the boilerplate template
-
-Copy the content below to a `vars.yaml` file in the root of your project and update the `` values with your own.
-
-```yaml title="vars.yaml"
-SCMProvider: GitLab
-
-# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name
-# Example: acme/prod
-SCMProviderGroup: $$GITLAB_GROUP_NAME$$
-
-# The GitLab project to use for the infrastructure-live repository.
-SCMProviderRepo: infrastructure-live-root
-
-# The base URL of your GitLab group repos. E.g., gitlab.com/
-RepoBaseUrl: $$GITLAB_GROUP_REPO_BASE_URL$$
-
-# The name of the branch to deploy to.
-# Example: main
-DeployBranchName: $$DEPLOY_BRANCH_NAME$$
-
-# The AWS account ID for the management account
-# Example: "123456789012"
-AwsManagementAccountId: $$AWS_MANAGEMENT_ACCOUNT_ID$$
-
-# The AWS account ID for the security account
-# Example: "123456789013"
-AwsSecurityAccountId: $$AWS_SECURITY_ACCOUNT_ID$$
-
-# The AWS account ID for the logs account
-# Example: "123456789014"
-AwsLogsAccountId: $$AWS_LOGS_ACCOUNT_ID$$
-
-# The AWS account ID for the shared account
-# Example: "123456789015"
-AwsSharedAccountId: $$AWS_SHARED_ACCOUNT_ID$$
-
-# The AWS account Email for the logs account
-# Example: logs@acme.com
-AwsLogsAccountEmail: $$AWS_LOGS_ACCOUNT_EMAIL$$
-
-# The AWS account Email for the management account
-# Example: management@acme.com
-AwsManagementAccountEmail: $$AWS_MANAGEMENT_ACCOUNT_EMAIL$$
-
-# The AWS account Email for the security account
-# Example: security@acme.com
-AwsSecurityAccountEmail: $$AWS_SECURITY_ACCOUNT_EMAIL$$
-
-# The AWS account Email for the shared account
-# Example: shared@acme.com
-AwsSharedAccountEmail: $$AWS_SHARED_ACCOUNT_EMAIL$$
-
-# The name prefix to use for creating resources e.g S3 bucket for OpenTofu state files
-# Example: acme
-OrgNamePrefix: $$ORG_NAME_PREFIX$$
-
-# The default region for AWS Resources
-# Example: us-east-1
-DefaultRegion: $$DEFAULT_REGION$$
-
-################################################################################
-# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
-################################################################################
-
-# List of the git repositories to populate for the catalog
-# CatalogRepositories:
-# - github.com/gruntwork-io/terraform-aws-service-catalog
-
-# The AWS partition to use. Options: aws, aws-us-gov
-# AWSPartition: aws
-
-# The name of the IAM role to use for the plan job.
-# PlanIAMRoleName: root-pipelines-plan
-
-# The name of the IAM role to use for the apply job.
-# ApplyIAMRoleName: root-pipelines-apply
-
-# The default tags to apply to all resources.
-# DefaultTags:
-# "{{ .OrgNamePrefix }}:Team": "DevOps"
-
-# The version for terraform-aws-security module to use for OIDC provider and roles provisioning
-# SecurityModulesVersion: v0.75.18
-
-# The URL of the custom SCM provider instance. Set this if you are using a custom instance of GitLab.
-# CustomSCMProviderInstanceURL: https://gitlab.example.io
-
-# The relative path from the host server to the custom pipelines workflow repository. Set this if you are using a custom/forked instance of the pipelines workflow.
-# CustomWorkflowHostRelativePath: pipelines-workflows
-```
-
-#### Generate the repository contents
-
-1. Run the following command, from the root of your project, to generate the `infrastructure-live-root` repository contents:
-
-
- ```bash
- boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-root/?ref=main" --output-folder . --var-file vars.yaml --non-interactive
- ```
-
- This command adds all code required to set up your `infrastructure-live-root` repository.
-1. Remove the boilerplate dependency from the `mise.toml` file. It is no longer needed.
-
-1. Commit your local changes and push them to the `bootstrap-repository` branch.
-
- ```bash
- git add .
- git commit -m "Bootstrap infrastructure-live-root repository initial commit [skip ci]"
- git push origin bootstrap-repository
- ```
-
- Skipping the CI/CD process for now; you will manually apply the infrastructure baselines to your AWS accounts in a later step.
-
-1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand what will be applied to your AWS accounts. The generated files fall under the following categories:
-
- - GitLab Pipelines workflow file
- - Gruntwork Pipelines configuration files
- - Module defaults files for infrastructure code
- - Account baselines and GitLab OIDC module scaffolding files for your core AWS accounts: management, security, logs and shared.
-
-### Apply the account baselines to your AWS accounts
-
-You will manually `terragrunt apply` the generated infrastructure baselines to get your accounts bootstrapped **before** merging this content into your main branch.
-
-:::tip
-You can utilize the AWS SSO Portal to obtain temporary AWS credentials necessary for subsequent steps:
-
-1. Sign in to the Portal page and select your preferred account to unveil the roles accessible to your SSO user.
-1. Navigate to the "Access keys" tab adjacent to the "AWSAdministratorAccess" role.
-1. Copy the "AWS environment variables" provided and paste them into your terminal for usage.
-:::
-
-
-1. [ ] Apply infrastructure changes in the **management** account
-
- 1. - [ ] Obtain AWS CLI Administrator credentials for the management account
-
- 1. - [ ] Navigate to the management account folder
-
- ```bash
- cd management/
- ```
-
- 1. - [ ] Using your credentials, run `terragrunt plan`.
-
- ```bash
- terragrunt run --all plan --terragrunt-non-interactive
- ```
-
- 1. - [ ] After the plan succeeds, apply the changes:
-
- ```bash
- terragrunt run --all apply --terragrunt-non-interactive
- ```
-
- 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. The lock files will be committed in the final step of the setup. e.g.
-
- ```bash
- terragrunt run --all providers -- lock -platform=darwin_amd64 -platform=linux_amd64
- ```
-
- 1. - [ ] Update Permissions for Account Factory Portfolio
-
- The account factory pipeline _will fail_ until you grant the pipelines roles (`root-pipelines-plan` and `root-pipelines-apply`) access to the portfolio. This step **must be done after** you provision the pipelines roles in the management account (where control tower is set up).
-
- Access to the portfolio is separate from IAM access, it **must** be granted in the Service Catalog console.
-
- #### **Steps to grant access**
-
- To grant access to the Account Factory Portfolio, you **must** be an individual with Service Catalog administrative permissions.
-
- 1. Log into the management AWS account
- 1. Go into the Service Catalog console
- 1. Ensure you are in your default region(control-tower region)
- 1. Select the **Portfolios** option in **Administration** from the left side navigation panel
- 1. Click on the portfolio named **AWS Control Tower Account Factory Portfolio**
- 1. Select the **Access** tab
- 1. Click the **Grant access** button
- 1. In the **Access type** section, leave the default value of **IAM Principal**
- 1. Select the **Roles** tab in the lower section
- 1. Enter `root-pipelines` into the search bar, there should be two results (`root-pipelines-plan` and `root-pipelines-apply`). Click the checkbox to the left of each role name.
- 1. Click the **Grant access** button in the lower right hand corner
-
- 1. - [ ] Increase Account Quota Limit (OPTIONAL)
-
- Note that DevOps Foundations makes it very convenient, and therefore likely, that you will encounter one of the soft limits imposed by AWS on the number of accounts you can create.
-
- You may need to request a limit increase for the number of accounts you can create in the management account, as the default is currently 10 accounts.
-
- To request an increase to this limit, search for "Organizations" in the AWS management console [here](https://console.aws.amazon.com/servicequotas/home/dashboard) and request a limit increase to a value that makes sense for your organization.
-
-1. - [ ] Apply infrastructure changes in the **logs** account
-
- 1. - [ ] Obtain AWS CLI Administrator credentials for the logs account
- 1. - [ ] Navigate to the logs account folder
-
- ```bash
- cd ../logs/
- ```
-
- 1. - [ ] Using your credentials, run `terragrunt plan`.
-
- ```bash
- terragrunt run --all plan --terragrunt-non-interactive
- ```
-
- 1. - [ ] After the plan succeeds, apply the changes:
-
- ```bash
- terragrunt run --all apply --terragrunt-non-interactive
- ```
-
- 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g.
-
- ```bash
- terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
- ```
-
-1. - [ ] Apply infrastructure changes in the **security** account
-
- 1. - [ ] Obtain AWS CLI Administrator credentials for the security account
- 1. - [ ] Navigate to the security account folder
-
- ```bash
- cd ../security/
- ```
-
- 1. - [ ] Using your credentials, run `terragrunt plan`.
-
- ```bash
- terragrunt run --all plan --terragrunt-non-interactive
- ```
-
- 1. - [ ] After the plan succeeds, apply the changes:
-
- ```bash
- terragrunt run --all apply --terragrunt-non-interactive
- ```
-
- 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g.
-
- ```bash
- terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
- ```
-
-1. - [ ] Apply infrastructure changes in the **shared** account
-
- 1. - [ ] Obtain AWS CLI Administrator credentials for the shared account. You may need to grant your user access to the `AWSAdministratorAccess` permission set in the shared account from the management account's Identity Center Admin console.
- 1. - [ ] Using your credentials, create a service role
-
- ```bash
- aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com
- ```
-
- 1. - [ ] Navigate to the shared account folder
-
- ```bash
- cd ../shared/
- ```
-
- 1. - [ ] Using your credentials, run `terragrunt plan`.
-
- ```bash
- terragrunt run --all plan --terragrunt-non-interactive
- ```
-
- 1. - [ ] After the plan succeeds, apply the changes:
-
- ```bash
- terragrunt run --all apply --terragrunt-non-interactive
- ```
-
- 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g.
-
- ```bash
- terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64
- ```
-
-1. - [ ] Commit your local changes and push them to the `bootstrap-repository` branch.
-
- ```bash
- cd ..
- git add .
- git commit -m "Bootstrap infrastructure-live-root repository final commit [skip ci]"
- git push origin bootstrap-repository
- ```
-
-1. - [ ] Merge the open merge request. **Ensure [skip ci] is present in the commit message.**
-
-
-## Create a new infrastructure-live-access-control (optional)
-
-### Create a new GitLab project
-
-1. Navigate to the group.
-1. Click the **New Project** button.
-1. Enter the name for the project as `infrastructure-live-access-control`.
-1. Click **Create Project**.
-1. Clone the project to your local machine.
-1. Navigate to the project directory.
-1. Create a new branch `bootstrap-repository`.
-
-### Install dependencies
-
-Run `mise install boilerplate@0.8.1` to install the boilerplate tool.
-
-### Bootstrap the repository
-
-#### Configure the variables required to run the boilerplate template
-
-Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed.
-
-```yaml title="vars.yaml"
-SCMProvider: GitLab
-
-# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name
-# Example: acme/prod
-SCMProviderGroup: $$GITLAB_GROUP_NAME$$
-
-# The GitLab project to use for the infrastructure-live repository.
-SCMProviderRepo: infrastructure-live-access-control
-
-# The name of the branch to deploy to.
-# Example: main
-DeployBranchName: $$DEPLOY_BRANCH_NAME$$
-
-# The name prefix to use for creating resources e.g S3 bucket for OpenTofu state files
-# Example: acme
-OrgNamePrefix: $$ORG_NAME_PREFIX$$
-
-# The default region for AWS Resources
-# Example: us-east-1
-DefaultRegion: $$DEFAULT_REGION$$
-
-################################################################################
-# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
-################################################################################
-
-# The AWS partition to use.
-# AWSPartition: aws
-```
-
-#### Generate the repository contents
-
-1. Run the following command, from the root of your project, to generate the `infrastructure-live-access-control` repository contents:
-
-
- ```bash
- boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-access-control/?ref=main" --output-folder . --var-file vars.yaml --non-interactive
- ```
-
- This command adds all code required to set up your `infrastructure-live-access-control` repository. The generated files fall under the following categories:
-
- - GitLab Pipelines workflow file
- - Gruntwork Pipelines configuration files
- - Module defaults files for GitLab OIDC roles and policies
-
-
-2. Commit your local changes and push them to the `bootstrap-repository` branch.
-
- ```bash
- git add .
- git commit -m "Bootstrap infrastructure-live-access-control repository [skip ci]"
- git push origin bootstrap-repository
- ```
-
- Skipping the CI/CD process now because there is no infrastructure to apply; repository simply contains the GitLab OIDC role module defaults to enable GitLab OIDC authentication from repositories other than `infrastructure-live-root`.
-
-3. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the GitLab OIDC role module defaults.
-4. Merge the open merge request. **Ensure [skip ci] is present in the commit message.**
-
-## Create a new infrastructure-catalog (optional)
-
-The `infrastructure-catalog` repository is a collection of modules that can be used to build your infrastructure. It is a great way to share modules with your team and across your organization. Learn more about the [Developer Self-Service](/2.0/docs/overview/concepts/developer-self-service) concept.
-
-### Create a new GitLab project
-
-1. Navigate to the group.
-1. Click the **New Project** button.
-1. Enter the name for the project as `infrastructure-catalog`.
-1. Click **Create Project**.
-1. Clone the project to your local machine.
-1. Navigate to the project directory.
-1. Create a new branch `bootstrap-repository`.
-
-### Install dependencies
-
-Run `mise install boilerplate@0.8.1` to install the boilerplate tool.
-
-### Bootstrap the repository
-
-#### Configure the variables required to run the boilerplate template
-
-Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed.
-
-```yaml title="vars.yaml"
-# The name of the repository to use for the catalog.
-InfraModulesRepoName: infrastructure-catalog
-
-# The version of the Gruntwork Service Catalog to use. https://github.com/gruntwork-io/terraform-aws-service-catalog
-ServiceCatalogVersion: v0.111.2
-
-# The version of the Gruntwork VPC module to use. https://github.com/gruntwork-io/terraform-aws-vpc
-VpcVersion: v0.26.22
-
-# The default region for AWS Resources
-# Example: us-east-1
-DefaultRegion: $$DEFAULT_REGION$$
-
-################################################################################
-# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED.
-################################################################################
-
-# The base URL of the Organization to use for the catalog.
-# If you are using Gruntwork's RepoCopier tool, this should be the base URL of the repository you are copying from.
-# RepoBaseUrl: github.com/gruntwork-io
-
-# The name prefix to use for the Gruntwork RepoCopier copied repositories.
-# Example: gruntwork-io-
-# GWCopiedReposNamePrefix:
-```
-
-
-#### Generate the repository contents
-
-1. Run the following command, from the root of your project, to generate the `infrastructure-catalog` repository contents:
-
-
- ```bash
- boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-modules/?ref=main" --output-folder . --var-file vars.yaml --non-interactive
- ```
-
- This command adds some code required to set up your `infrastructure-catalog` repository. The generated files are some usable modules for your infrastructure.
-
-1. Commit your local changes and push them to the `bootstrap-repository` branch.
-
- ```bash
- git add .
- git commit -m "Bootstrap infrastructure-catalog repository"
- git push origin bootstrap-repository
- ```
-
-1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the example Service Catalog modules.
-1. Merge the open merge request.
diff --git a/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx b/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx
index b3b0ab10a..fdfd50fbd 100644
--- a/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx
+++ b/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx
@@ -1,8 +1,8 @@
# Bootstrap Pipelines in a New GitHub Repository
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import PersistentCheckbox from '/src/components/PersistentCheckbox';
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
+import PersistentCheckbox from "/src/components/PersistentCheckbox"
To configure Gruntwork Pipelines in a new GitHub repository, complete the following steps (which are explained in detail below):
@@ -27,7 +27,10 @@ There are two ways to configure SCM access for Pipelines:
:::note Progress Checklist
-
+
:::
@@ -58,8 +61,14 @@ cd infrastructure-live
:::note Progress Checklist
-
-
+
+
:::
@@ -95,7 +104,7 @@ mise ls-remote boilerplate
### Cloud-specific bootstrap instructions
-
+
The resources you need provisioned in AWS before Pipelines can start managing infrastructure are:
@@ -121,6 +130,7 @@ The process that we'll follow to get these resources ready for Pipelines is:
3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Bootstrap your `infrastructure-live` repository
To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary content for Pipelines to function.
@@ -175,7 +185,10 @@ boilerplate \
:::note Progress Checklist
-
+
:::
@@ -187,10 +200,14 @@ mise install
:::note Progress Checklist
-
+
:::
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Provisioning the resources
Once you've set up the Terragrunt configurations, you can use Terragrunt to provision the resources in your AWS account.
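
As a preview of the steps below, provisioning boils down to a Terragrunt run across all units (a sketch; the guide's exact invocations follow):

```bash
# Apply all units non-interactively, using the shared provider cache.
# Per this guide, the first run also passes --backend-bootstrap so Terragrunt
# provisions the state backend resources before applying.
terragrunt run --all --non-interactive --provider-cache apply
```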
@@ -219,7 +236,10 @@ We're using the `--backend-bootstrap` flag here to tell Terragrunt to bootstrap
:::note Progress Checklist
-
+
:::
@@ -231,11 +251,15 @@ terragrunt run --all --non-interactive --provider-cache apply
:::note Progress Checklist
-
+
:::
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Optional: Bootstrapping additional AWS accounts
If you have multiple AWS accounts and want to bootstrap them as well, you can do so by following a similar but slightly condensed process.
@@ -307,7 +331,10 @@ boilerplate \
:::note Progress Checklist
-
+
:::
@@ -329,8 +356,14 @@ terragrunt run --all --non-interactive --provider-cache apply
:::note Progress Checklist
-
-
+
+
:::
@@ -369,6 +402,7 @@ The process that we'll follow to get these resources ready for Pipelines is:
5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Bootstrap your `infrastructure-live` repository
To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary content for Pipelines to function.
@@ -429,7 +463,11 @@ boilerplate \
:::
:::note Progress Checklist
-
+
+
:::
Next, install Terragrunt and OpenTofu locally (the `.mise.toml` file in the root of the repository after scaffolding should already be set to the versions you want for Terragrunt and OpenTofu):
@@ -439,6 +477,7 @@ mise install
```
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Provisioning the resources
Once you've set up the Terragrunt configurations, you can use Terragrunt to provision the resources in your Azure subscription.
@@ -451,11 +490,13 @@ az login
:::note Progress Checklist
-
+
:::
-
To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you export the following environment variables if you haven't already set these values via the `az` CLI:
- `ARM_TENANT_ID`
@@ -470,8 +511,14 @@ export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111"
:::note Progress Checklist
-
-
+
+
:::
First, make sure that everything is set up correctly by running a plan in the subscription directory.
@@ -488,7 +535,10 @@ We're using the `--provider-cache` flag here to ensure that we don't re-download
:::note Progress Checklist
-
+
:::
@@ -506,10 +556,14 @@ We're adding the `--no-stack-generate` flag here, as Terragrunt will already hav
:::note Progress Checklist
-
+
:::
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Finalizing Terragrunt configurations
Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned.
@@ -598,7 +652,10 @@ EOF
:::note Progress Checklist
-
+
:::
@@ -645,12 +702,19 @@ You can use those values to set the values for `plan_client_id` and `apply_clien
:::note Progress Checklist
-
-
+
+
:::
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Pulling the resources into state
Once you've provisioned the resources in your Azure subscription, you can pull the resources into state using the storage account we just provisioned.
@@ -667,11 +731,15 @@ We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiti
:::note Progress Checklist
-
+
:::
{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+
Optional: Bootstrapping additional Azure subscriptions
If you have multiple Azure subscriptions and want to bootstrap them as well, you can do so by following a similar but slightly condensed process.
@@ -749,7 +817,10 @@ boilerplate \
:::note Progress Checklist
-
+
:::
@@ -797,7 +868,10 @@ EOF
:::note Progress Checklist
-
+
:::
@@ -819,8 +893,14 @@ We're adding the `--no-stack-generate` flag here, as Terragrunt will already hav
:::note Progress Checklist
-
-
+
+
:::
@@ -868,7 +948,10 @@ EOF
:::note Progress Checklist
-
+
:::
@@ -886,7 +969,10 @@ We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiti
:::note Progress Checklist
-
+
:::
@@ -933,8 +1019,14 @@ You can use those values to set the values for `plan_client_id` and `apply_clien
:::note Progress Checklist
-
-
+
+
:::
@@ -959,8 +1051,14 @@ git push
:::note Progress Checklist
-
-
+
+
:::
diff --git a/docs/2.0/docs/pipelines/installation/scm-comparison.md b/docs/2.0/docs/pipelines/installation/scm-comparison.md
index 2eb31f54c..0f2618e5c 100644
--- a/docs/2.0/docs/pipelines/installation/scm-comparison.md
+++ b/docs/2.0/docs/pipelines/installation/scm-comparison.md
@@ -12,7 +12,7 @@ Gruntwork Pipelines supports both GitHub Actions and GitLab CI/CD as CI/CD platf
| App-based Authentication | ✅ | ❌ |
| Machine User Authentication | ✅ | ✅ |
| Customizable Workflows | ✅ | ✅ |
-| Pull Request Comments | Rich formatting | Rich formatting |
-| Repository/Group Authorization | Self-service via GitHub App | Manual via Gruntwork Support |
-| Required Setup Time | ~30 minutes | ~30 minutes |
+| Pull Request Comments | Rich formatting | Rich formatting |
+| Repository/Group Authorization | Self-service via GitHub App | Manual via Gruntwork Support |
+| Required Setup Time | ~30 minutes | ~30 minutes |
diff --git a/docs/2.0/docs/pipelines/installation/viamachineusers.mdx b/docs/2.0/docs/pipelines/installation/viamachineusers.mdx
index 4e897f8cd..a6c1f378e 100644
--- a/docs/2.0/docs/pipelines/installation/viamachineusers.mdx
+++ b/docs/2.0/docs/pipelines/installation/viamachineusers.mdx
@@ -4,12 +4,12 @@ toc_min_heading_level: 2
toc_max_heading_level: 4
---
-# Creating Machine Users
-
import PersistentCheckbox from "/src/components/PersistentCheckbox"
import Tabs from "@theme/Tabs"
import TabItem from "@theme/TabItem"
+# Creating Machine Users
+
For GitHub users, of the [two methods](/2.0/docs/pipelines/installation/authoverview.md) for installing Gruntwork Pipelines, we strongly recommend using the [GitHub App](/2.0/docs/pipelines/installation/viagithubapp.md). However, if the GitHub App cannot be used or if machine users are required as a [fallback](/2.0/docs/pipelines/installation/viagithubapp#fallback), this guide outlines how to set up authentication for Pipelines using access tokens and machine users.
For GitHub or GitLab users, when using tokens, Gruntwork recommends setting up CI users specifically for Gruntwork Pipelines, separate from the human users in your organization. This separation ensures workflows are not disrupted if an employee leaves the company and allows for more precise permission management. Additionally, using CI users allows you to apply granular permissions that might normally be too restrictive for an employee's daily work.
@@ -40,7 +40,7 @@ If screen sharing while generating tokens, **pause or hide your screen** before
### Token types
-
+
GitHub supports two types of tokens:
@@ -81,7 +81,7 @@ More information is available [here](https://docs.github.com/en/organizations/ma

-
+
GitLab uses access tokens for authentication. There are several types of access tokens in GitLab:
@@ -112,8 +112,8 @@ When creating tokens, carefully consider the expiration date and scope of access
## Creating machine users
-
-
+
+
The recommended setup for Pipelines uses two machine users: one for opening pull requests and running workflows (`ci-user`) and another with read-only access to repositories (`ci-read-only-user`). Each user is assigned restrictive permissions based on their tasks. As a result, both users may need to participate at different stages to successfully run a pipeline job.
@@ -155,8 +155,8 @@ Generate the required tokens for the ci-user in their GitHub account.
**Checklist:**
-
-
+- [ ] INFRA_ROOT_WRITE_TOKEN created under ci-user
+- [ ] ORG_REPO_ADMIN_TOKEN created under ci-user
#### INFRA_ROOT_WRITE_TOKEN
@@ -273,7 +273,7 @@ Invite `ci-user-read-only` to your `infrastructure-live-root` repository with re
**Checklist:**
-
+- [ ] ci-read-only-user invited to infrastructure-live-root
**Create a token for ci-read-only-user**
@@ -281,7 +281,7 @@ Generate the following token for the `ci-read-only-user`:
**Checklist:**
-
+- [ ] PIPELINES_READ_TOKEN created under ci-read-only-user
#### PIPELINES_READ_TOKEN
@@ -297,7 +297,7 @@ Make sure both machine users are added to your team in Gruntwork’s GitHub Orga
**Checklist:**
-
+- [ ] Machine users invited to Gruntwork organization
## Configure secrets for GitHub Actions
@@ -312,9 +312,10 @@ Since this guide uses secrets scoped to specific repositories, the token permiss
**Checklist:**
-
-
-
+- [ ] PIPELINES_READ_TOKEN added to organization secrets
+- [ ] INFRA_ROOT_WRITE_TOKEN added to organization secrets
+- [ ] ORG_REPO_ADMIN_TOKEN added to organization secrets
+
1. Navigate to your top-level GitHub Organization and select the **Settings** tab.
@@ -373,11 +374,12 @@ For more details on creating and using GitHub Actions Organization secrets, refe
**Checklist:**
-
-
-
-
-
+- [ ] PIPELINES_READ_TOKEN added to infrastructure-live-root
+- [ ] INFRA_ROOT_WRITE_TOKEN added to infrastructure-live-root
+- [ ] ORG_REPO_ADMIN_TOKEN added to infrastructure-live-root
+- [ ] PIPELINES_READ_TOKEN added to infrastructure-live-access-control
+- [ ] ORG_REPO_ADMIN_TOKEN added to infrastructure-live-access-control
+
Gruntwork Pipelines retrieves these secrets from GitHub Actions secrets configured in the repository. For instructions on creating repository Actions secrets, refer to [creating secrets for a repository](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository).
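
If you prefer to script this instead of clicking through the web UI, the GitHub CLI can create these repository secrets; a minimal sketch (assuming `gh` is authenticated and `acme` stands in for your organization):

```bash
gh secret set PIPELINES_READ_TOKEN --repo acme/infrastructure-live-root
gh secret set INFRA_ROOT_WRITE_TOKEN --repo acme/infrastructure-live-root
gh secret set ORG_REPO_ADMIN_TOKEN --repo acme/infrastructure-live-root
```

Each command prompts for the secret value, which keeps the tokens out of your shell history.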
@@ -417,20 +419,25 @@ For more information on creating and using GitHub Actions Repository secrets, re
-
+
+
+For GitLab, Gruntwork Pipelines requires the following CI/CD variables:
-For GitLab, Gruntwork Pipelines two CI variables. The first, the `PIPELINES_GITLAB_TOKEN` requires the `Developer`, `Maintainer` or `Owner` role and the scopes listed below. This token will be used to authenticate API calls and access repositories within your GitLab group. The second, the `PIPELINES_GITLAB_READ_TOKEN` will be used to access your own code within GitLab. If not set, Pipelines will default to the `CI_JOB_TOKEN` when accessing internal GitLab hosted code.
+- `PIPELINES_GITLAB_TOKEN` requires the `Developer`, `Maintainer`, or `Owner` role and the scopes listed below. This token is used to authenticate API calls and access repositories within your GitLab group.
+- `PIPELINES_GITLAB_READ_TOKEN` is used to access your own code within GitLab. If not set, Pipelines defaults to the `CI_JOB_TOKEN` when accessing internal GitLab-hosted code.
+- `PIPELINES_GITLAB_ADMIN_TOKEN` (Enterprise customers only) is used to create repositories within your GitLab group when provisioning new AWS accounts that require a dedicated repository. This **MUST** be a Group Access Token, and it should be set as a CI/CD variable only on your `infrastructure-live-root` project.
### Creating the Access Token
-Gruntwork recommends [creating](https://docs.gitlab.com/user/project/settings/project_access_tokens/#create-a-project-access-token) two Project or Group Access Tokens as best practice:
+Gruntwork recommends [creating](https://docs.gitlab.com/user/project/settings/project_access_tokens/#create-a-project-access-token) separate Project or Group Access Tokens as best practice:
-| Token Name | Required Scopes | Required Role | Purpose |
-| ------------------------------- | -------------------------------------------- | ------------------------------- | ---------------------------------------------------------------------------- |
-| **PIPELINES_GITLAB_TOKEN** | `api` (and `ai_features` if using GitLab AI) | Developer, Maintainer, or Owner | Making API calls (e.g., creating comments on merge requests) |
-| **PIPELINES_GITLAB_READ_TOKEN** | `read_repository` | Any | Accessing GitLab repositories (e.g., your catalog or infrastructure modules) |
+| Token Name | Required Scopes | Required Role | Purpose |
+| -------------------------------- | -------------------------------------------- | ------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| **PIPELINES_GITLAB_TOKEN** | `api` (and `ai_features` if using GitLab AI) | Developer, Maintainer, or Owner | Making API calls (e.g., creating comments on merge requests) |
+| **PIPELINES_GITLAB_READ_TOKEN** | `read_repository` | Any | Accessing GitLab repositories (e.g., your catalog or infrastructure modules) |
+| **PIPELINES_GITLAB_ADMIN_TOKEN** | `api`                                        | Maintainer or Owner             | Creating repositories within your GitLab group when provisioning new AWS accounts that require a dedicated repository |
-You may however generate a single token all scopes scopes if you prefer and use it for both purposes.
+You may, however, generate a single token with all scopes if you prefer and use it for all purposes.
These tokens will be stored in your CI/CD variables.
@@ -445,12 +452,13 @@ Set an expiration date according to your organization's security policies. We re
**Checklist:**
-
-
+- [ ] PIPELINES_GITLAB_TOKEN created
+- [ ] PIPELINES_GITLAB_READ_TOKEN created
+- [ ] PIPELINES_GITLAB_ADMIN_TOKEN created (Enterprise customers only; must be a Group Access Token)
### Configure CI/CD Variables
-Add the `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` as CI/CD variables at the group or project level:
+Add the `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` as CI/CD variables at the group or project level. Enterprise customers should set them at the group level so that provisioned repositories have access to them. `PIPELINES_GITLAB_ADMIN_TOKEN` should be set as a project-level variable in the CI/CD settings of your `infrastructure-live-root` project.
1. Navigate to your GitLab group or project's **Settings > CI/CD**
2. Expand the **Variables** section
@@ -458,13 +466,14 @@ Add the `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` as CI/CD vari
4. Mark the variables as **Masked**
5. Leave both the **Protect variable** and **Expand variable reference** options unchecked
6. Select the environments where this variable should be available
-7. Set the key to the name of the token e.g. `PIPELINES_GITLAB_TOKEN` or `PIPELINES_GITLAB_READ_TOKEN`
+7. Set the key to the name of the token, e.g. `PIPELINES_GITLAB_TOKEN`, `PIPELINES_GITLAB_READ_TOKEN`, or `PIPELINES_GITLAB_ADMIN_TOKEN`
8. Set the value as the Personal Access Token generated in the [Creating the Access Token](#creating-the-access-token) section
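
If you'd rather script these steps, the same variables can be created with GitLab's CI/CD variables REST API; a minimal sketch (placeholder `GROUP_ID` and token values, `gitlab.com` assumed):

```bash
# Create a masked group-level CI/CD variable.
curl --request POST \
  --header "PRIVATE-TOKEN: ${ADMIN_TOKEN}" \
  --form "key=PIPELINES_GITLAB_TOKEN" \
  --form "value=${PIPELINES_GITLAB_TOKEN_VALUE}" \
  --form "masked=true" \
  "https://gitlab.com/api/v4/groups/${GROUP_ID}/variables"
```

Repeat for each token, switching to the `/projects/:id/variables` endpoint for the project-level `PIPELINES_GITLAB_ADMIN_TOKEN`.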
**Checklist:**
-
-
+- [ ] PIPELINES_GITLAB_TOKEN added to CI/CD variables
+- [ ] PIPELINES_GITLAB_READ_TOKEN added to CI/CD variables
+- [ ] PIPELINES_GITLAB_ADMIN_TOKEN added to CI/CD variables
:::caution
Remember to update this token before it expires to prevent pipeline disruptions.
diff --git a/docs/2.0/docs/pipelines/tutorials/deploying-to-aws-gov-cloud.mdx b/docs/2.0/docs/pipelines/tutorials/deploying-to-aws-gov-cloud.mdx
index 7b2f7ca07..ac071bd97 100644
--- a/docs/2.0/docs/pipelines/tutorials/deploying-to-aws-gov-cloud.mdx
+++ b/docs/2.0/docs/pipelines/tutorials/deploying-to-aws-gov-cloud.mdx
@@ -51,7 +51,7 @@ This section covers the Pipelines configuration required to deploy an AWS S3 buc
1. Create a `vars.yaml` file on your local machine with the following content:
-
+
```yaml title="vars.yaml"
AccountName: "$$ACCOUNT_NAME$$"
AccountId: "$$ACCOUNT_ID$$"
@@ -64,7 +64,7 @@ This section covers the Pipelines configuration required to deploy an AWS S3 buc
```
-
+
```yaml title="vars.yaml"
AccountName: "$$ACCOUNT_NAME$$"
AccountId: "$$ACCOUNT_ID$$"
@@ -82,12 +82,12 @@ This section covers the Pipelines configuration required to deploy an AWS S3 buc
3. We'll now use that `vars.yaml` file as input to [boilerplate](https://github.com/gruntwork-io/boilerplate) to generate the Terragrunt code for the OIDC Provider and IAM roles. From the root of your repository, run the following command:
-
+
```bash
boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/github-actions-single-account-setup?ref=X.Y.Z" --output-folder . --var-file vars.yaml --non-interactive
```
-
+
```bash
boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/gitlab-pipelines-single-account-setup?ref=X.Y.Z" --output-folder . --var-file vars.yaml --non-interactive
```
@@ -128,13 +128,13 @@ aws sts get-caller-identity
In the event that you already have an OIDC provider for your SCM in the AWS account, you can import the existing one:
-
+
```
cd _global/$$ACCOUNT_NAME$$/github-actions-openid-connect-provider/
terragrunt import "aws_iam_openid_connect_provider.github" "ARN_OF_EXISTING_OIDC_PROVIDER"
```
-
+
```
cd _global/$$ACCOUNT_NAME$$/gitlab-pipelines-openid-connect-provider/
terragrunt import "aws_iam_openid_connect_provider.gitlab" "ARN_OF_EXISTING_OIDC_PROVIDER"
diff --git a/docs/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.mdx b/docs/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.mdx
index 4b4e7acaf..ef25ae1c1 100644
--- a/docs/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.mdx
+++ b/docs/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.mdx
@@ -1,6 +1,6 @@
# Deploying your first Infrastructure Change
-import CustomizableValue from '/src/components/CustomizableValue';
+import CustomizableValue from "/src/components/CustomizableValue"
import Tabs from "@theme/Tabs"
import TabItem from "@theme/TabItem"
@@ -28,7 +28,7 @@ This section covers creating a cloud storage resource using Pipelines and GitOps
### Adding cloud storage
-
+
:::caution Permissions Required
@@ -45,39 +45,39 @@ The default `bootstrap` Terragrunt stack provided in the installation guide incl
1. Create the folder structure for the new S3 bucket in your environment. Replace $$ACCOUNT_NAME$$ with the account name you are deploying to and $$REGION$$ with the AWS region where the S3 bucket will be deployed.
- ```bash
- mkdir -p $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3
- touch $$ACCOUNT_NAME$$/$$REGION$$/region.hcl
- touch $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3/terragrunt.hcl
- ```
+ ```bash
+ mkdir -p $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3
+ touch $$ACCOUNT_NAME$$/$$REGION$$/region.hcl
+ touch $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3/terragrunt.hcl
+ ```
2. Add the following content to the `region.hcl` file created earlier.
- ```hcl title="$$ACCOUNT_NAME$$/$$REGION$$/region.hcl"
- locals {
- aws_region = "$$REGION$$"
- }
- ```
+ ```hcl title="$$ACCOUNT_NAME$$/$$REGION$$/region.hcl"
+ locals {
+ aws_region = "$$REGION$$"
+ }
+ ```
3. Add the Terragrunt code below to the newly created `terragrunt.hcl` file to define the S3 bucket. Replace $$S3_BUCKET_NAME$$ with your desired bucket name. Ensure the bucket name is globally unique.
- ```hcl title="$$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3/terragrunt.hcl"
- # ------------------------------------------------------------------------------------------------------
- # DEPLOY GRUNTWORK's S3-BUCKET MODULE
- # ------------------------------------------------------------------------------------------------------
+ ```hcl title="$$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3/terragrunt.hcl"
+ # ------------------------------------------------------------------------------------------------------
+ # DEPLOY GRUNTWORK's S3-BUCKET MODULE
+ # ------------------------------------------------------------------------------------------------------
- terraform {
- source = "git::git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/data-stores/s3-bucket?ref=v0.116.1"
- }
+ terraform {
+ source = "git::git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/data-stores/s3-bucket?ref=v0.116.1"
+ }
- include "root" {
- path = find_in_parent_folders("root.hcl")
- }
+ include "root" {
+ path = find_in_parent_folders("root.hcl")
+ }
- inputs = {
- primary_bucket = "$$S3_BUCKET_NAME$$"
- }
- ```
+ inputs = {
+ primary_bucket = "$$S3_BUCKET_NAME$$"
+ }
+ ```
@@ -94,73 +94,73 @@ The default `bootstrap` Terragrunt stack provided in the installation guide incl
1. Create the folder structure for the new Resource Group and Storage Account in your environment. Replace $$SUBSCRIPTION_NAME$$ with the subscription name you are deploying to, $$LOCATION$$ with the Azure location where the resources will be deployed, and $$RESOURCE_GROUP_NAME$$ with your desired resource group name.
- ```bash
- mkdir -p $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group
- mkdir -p $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account
- touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/region.hcl
- touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group/terragrunt.hcl
- touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account/terragrunt.hcl
- ```
+ ```bash
+ mkdir -p $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group
+ mkdir -p $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account
+ touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/region.hcl
+ touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group/terragrunt.hcl
+ touch $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account/terragrunt.hcl
+ ```
2. Add the following content to the `region.hcl` file created earlier.
- ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/region.hcl"
- locals {
- azure_location = "$$LOCATION$$"
- }
- ```
+ ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/region.hcl"
+ locals {
+ azure_location = "$$LOCATION$$"
+ }
+ ```
3. Add the Terragrunt code below to define the Resource Group.
- ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group/terragrunt.hcl"
- # ------------------------------------------------------------------------------------------------------
- # DEPLOY GRUNTWORK's AZURE RESOURCE GROUP MODULE
- # ------------------------------------------------------------------------------------------------------
+ ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/resource-group/terragrunt.hcl"
+ # ------------------------------------------------------------------------------------------------------
+ # DEPLOY GRUNTWORK's AZURE RESOURCE GROUP MODULE
+ # ------------------------------------------------------------------------------------------------------
- include "root" {
- path = find_in_parent_folders("root.hcl")
- }
+ include "root" {
+ path = find_in_parent_folders("root.hcl")
+ }
- terraform {
- source = "github.com/gruntwork-io/terragrunt-scale-catalog//modules/azure/resource-group?ref=v1.0.0"
- }
+ terraform {
+ source = "github.com/gruntwork-io/terragrunt-scale-catalog//modules/azure/resource-group?ref=v1.0.0"
+ }
- inputs = {
- name = "$$RESOURCE_GROUP_NAME$$"
- location = "$$LOCATION$$"
- }
- ```
+ inputs = {
+ name = "$$RESOURCE_GROUP_NAME$$"
+ location = "$$LOCATION$$"
+ }
+ ```
4. Add the Terragrunt code below to define the Storage Account with a dependency on the Resource Group. Replace $$STORAGE_ACCOUNT_NAME$$ with your desired storage account name. Ensure the name is unique and follows Azure naming conventions (lowercase letters and numbers only, 3-24 characters).
- ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account/terragrunt.hcl"
- # ------------------------------------------------------------------------------------------------------
- # DEPLOY GRUNTWORK's AZURE STORAGE ACCOUNT MODULE
- # ------------------------------------------------------------------------------------------------------
+ ```hcl title="$$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$/data-storage/storage-account/terragrunt.hcl"
+ # ------------------------------------------------------------------------------------------------------
+ # DEPLOY GRUNTWORK's AZURE STORAGE ACCOUNT MODULE
+ # ------------------------------------------------------------------------------------------------------
- include "root" {
- path = find_in_parent_folders("root.hcl")
- }
+ include "root" {
+ path = find_in_parent_folders("root.hcl")
+ }
- terraform {
- source = "github.com/gruntwork-io/terragrunt-scale-catalog//modules/azure/storage-account?ref=v1.0.0"
- }
+ terraform {
+ source = "github.com/gruntwork-io/terragrunt-scale-catalog//modules/azure/storage-account?ref=v1.0.0"
+ }
- dependency "resource_group" {
- config_path = "../../resource-group"
+ dependency "resource_group" {
+ config_path = "../../resource-group"
- mock_outputs = {
- name = "mock-name"
- }
- }
+ mock_outputs = {
+ name = "mock-name"
+ }
+ }
- inputs = {
- name = "$$STORAGE_ACCOUNT_NAME$$"
- location = "$$LOCATION$$"
+ inputs = {
+ name = "$$STORAGE_ACCOUNT_NAME$$"
+ location = "$$LOCATION$$"
- resource_group_name = dependency.resource_group.outputs.name
- }
- ```
+ resource_group_name = dependency.resource_group.outputs.name
+ }
+ ```
@@ -180,7 +180,7 @@ Once the workflow completes, Pipelines will post a comment on the PR summarizing

-Click the *View full logs* link to see the complete output of the Gruntwork Pipelines run. Locate the *TerragruntExecute* step to review the full `terragrunt plan` generated by your changes.
+Click the _View full logs_ link to see the complete output of the Gruntwork Pipelines run. Locate the _TerragruntExecute_ step to review the full `terragrunt plan` generated by your changes.

@@ -194,7 +194,7 @@ Click the *View full logs* link to see the complete output of the Gruntwork Pipe
After creating the MR, GitLab CI/CD will automatically execute the pipeline defined in `.gitlab-ci.yml` in your project.
Once the pipeline completes, Pipelines will post a comment on the MR summarizing the `terragrunt plan` output along with a link to the pipeline logs.
-Click the *View Pipeline Logs* link to see the complete output of the Gruntwork Pipelines run. Select the *plan* job to review the full `terragrunt plan` generated by your changes.
+Click the _View Pipeline Logs_ link to see the complete output of the Gruntwork Pipelines run. Select the _plan_ job to review the full `terragrunt plan` generated by your changes.
@@ -236,7 +236,7 @@ To monitor the pipeline run associated with the merged MR:
Congratulations! You have successfully used Gruntwork Pipelines and a GitOps workflow to provision cloud storage.
-
+
To verify the S3 bucket creation, visit the AWS Management Console and check the S3 service for the bucket.
diff --git a/docs/2.0/docs/pipelines/tutorials/destroying-infrastructure.mdx b/docs/2.0/docs/pipelines/tutorials/destroying-infrastructure.mdx
index 00265d936..d2e6cf2ae 100644
--- a/docs/2.0/docs/pipelines/tutorials/destroying-infrastructure.mdx
+++ b/docs/2.0/docs/pipelines/tutorials/destroying-infrastructure.mdx
@@ -1,6 +1,6 @@
# Destroying Infrastructure with Pipelines
-import CustomizableValue from '/src/components/CustomizableValue';
+import CustomizableValue from "/src/components/CustomizableValue"
import Tabs from "@theme/Tabs"
import TabItem from "@theme/TabItem"
@@ -27,7 +27,7 @@ This section explains how to destroy cloud resources using Pipelines and GitOps
### Delete the infrastructure code
-
+
:::caution Permissions Required
@@ -44,9 +44,9 @@ The default `bootstrap` Terragrunt stack provided in the installation guide incl
1. Remove the folder containing the infrastructure code for the resource you want to destroy. For the S3 bucket example, delete the folder containing the S3 bucket code. Replace $$ACCOUNT_NAME$$ and $$REGION$$ with the appropriate values.
- ```bash
- rm -rf $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3
- ```
+ ```bash
+ rm -rf $$ACCOUNT_NAME$$/$$REGION$$/data-storage/s3
+ ```
2. Create a new branch, commit the changes, and push the branch to your repository.
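
   For example (hypothetical branch name and commit message):

   ```bash
   git checkout -b destroy-s3-bucket
   git add .
   git commit -m "Remove the example S3 bucket"
   git push origin destroy-s3-bucket
   ```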
@@ -65,9 +65,9 @@ The default `bootstrap` Terragrunt stack provided in the installation guide incl
1. Remove the folder containing the infrastructure code for the resources you want to destroy. For the Resource Group and Storage Account example, delete the folder containing all the resource group code. Replace $$SUBSCRIPTION_NAME$$, $$LOCATION$$, and $$RESOURCE_GROUP_NAME$$ with the appropriate values.
- ```bash
- rm -rf $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$
- ```
+ ```bash
+ rm -rf $$SUBSCRIPTION_NAME$$/$$LOCATION$$/resource-groups/$$RESOURCE_GROUP_NAME$$
+ ```
2. Create a new branch, commit the changes, and push the branch to your repository.
@@ -94,7 +94,7 @@ Create a Merge Request (MR) for the branch you just pushed, targeting `main` (th
Gruntwork Pipelines, via GitLab CI/CD, will detect the removal of the infrastructure unit's code and trigger a `plan` action, which will display the destructive changes to be made to your cloud environment.
-Click the *View Pipeline Logs* link to see the complete output of the destroy plan.
+Click the _View Pipeline Logs_ link to see the complete output of the destroy plan.
@@ -107,7 +107,7 @@ Approve and merge the pull/merge request to trigger the apply action, permanentl
Congratulations! You have successfully destroyed cloud resources using Gruntwork Pipelines and GitOps workflows.
-
+
To verify the S3 bucket has been destroyed, check the AWS Management Console and confirm the bucket no longer exists in the S3 service.