Assuming multiple roles for a Terraform deployment #424
Comments
Facing the same issue and would love to find a solution.
I can provide a suitable solution for the multiple regions & multiple accounts.
This is how I manage in my pipeline:
And in my terraform:
We do Assume Role twice to manage multiple provider situations like this case.
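A minimal sketch of what such a double assume-role setup can look like in Terraform (the commenter's actual code was not included; all ARNs, aliases, and names below are hypothetical):

```hcl
# Hypothetical sketch: the credentials exported by the workflow (e.g. by
# this action) are the base identity; each aliased provider then performs
# its own sts:AssumeRole on top of them.
provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::111111111111:role/prod-deploy" # hypothetical ARN
    session_name = "terraform-prod"
  }
}

provider "aws" {
  alias  = "dev"
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::222222222222:role/dev-deploy" # hypothetical ARN
    session_name = "terraform-dev"
  }
}

# Resources select the target account via the provider alias.
resource "aws_s3_bucket" "prod_logs" {
  provider = aws.prod
  bucket   = "example-prod-logs" # hypothetical name
}
```

Because each aliased provider assumes its role independently, one set of base credentials can fan out to several accounts in a single plan/apply.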
If you could provide example code, that would be awesome, please.
Hello, I recently encountered this same issue. Is there any update on a fix?
@CyberViking949 This advice worked for me to assume multiple roles: #636 (comment)
Thanks @Constantin07, however this requires static access keys to be set up. The whole reason I was leveraging this action was to use the GitHub OIDC provider in AWS, so I'm assuming a role in an identity account in order to assume a role in a prod/dev account, all using ephemeral tokens: Action assume role --> Identity role (this action) --> backend role for S3 state files. The backend role is assumed properly and state is pulled. However, plan/apply is not using the role defined in the provider and is instead using the role from the identity account.
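For context on the behaviour described above: the S3 backend's `role_arn` and the provider's `assume_role` are configured independently, so setting only the backend role leaves plan/apply running as whatever credentials the action exported. A hedged sketch of the distinction (bucket, key, and ARNs are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket   = "example-tf-state"      # hypothetical bucket
    key      = "elb/terraform.tfstate" # hypothetical key
    region   = "us-east-1"
    # Only state reads/writes use this role.
    role_arn = "arn:aws:iam::111111111111:role/state-backend" # hypothetical ARN
  }
}

provider "aws" {
  region = "us-east-1"

  # Without this block, plan/apply runs as the identity-account role the
  # action configured; with it, Terraform assumes the target-account role
  # on top of those credentials.
  assume_role {
    role_arn = "arn:aws:iam::333333333333:role/prod-deploy" # hypothetical ARN
  }
}
```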
I'm standing on the shoulders of giants with this, but here is something that I whipped up to meet my use case: https://github.com/marketplace/actions/configure-aws-profile
Comments on closed issues are hard for our team to see. |
Hi @peterwoodworth, I disagree that #112 will fix this issue. #112 uses profiles, not IAM Roles. Depending on your setup, that would result in a very long pipeline config file and lots and lots of GitHub Secrets to configure, which isn't practical. If we take the #112 example:

What I propose is a way to support multiple AWS authentications using IAM Roles.
Thanks @lpossamai, I see why profiles don't solve this for you. I'm curious to know more about how exactly you're using this action within your workflow, and what exactly you're doing in Terraform. I'm unfamiliar with Terraform: is there one command that you're running in one step, and you need to be able to assume multiple roles at once for this one Terraform command to work?
Hi @peterwoodworth, thanks for your prompt reply. TBH, I have changed the way I use Terraform and authenticate with AWS, so this issue is no longer relevant for me and I cannot replicate it anymore. Looking at this further, I realize now that the limitation I was facing is not something that needs to be, or can be, fixed by the maintainers of this action.

A little background for further reference. Before the change I made, I was using GitHub Actions to deploy my infrastructure to AWS with Terraform. A sample code would be:

```hcl
# terraform/elb/main.tf
resource "aws_lb" "alb" {
  count = terraform.workspace == "test" || terraform.workspace == "staging" ? 1 : 0

  name                       = "example-${terraform.workspace}-alb"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.alb[count.index].id]
  subnets                    = data.terraform_remote_state.network.outputs.public_subnets
  idle_timeout               = 300
  enable_deletion_protection = true
  enable_http2               = true
  preserve_host_header       = true
  drop_invalid_header_fields = true

  access_logs {
    bucket  = module.alb_log_bucket[count.index].s3_bucket_id
    prefix  = terraform.workspace
    enabled = true
  }

  tags = merge({
    Environment = terraform.workspace
  }, var.tags)
}
```

The GitHub workflow for that particular folder would look like this:

```yaml
jobs:
  ELB-TEST:
    name: "ELB-TEST"
    runs-on: ubuntu-latest
    environment: test
    env:
      TF_VAR_iam_role_to_assume_test: ${{ secrets.iam_role_to_assume_test }}
      ENVIRONMENT: test
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_test }}
          role-session-name: github-ELB-test
          aws-region: ${{ env.AWS_REGION }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive
      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"
      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true
  ELB-STAGING:
    name: "ELB-STAGING"
    runs-on: ubuntu-latest
    needs: ELB-TEST
    environment: staging
    env:
      TF_VAR_iam_role_to_assume_staging: ${{ secrets.iam_role_to_assume_staging }}
      ENVIRONMENT: staging
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_staging }}
          role-session-name: github-ELB-staging
          aws-region: ${{ env.AWS_REGION }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive
      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"
      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true
```

So not very good, as I would have to have a Job for each of my environments and for each of my Terraform folders. What I ended up doing was: […]

This allows me to deploy to multiple accounts now in the same PR using […]. Safe to close this issue now. Thanks!
Sorry, I didn't read all the comments before replying. #112 covers my requirements.
Hello,

I have a question on how I can use `configure-aws-credentials` to assume multiple roles so that my Terraform `provider.tf` file can apply all the necessary changes to multiple accounts.

Example: in my `PROD` workspace, I need to deploy to the `TEST` and `DEV` workspaces. In my `provider.tf` file I have the following:

In my GitHub Actions workflow I have the following:

But that gives me an error, because GitHub didn't have permissions to assume the other two roles, `staging` and `test`.

Is there a workaround for this? Any suggestions are welcome.

Thanks!
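The "didn't have permissions to assume the other two roles" error usually points at the trust policy of the target roles rather than at this action. As a hedged illustration (role name and ARN are hypothetical), the role in each target account would need to trust the role the workflow assumed:

```hcl
# Hypothetical sketch: the role in the target (staging/test) account must
# trust the role that the GitHub workflow assumed; otherwise Terraform's
# assume_role call fails with an access-denied error.
resource "aws_iam_role" "staging_deploy" {
  name = "staging-deploy" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        # Role configured by aws-actions/configure-aws-credentials (hypothetical ARN)
        AWS = "arn:aws:iam::111111111111:role/github-actions"
      }
    }]
  })
}
```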