Error: The security token included in the request is invalid - when AWS key/secret changes between GHA jobs #340
Comments
The same error happens when the GitHub OIDC provider is used. Using different session names and cleaning all AWS_ envs from $GITHUB_ENV does not help. Is calling configure-aws-credentials multiple times with different roles within the same job supported?
I'm facing the same issue even with a much simpler setup...
+1
I had a slightly different scenario but experienced a similar problem. As a means to potentially help future engineers, I'll be a bit more thorough here. My scenario was, within a single job, configuring AWS credentials more than once. I experienced a range of issues, each of which then caused the next. In summary, I eventually arrived at a setup that worked for me. I hope this helps people coming here in the future!
I was able to assume a different role by setting AWS env vars to null:
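For illustration, a minimal sketch of that approach (the role ARN, region, and step name are placeholders): give the second configure-aws-credentials step its own env block with the AWS_* variables set to empty, so it does not reuse the credentials exported by the first step.

```yaml
# Sketch only: role ARN, region, and step name are placeholders.
- name: Assume a different role
  uses: aws-actions/configure-aws-credentials@v1-node16
  env:
    # Blank values override the credentials exported by the previous configure step
    AWS_ACCESS_KEY_ID:
    AWS_SECRET_ACCESS_KEY:
    AWS_SESSION_TOKEN:
    AWS_DEFAULT_REGION:
    AWS_REGION:
  with:
    role-to-assume: arn:aws:iam::123456789012:role/SecondRole
    aws-region: eu-west-1
```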
hey, I have a similar problem and unfortunately I wasn't able to solve it by setting env vars to null, as someone suggested in this thread. Here's what happens to me:

```yaml
- name: Configure AWS Credentials with GitHub OpenID STS
  uses: aws-actions/configure-aws-credentials@v1-node16
  with:
    role-to-assume: arn:aws:iam::123456:role/GitHub_OpenID
    aws-region: eu-west-1
- name: Do stuff
  run: do_stuff_with_1st_role.sh
- name: cleanup
  run: |
    unset AWS_DEFAULT_REGION
    unset AWS_REGION
    unset AWS_ACCESS_KEY_ID
    unset AWS_SECRET_ACCESS_KEY
    unset AWS_SESSION_TOKEN
- name: Configure AWS Credentials again with IAM user
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ }}
    aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
    aws-region: ${{ env.AWS_DEFAULT_REGION_ }}
- name: Do some more stuff
  run: do_stuff_with_2nd_role.sh
```

When I run the second "Configure AWS Credentials" step it fails with a similar error. I don't have a solution for this yet, but anyone who can suggest a fix is more than welcome!
@grudelsud Apparently setting the envs explicitly to empty values on the step itself (rather than unsetting them in a separate run step) does the trick:

```yaml
- name: Configure AWS Credentials again with IAM user
  uses: aws-actions/configure-aws-credentials@v1
  env:
    AWS_DEFAULT_REGION:
    AWS_REGION:
    AWS_ACCESS_KEY_ID:
    AWS_SECRET_ACCESS_KEY:
    AWS_SESSION_TOKEN:
  with:
    aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ }}
    aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
    aws-region: ${{ env.AWS_DEFAULT_REGION_ }}
```
hey @rjeczalik thanks for your reply. I had actually tried to set it as suggested, but I get an error from the runner when running the "aws configure credentials" step.
@grudelsud Then it works correctly, since it did not reuse your prior credentials. Now you just need to fix the input arguments in the with: object, since they are wrong, and you're done.
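For illustration, a corrected with: block might look like the sketch below; it assumes the key and secret are stored as repository secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (these names are placeholders), rather than referencing env vars with a trailing underscore that are never set:

```yaml
# Sketch: assumes repository secrets named AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY exist.
- name: Configure AWS Credentials again with IAM user
  uses: aws-actions/configure-aws-credentials@v1
  env:
    # Blank values keep the previous step's exported credentials out of this step
    AWS_DEFAULT_REGION:
    AWS_REGION:
    AWS_ACCESS_KEY_ID:
    AWS_SECRET_ACCESS_KEY:
    AWS_SESSION_TOKEN:
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-1
```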
whoops, was it just a misspelled variable name? embarrassing 😊 thanks @rjeczalik. I have to double-check my script, as I was sure it was correctly set and now, after making a few changes, not so much. I'll report back should I spot further trouble. Many thanks for your checks.
ok, after doing quite a few tests I thought it would be useful to share my findings on this thread, especially because similar problems are reported in #383 and #423.

My flow needs 2 separate configurations for AWS roles and users: the first call is used to retrieve some secrets from our vault, and the second call is used to execute deployment scripts. The odd thing I noticed is that after configuring my credentials, our terraform script worked fine, while the following step, which runs the aws cli to upload an updated lambda, threw the error. Eventually, I solved the problem by manually setting all aws variables to null, apart from the region, access key, and secret, in the step that threw the error.

Below is a snippet that works for me. Hope this helps, and thanks everyone for your suggestions!

```yaml
- name: Authenticated on OpenID identity provider to get AWS tokens
  uses: aws-actions/configure-aws-credentials@v1-node16
  with:
    role-to-assume: arn:aws:iam::123456:role/GitHub_OpenID
    aws-region: eu-west-1
- name: create deploy environment using role token
  run: |
    make retrieve_secrets_from_vault
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1-node16
  env:
    AWS_DEFAULT_REGION:
    AWS_REGION:
    AWS_ACCESS_KEY_ID:
    AWS_SECRET_ACCESS_KEY:
    AWS_SESSION_TOKEN:
  with:
    aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID_ }}
    aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
    aws-region: ${{ env.REGION }}
- name: do stuff with terraform (uses aws and works without extra env)
  run: |
    make stuff_with_terraform
- name: do stuff with aws cli (this step throws error if env vars aren't explicitly set)
  env:
    AWS_DEFAULT_REGION: ${{ env.REGION }}
    AWS_REGION: ${{ env.REGION }}
    AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID_ }}
    AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY_ }}
    AWS_SESSION_TOKEN:
    AWS_ROLE_ARN:
    AWS_WEB_IDENTITY_TOKEN_FILE:
    AWS_ROLE_SESSION_NAME:
  run: |
    make stuff_with_aws_cli
```

ain't pretty but it works 😄
I added the …
bro you saved my life :) that's crazy, with this naming and the error message saying nothing
worked for me too
Saved my assignment, thanks a bunch
It seems to me that the OP of this issue has a different issue than any of the commenters had.

First, to address the commenters who are concerned with multiple invocations in a single job: as the snippets above show, the second invocation needs a different combination of inputs and step-level env overrides so that it does not silently reuse the credentials exported by the first invocation.

As for the OP: different jobs run on different containers, so the credentials in one runner shouldn't impact the credentials of another. I'd need to look into whether the values of inputs/secrets are set at the initialization of the runner or at the start of the workflow step. I'd guess initialization of the runner, in which case you'd need to have the secret properly set at the time the job starts.
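To make the cross-job case concrete, a rough sketch of the ordering that advice implies is shown below. It assumes the guess above is right that secrets are read when each job's runner initializes; the job names, rotation script, and secret names are placeholders, not anything from this issue.

```yaml
# Sketch only: job names, the rotation script, and secret names are placeholders,
# and it assumes each job reads its secrets when its runner starts.
name: rotate-then-deploy
on: workflow_dispatch
jobs:
  rotate-key:
    runs-on: ubuntu-latest
    steps:
      - name: Rotate the AWS access key and push it to GitHub Secrets
        run: ./rotate_and_push_secret.sh   # hypothetical rotation script
  deploy:
    needs: rotate-key   # does not start until rotate-key has finished
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
```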
This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.
Hi,
My workflow - for purposes of testing - looks like this:
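A minimal sketch of that kind of test workflow, reconstructed from the description below; the trigger, the `needs:` dependency, the region, and the final check step are assumptions:

```yaml
# Sketch reconstructed from the description below; trigger, region, and steps are placeholders.
name: token-test
on: workflow_dispatch
jobs:
  devops1:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
      - name: Wait while the key is rotated out of band
        run: sleep 60
  devops2:
    needs: devops1   # assumed ordering so devops2 runs after the rotation
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
      - name: Make an arbitrary AWS call with the new key
        run: aws sts get-caller-identity
```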
While devops1 is in the 60 second sleep, I generate a new AWS access key and secret and put these into GitHub Secrets. When devops2 runs I get the "security token included in the request is invalid" error from the title.
This may seem like an odd thing to be doing, but the reason is that my actual workflow (not this test workflow) is rotating AWS access keys and pushing them to GitHub Secrets. I have one AWS key that I rotate first, and then the other keys are rotated using this key. But this fails with the error above.
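For illustration, a rotation step of the kind described might look like the sketch below; the IAM user name, secret names, use of the GitHub CLI, and the PAT are all assumptions, not the actual script:

```yaml
# Sketch of a rotation step; user name, secret names, and tooling are assumptions.
- name: Rotate the bootstrap access key and store it in GitHub Secrets
  env:
    GH_TOKEN: ${{ secrets.GH_PAT }}   # assumed PAT with permission to set repo secrets
  run: |
    # Create a new access key for the rotation user (AWS CLI)
    creds=$(aws iam create-access-key --user-name github-rotator \
              --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text)
    new_id=$(echo "$creds" | cut -f1)
    new_secret=$(echo "$creds" | cut -f2)
    # Push the new key pair to repository secrets (GitHub CLI)
    gh secret set AWS_ACCESS_KEY_ID --body "$new_id"
    gh secret set AWS_SECRET_ACCESS_KEY --body "$new_secret"
```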
It looks like a token is left from the previous key, and the new key then fails due to this old token.
Is there a way to clear the old token?