Release v1.17.0 of argocd-vault-plugin introduced Vault token caching, which causes a bug when the plugin is used with more than a single set of credentials.
The first request "A" caches the credentials of AppRole "A" to ~/.avp/config.json and succeeds, but the second request "B", which requires different credentials for AppRole "B", then fails because it reuses the cached credentials of "A" from ~/.avp/config.json.
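To make the failure mode concrete, here is a minimal, self-contained simulation of the single-slot cache behaviour described above. The file layout and token strings are made up for illustration; this is not argocd-vault-plugin's actual code:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulated single-slot cache: stores one token with no record of which
# AppRole produced it (the assumed behaviour behind the bug above).
cache=$(mktemp -d)/config.json

get_token() {  # $1 = AppRole id
  if [[ -f "${cache}" ]]; then
    cat "${cache}"                        # cache hit: returned for ANY role
  else
    echo "token-for-$1" | tee "${cache}"  # cache miss: "log in" and store
  fi
}

get_token A  # caches and prints token-for-A
get_token B  # prints token-for-A again: wrong credentials for AppRole B
```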
Our wrapper with multiple AppRoles
We have a custom wrapper script around argocd-vault-plugin which attempts to remove ~/.avp/config.json before executing argocd-vault-plugin. But there's a race condition: with multiple processes running simultaneously, another process B might recreate the cache in ~/.avp/config.json (with different credentials) before the original process A gets to execute its request.
Here's our wrapper around argocd-vault-plugin. We request an AppRole "ArgoCD" which can only retrieve further credentials but not request the actual application secrets. AppRole "ArgoCD" then requests credentials for AppRole "AppProject A", which argocd-vault-plugin then uses to retrieve the application-level credentials.
The purpose is to have a layer between "Vault secrets accessible by applications managed by any developer" and "Vault secrets accessible only by cluster admins". A manifest in "AppProject B" cannot use Vault secrets of "AppProject C".
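For illustration, this per-project scoping could be enforced with a Vault policy along these lines. The policy name and secret paths here are hypothetical, not taken from our actual layout:

```shell
# Hypothetical example: write a policy scoped to a single Application Project.
vault policy write app-project-b - <<'EOF'
# The AppRole for "AppProject B" may read only its own project's secrets;
# a manifest referencing "AppProject C" secrets is denied.
path "secret/data/app-project-b/*" {
  capabilities = ["read"]
}
EOF
```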
```bash
#!/usr/bin/env bash
eval "$(sentry-cli bash-hook)"
set -euo pipefail

# https://argocd-vault-plugin.readthedocs.io/en/stable/config/
# https://argo-cd.readthedocs.io/en/stable/user-guide/build-environment/

# Get the ArgoCD Project name of the current ArgoCD Application (ARGOCD_APP_NAME)
#
# Avoid kubectl client-side throttling by providing --cache-dir
# (~/.kube is read-only and a throttled kubectl will take seconds longer to run)
AG_APP_PROJECT_NAME=$(kubectl --cache-dir=/tmp/kubectl-cache \
  get app "${ARGOCD_APP_NAME}" \
  -o "jsonpath={.spec.project}" \
)

# Retrieve token credentials of the privileged AppRole,
# which can read the stored AppRole credentials of each application
# but can not read the application-level secrets.
VAULT_TOKEN=$(vault write \
  -field=token \
  -namespace="${VAULT_NAMESPACE}" \
  auth/approle/login \
  role_id="${VAULT_APPROLE_ROLE_ID}" \
  secret_id="${VAULT_APPROLE_SECRET_ID}")
export VAULT_TOKEN

# Scoped Vault access: the AppRole can now only access secrets which belong
# to the same Application Project (i.e. Application > spec.project).
# If an application references a secret of another Application Project,
# access is denied unless that AppRole has explicitly been granted access.
AVP_ROLE_ID=$(vault kv get \
  -field=approle-id \
  "project-approles/${AG_APP_PROJECT_NAME}")
export AVP_ROLE_ID

AVP_SECRET_ID=$(vault kv get \
  -field=approle-secret \
  "project-approles/${AG_APP_PROJECT_NAME}")
export AVP_SECRET_ID

if [[ -z ${AVP_ROLE_ID} || -z ${AVP_SECRET_ID} ]]; then
  echo >&2 "$0: Failed to fetch either AppRole id or AppRole secret"
  exit 1
fi

# Forget the credentials of the privileged AppRole
unset VAULT_TOKEN

# argocd-vault-plugin v1.17.0 introduced token caching, which breaks when
# different Vault AppRoles are being used.
#
# There's a race condition here when multiple processes are running this
# script concurrently: another process might create ~/.avp/config.json
# before this process executes argocd-vault-plugin.
rm -f "${HOME}/.avp/config.json"

# Generate the manifest with the scoped AVP_ROLE_ID and AVP_SECRET_ID
argocd-vault-plugin "$@"
```
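One way we could sidestep both the shared cache and the race condition is to give each wrapper invocation its own private HOME, so that ~/.avp/config.json is never shared between processes. A minimal sketch, where the final command is only a stand-in for `argocd-vault-plugin "$@"`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Each invocation gets a throwaway HOME, so the token cache at
# ${HOME}/.avp/config.json is private to this process and cannot be
# clobbered by a concurrent run using different AppRole credentials.
private_home=$(mktemp -d)
trap 'rm -rf "${private_home}"' EXIT

# Stand-in for: HOME="${private_home}" argocd-vault-plugin "$@"
HOME="${private_home}" sh -c 'mkdir -p "${HOME}/.avp" && echo "private cache dir: ${HOME}/.avp"'
```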
Token caching does not work with different sets of Vault AppRoles (only a single token is cached)

Summary: We wish for the ability to turn off Vault token caching (disable ~/.avp/config.json), e.g. as a command-line flag for argocd-vault-plugin.

Release v1.17.0 implemented a cache for the Vault tokens.