OIDC ID Token, Authorization Headers, Refreshing and Verification #621
Conversation
Testing this now... With the new PRs I can reliably reproduce your merged state in the pusher:kubernetes branch from the individual PRs. The refresh seems to be working well now with your additional changes... for testing I have it expiring every 10 minutes, and it's been reliably and seamlessly refreshing each time it expires. Great stuff @JoelSpeed, hope the maintainers take this merge in.
Hi @JoelSpeed, when using your image, is it maybe missing CA certificates? Btw, thanks for the great work! This is exactly what I was looking for.
@danielm0hr There are deliberately no CA certificates in my image; we mount them into the container from our hosts.
@JoelSpeed Thanks. I have added the CA certificates to the image.
As I've said, we deliberately mount the certificates from our hosts, so I won't be updating my image to add them. Given CA certificates need to be kept up to date, we find it easier to maintain a single copy on the host and mount that into multiple containers rather than rebuilding each image regularly to keep the certificates current. YMMV.
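(For illustration, a minimal sketch of the host-mount approach — a deployment fragment; the Debian-style certificate paths are an assumption:)

```yaml
# Fragment of a pod spec: mount the host's CA bundle read-only into the container.
spec:
  containers:
    - name: oauth2-proxy
      image: quay.io/joelspeed/kubernetes-3  # image referenced in this PR
      volumeMounts:
        - name: ca-certs
          mountPath: /etc/ssl/certs  # one of the locations Go's TLS stack checks on Linux
          readOnly: true
  volumes:
    - name: ca-certs
      hostPath:
        path: /etc/ssl/certs
```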
I see your point. But does it really hurt to have the additional 256 KiB of CA certificates in the image? You could still replace the path with your host mount, couldn't you? But it would enable people to use the image without any additional precautions. E.g. I'm currently using the official oauth2_proxy Helm chart with this image; the chart would also have to be changed to enable mounting CA certificates from the host.
Hello @JoelSpeed. Thanks a lot for contributing this. I'm looking to leverage your changes and have deployed your image of oauth2_proxy with the configuration you partially outlined in your article on The New Stack (great series, BTW). I have a setup with dex just like in the article. The OIDC dance between oauth2_proxy and dex works, and the OAuth flow between dex and GitHub Enterprise in my case also works. For some reason, however, in the last leg of the redirects, oauth2_proxy is not including the Authorization header in the response as per your change, even though I set the option to enable it. The only log line I see from oauth2_proxy is one that seems to indicate the id_token was picked up successfully.
Any suggestions on how to debug this further, or could you perhaps share your YAML file for the oauth2_proxy deployment in your test environment? Thanks in advance!
Hi @luispollo Glad you enjoyed the series 😄 I would recommend first checking the response coming back from the proxy's auth endpoint. If you see the Authorization header set there, you should be able to retrieve the bearer JWT and check that it is indeed valid. If the header isn't there, then there is a problem with your OAuth2 Proxy configuration. If you can see the header there, then it would suggest to me that you aren't getting the authorization header passed on correctly; double-check your ingress object, it should have the appropriate configuration snippet.
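(For reference, a sketch of the kind of ingress-nginx annotations meant here — the hostnames are placeholders; `auth-response-headers` is what copies the Authorization header from the auth subrequest onto the request sent upstream:)

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: Authorization
```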
Thanks @JoelSpeed. You nailed it. It was the annotation prefix I was missing. Things are looking good now, except I ran into #580. The OIDC provider is currently not including the groups claim in the id_token.
@luispollo Have you tried overriding the scope? In my config file I have:
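(Something along these lines — the exact scope list here is an assumption, not the original snippet:)

```
scope = "openid profile email groups"
```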
This is sent to Dex when making the request. By default the scope is only the basic OpenID scopes. If you set your scope to include groups, you should get the groups from the ID token through to the dashboard.
You beat me to it, @JoelSpeed. I just found out you could override scopes. 😄 Thanks so much for contributing this! I hope the maintainers will review and merge this PR soon.
@JoelSpeed I reiterate the thanks on contributing this, although I'm also seeking your help. I have leveraged your implementation and altered it slightly to create a manual OIDC provider implementation for providers that don't implement the introspection endpoint (https://github.com/tlawrie/oauth2_proxy/tree/session-refresh). However, I am getting a 502 from the NGINX ingress on Kubernetes. This occurs when testing the cookie refresh; I have not yet gotten to testing the 12 hour session refresh. Any thoughts on what could be causing it? I added a lot of extra debug logging and I know that it's performing a save session after the session timeout.
@tlawrie I've seen this a few times since implementing the refreshing for OIDC. For us, it was the fact that Grafana would send about 40 requests simultaneously as it refreshed; all 40 hit the oauth2_proxy and all triggered a refresh, and the OIDC provider that the proxy backed onto returned an internal server error because it was receiving too many requests to refresh the same token simultaneously. To mitigate this, I've created some caching within the ingress controller, see the gist. What this does is cache the response from the upstream oauth2_proxy for three seconds; when the next batch of requests comes through, nginx sends a single request to the oauth2_proxy and serves that response to all requests made to it. Just point your ingress auth-url at the caching server. You'll need to change the host on lines 30 and 57 and the cookie name on line 43 if you want to make use of this.
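(For illustration, a minimal sketch of this kind of auth-response caching — the hosts, cache zone name, and cookie name are assumptions, with `_oauth2_proxy` being the proxy's default cookie name:)

```nginx
# In the http context: a small cache for auth responses (zone name and sizes are arbitrary).
proxy_cache_path /tmp/oauth2_cache levels=1:2 keys_zone=auth_cache:10m max_size=128m;

server {
    listen 80;
    server_name auth-cache.example.com;          # placeholder host

    location = /oauth2/auth {
        proxy_pass http://oauth2-proxy.example.com;  # placeholder upstream
        proxy_cache auth_cache;
        # Key the cache on the session cookie so each user gets their own entry.
        proxy_cache_key $cookie__oauth2_proxy;
        # Serve the cached response for 3s so a burst of requests triggers one refresh.
        proxy_cache_valid 200 3s;
        proxy_pass_request_body off;             # auth subrequests carry no body
        proxy_set_header Content-Length "";
    }
}
```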
@JoelSpeed thanks for the information, I am going to give that a try very soon. One clarification: I currently use both the auth-url and auth-signin ingress annotations for the protected paths, and these currently point to an ingress that is defined for the oauth2_proxy. If I add this custom snippet in the ConfigMap, I will then have conflicts with the defined Kubernetes ingress and service definitions, which achieve a similar implementation without the cache. What is the best way to implement this, either without the ingresses or without the ConfigMap? Edit: After implementing I get '500 Internal Server Error'.
@tlawrie I would just try to add the cache parts of the gist (L41-L53) as a snippet on your existing ingress configuration.
@JoelSpeed many thanks for this change. Right now I'm using a fork of our own that includes this PR (among others) with nginx-ingress and Keycloak, and our session info gets big enough to be split into two cookies. When refreshing the token during an /auth request, it seems nginx strips all newly set cookies except for the first one, which invalidates the session, as it now consists of two inconsistent cookies. nginx-ingress basically does
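(roughly the following — a paraphrase of what the controller generates, not an exact copy:)

```nginx
# Capture the auth subrequest's Set-Cookie and replay it on the client response.
auth_request /_external-auth;                              # location name is illustrative
auth_request_set $auth_cookie $upstream_http_set_cookie;   # only captures the FIRST Set-Cookie
add_header Set-Cookie $auth_cookie;
```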
which seems to only forward the very first Set-Cookie header. Have you seen this as well and found a solution for it?
@JoelSpeed Thanks a lot for this PR, it makes this project a lot more usable to run straight in front of the Kubernetes dashboard. My setup is running a Traefik proxy in front of it, with the oauth2_proxy handling all the traffic and authenticating against Google. The first logon works well; then after an hour the refresh happens (successful according to the logs) and I get a new cookie back (which seems properly set by my browser). However, the dashboard hits me back with an Unauthorized error. Looking at the Auth header, it appears the old id_token is still being sent upstream. Edit: Looking through your code, I assume this is because the Google provider does not store a new IDToken upon refresh. I could get around this by initialising it as a generic OIDC provider, but then I lose the refreshToken, I believe. I am happy to PR the changes to the Google provider though; it should be minimal after your groundwork.
@bcorijn I don't follow why you would lose the refreshToken. Have you got a link for this fork?
@JoelSpeed Initialising a generic OIDC provider requires passing an IssuerURL, but doing so will also use the OIDC auto-discovered login URL instead of any URL you pass yourself (cf. https://github.com/bitly/oauth2_proxy/blob/master/options.go#L152-L161). In Google's case you need to explicitly add a query parameter to your login URL if you want to receive refresh tokens, something that will not be present in the auto-discovered URL. I just pushed my changes to a local fork of your repository; these are the changes that were needed for the Google provider to return the new ID Token: bcorijn/oauth2_proxy@661210e
@bcorijn I've implemented a Google connector for a project called Dex which is based on the generic OIDC authentication flow. What you are missing in terms of refreshing is the extra parameter Google requires before it will issue refresh tokens, but you should be able to add that to your login URL. I've not needed to override the auto-discovered URLs in this implementation and I can assure you it is working.
@JoelSpeed Not sure I'm completely following you now. The auto-discovered URL from Google is perfectly correct for basic authentication, but in the current GenericOIDC implementation in this project it won't include the extra parameter needed for refresh tokens. The Google provider in this project does handle Google's specific implementation and will refresh its token as expected, but it will not update the original ID Token with the refreshed one (as the generic one does after your PR), which is what I tried to solve in the commit I linked.
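(For context, from Google's OAuth documentation rather than this thread: Google only issues a refresh token when the authorization URL requests offline access, along these lines, with the client-specific values elided:)

```
https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent&response_type=code&scope=openid%20email&client_id=...&redirect_uri=...
```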
Hello @JoelSpeed! We currently use another OIDC proxy; it works well, but we want to replace it because of the token refresh feature here. Could you please advise what we might be missing in our configuration?
Now that @pusher have taken over the repository maintenance, I am closing this in favour of oauth2-proxy/oauth2-proxy#14
This PR includes the commits from/replaces #534 and #620 and adds proper OIDC token validation. Please see both original PRs for descriptions and conversations about the PRs.
This now fixes #530
To summarise:

- The `id_token` is now refreshed whenever `cookie_refresh` is set > 0
- `ValidateSessionState` is implemented for the OIDC provider to properly verify the `id_token`; the default provider's implementation does not work with OIDC and was causing issues with refreshing

I have built an image and published it here: `quay.io/joelspeed/kubernetes-3`, which is built from https://github.com/pusher/oauth2_proxy/tree/kubernetes and contains the commits from this PR and #464.

This PR has become quite large but, having opened #620, I realised there was a bug if the `cookie_refresh` period was shorter than the `id_token` lifetime (thanks @jhohertz). To fix this, I needed to correctly implement `ValidateSessionState` for OIDC, which in turn needs the changes from #534.

So the resulting PR is a combination of fixes that adds a bunch of new functionality to the OIDC provider.