
Using proxy_protocol v2 with h2c backend gives wrong IP address to backend. #6342

Open
CRCinAU opened this issue May 26, 2024 · 19 comments · May be fixed by #6343
Labels
upstream ⬆️ Relates to some dependency of this project

Comments

@CRCinAU commented May 26, 2024

I recently moved over to Caddy as a frontend for one of my sites.

Extract of the Caddyfile:

example.com {
        header Strict-Transport-Security "max-age=63072000"
        header -Server

        handle_path /forum/* {
                reverse_proxy http://<host2>:8000
        }

        reverse_proxy h2c://<docker_container_name>:80 {
                transport http {
                        proxy_protocol v2
                }
        }
}

When configured as above, after a random number of hits, the source IP addresses logged by the backend all become the same. This affects ANY host - IPv4 or IPv6.

Changing the backend to http:// as follows seems to report the source IP address correctly:

example.com {
        header Strict-Transport-Security "max-age=63072000"
        header -Server

        handle_path /forum/* {
                reverse_proxy http://<host2>:8000
        }

        reverse_proxy http://<docker_container_name>:80 {
                transport http {
                        proxy_protocol v2
                }
        }
}

Versions:

/srv # caddy --version
v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=
@francislavoie (Member)

Ah, that makes sense. The connections to the backend are pooled, I think, so subsequent requests might appear to come from the same IP as the first request on that connection. I'm not sure if we have a way to turn off pooling in h2c mode right now.
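
To illustrate the conflict, here is a minimal sketch (not Caddy's actual code; the function name and parameters are hypothetical) of sending a PROXY v2 header using the github.com/pires/go-proxyproto package. The header is written exactly once, right after dialing, so any request that later reuses this pooled connection inherits the SourceAddr of whichever client triggered the dial:

// Hypothetical sketch: the PROXY v2 header is written once per connection,
// immediately after it is opened. Pooling the connection afterwards means
// every subsequent request carries the first client's address.
package proxydial

import (
	"context"
	"net"

	proxyproto "github.com/pires/go-proxyproto"
)

func dialWithProxyHeader(ctx context.Context, network, addr string, clientAddr net.Addr) (net.Conn, error) {
	var d net.Dialer
	conn, err := d.DialContext(ctx, network, addr)
	if err != nil {
		return nil, err
	}
	// Build a version 2 binary header carrying the downstream client's address.
	header := proxyproto.HeaderProxyFromAddrs(2, clientAddr, conn.RemoteAddr())
	if _, err := header.WriteTo(conn); err != nil { // sent once, never again
		conn.Close()
		return nil, err
	}
	return conn, nil
}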

@mohammed90 (Member)

The ntlm-transport does pooling per remote IP address. I wonder if the mechanism can be copied into core for this use case. Now that I've said that, I realize the same logic probably covers both NTLM and PROXY protocol + h2c.

@WeidiDeng (Member)

@mohammed90 I always thought we should pool connections per client IP when PROXY protocol is enabled, instead of blindly disabling keep-alive. I tried to implement custom pooling but gave up. That package gives me some inspiration.
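
Roughly something like the following (hypothetical names, not the ntlm-transport's or Caddy's actual types): one transport, and therefore one connection pool, per client IP, so a PROXY header written for one client can never be replayed for another.

// Hypothetical sketch of per-client-IP pooling: each client IP gets its own
// http.Transport (and therefore its own connection pool), so connections
// are only ever reused for requests from the same client.
package clientpool

import (
	"net/http"
	"sync"
)

// clientIPKey is a hypothetical context key under which the proxy would
// store the downstream client's IP before calling RoundTrip.
type clientIPKey struct{}

type PerClientTransport struct {
	mu      sync.Mutex
	pools   map[string]*http.Transport
	newBase func() *http.Transport // builds a fresh transport for a new client
}

func New(newBase func() *http.Transport) *PerClientTransport {
	return &PerClientTransport{
		pools:   make(map[string]*http.Transport),
		newBase: newBase,
	}
}

func (p *PerClientTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	ip, _ := req.Context().Value(clientIPKey{}).(string)
	p.mu.Lock()
	t, ok := p.pools[ip]
	if !ok {
		t = p.newBase()
		p.pools[ip] = t
	}
	p.mu.Unlock()
	// A real implementation would also evict transports for idle clients.
	return t.RoundTrip(req)
}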

@WeidiDeng (Member)

@CRCinAU Can you try xcaddy build h2c-proxy-protocol to see if it's fixed?

@CRCinAU (Author) commented May 27, 2024

I'm not really familiar enough with Caddy to pull this off - I've only ever used the Docker container from Docker Hub. Is there any way to bring this into the existing Docker container?

@WeidiDeng (Member)

Run the following Dockerfile:

FROM caddy:2.7.6-builder AS builder

RUN xcaddy build h2c-proxy-protocol

FROM caddy:2.7.6

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

The resulting image contains this patch, and you can copy the binary out of it.

@CRCinAU (Author) commented May 27, 2024

I tried running this with h2c:// - but Caddy just seemed to hang when talking to the backend... Nothing seemed to make it through to the client.

@WeidiDeng (Member)

Are any logs available? Please enable debug in the global options.

@CRCinAU (Author) commented May 27, 2024

I'm giving this a go now... I tried it on my main website, but as it died, I rolled back straight away.

Here are the logs I see from random internet traffic hitting the site when using h2c:// to the backend.

{"level":"debug","ts":1716813182.8682232,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"httpd:80","total_upstreams":1}
{"level":"debug","ts":1716813182.868803,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813182.869208,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813183.8701084,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813185.8720198,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813189.8733985,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813190.4139547,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"83.229.76.239","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813197.8786793,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813206.4157789,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"83.229.76.239","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813213.882467,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}

@francislavoie (Member)

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

@CRCinAU (Author) commented May 27, 2024

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

This does actually seem to work - i.e. the connection doesn't hang - on the caddy:latest image, which seems to be:

docker exec -ti caddy /bin/sh
/srv # caddy --version
v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=
/srv # 

However, even in this configuration, the wrong remote IP address is reported by Apache, which is what started this bug report.

Trying this again with the instructions above ( #6342 (comment) ), the connection still hangs.

Eventually, I get a 502 timeout from the backend:

{
  "level": "error",
  "ts": 1716824790.699319,
  "logger": "http.log.error",
  "msg": "http2: client conn not usable",
  "request": {
    "remote_ip": "<my ipv6 address>",
    "remote_port": "51640",
    "client_ip": "<my ipv6 address>",
    "proto": "HTTP/3.0",
    "method": "GET",
    "host": "<fqdn>",
    "uri": "/",
    "headers": {
      "Sec-Fetch-Dest": [
        "document"
      ],
      "Accept-Language": [
        "en-GB,en;q=0.6"
      ],
      "Sec-Fetch-Mode": [
        "navigate"
      ],
      "Sec-Fetch-User": [
        "?1"
      ],
      "Sec-Ch-Ua-Platform": [
        "\"Linux\""
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8"
      ],
      "Sec-Gpc": [
        "1"
      ],
      "Sec-Fetch-Site": [
        "same-origin"
      ],
      "Referer": [
        "https://<fqdn>"
      ],
      "Accept-Encoding": [
        "gzip, deflate, br, zstd"
      ],
      "Sec-Ch-Ua": [
        "\"Brave\";v=\"125\", \"Chromium\";v=\"125\", \"Not.A/Brand\";v=\"24\""
      ],
      "Sec-Ch-Ua-Mobile": [
        "?0"
      ],
      "Cookie": [
        "REDACTED"
      ],
      "Priority": [
        "u=0, i"
      ],
      "Cache-Control": [
        "max-age=0"
      ],
      "User-Agent": [
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4867,
      "proto": "h3",
      "server_name": "<fqdn>"
    }
  },
  "duration": 64.009204212,
  "status": 502,
  "err_id": "07ygq06wq",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:1269)"
}

@WeidiDeng (Member)

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

@francislavoie keepalive is already disabled if proxy_protocol is in use.

@francislavoie (Member) commented May 28, 2024

Ah, I see, yeah:

// The proxy protocol header can only be sent once right after opening the connection.
// So single connection must not be used for multiple requests, which can potentially
// come from different clients.
if !rt.DisableKeepAlives && h.ProxyProtocol != "" {
	caddyCtx.Logger().Warn("disabling keepalives, they are incompatible with using PROXY protocol")
	rt.DisableKeepAlives = true
}

So we just need to test with the new build, I guess.

@WeidiDeng (Member)

I think this is a stdlib issue. With keep-alives disabled, the connection is marked single-use and is only considered usable before its first stream is opened:

https://github.com/golang/net/blob/022530c41555839e27aec3868cc480fb7b5e33d4/http2/transport.go#L1028

However, h2c requests start with a stream ID of 3:

https://github.com/golang/net/blob/022530c41555839e27aec3868cc480fb7b5e33d4/http2/transport.go#L836

So requests never get sent in this case.
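
Roughly, the check involved looks like this (a paraphrase of the x/net logic described above, not a verbatim quote of the pinned commit):

// Paraphrased from golang.org/x/net/http2: with keep-alives disabled the
// ClientConn is marked singleUse, and it only accepts a request while
// nextStreamID is still at its initial value of 1. If an h2c connection
// starts at nextStreamID == 3, this check can never pass, so the proxy hangs.
func canTakeNewRequest(singleUse bool, nextStreamID uint32) bool {
	if singleUse && nextStreamID > 1 {
		return false // connection counts as already used; refuse new requests
	}
	return true
}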

@WeidiDeng (Member)

@CRCinAU Can you try with xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive? You'll need to update the caddy image version to 2.8.0, since older versions can't build with this method.

@CRCinAU (Author) commented May 30, 2024

@WeidiDeng - Sorry, I'm not the best with Caddy or its build process - I tried the following modified Dockerfile based on the above:

FROM caddy:2.8.0-builder AS builder

RUN xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive

FROM caddy:2.8.0

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

However, it complains that there isn't a tag for caddy:2.8.0 or caddy:2.8.0-builder, i.e.:

ERROR: failed to solve: caddy:2.8.0-builder: failed to resolve source metadata for docker.io/library/caddy:2.8.0-builder: no match for platform in manifest: not found

Looking at the tags on Docker Hub, I can see that there is a builder and latest tag updated 2 hours ago, and they do have linux/amd64 images listed. Would it be OK to use :builder and :latest for this test?

EDIT: I noticed there are images for caddy:2.8-builder and caddy:2.8, which I tried, but they errored out with:

 => ERROR [builder 2/2] RUN xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive
[ERROR] missing flag; caddy version already set at h2c-proxy-protocol

@WeidiDeng (Member)

That builder has the wrong xcaddy version: 0.4.1 instead of the latest 0.4.2. It'll be a while before the Docker image is ready.

@WeidiDeng (Member)

@CRCinAU 2.8.0-builder is ready now; you can try again.

@CRCinAU (Author) commented May 30, 2024

Built OK now, and I can confirm I'm seeing HTTP/2.0 requests to the backend, with the correct IP address reported to the backend via the PROXY protocol.
