[Bug]: Explicitly mapped ports break default readiness hook #2652

Closed
robinvanderstraeten-klarrio opened this issue Jul 16, 2024 · 12 comments · Fixed by #2658
Labels: bug (An issue with the library)

Comments

@robinvanderstraeten-klarrio
Contributor

Testcontainers version

0.32.0

Using the latest Testcontainers version?

Yes

Host OS

Linux (WSL)

Host arch

x86_64

Go version

1.22.3

Docker version

Client: Docker Engine - Community
 Version:           26.0.1
 API version:       1.45
 Go version:        go1.21.9
 Git commit:        d260a54
 Built:             Thu Apr 11 10:53:21 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.0.1
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.9
  Git commit:       60b9add
  Built:            Thu Apr 11 10:53:21 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker info

Client: Docker Engine - Community
 Version:    26.0.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.13.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.26.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 7
  Running: 1
  Paused: 0
  Stopped: 6
 Images: 32
 Server Version: 26.0.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e377cd56a71523140ca6ae87e30244719194a521
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 5.15.153.1-microsoft-standard-WSL2
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 15.62GiB
 Name: VD011420
 ID: 5f7f091e-64fd-40bb-bfdf-ad87e9ae09ec
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: robinvanderstraeten352
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

What happened?

Since version 0.32.0 of testcontainers, creating a container with an explicit port mapping in ExposedPorts (for instance "4443:4443/tcp") no longer works. The container itself starts correctly, but a newly introduced readiness check fails: it parses the ExposedPorts entry incorrectly and concludes that the container did not start. For context, I'm talking about the logic here.
I'm aware that testcontainers creates a port mapping for you, but sometimes you need to know the port before the container starts. Here's an example snippet:

      var port = "4443/tcp"
      var hostPort = "4443"
      req := testcontainers.GenericContainerRequest{
          ContainerRequest: testcontainers.ContainerRequest{
              Image:        "fsouza/fake-gcs-server",
              ExposedPorts: []string{fmt.Sprintf("%s:%s", hostPort, port)},
              Cmd:          []string{"-scheme", "http", "-public-host", fmt.Sprintf("localhost:%s", hostPort)},
              WaitingFor:   wait.ForHTTP("/storage/v1/b").WithPort(nat.Port(port)),
          },
          Started: true,
      }

Relevant log output

failed to start container: all exposed ports, [4443:4443/tcp], were not mapped in 5s: port 4443:4443/tcp is not mapped yet

Additional information

I have made a fork with a suggestion for a fix. robinvanderstraeten-klarrio@193ab06

If this is the right approach, I'll submit a PR.
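
For illustration only (this is not the actual patch from the fork above, just a minimal sketch of the parsing involved): the log output above shows the wait logic treating "4443:4443/tcp" as a single port name, whereas nat.ParsePortSpec from github.com/docker/go-connections splits it into the container port and its host binding:

    package main

    import (
        "fmt"
        "log"

        "github.com/docker/go-connections/nat"
    )

    func main() {
        // "4443:4443/tcp" is a host:container spec, not a single port name.
        mappings, err := nat.ParsePortSpec("4443:4443/tcp")
        if err != nil {
            log.Fatal(err)
        }
        for _, m := range mappings {
            // Prints: container port 4443/tcp, host binding :4443
            fmt.Printf("container port %s, host binding %s:%s\n", m.Port, m.Binding.HostIP, m.Binding.HostPort)
        }
    }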

@robinvanderstraeten-klarrio added the bug (An issue with the library) label on Jul 16, 2024
@mdelapenya
Member

Hey @robinvanderstraeten-klarrio, thanks for opening this issue. Your report makes it clear where the bug is. I think your fix will work, so feel free to open a PR when you have the time.

Thanks for your contribution!

@mdelapenya
Member

I have one related question: why do you need a fixed port? Testcontainers provides APIs to discover the random port for the service (GCS in this case). I'd like to understand more about your use case.
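
(For readers landing here later: the usual dynamic-port pattern looks roughly like the sketch below; it uses the image from the report above, with error handling trimmed to log.Fatal.)

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    func main() {
        ctx := context.Background()

        // No host port in ExposedPorts: Docker picks a free one.
        ctr, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
            ContainerRequest: testcontainers.ContainerRequest{
                Image:        "fsouza/fake-gcs-server",
                ExposedPorts: []string{"4443/tcp"},
                Cmd:          []string{"-scheme", "http"},
                WaitingFor:   wait.ForListeningPort("4443/tcp"),
            },
            Started: true,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer ctr.Terminate(ctx)

        // Discover the randomly assigned host port and build the endpoint URL.
        host, err := ctr.Host(ctx)
        if err != nil {
            log.Fatal(err)
        }
        port, err := ctr.MappedPort(ctx, "4443/tcp")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("http://%s:%s\n", host, port.Port())
    }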

@rgruchalski-klarrio

rgruchalski-klarrio commented Jul 17, 2024

@mdelapenya it's because the program inside the container needs to know the public port for the public-facing address it returns to clients, so that clients can reach the container back. In this particular case it is the fake-gcs-server container: https://github.com/fsouza/fake-gcs-server. There's a public-host setting (https://github.com/fsouza/fake-gcs-server/blob/main/internal/config/config.go#L74); that address is returned to a client who wants to read a file and becomes part of a full signed URL like http://localhost:port/storage/v1/b/... The port therefore needs to be reachable from the host where the client runs.

You'd normally see this behavior in other places too; the best-known example is Apache Kafka, which also requires you to advertise a public hostname alongside an internal one (https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#advertised-listeners). Sometimes you simply need to know the public port before the container is started.

Normally, when this happens, an approach similar to this one is preferred: https://github.com/radekg/app-kit-orytest/blob/master/common/common.go#L42-L108.
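
A minimal sketch of that pattern (freeHostPort is an illustrative helper here, not something from the linked repos): grab a free host port up front, then pin it both in ExposedPorts and in whatever flag has to advertise it:

    package main

    import (
        "fmt"
        "net"
    )

    // freeHostPort asks the kernel for an unused TCP port by listening on :0,
    // then releases it so the container can bind it a moment later. There is a
    // small race window between closing the listener and Docker binding the port.
    func freeHostPort() (int, error) {
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            return 0, err
        }
        defer l.Close()
        return l.Addr().(*net.TCPAddr).Port, nil
    }

    func main() {
        hostPort, err := freeHostPort()
        if err != nil {
            panic(err)
        }
        // The port is known before the container starts, so it can be used both
        // in ExposedPorts and in a flag like fake-gcs-server's -public-host.
        fmt.Printf("%d:4443/tcp\n", hostPort)
    }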

@mdelapenya
Member

Wdyt about contributing a module for fake-gcs-server? Then we can do this magic inside the module, hiding the implementation details from users. Thoughts?

@absorbb

absorbb commented Jul 17, 2024

I have one related question: why do you need a fixed port?

@mdelapenya
In our test cases we simulate how our app handles a third-party service outage, so we stop and start containers. Without an explicitly mapped port, the port changes after a restart.
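
A rough sketch of what that looks like (ctr is a started testcontainers.Container, as in the sketch further up; this is not absorbb's actual code):

    import (
        "context"
        "time"

        "github.com/docker/go-connections/nat"
        "github.com/testcontainers/testcontainers-go"
    )

    // restartAndRemap stops and restarts the container, then re-reads the host
    // port mapped to 4443/tcp. Without an explicit host port in ExposedPorts,
    // the value returned here may differ from the one before the restart.
    func restartAndRemap(ctx context.Context, ctr testcontainers.Container) (nat.Port, error) {
        stopTimeout := 10 * time.Second
        if err := ctr.Stop(ctx, &stopTimeout); err != nil {
            return "", err
        }
        if err := ctr.Start(ctx); err != nil {
            return "", err
        }
        return ctr.MappedPort(ctx, "4443/tcp")
    }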

@mdelapenya
Member

In our test cases we simulate how our app handles a third-party service outage, so we stop and start containers. Without an explicitly mapped port, the port changes after a restart.

@absorbb have you tried Toxiproxy for that? We have an example here: https://golang.testcontainers.org/examples/toxiproxy/ Probably worth converting it into a real Go module 🤔

@ftw-soft

In our test cases we simulate how our app handles a third-party service outage, so we stop and start containers. Without an explicitly mapped port, the port changes after a restart.

@absorbb have you tried Toxiproxy for that? We have an example here: https://golang.testcontainers.org/examples/toxiproxy/ Probably worth converting it into a real Go module 🤔

Hi @mdelapenya.

I'm experiencing the same issue using the RabbitMQ testcontainer.
We need a fixed port mapping too: in our tests we create bindings and queues for the exchanges, and after a restart we verify that the bindings still persist in RabbitMQ. So Toxiproxy is not an option for our library, because those tests are how we ensure that the queues it creates are durable.

I hope you manage to use the fork proposed by @robinvanderstraeten-klarrio.

@mdonkers
Contributor

Chipping in, as I have the same problem but with a slightly different cause.

We have exposed ports defined as 0.0.0.0::9000/tcp, which for the same reason now also doesn't work.
The reason we use this form is to bind explicitly to IPv4 only. Otherwise, at least on certain Linux systems, Docker binds to both IPv4 and IPv6. Initially the port numbers might be the same, but over time the mapped IPv4 port can diverge from the IPv6 port, causing various kinds of issues. (This is a long-standing Docker issue that I don't expect to be resolved, see #551.)

I'm not sure if the provided PR already covers this. @robinvanderstraeten-klarrio could you add a test case to make sure?
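
For context, the request shape described above looks roughly like this (a sketch with the same imports as the earlier example; MinIO is used here only because it listens on 9000):

    // Bind only on the IPv4 wildcard address and let Docker pick the host port.
    req := testcontainers.ContainerRequest{
        Image:        "minio/minio",
        Cmd:          []string{"server", "/data"},
        ExposedPorts: []string{"0.0.0.0::9000/tcp"}, // hostIP : (empty) hostPort : containerPort
        WaitingFor:   wait.ForListeningPort("9000/tcp"),
    }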

@robinvanderstraeten-klarrio
Contributor Author

@mdonkers It's already covered; I've added a test case here: 352e5ac

@mdelapenya
Member

Thanks to all of you folks who were involved in the resolution of this bug 🙇

@czeslavo

Hey @mdelapenya, is there a chance there will be a patch version released with this fix?

@mdelapenya
Member

Hi @czeslavo, yes, we are planning to cut a release soon. There is a bugbash PR with many small but important refinements that I'd like to get into the release: #2685
