bug: allow start container with reuse in different test package #2247

Merged · 5 commits · Feb 23, 2024
Changes from 2 commits
58 changes: 56 additions & 2 deletions docker.go
@@ -14,6 +14,7 @@ import (
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"sync"
"time"
@@ -49,6 +50,8 @@ const (
logStoppedForOutOfSyncMessage = "Stopping log consumer: Headers out of sync"
)

var createContainerFailDueToNameConflictRegex = regexp.MustCompile("Conflict. The container name .* is already in use by container .*")

// DockerContainer represents a container started using Docker
type DockerContainer struct {
// Container ID from Docker
@@ -1153,13 +1156,40 @@ func (p *DockerProvider) findContainerByName(ctx context.Context, name string) (
return nil, nil
}

func (p *DockerProvider) waitContainerCreation(ctx context.Context, name string) (*types.Container, error) {
var container *types.Container
return container, backoff.Retry(func() error {
c, err := p.findContainerByName(ctx, name)
if err != nil {
return err
}

if c == nil {
return fmt.Errorf("container %s not found", name)
}

container = c
return nil
}, backoff.WithContext(backoff.NewExponentialBackOff(), ctx))
}

func (p *DockerProvider) ReuseOrCreateContainer(ctx context.Context, req ContainerRequest) (Container, error) {
c, err := p.findContainerByName(ctx, req.Name)
if err != nil {
return nil, err
}
if c == nil {
return p.CreateContainer(ctx, req)
createdContainer, err := p.CreateContainer(ctx, req)
if err == nil {
return createdContainer, nil
}
if !createContainerFailDueToNameConflictRegex.MatchString(err.Error()) {
return nil, err
}
c, err = p.waitContainerCreation(ctx, req.Name)
if err != nil {
return nil, err
}
}

sessionID := core.SessionID()
@@ -1178,6 +1208,13 @@ func (p *DockerProvider) ReuseOrCreateContainer(ctx context.Context, req Contain
}
}

// default hooks include the logging hook and the readiness hook
defaultHooks := []ContainerLifecycleHooks{
DefaultLoggingHook(p.Logger),
defaultReadinessHook(),
Contributor:

Question @Alviner / @mdelapenya: is there any good reason why the defaultReadinessHook() should also be added for a container being reused?

The reason I'm asking is that I noticed quite some overhead when upgrading testcontainers from v0.28.0 to v0.29.1 and was able to 'bisect' the issue to this PR.

As we're heavily relying on reused containers, this overhead is quite noticeable:

$ time make integration-test
...
real	0m4.555s
user	0m10.373s
sys	0m1.093s

before, as opposed to now:

real	0m9.177s
user	0m8.254s
sys	0m1.094s

When the container is initially created (and due to the mutex), it should have been fully started and its 'readiness' already checked. So from my perspective I don't see a reason to keep checking this.

Contributor Author:

> (quoting the question above about adding defaultReadinessHook() for reused containers)

When you run tests from different packages, they run in separate processes with a single common parent. Therefore, if you reuse containers in these packages, you need to wait for the readiness hook, which effectively triggers waitFor in most situations.
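
A minimal sketch of that scenario, not part of this PR (the image, port, container name, and package names are illustrative assumptions): two test packages declare the same reusable container, and because "go test ./..." runs each package in its own process, each process goes through ReuseOrCreateContainer and therefore through the readiness wait.

```go
// pkg_a/shared_test.go (an equivalent file would live in pkg_b) -- illustrative only.
package pkg_a_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestUsesSharedContainer(t *testing.T) {
	ctx := context.Background()

	// Both packages use the same Name and Reuse: true. Whichever test process
	// gets there first creates the container; the other one reuses it, but it
	// still runs the WaitingFor/readiness check because it is a separate OS
	// process with no shared in-memory state.
	c, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "nginx:alpine",
			ExposedPorts: []string{"80/tcp"},
			WaitingFor:   wait.ForListeningPort("80/tcp"),
			Name:         "shared-nginx",
		},
		Started: true,
		Reuse:   true,
	})
	if err != nil {
		t.Fatal(err)
	}
	_ = c // run assertions against the shared container here
}
```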

Contributor:

Right, ok. Then the mutex would of course no longer guarantee readiness of the Container.

But then I do think a solution for another bug I found (#2445) might resolve it to some extent, because if there's a way to (explicitly) scope a reusable container to the SessionID, then it's guaranteed it's also always the same mutex instance. But I'm waiting for some feedback there before doing a PR :-)

Member:

Thank you all for bringing up this topic, and for being so proactive in the discussion 🙇

We are working on a better reuse experience for testcontainers-go. Reuse was initially built as an experimental feature that was adopted as soon as it landed, so it's now part of the public API 😅 and is difficult to remove without causing trouble for users.

The new experience will be more robust and will be provided through Testcontainers Desktop.

Contributor:

Thanks for the additional insight.
But does that mean no changes are expected, or would even be approved, for the library itself? I don't think reuse needs to be removed, but there must be a way to both fix the existing behaviour and not break others...
I certainly hope it won't require using the Testcontainers Desktop app, since that would not work in our CI pipeline. And currently the "reuse" flag is causing actual issues: #2445 (comment)

}
allHooks := combineContainerHooks(defaultHooks, req.LifecycleHooks)

dc := &DockerContainer{
ID: c.ID,
WaitingFor: req.WaitingFor,
@@ -1187,7 +1224,24 @@ func (p *DockerProvider) ReuseOrCreateContainer(ctx context.Context, req Contain
terminationSignal: termSignal,
stopLogProductionCh: nil,
logger: p.Logger,
isRunning: c.State == "running",
lifecycleHooks: []ContainerLifecycleHooks{
{
PostStarts: allHooks.PostStarts,
PostReadies: allHooks.PostReadies,
},
},
}

err = dc.startedHook(ctx)
if err != nil {
return nil, err
}

dc.isRunning = true

err = dc.readiedHook(ctx)
if err != nil {
return nil, err
}

return dc, nil
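
For readers skimming the hunks above, here is a self-contained sketch of the pattern this change introduces in docker.go (not the library code; reuseOrCreate, create, and find are illustrative stand-ins): try to create the container, detect Docker's name-conflict error with the regex, and then poll with exponential backoff until the process that won the race has finished creating it.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"regexp"

	"github.com/cenkalti/backoff/v4"
)

// Same pattern as createContainerFailDueToNameConflictRegex in the diff above.
var nameConflictRe = regexp.MustCompile("Conflict. The container name .* is already in use by container .*")

// reuseOrCreate mimics the shape of ReuseOrCreateContainer: create starts a new
// container and returns its ID, find looks one up by name ("" means not found).
func reuseOrCreate(ctx context.Context, name string, create, find func() (string, error)) (string, error) {
	id, err := find()
	if err != nil {
		return "", err
	}
	if id != "" {
		return id, nil // already there: reuse it
	}

	id, err = create()
	if err == nil {
		return id, nil
	}
	// Another process won the race to create the container with this name:
	// don't fail, wait until it becomes visible and adopt it.
	if !nameConflictRe.MatchString(err.Error()) {
		return "", err
	}
	err = backoff.Retry(func() error {
		c, ferr := find()
		if ferr != nil {
			return ferr
		}
		if c == "" {
			return fmt.Errorf("container %s not found", name)
		}
		id = c
		return nil
	}, backoff.WithContext(backoff.NewExponentialBackOff(), ctx))
	if err != nil {
		return "", err
	}
	return id, nil
}

func main() {
	// Toy run: create always reports a name conflict, find succeeds on the
	// second call, simulating the competing process finishing creation.
	calls := 0
	find := func() (string, error) {
		calls++
		if calls > 1 {
			return "abc123", nil
		}
		return "", nil
	}
	create := func() (string, error) {
		return "", errors.New(`Conflict. The container name "/shared" is already in use by container "abc123"`)
	}

	id, err := reuseOrCreate(context.Background(), "shared", create, find)
	fmt.Println(id, err) // abc123 <nil>
}
```

In the actual diff this split lives in ReuseOrCreateContainer plus the new waitContainerCreation helper.
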
72 changes: 72 additions & 0 deletions generic_test.go
@@ -3,6 +3,11 @@ package testcontainers
import (
"context"
"errors"
"net/http"
"os"
"os/exec"
"strings"
"sync"
"testing"
"time"

@@ -117,3 +122,70 @@ func TestGenericContainerShouldReturnRefOnError(t *testing.T) {
require.NotNil(t, c)
terminateContainerOnEnd(t, context.Background(), c)
}

func TestGenericReusableContainerInSubprocess(t *testing.T) {
wg := sync.WaitGroup{}
wg.Add(10)
for i := 0; i < 10; i++ {
go func() {
defer wg.Done()

// create containers in subprocesses, as "go test ./..." does.
output := createReuseContainerInSubprocess(t)

// check that reusing the container with WaitingFor works correctly.
require.True(t, strings.Contains(output, "🚧 Waiting for container id"))
require.True(t, strings.Contains(output, "🔔 Container is ready"))
}()
}

wg.Wait()
}

func createReuseContainerInSubprocess(t *testing.T) string {
cmd := exec.Command(os.Args[0], "-test.run=TestHelperContainerStarterProcess")
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}

output, err := cmd.CombinedOutput()
require.NoError(t, err, string(output))

return string(output)
}

// TestHelperContainerStarterProcess is a helper function
// to start a container in a subprocess. It's not a real test.
func TestHelperContainerStarterProcess(t *testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
t.Skip("Skipping helper test function. It's not a real test")
}

ctx := context.Background()

nginxC, err := GenericContainer(ctx, GenericContainerRequest{
ProviderType: providerType,
ContainerRequest: ContainerRequest{
Image: nginxDelayedImage,
ExposedPorts: []string{nginxDefaultPort},
WaitingFor: wait.ForListeningPort(nginxDefaultPort), // default startupTimeout is 60s
Name: reusableContainerName,
},
Started: true,
Reuse: true,
})
require.NoError(t, err)
require.True(t, nginxC.IsRunning())

origin, err := nginxC.PortEndpoint(ctx, nginxDefaultPort, "http")
require.NoError(t, err)

// check that reusing the container with WaitingFor works correctly.
req, err := http.NewRequestWithContext(ctx, http.MethodGet, origin, nil)
require.NoError(t, err)
req.Close = true

resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
defer resp.Body.Close()

require.Equal(t, http.StatusOK, resp.StatusCode)
}
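
The subprocess trick used in this test is the standard Go helper-process pattern; stripped of the testcontainers specifics, it looks roughly like this (a sketch, not part of the PR; the test names are illustrative):

```go
package example_test

import (
	"os"
	"os/exec"
	"testing"
)

// TestSpawnsHelper re-executes the current test binary so that
// TestHelperProcess runs in a separate OS process, the same isolation you
// get when each package runs under "go test ./...".
func TestSpawnsHelper(t *testing.T) {
	cmd := exec.Command(os.Args[0], "-test.run=TestHelperProcess")
	cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")

	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("helper process failed: %v\n%s", err, out)
	}
}

// TestHelperProcess only does real work when re-executed by TestSpawnsHelper.
func TestHelperProcess(t *testing.T) {
	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
		t.Skip("not a real test; only runs as a helper subprocess")
	}
	// ... work that must happen in its own process goes here ...
}
```
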
3 changes: 0 additions & 3 deletions reaper.go
@@ -6,7 +6,6 @@ import (
"fmt"
"math/rand"
"net"
"regexp"
"strings"
"sync"
"time"
@@ -186,8 +185,6 @@ func reuseOrCreateReaper(ctx context.Context, sessionID string, provider ReaperP
return reaperInstance, nil
}

var createContainerFailDueToNameConflictRegex = regexp.MustCompile("Conflict. The container name .* is already in use by container .*")

// reuseReaperContainer constructs a Reaper from an already running reaper
// DockerContainer.
func reuseReaperContainer(ctx context.Context, sessionID string, provider ReaperProvider, reaperContainer *DockerContainer) (*Reaper, error) {