Container Hosts #165
@johnaohara @rvansa pinging you both to get your assessment on the idea.
Initially I thought that the containers would have to run … I see little value in connecting to an already running container; I wouldn't support this in qDup. Either you want to do some part of the setup in containers, so qDup starts them and then tears them down; but expecting a container to be running, hardcoding the container IDs and such, is not the idiomatic way. For hacks, the user can … I would stop and remove the container after the test by default; for debugging an override should be possible with …
Please don't start values with an exclamation mark; the example actually maps an empty value tagged with that tag.
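For anyone following along, the YAML behaviour being described here is standard: an unquoted value that starts with `!` is parsed as a tag on the node, not as part of the scalar. The `!local` value below is only a hypothetical stand-in for whatever the proposed syntax used:

```yaml
hosts:
  laptop: !local        # parsed as an *empty* value carrying the tag !local
  server: "!local"      # quoting keeps the literal string "!local" as the value
```

So an unquoted `!local` gives the parser a tagged empty node, which is why values starting with an exclamation mark are problematic.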
Should we instead use …?
I dislike the asymmetry between …
What is/are our intended goal(s)/workflow(s)? Are they to:

**Not pollute the target environment with build tools?**

The idea here is to not install build tools directly on the host and instead use an image that contains the build tools to perform the build step(s). In this case we can either:

- Spin up a container on a remote/local machine, using an image with the necessary build tools included, and ssh into that machine to perform the build steps. This could support current scripts with minimal changes to the scripts (i.e. remove installing the tools).
- Or we could allow users to provide a build command that we execute against a stated image (a sketch follows this comment).

In this case does this mean: is building in a container something we want to abstract away for users, or give users the tools (…)? Personally, I like the concept of a user defining a runtime env and not having container-specific controls exposed to be used in scripts.

**Provide greater flexibility?**

Here, we would provide different deployment options for running the qDup scripts. We have mentioned Podman/docker/K8s/OCP/minikube etc. We need to think about the deployment scenario, e.g. are we running the qDup controller inside the cluster? Are we still connecting to the cluster from outside, the way we execute scripts now? Opening an ssh connection to some of these remote envs will be difficult, e.g. a remote K8s cluster. This might need a third remote type; will we have others?

**General comments**

I think we need to be careful that these changes are backwards compatible, or we release a qDup 1.0 version.
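For the second option above (a user-supplied build command executed against a stated image), a purely hypothetical sketch could look like the following. The `image` key does not exist in qDup today, the registry and command are made up, and the script structure is simplified:

```yaml
hosts:
  builder:
    image: quay.io/example/maven-builder:latest   # made-up image that carries the build tools
scripts:
  build-from-source:
    - sh: mvn -B package   # user-supplied build command, executed inside the stated image
```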
Do we need the container, or just the logs? The same as if we were running a qDup script now? The only reason for keeping the containers would be for us to debug what did/didn't happen that caused a script to not run in the container. Wrt yaml syntax, I commented in the …
I agree, I don't like that asymmetry. What about …?
@johnaohara It's not only about pollution. One of the main advantages of containerization is that you have a full bill of materials, hence better reproducibility. You don't rely on something that you've manually installed on the machine, forgot about, and someone removed later. If containers are to communicate we should offer a way to mount directories. However, that gets us slowly into the container-compose business... In addition to that, you can always use the … I wouldn't bother too much about removing the ability to …
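To make the directory-mounting idea concrete, a hypothetical host entry could expose mounts roughly like this; no such keys exist in qDup, and the image and paths are invented:

```yaml
hosts:
  sut:
    image: quay.io/example/sut:latest   # made-up image
    mounts:
      - /srv/shared:/data               # host-path:container-path, so containers can exchange files
```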
@willr3 I would call this an obscure syntax. Though I understand that anything is going to be a compromise. Pick your poison.
This is an interesting point; atm we don't upload/download artefacts to/from the local machine. We did at one point in time with the assets directory, but the trend has been to build from source directly on the target env. If we are to build in a container, I think the process for moving the build artifacts from the container to the target env should be fairly transparent to the user.
@johnaohara Yes, I wasn't considering any sort of …
The plan right now was for a qDup …
We talked about using containers for the target host, which sounds great. Let's iron out the details.
feature ideas:
- connect to an already-running container: I have this working in a POC. We use `docker exec` to start a shell process (`/bin/bash`) and treat it like a normal terminal connection.
- start a new container based on an image: this is also working in a POC using `podman run --detach`. qDup normally creates a new ssh session for setup, each run script, then cleanup. This means if we are starting a new container based on an image we have to track the containerId. We could make the `Host` mutable and store it there.
- there is an internal `postRun` phase for writing the `run.json` that we could use to stop and remove containers. qDup does not open connections during that phase but we could add it. We can use the same `postRun` connection to cleanup the containers.
- `aborts`? I could see us wanting to save the pods for inspection after an aborted run but I could also see that turning into a mess.
- supporting different container platforms: we could support this by either having an enum of supported platforms and storing the setup commands internally, or directly exposing the setup commands. If we use the enum option we will need a way to support login (e.g. `oc login`). If we directly expose the setup commands we would have:
  - `checkContainerId` - check if a containerId already exists and is running
  - `checkContainerName` - check if a named container already exists and is running
  - `startContainer` - start a container based on an image
  - `connectShell` - connect to a shell in the container
  - `upload` - send a file to the container
  - `download` - download a file from the container
  - `stopContainer` - stop a running container
  - `removeContainer` - remove the container
  - `platformLogin` - do we have a separate login command or do users need to use `oc login ...;` before each command?

  These would all support `${{...}}` pattern replacement to access run state (for oc login injection) and have a `containerId` state variable passed in from qDup.
- yaml: we should have a syntax to express the host in one line as well as the mapping syntax. The one-line syntax does not need to support all the options but we use it extensively in the tests. I think it will be easy enough to identify a container name but there could be corner cases if the name includes an `@` (see the hypothetical sketch after this list).
- local changes: we use `rsync` or `scp` to copy files from the SUT to the run output folder. Those won't work to get files out of a container. I think we will have to store the `upload` and `download` commands on the `host` and use the current commands as defaults.
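To make several of the ideas above concrete, here is a rough, purely hypothetical sketch of what a container `Host` could look like in the yaml. None of these keys exist in qDup today, the one-line form is invented, and the image/container names and the `${{image}}`/`${{source}}`/`${{destination}}` variables are made up for illustration; only `${{containerId}}` is the state variable mentioned above:

```yaml
hosts:
  # hypothetical one-line form: point at an existing container by name
  cached: my-running-container

  # hypothetical mapping form: qDup starts the container and owns its lifecycle
  builder:
    image: quay.io/example/build-tools:latest
    startContainer: podman run --detach ${{image}}
    connectShell: podman exec -it ${{containerId}} /bin/bash
    stopContainer: podman stop ${{containerId}}
    removeContainer: podman rm ${{containerId}}
    # rsync/scp cannot reach inside a container, so the host overrides upload/download
    upload: podman cp ${{source}} ${{containerId}}:${{destination}}
    download: podman cp ${{containerId}}:${{source}} ${{destination}}
```

The mapping form is where the exposed setup commands (or a platform enum) would live, and the one-line form is where a container name containing an `@` could collide with the existing `user@host` parsing.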
Any ideas, comments, concerns?