Container Hosts #165

Open
willr3 opened this issue Nov 17, 2022 · 11 comments
Labels
enhancement New feature or request

Comments

@willr3
Collaborator

willr3 commented Nov 17, 2022

We talked about using containers for the target host, which sounds great. Let's iron out the details.

feature ideas:

  • connect to a container that is already running (local and remote)
    I have this working in a POC. We use docker exec to start a shell process (/bin/bash) and treat it like a normal terminal connection.
  • start a new container based on an image url (local and remote)
    this is also working in a POC using podman run --detach. qDup normally creates a new ssh session for setup, each run script, then cleanup. This means if we are starting a new container based on an image we have to track the containerId. We could make the Host mutable and store it there.
  • stop the container after the run?
    there is an internal postRun phase for writing the run.json that we could use to stop and remove containers. qDup does not open connections during that phase but we could add it.
  • remove the container after the run?
we can use the same postRun connection to clean up the containers
  • remove the container if the run aborts?
    I could see us wanting to save the pods for inspection after an aborted run but I could also see that turning into a mess.
  • support different container platforms (podman, docker, kubernetes, openshift, etc)?
we could support this by either having an enum of supported platforms and storing the setup commands internally, or by directly exposing the setup commands. If we use the enum option we will need a way to support login (e.g. oc login). If we directly expose the setup commands we would have:
  1. checkContainerId - check if a containerId already exists and is running
  2. checkContainerName - check if a named container already exists and is running
  3. startContainer - start a container based on an image
  4. connectShell - connect to a shell in the container
  5. upload - send a file to the container
  6. download - download a file from the container
  7. stopContainer - stop a running container
  8. removeContainer - remove the container
  9. platformLogin - do we have a separate login command or do users need to use oc login ...; before each command?
These would all support ${{...}} pattern replacement to access run state (for oc login injection) and have a containerId state variable passed in from qDup. A rough mapping to the podman CLI is sketched after this list.
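To make the directly-exposed-commands option more concrete, here is a rough sketch of how each command could map to the podman CLI. This is only an illustration; the container id, name, image, and the ${{OC_TOKEN}}/${{OC_SERVER}} state variable names are placeholders, not part of any existing qDup syntax.

```bash
podman ps --filter id=b3a7bc64f5ad --filter status=running --quiet    # 1. checkContainerId
podman ps --filter name=bold_rhodes --filter status=running --quiet   # 2. checkContainerName
podman run --detach quay.io/foo/bar                                   # 3. startContainer
podman exec --interactive --tty b3a7bc64f5ad /bin/bash                # 4. connectShell
podman cp ./local-file b3a7bc64f5ad:/tmp/local-file                   # 5. upload
podman cp b3a7bc64f5ad:/tmp/output.log ./output.log                   # 6. download
podman stop b3a7bc64f5ad                                              # 7. stopContainer
podman rm b3a7bc64f5ad                                                # 8. removeContainer
oc login --token='${{OC_TOKEN}}' --server='${{OC_SERVER}}'            # 9. platformLogin, substituted by qDup before running
```

The enum option would bake equivalent commands in per platform; the exposed option would let the yaml author swap podman for docker or kubectl by editing these strings.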

yaml

We should have a syntax to express the host in one line as well as the mapping syntax. The one-line syntax does not need to support all the options, but we use it extensively in the tests. I think it will be easy enough to identify a container name, but there could be corner cases if the name includes an @.

host:
  justImage: quay.io/foo/bar #implies local
  justContainerId: b3a7bc64f5ad #normal host notation would have an @ so we assume it is a container
  localContainerId: !local/b3a7bc64f5ad #local would be implied but can also be explicitly set
  remoteContainerId: user@server/b3a7bc64f5ad #we parse for an @ then a / to determine there is both host and container info
  justContainerName: bold_rhodes #same as above but we have to check containerId then name for existing containers
  localImage: !local/quay.io/foo/bar # do we use / to separate host info from container info?
  remoteImage: user@server/quay.io/foo/bar 
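As an aside, a minimal sketch of the parse order implied by the comments above (look for an @ first, then a /), written in shell parameter expansion purely for illustration; the real parsing would live in qDup's Host handling.

```bash
entry="user@server/quay.io/foo/bar"

if [[ "$entry" == *@* ]]; then
  ssh_part="${entry%%/*}"        # user@server
  container_part="${entry#*/}"   # quay.io/foo/bar
else
  ssh_part=""                    # no @ -> local
  container_part="$entry"        # b3a7bc64f5ad, bold_rhodes, or an image url
fi
# note: a container name containing an @ would defeat this simple check (the corner case above)
```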

local changes

We use rsync or scp to copy files from the SUT to the run output folder. Those won't work to get files out of a container. I think we will have to store the upload and download commands on the host and use the current commands as defaults.
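For reference, the difference in practice might look like this; the paths, host, and container id are only examples. A container on a remote machine would likely need the copy chained through the remote host.

```bash
# current default: pull a result file from the SUT over ssh
scp user@server:/tmp/results/output.log ./run-output/

# container on the local machine: podman cp crosses the container boundary
podman cp b3a7bc64f5ad:/tmp/results/output.log ./run-output/

# container on a remote machine: copy out of the container, then off the host
ssh user@server "podman cp b3a7bc64f5ad:/tmp/results/output.log /tmp/output.log"
scp user@server:/tmp/output.log ./run-output/
```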

Any ideas, comments, concerns?

@willr3 willr3 added the enhancement New feature or request label Nov 17, 2022
@willr3
Collaborator Author

willr3 commented Nov 17, 2022

@johnaohara @rvansa pinging you both to get your assessment on the idea

@rvansa
Member

rvansa commented Nov 18, 2022

Initially I thought that the containers would have to run sshd inside and you would connect normally over ssh, but using just podman exec ... bash is definitely more usable, though it's more work on the qDup side.

I see little value in connecting to an already running container; I wouldn't support this in qDup. Either you want to do some part of the setup in containers, in which case qDup starts them and then tears them down; expecting a container to already be running, hardcoding the container IDs and such, is not the idiomatic way. For hacks, the user can podman exec ... into it themselves.
Looking forward to hearing about use cases where a setup that expects the container to already be running would make sense.

I would stop and remove the container after the test by default; for debugging, an override should be possible with

myContainer:
   image: quay.io/foo/bar
   stop: true
   remove: false

Please don't start values with an exclamation mark; the example

localImage: !local/quay.io/foo/bar 

actually maps to an empty value tagged with the tag !local/quay.io/foo/bar. Have you used !local for the local mode already?

@rvansa rvansa mentioned this issue Nov 18, 2022
@willr3
Collaborator Author

willr3 commented Nov 18, 2022

Should we instead use !local quay.io/foo/bar so that we are annotating the image? The !local would be the assumed location for the container platform if the user does not provide ssh credentials (username and hostname) to the host construct, so the !local part is not necessary. We could also not support explicitly setting !local and raise an error message if !local is included in the sequence along with image or container information.

@rvansa
Member

rvansa commented Nov 21, 2022

I dislike the asymmetry between !local quay.io/foo/bar and user@server/quay.io/foo/bar.
I don't think that a reserved hostname local is that limiting, you can always use the expanded notation to use hostname local explicitly.

@johnaohara
Member

What is/are our intended goal(s)/workflow(s)? Are they to:

Not pollute the target environment with build tools?

The idea here is to not install build tools directly on the host and instead use an image that contains the build tools to perform the build step(s)

In this case we can either:

Spin up a container on a remote/local machine, using an image with the necessary build tools included, and ssh into that machine to perform the build steps. This could support current scripts with minimal changes to the scripts (i.e. remove installing the tools)

Or

We could allow users to provide a build command that we execute against a stated image. In this case, does this mean:
1. Users need to re-write scripts so that the sh cmd that executes the build is either in its own script, or do we expose a container build cmd for wrapping the build?
2. What if build scripts have multiple steps, i.e. building Quarkus, then building a Quickstart? How would this be expressed if we decide on some form of podman exec ... bash syntax? Would we need to link build commands in that case (i.e. push the output from one container as input for the next container)?

Is building in a container something we want to abstract away for users, or do we give users the tools (cmds) to let them orchestrate it themselves? If we leave them to orchestrate for themselves, how do we ensure that scripts are not installing build tools - take away dnf from the Jenkins user?

Personally, I like the concept of a user defining a runtime env and not having container specific controls exposed to be used in scripts.

Provide greater flexibility

Here, we would provide different deployment options for running the qDup scripts. We have mentioned Podman/docker/K8s/OCP/minikube etc.

We need to think about the deployment scenario, e.g. are we running the qDup controller inside the cluster? Are we still connecting to the cluster from outside, the way we execute scripts now?

Opening an ssh connection to some of these remote environments will be difficult, e.g. a remote K8s cluster. This might need a third remote type: we would have ssh, local, and possibly a wrapper around kubectl exec --stdin --tty demo -- /bin/bash
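Put side by side, the three connection styles under discussion might look like this (the host, container id, and pod name are illustrative only):

```bash
ssh user@server /bin/bash                                 # current remote type: ssh
podman exec --interactive --tty b3a7bc64f5ad /bin/bash    # container shell, local or on a remote host
kubectl exec --stdin --tty demo -- /bin/bash              # third type: exec into a pod in a cluster
```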

other?

General comments

I think we need to be careful that these changes are backwards compatible, or we release a qDup 1.0 version

I could see us wanting to save the pods for inspection after an aborted run

Do we need the container, or just the logs? Is that the same as if we were running a qDup script now? The only reason for keeping the containers would be for us to debug what did/didn't happen that caused a script to not run in the container

Wrt yaml syntax, I commented in the

@willr3
Collaborator Author

willr3 commented Nov 23, 2022

I agree, I don't like that asymmetry. What about _local_ as a reserved word? That is not a valid hostname and also not a yaml tag expression, so we avoid both concerns.

@rvansa
Member

rvansa commented Nov 23, 2022

@johnaohara It's not only about pollution. One of the main advantages of containerization is that you have a full bill of materials, hence better reproducibility. You don't rely on something that you've manually installed on the machine, forgot about, and someone later removed.

If containers are to communicate we should offer a way to mount directories. However, that gets us slowly to the container-compose business... In addition to that, you can always use the upload/download commands to move files through qDup directly.
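As a hedged example of what directory sharing could look like with podman named volumes; the volume name and image urls are placeholders:

```bash
podman volume create build-artifacts                                           # shared storage between containers

podman run --detach --volume build-artifacts:/workspace quay.io/foo/builder    # build container writes here
podman run --detach --volume build-artifacts:/deploy quay.io/foo/bar           # SUT container reads from the same volume
```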

I wouldn't bother too much about removing the ability to dnf. We're still going to give people sudo because for some tasks it is needed. Let's establish best practices that don't suck and then figure out solutions for the corner cases where following the best practice won't do, rather than enforcing restrictions.

@rvansa
Member

rvansa commented Nov 23, 2022

@willr3 I would call this an obscure syntax. Though, I understand that anything is going to be a compromise. Pick your poison.

@johnaohara
Member

If containers are to communicate we should offer a way to mount directories. However, that gets us slowly to the container-compose business... In addition to that, you can always use the upload/download commands to move files through qDup directly.

This is an interesting point; at the moment we don't upload/download artifacts to/from the local machine. We did at one point in time with the assets directory, but the trend has been to build from source directly on the target env. If we are to build in a container, the process for moving the build artifacts from the container to the target env should, I think, be fairly transparent to the user.

@rvansa
Member

rvansa commented Nov 23, 2022

@johnaohara Yes, I wasn't considering any sort of move (yet). Since the container and SUT would be two targets, you would download from one and then upload to another.

@willr3
Collaborator Author

willr3 commented Nov 28, 2022

The plan right now is for a qDup host to be the container. The setup-scripts, run-scripts, and cleanup-scripts will all run in the same container. A user that wants to separate the setup-scripts into a container and have the run-scripts on bare metal will need to create two separate roles.
