
Remove collector container and merge it with planner
Since the collector is a simple golang app, we don't need to run it
as a standalone container. This PR changes the collector to be
run from the planner-agent.

Signed-off-by: Ondra Machacek <[email protected]>
machacekondra committed Oct 16, 2024
1 parent 9dbf908 commit 3c73995
Showing 11 changed files with 106 additions and 197 deletions.
2 changes: 1 addition & 1 deletion Containerfile.agent
@@ -15,7 +15,7 @@ RUN go mod download
COPY . .

USER 0
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -buildvcs=false -o /planner-agent cmd/planner-agent/main.go
RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -buildvcs=false -o /planner-agent cmd/planner-agent/main.go

FROM registry.access.redhat.com/ubi9/ubi-micro

25 changes: 0 additions & 25 deletions Containerfile.collector

This file was deleted.

8 changes: 1 addition & 7 deletions Makefile
@@ -7,7 +7,6 @@ GO_CACHE := -v $${HOME}/go/migration-planner-go-cache:/opt/app-root/src/go:Z -v
TIMEOUT ?= 30m
VERBOSE ?= false
MIGRATION_PLANNER_AGENT_IMAGE ?= quay.io/kubev2v/migration-planner-agent
MIGRATION_PLANNER_COLLECTOR_IMAGE ?= quay.io/kubev2v/migration-planner-collector
MIGRATION_PLANNER_API_IMAGE ?= quay.io/kubev2v/migration-planner-api
MIGRATION_PLANNER_UI_IMAGE ?= quay.io/kubev2v/migration-planner-ui
DOWNLOAD_RHCOS ?= true
@@ -81,23 +80,18 @@ build-api: bin
bin/.migration-planner-agent-container: bin Containerfile.agent go.mod go.sum $(GO_FILES)
podman build -f Containerfile.agent -t $(MIGRATION_PLANNER_AGENT_IMAGE):latest

bin/.migration-planner-collector-container: bin Containerfile.collector go.mod go.sum $(GO_FILES)
podman build -f Containerfile.collector -t $(MIGRATION_PLANNER_COLLECTOR_IMAGE):latest

bin/.migration-planner-api-container: bin Containerfile.api go.mod go.sum $(GO_FILES)
podman build -f Containerfile.api -t $(MIGRATION_PLANNER_API_IMAGE):latest

migration-planner-api-container: bin/.migration-planner-api-container
migration-planner-collector-container: bin/.migration-planner-collector-container
migration-planner-agent-container: bin/.migration-planner-agent-container

build-containers: migration-planner-api-container migration-planner-agent-container migration-planner-collector-container
build-containers: migration-planner-api-container migration-planner-agent-container

.PHONY: build-containers

push-containers: build-containers
podman push $(MIGRATION_PLANNER_API_IMAGE):latest
podman push $(MIGRATION_PLANNER_COLLECTOR_IMAGE):latest
podman push $(MIGRATION_PLANNER_AGENT_IMAGE):latest

deploy-on-openshift:
24 changes: 21 additions & 3 deletions cmd/collector/README.md → cmd/planner-agent/COLLECTOR.md
@@ -4,11 +4,29 @@ To run the collector locally, here are the steps.
## Prepare
Prepare the dependencies.

### Configuration
Create the planner-agent configuration file:

```
$ mkdir /tmp/config
$ mkdir /tmp/data
$ mkdir -p ~/.planner-agent
$ cat <<EOF > ~/.planner-agent/config.yaml
config-dir: /tmp/config
data-dir: /tmp/data
log-level: debug
source-id: 9195e61d-e56d-407d-8b29-ff2fb7986928
update-interval: 5s
planner-service:
  service:
    server: http://127.0.0.1:7443
EOF
```

### Credentials
Create VMware credentials file.

```
cat <<EOF > /tmp/creds.json
cat <<EOF > /tmp/data/creds.json
{
"username": "[email protected]",
"password": "userpassword",
@@ -28,7 +46,7 @@ podman run -p 8181:8181 -d --name opa --entrypoint '/usr/bin/opa' quay.io/kubev2
Build and run the collector code, passing the planner-agent configuration file via the `-config` flag. The credentials and inventory paths are taken from the configured data directory.

```
go run cmd/collector/main.go /tmp/creds.json /tmp/inventory.json
go run cmd/planner-agent/main.go -config ~/.planner-agent/config.yaml
```

Explore `/tmp/inventory.json`
Explore `/tmp/data/inventory.json`
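Once the run completes, the inventory can be inspected with `jq`. This is only a sketch: the placeholder schema below (`vms` and `infra` keys) is an assumption for illustration, not the collector's actual output format.

```shell
INV=/tmp/data/inventory.json
mkdir -p /tmp/data

# Create a tiny placeholder so the commands below also work before the
# collector has produced real output (the schema here is an assumption).
[ -s "$INV" ] || printf '{"vms": [], "infra": {}}\n' > "$INV"

jq 'keys' "$INV"            # top-level structure of the inventory
jq '.vms | length' "$INV"   # number of collected VMs (field name assumed)
```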
75 changes: 13 additions & 62 deletions data/ignition.template
@@ -22,19 +22,19 @@ storage:
name: core
group:
name: core
- path: /home/core/vol
- path: /home/core/.migration-planner
overwrite: true
user:
name: core
group:
name: core
- path: /home/core/vol/config
- path: /home/core/.migration-planner/config
overwrite: true
user:
name: core
group:
name: core
- path: /home/core/vol/data
- path: /home/core/.migration-planner/data
overwrite: true
user:
name: core
@@ -46,7 +46,7 @@ storage:
contents:
inline: |
PasswordAuthentication yes
- path: /home/core/vol/config.yaml
- path: /home/core/.migration-planner/config.yaml
contents:
inline: |
config-dir: /agent/config
@@ -63,58 +63,32 @@ storage:
name: core
group:
name: core
- path: /home/core/.config/containers/systemd/collector.network
- path: /home/core/.config/containers/systemd/agent.network
contents:
inline: |
[Network]
user:
name: core
group:
name: core
- path: /home/core/.config/containers/systemd/planner.volume
contents:
inline: |
[Volume]
VolumeName=planner.volume
user:
name: core
group:
name: core
- path: /home/core/.config/containers/systemd/planner-setup.container
mode: 0644
contents:
inline: |
[Unit]
Description=Prepare data volume for the container
Before=planner-agent.service

[Container]
Image=registry.access.redhat.com/ubi9/ubi-micro
Exec=sh -c "cp -r /mnt/* /agent/ && chmod -R a+rwx /agent"
Volume=planner.volume:/agent
Volume=/home/core/vol:/mnt:Z

[Service]
Type=oneshot
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target default.target
- path: /home/core/.config/containers/systemd/planner-agent.container
mode: 0644
contents:
inline: |
[Unit]
Description=Planner agent quadlet
Wants=planner-setup.service
Wants=planner-agent-opa.service

[Container]
Image={{.MigrationPlannerAgentImage}}
ContainerName=planner-agent
AutoUpdate=registry
Exec= -config /agent/config.yaml
PublishPort=3333:3333
Volume=planner.volume:/agent
Volume=/home/core/.migration-planner:/agent:Z
Environment=OPA_SERVER=opa:8181
Network=agent.network
UserNS=keep-id:uid=1001

[Install]
WantedBy=multi-user.target default.target
@@ -123,39 +97,16 @@ storage:
contents:
inline: |
[Unit]
Description=Collector quadlet
Before=planner-agent-collector.service
Description=OPA quadlet
Before=planner-agent.service

[Container]
ContainerName=opa
Image=quay.io/kubev2v/forklift-validation:release-v2.6.4
Entrypoint=/usr/bin/opa
PublishPort=8181:8181
Exec=run --server /usr/share/opa/policies
Network=collector.network

[Install]
WantedBy=multi-user.target default.target

- path: /home/core/.config/containers/systemd/planner-agent-collector.container
mode: 0644
contents:
inline: |
[Unit]
Description=Collector quadlet
Wants=planner-agent-opa.service

[Container]
Image={{.MigrationPlannerCollectorImage}}
ContainerName=migration-planner-collector
AutoUpdate=registry
Exec=/vol/data/credentials.json /vol/data/inventory.json
Volume=planner.volume:/vol
Environment=OPA_SERVER=opa:8181
Network=collector.network

[Service]
Restart=on-failure
Network=agent.network

[Install]
WantedBy=multi-user.target default.target
43 changes: 11 additions & 32 deletions doc/agentvm.md
@@ -1,25 +1,19 @@
# Agent virtual machine
The agent, based on Red Hat CoreOS (RHCOS), communicates with the Agent Service and reports its status.
The agent virtual machine is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.
The agent virtual machine is initialized using ignition, which configures containers that run as systemd services.

## Systemd services
Below is the list of systemd services that can be found on the agent virtual machine. All of the services
are defined as quadlets. Quadlet configuration can be found in the [ignition template file](../data/config.ign.template).
Agent dockerfile can be found [here](../Containerfile.agent), the collector containerfile is [here](../Containerfile.collector).

### planner-setup
Planner-setup service is responsible for initializing the volume with data that is shared between `planner-agent` and `planner-agent-collector`.
are defined as quadlets. Quadlet configuration can be found in the [ignition template file](../data/ignition.template).
Agent dockerfile can be found [here](../Containerfile.agent).

### planner-agent
Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in `$HOME/vol/config.yaml` file, which is injected via ignition.
Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in the `$HOME/.migration-planner/config.yaml` file, which is injected via ignition.

Planner-agent contains web application that is exposed via port 3333. Once user access the web app and enter the credentials of the vCenter, `credentials.json` file is created in the shared volume, and `planner-agent-collector` can be spawned.
Planner-agent contains a web application that is exposed on port 3333. Once the user accesses the web app and enters the vCenter credentials, a `credentials.json` file is created, and a goroutine is started that fetches the data from vCenter. The data is stored in an `inventory.json` file. Once the agent notices the file, it sends the data over to the Agent service.
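The file handshake described above can be checked by hand when debugging. The paths follow the ignition configuration in this PR; the snippet only illustrates the sequence and is not the agent's actual code.

```shell
DATA_DIR="${DATA_DIR:-$HOME/.migration-planner/data}"
mkdir -p "$DATA_DIR"

# credentials.json appears after the user submits vCenter credentials;
# inventory.json appears once the collector goroutine has finished.
for f in credentials.json inventory.json; do
  if [ -s "$DATA_DIR/$f" ]; then
    echo "$f: present"
  else
    echo "$f: not yet written"
  fi
done
```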

### planner-agent-opa
Planner-agent-opa is a service that re-uses [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When `planner-agent-collector` fetch vCenter data it's validated against the OPA server and report is shared back to Agent Service.

### planner-agent-collector
Planner-agent-collector service waits until user enter vCenter credentials, once credentials are entered the vCenter data are collected. The data are stored in `$HOME/vol/data/inventory.json`. Once `invetory.json` is created `planner-agent` service send the data over to Agent service.
Planner-agent-opa is a service that re-uses the [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When `planner-agent` fetches vCenter data, it is validated against the OPA server and the report is shared back to the Agent Service.

### podman-auto-update
Podman auto-update is responsible for updating the images of the containers when a new release of an image is available. We use the default `podman-auto-update.timer`, which executes `podman-auto-update` every 24 hours.
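Auto-update can also be exercised without waiting for the timer. The commands below are standard podman/systemd tooling shown as a sketch, guarded so they degrade gracefully on hosts without podman; they are not taken from this repository.

```shell
if command -v podman >/dev/null 2>&1; then
  # Report which containers would be updated, without restarting anything.
  podman auto-update --dry-run || true
  # Trigger an update cycle now instead of waiting up to 24 hours.
  systemctl --user start podman-auto-update.service || true
else
  echo "podman not installed; run this on the agent VM"
fi
```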
@@ -32,36 +26,21 @@ Useful commands to troubleshoot the Agent VM. Note that all the containers are runn
$ podman ps
```

### Checking the status of all our services
```
$ systemctl --user status planner-*
```

### Inspecting the shared volume
We create a shared volume between containers, so we can share information between collector and agent container.
In order to explore the data stored in the volume, find the mountpoint of the volume:
```
$ podman volume inspect planner.volume | jq .[0].Mountpoint
```

And then you can explore relevant data. Like `config.yaml`, `credentials.json`, `inventory.json`, etc.
### Checking the status of planner-agent service
```
$ ls /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/config.yaml
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/data/credentials.json
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/data/inventory.json
$ systemctl --user status planner-agent
```

### Inspecting the host directory with data
The ignition create a `vol` directory in `core` user home directory.
The ignition creates a `.migration-planner` directory in the `core` user's home directory.
This directory should contain all relevant data, so to find a misconfiguration, search this directory.
```
$ ls -l vol
$ ls -l .migration-planner
```

### Check logs of the services
```
$ journalctl --user -f -u planner-*
$ journalctl --user -f -u planner-agent
```

### Status is `Not connected` after VM is booted.
4 changes: 1 addition & 3 deletions doc/deployment.md
@@ -31,8 +31,6 @@ Agent images are defined in the ignition file. So in order to modify the images

```
env:
- name: MIGRATION_PLANNER_COLLECTOR_IMAGE
value: quay.io/$USER/migration-planner-collector
- name: MIGRATION_PLANNER_AGENT_IMAGE
value: quay.io/$USER/migration-planner-agent
```
```
3 changes: 3 additions & 0 deletions internal/agent/agent.go
@@ -92,6 +92,9 @@ func (a *Agent) Run(ctx context.Context) error {
}
healthChecker.Start(healthCheckCh)

collector := NewCollector(a.log, a.config.DataDir)
go collector.collect()

inventoryUpdater := NewInventoryUpdater(a.log, a.config, client)
inventoryUpdater.UpdateServiceWithInventory(ctx)

