
Cannot track usage of podman containers #55

Closed
asmacdo opened this issue Jun 7, 2024 · 3 comments

asmacdo commented Jun 7, 2024

 duct -- podman run --rm -it progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10 
duct is executing podman run --rm -it progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10...
Log files will be written to .duct/logs/2024.06.07T10.29.23-530520_
stress: info: [1] dispatching hogs: 2 cpu, 1 io, 2 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 15000us
stress: dbug: [1] setting timeout to 10s
stress: dbug: [1] --> hogcpu worker 2 [2] forked
stress: dbug: [1] --> hogio worker 1 [3] forked
stress: dbug: [1] --> hogvm worker 2 [4] forked
stress: dbug: [1] using backoff sleep of 6000us
stress: dbug: [1] setting timeout to 10s
stress: dbug: [1] --> hogcpu worker 1 [5] forked
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: dbug: [6] allocating 5242880000 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [4] allocating 5242880000 bytes ...
stress: dbug: [4] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 5242880000 bytes
stress: dbug: [6] allocating 5242880000 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [4] freed 5242880000 bytes
stress: dbug: [4] allocating 5242880000 bytes ...
stress: dbug: [4] touching bytes in strides of 4096 bytes ...
stress: dbug: [4] freed 5242880000 bytes
stress: dbug: [4] allocating 5242880000 bytes ...
stress: dbug: [4] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 5242880000 bytes
stress: dbug: [6] allocating 5242880000 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 5242880000 bytes
stress: dbug: [6] allocating 5242880000 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [4] freed 5242880000 bytes
stress: dbug: [4] allocating 5242880000 bytes ...
stress: dbug: [4] touching bytes in strides of 4096 bytes ...
stress: dbug: [4] freed 5242880000 bytes
stress: dbug: [4] allocating 5242880000 bytes ...
stress: dbug: [4] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 5242880000 bytes
stress: dbug: [6] allocating 5242880000 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [1] <-- worker 2 signalled normally
stress: dbug: [1] <-- worker 5 signalled normally
stress: dbug: [1] <-- worker 3 signalled normally
stress: dbug: [1] <-- worker 4 signalled normally
stress: dbug: [1] <-- worker 6 signalled normally
stress: info: [1] successful run completed in 10s

Exit Code: 0
Command: podman run --rm -it progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10
Log files location: .duct/logs/2024.06.07T10.29.23-530520_
Wall Clock Time: 10.3531014919281
Memory Peak Usage: 0.1%
CPU Peak Usage: 7.0%

That's unfortunate, especially if it turns out to be a fundamental limitation of our method of tracking via session ID.
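For context, a minimal sketch of what session-ID-based tracking looks like (this is an illustration, not duct's actual code): enumerate /proc and keep the PIDs whose session ID matches the launched command's session. A container runtime like podman hands the workload off to a daemon/conmon process in a different session, so a sampler like this never sees the real work.

```python
import os
import subprocess

def pids_in_session(sid):
    """Return PIDs whose session ID equals `sid`, by reading /proc (Linux only)."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                stat = f.read()
            # comm (field 2) may contain spaces/parens; parse after the last ')'
            rest = stat.rsplit(")", 1)[1].split()
            # fields after comm: state, ppid, pgrp, session -> rest[3] is the session ID
            if int(rest[3]) == sid:
                pids.append(int(entry))
        except (FileNotFoundError, ProcessLookupError, ValueError, IndexError):
            continue  # process exited mid-scan, or unreadable entry
    return pids

# A child spawned normally shares our session and is found;
# one spawned into a new session (as container payloads effectively are) is not.
child = subprocess.Popen(["sleep", "2"])
print(child.pid in pids_in_session(os.getsid(0)))  # True
child.terminate()
```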

@asmacdo asmacdo changed the title Cannot track usage of containers Cannot track usage of podman containers Jun 7, 2024

asmacdo commented Jun 7, 2024

Apptainer (singularity) works!

 austin@fancy  ~/devel/duct   fix-test-script ± duct apptainer build stress.sif docker://progrium/stress
duct is executing apptainer build stress.sif docker://progrium/stress...
Log files will be written to .duct/logs/2024.06.07T10.47.49-540470_
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
Getting image source signatures
Copying blob sha256:7d04a4fe140537a71a03bd27fdf4c8a88981c3fb6f77f890efc2bdc8fd67cd6c
Copying blob sha256:871c32dbbb53711b4c20594aa30663ea264118b63c0dafde1a23a4ba13aad47e
Copying blob sha256:d14088925c6e3c1023b7a1c5cd07eba3cfda32e9898f2a97fde15ea2b3d4fc5c
Copying blob sha256:dbe7819a64dde281f564a4a9777433e9b1340a4a37c40a47c986465cc4c6663c
Copying blob sha256:58026d51efe4ff89203aed9c895b1a0d2e7d3abe386c320d3f1f08736f9883bc
Copying blob sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Copying blob sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
Copying blob sha256:5c319e2679086a11f7575e9ae2af256a2410d316931351b82a61da6318750782
Copying blob sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
Copying blob sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
Copying config sha256:cf8e24e9194b403e445c61e6fdcc71ed96f0256194854fc4749b4d6938d4cfb0
Writing manifest to image destination
Storing signatures
2024/06/07 10:47:51  info unpack layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
2024/06/07 10:47:51  info unpack layer: sha256:871c32dbbb53711b4c20594aa30663ea264118b63c0dafde1a23a4ba13aad47e
2024/06/07 10:47:51  warn rootless{dev/agpgart} creating empty file in place of device 10:175
2024/06/07 10:47:51  warn rootless{dev/audio} creating empty file in place of device 14:4
2024/06/07 10:47:51  warn rootless{dev/audio1} creating empty file in place of device 14:20
2024/06/07 10:47:51  warn rootless{dev/audio2} creating empty file in place of device 14:36
2024/06/07 10:47:51  warn rootless{dev/audio3} creating empty file in place of device 14:52
2024/06/07 10:47:51  warn rootless{dev/audioctl} creating empty file in place of device 14:7
2024/06/07 10:47:51  warn rootless{dev/console} creating empty file in place of device 5:1
2024/06/07 10:47:51  warn rootless{dev/dsp} creating empty file in place of device 14:3
2024/06/07 10:47:51  warn rootless{dev/dsp1} creating empty file in place of device 14:19
2024/06/07 10:47:51  warn rootless{dev/dsp2} creating empty file in place of device 14:35
2024/06/07 10:47:51  warn rootless{dev/dsp3} creating empty file in place of device 14:51
2024/06/07 10:47:51  warn rootless{dev/full} creating empty file in place of device 1:7
2024/06/07 10:47:51  warn rootless{dev/kmem} creating empty file in place of device 1:2
2024/06/07 10:47:51  warn rootless{dev/loop0} creating empty file in place of device 7:0
2024/06/07 10:47:51  warn rootless{dev/loop1} creating empty file in place of device 7:1
2024/06/07 10:47:51  warn rootless{dev/loop2} creating empty file in place of device 7:2
2024/06/07 10:47:51  warn rootless{dev/loop3} creating empty file in place of device 7:3
2024/06/07 10:47:51  warn rootless{dev/loop4} creating empty file in place of device 7:4
2024/06/07 10:47:51  warn rootless{dev/loop5} creating empty file in place of device 7:5
2024/06/07 10:47:51  warn rootless{dev/loop6} creating empty file in place of device 7:6
2024/06/07 10:47:51  warn rootless{dev/loop7} creating empty file in place of device 7:7
2024/06/07 10:47:51  warn rootless{dev/mem} creating empty file in place of device 1:1
2024/06/07 10:47:51  warn rootless{dev/midi0} creating empty file in place of device 35:0
2024/06/07 10:47:51  warn rootless{dev/midi00} creating empty file in place of device 14:2
2024/06/07 10:47:51  warn rootless{dev/midi01} creating empty file in place of device 14:18
2024/06/07 10:47:51  warn rootless{dev/midi02} creating empty file in place of device 14:34
2024/06/07 10:47:51  warn rootless{dev/midi03} creating empty file in place of device 14:50
2024/06/07 10:47:51  warn rootless{dev/midi1} creating empty file in place of device 35:1
2024/06/07 10:47:51  warn rootless{dev/midi2} creating empty file in place of device 35:2
2024/06/07 10:47:51  warn rootless{dev/midi3} creating empty file in place of device 35:3
2024/06/07 10:47:51  warn rootless{dev/mixer} creating empty file in place of device 14:0
2024/06/07 10:47:51  warn rootless{dev/mixer1} creating empty file in place of device 14:16
2024/06/07 10:47:51  warn rootless{dev/mixer2} creating empty file in place of device 14:32
2024/06/07 10:47:51  warn rootless{dev/mixer3} creating empty file in place of device 14:48
2024/06/07 10:47:51  warn rootless{dev/mpu401data} creating empty file in place of device 31:0
2024/06/07 10:47:51  warn rootless{dev/mpu401stat} creating empty file in place of device 31:1
2024/06/07 10:47:51  warn rootless{dev/null} creating empty file in place of device 1:3
2024/06/07 10:47:51  warn rootless{dev/port} creating empty file in place of device 1:4
2024/06/07 10:47:51  warn rootless{dev/ptmx} creating empty file in place of device 5:2
2024/06/07 10:47:51  warn rootless{dev/ram0} creating empty file in place of device 1:0
2024/06/07 10:47:51  warn rootless{dev/ram1} creating empty file in place of device 1:1
2024/06/07 10:47:51  warn rootless{dev/ram10} creating empty file in place of device 1:10
2024/06/07 10:47:51  warn rootless{dev/ram11} creating empty file in place of device 1:11
2024/06/07 10:47:51  warn rootless{dev/ram12} creating empty file in place of device 1:12
2024/06/07 10:47:51  warn rootless{dev/ram13} creating empty file in place of device 1:13
2024/06/07 10:47:51  warn rootless{dev/ram14} creating empty file in place of device 1:14
2024/06/07 10:47:51  warn rootless{dev/ram15} creating empty file in place of device 1:15
2024/06/07 10:47:51  warn rootless{dev/ram16} creating empty file in place of device 1:16
2024/06/07 10:47:51  warn rootless{dev/ram2} creating empty file in place of device 1:2
2024/06/07 10:47:51  warn rootless{dev/ram3} creating empty file in place of device 1:3
2024/06/07 10:47:51  warn rootless{dev/ram4} creating empty file in place of device 1:4
2024/06/07 10:47:51  warn rootless{dev/ram5} creating empty file in place of device 1:5
2024/06/07 10:47:51  warn rootless{dev/ram6} creating empty file in place of device 1:6
2024/06/07 10:47:51  warn rootless{dev/ram7} creating empty file in place of device 1:7
2024/06/07 10:47:51  warn rootless{dev/ram8} creating empty file in place of device 1:8
2024/06/07 10:47:51  warn rootless{dev/ram9} creating empty file in place of device 1:9
2024/06/07 10:47:51  warn rootless{dev/random} creating empty file in place of device 1:8
2024/06/07 10:47:51  warn rootless{dev/rmidi0} creating empty file in place of device 35:64
2024/06/07 10:47:51  warn rootless{dev/rmidi1} creating empty file in place of device 35:65
2024/06/07 10:47:51  warn rootless{dev/rmidi2} creating empty file in place of device 35:66
2024/06/07 10:47:51  warn rootless{dev/rmidi3} creating empty file in place of device 35:67
2024/06/07 10:47:51  warn rootless{dev/sequencer} creating empty file in place of device 14:1
2024/06/07 10:47:51  warn rootless{dev/smpte0} creating empty file in place of device 35:128
2024/06/07 10:47:51  warn rootless{dev/smpte1} creating empty file in place of device 35:129
2024/06/07 10:47:51  warn rootless{dev/smpte2} creating empty file in place of device 35:130
2024/06/07 10:47:51  warn rootless{dev/smpte3} creating empty file in place of device 35:131
2024/06/07 10:47:51  warn rootless{dev/sndstat} creating empty file in place of device 14:6
2024/06/07 10:47:51  warn rootless{dev/tty} creating empty file in place of device 5:0
2024/06/07 10:47:51  warn rootless{dev/tty0} creating empty file in place of device 4:0
2024/06/07 10:47:51  warn rootless{dev/tty1} creating empty file in place of device 4:1
2024/06/07 10:47:51  warn rootless{dev/tty2} creating empty file in place of device 4:2
2024/06/07 10:47:51  warn rootless{dev/tty3} creating empty file in place of device 4:3
2024/06/07 10:47:51  warn rootless{dev/tty4} creating empty file in place of device 4:4
2024/06/07 10:47:51  warn rootless{dev/tty5} creating empty file in place of device 4:5
2024/06/07 10:47:51  warn rootless{dev/tty6} creating empty file in place of device 4:6
2024/06/07 10:47:51  warn rootless{dev/tty7} creating empty file in place of device 4:7
2024/06/07 10:47:51  warn rootless{dev/tty8} creating empty file in place of device 4:8
2024/06/07 10:47:51  warn rootless{dev/tty9} creating empty file in place of device 4:9
2024/06/07 10:47:51  warn rootless{dev/urandom} creating empty file in place of device 1:9
2024/06/07 10:47:51  warn rootless{dev/zero} creating empty file in place of device 1:5
2024/06/07 10:47:52  info unpack layer: sha256:dbe7819a64dde281f564a4a9777433e9b1340a4a37c40a47c986465cc4c6663c
2024/06/07 10:47:52  info unpack layer: sha256:d14088925c6e3c1023b7a1c5cd07eba3cfda32e9898f2a97fde15ea2b3d4fc5c
2024/06/07 10:47:52  info unpack layer: sha256:58026d51efe4ff89203aed9c895b1a0d2e7d3abe386c320d3f1f08736f9883bc
2024/06/07 10:47:52  info unpack layer: sha256:7d04a4fe140537a71a03bd27fdf4c8a88981c3fb6f77f890efc2bdc8fd67cd6c
2024/06/07 10:47:52  info unpack layer: sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
2024/06/07 10:47:52  info unpack layer: sha256:5c319e2679086a11f7575e9ae2af256a2410d316931351b82a61da6318750782
2024/06/07 10:47:52  info unpack layer: sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
2024/06/07 10:47:52  info unpack layer: sha256:1775fca35fb6a4d31c541746eaea63c5cb3c00280c8b5a351d4e944cdca7489d
INFO:    Creating SIF file...

Exit Code: 0
Command: apptainer build stress.sif docker://progrium/stress
Log files location: .duct/logs/2024.06.07T10.47.49-540470_
Wall Clock Time: 6.437902450561523
Memory Peak Usage: 1.4000000000000001%
CPU Peak Usage: 2499.5%
INFO:    Build complete: stress.sif

Now we run it twice, consuming different amounts of memory, and sure enough the stats change :)

austin@fancy  ~/devel/duct   fix-test-script ± duct -- apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s

duct is executing apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s...
Log files will be written to .duct/logs/2024.06.07T10.45.59-538579_
stress: info: [538602] dispatching hogs: 2 cpu, 1 io, 2 vm, 0 hdd
stress: info: [538602] successful run completed in 10s

Exit Code: 0
Command: apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s
Log files location: .duct/logs/2024.06.07T10.45.59-538579_
Wall Clock Time: 10.175713539123535
Memory Peak Usage: 31.4%
CPU Peak Usage: 508.0%
 austin@fancy  ~/devel/duct   fix-test-script ± duct -- apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 1000M --timeout 10s

duct is executing apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 1000M --timeout 10s...
Log files will be written to .duct/logs/2024.06.07T10.46.17-539323_
stress: info: [539345] dispatching hogs: 2 cpu, 1 io, 2 vm, 0 hdd
stress: info: [539345] successful run completed in 10s

Exit Code: 0
Command: apptainer exec stress.sif stress --cpu 2 --io 1 --vm 2 --vm-bytes 1000M --timeout 10s
Log files location: .duct/logs/2024.06.07T10.46.17-539323_
Wall Clock Time: 10.162036180496216
Memory Peak Usage: 6.2%
CPU Peak Usage: 534.0%
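The difference between the two runtimes can be reproduced outside of containers entirely (an illustration under the assumption stated above, not duct's code): apptainer runs the workload as ordinary children in the launcher's session, while podman's payload ends up under a separate process in a new session, which is exactly what a session-ID sampler cannot see.

```python
import os
import subprocess

# Like apptainer exec/run: the child inherits the launcher's session.
same = subprocess.Popen(["sleep", "2"])

# Analogous to podman's detached payload: setsid puts it in a new session.
detached = subprocess.Popen(["sleep", "2"], start_new_session=True)

print(os.getsid(same.pid) == os.getsid(0))      # True  -> visible to session tracking
print(os.getsid(detached.pid) == os.getsid(0))  # False -> invisible to session tracking

same.terminate()
detached.terminate()
```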


asmacdo commented Jun 7, 2024

FWIW, we don't need exec; apptainer run also works.

duct -- apptainer run stress.sif --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s

duct is executing apptainer run stress.sif --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s...
Log files will be written to .duct/logs/2024.06.07T10.56.38-546832_
stress: info: [546853] dispatching hogs: 2 cpu, 1 io, 2 vm, 0 hdd
stress: dbug: [546853] using backoff sleep of 15000us
stress: dbug: [546853] setting timeout to 10s
stress: dbug: [546853] --> hogcpu worker 2 [546880] forked
stress: dbug: [546853] --> hogio worker 1 [546881] forked
stress: dbug: [546853] --> hogvm worker 2 [546882] forked
stress: dbug: [546853] using backoff sleep of 6000us
stress: dbug: [546853] setting timeout to 10s
stress: dbug: [546853] --> hogcpu worker 1 [546883] forked
stress: dbug: [546853] --> hogvm worker 1 [546884] forked
stress: dbug: [546853] <-- worker 546880 signalled normally
stress: dbug: [546853] <-- worker 546883 signalled normally
stress: dbug: [546853] <-- worker 546881 signalled normally
stress: dbug: [546853] <-- worker 546882 signalled normally
stress: dbug: [546853] <-- worker 546884 signalled normally
stress: info: [546853] successful run completed in 10s

Exit Code: 0
Command: apptainer run stress.sif --cpu 2 --io 1 --vm 2 --vm-bytes 5000M --timeout 10s
Log files location: .duct/logs/2024.06.07T10.56.38-546832_
Wall Clock Time: 10.17927861213684
Memory Peak Usage: 31.4%
CPU Peak Usage: 886.5%

@asmacdo asmacdo added this to the Backlog milestone Jun 7, 2024

asmacdo commented Aug 15, 2024

Closing as won't/can't fix now that we have #142.

@asmacdo asmacdo closed this as completed Aug 15, 2024