
question: detecting which .wasm to load and run? #46

Open
imjasonh opened this issue Oct 14, 2022 · 4 comments

Comments

@imjasonh

Hey 👋 I wasn't sure where to ask, and this seemed like a better place than the #krustlet channel in the K8s slack.

How do the shims decide what .wasm file(s?) to load when they get an image?

I see spin.toml in the example ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello image specifies

[[component]]
id = "hello"
source = "spin_rust_hello.wasm"

...which leads me to wonder how the shims know to look for spin.toml in the image layer contents, and not credenza.toml or pteradactyl.toml. If I had all three, would only spin.toml be read by convention?

In wasimg I'm setting the .wasm module to run in the OCI Image Config's Entrypoint, but that doesn't seem to be read anywhere, and you may end up moving to OCI Artifacts anyway, where there isn't necessarily a config at all (#43)

Can an Image/Artifact used by these shims contain multiple [[component]]s, and therefore multiple .wasm files? Can the shims read and consider multiple .toml files, potentially of different variants?

Trying to get an idea of how these images are expected to be laid out, so I can be sure I'm able to build images that are "conformant" with the "spec" (heavy quotes there 😆 ).

@devigned
Member

Glad you brought this up. @Mossaka and I were just talking about this. Right now, I believe the runtime expects an /app.wasm; as you say, it should be in the configuration or the entrypoint.

I'd like to see us move the slightfile out of the image and mount that as part of the pod spec. I think we are only adding it to the image at this point for simplicity.

We need to be much more precise in the "spec" for fs layout (definitely heavy quotes).

@squillace
Contributor

We'd like this to work with a single container for now, because it's easy, and since there's a "shim per host model", each shim needs to know how to map the artifacts for that model. Well heck, that's not very "speccy". :-)

Note that David says

I'd like to see us move the slightfile out of the image and mount that as part of the pod spec. I think we are only adding it to the image at this point for simplicity.

I'm of two minds about this, so it's the kind of thing we'll iterate on with users and anyone else who wants to work with this approach.

On the one hand, using a container/graph to deploy the topology of the host model in one swoop means that you can deploy that host model singly -- as in fire-and-forget edge functions, for example -- and deploy precisely the same thing without any modification in a container/graph to k8s. This, mind you, so long as the shim is in sync with that version of the host model, right?

Upside: k8s is now very clean manifest/chart, workload is treated as an immutable unit, making deployments far, far easier for pretty much everyone, both dev and deployer alike.
Downside: it departs from the pure k8s experience, in which the topology of a host model is ripped apart and config must migrate to volumes or configmaps and so on, but which brings with it the visibility of operations that a k8s operator would really like.

In behavior, a graph and a container have equivalent functionality here: they'd both enable the "stamp the immutable topology approach".

The oddness is that I could see enabling both approaches as well. It may be that small clusters in constrained, heterogeneous environments -- single-node clusters on risc-v SoCs, for example -- might well want a topology stamp. No operator is going to really want a bunch of separate things there; it's too much, and those clusters will not have ongoing management. (If they're attached to an edge cluster management service, there's every chance that touching thousands or millions of those clusters is, while theoretically possible, realistically a mess best avoided.)

We'll have our thoughts here, but I'm happy to take ideas!

@Mossaka
Member

Mossaka commented Oct 14, 2022

how the shims know to look for spin.toml in the image layer contents, and not credenza.toml or pteradactyl.toml. If I had all three, would only spin.toml be read by convention?

The spin shim has a convention of looking for the spin.toml file in the root directory (rootfs). Yes, you're right: if you had all three toml files, only spin.toml would be read, by convention.

The slight shim, however, has a different convention for now. It reads slightfile.toml and app.wasm in the rootfs directory.
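The two conventions above can be sketched as a pair of path lookups relative to the container rootfs. This is an illustrative sketch only; the helper names here are hypothetical, not the shims' actual API (the real lookups are in the main.rs files linked below):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical helper: the spin shim, by convention, looks for the
/// manifest at <rootfs>/spin.toml.
fn spin_manifest(rootfs: &Path) -> PathBuf {
    rootfs.join("spin.toml")
}

/// Hypothetical helper: the slight shim, by convention, reads both
/// <rootfs>/slightfile.toml and <rootfs>/app.wasm.
fn slight_inputs(rootfs: &Path) -> (PathBuf, PathBuf) {
    (rootfs.join("slightfile.toml"), rootfs.join("app.wasm"))
}

fn main() {
    let rootfs = Path::new("/rootfs");
    println!("spin reads:   {}", spin_manifest(rootfs).display());
    let (cfg, module) = slight_inputs(rootfs);
    println!("slight reads: {} and {}", cfg.display(), module.display());
}
```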

If you'd like to see how they are implemented, the relevant lines are here for the spin shim and the slight shim, respectively:
https://github.com/deislabs/containerd-wasm-shims/blob/main/containerd-shim-spin-v1/src/main.rs#L49
https://github.com/deislabs/containerd-wasm-shims/blob/main/containerd-shim-slight-v1/src/main.rs#L64-L65

Can an Image/Artifact used by these shims contain multiple [[component]]s, and therefore multiple .wasm files?

The spin shim can, because all the paths to wasm modules are defined in a single spin.toml manifest file. The slight shim, however, currently only works with a single wasm module named app.wasm.
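Since all module paths live in one spin.toml, a manifest with several [[component]] tables references several .wasm files. As a rough illustration only (the real spin shim parses the manifest with a proper TOML parser; this naive line scan just shows the shape of the data):

```rust
// Naive sketch: collect every component's `source` path from a spin.toml.
// NOT the shim's real implementation, which uses proper TOML parsing.
fn wasm_sources(manifest: &str) -> Vec<String> {
    manifest
        .lines()
        .filter_map(|line| {
            // Keep only lines of the form `source = "<path>.wasm"`.
            line.trim().strip_prefix("source").map(|rest| {
                rest.trim_start_matches(|c| c == ' ' || c == '=')
                    .trim_matches('"')
                    .to_string()
            })
        })
        .collect()
}

fn main() {
    let manifest = r#"
[[component]]
id = "hello"
source = "spin_rust_hello.wasm"

[[component]]
id = "goodbye"
source = "spin_rust_goodbye.wasm"
"#;
    // Prints each referenced module path, one per line.
    for src in wasm_sources(manifest) {
        println!("{src}");
    }
}
```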

@squillace
Contributor

Noted: this means there must always be a mapping between shim and host model version as well. For now that's fine; as we move forward, something a tad more "structured" should probably exist.
