question: detecting which .wasm to load and run? #46
Comments
Glad you brought this up. @Mossaka and I were just talking about this. Right now, I believe the runtime expects the slightfile to be baked into the image. I'd like to see us move the slightfile out of the image and mount it as part of the pod spec; I think we are only adding it to the image at this point for simplicity. We need to be much more precise in the "spec" for fs layout (definitely heavy quotes).
We'd like this to work with a single container for now, because it's easy, and as there's a "shim per host model," each shim needs to know how to map the artifacts for that model. Well heck, that's not very "speccy". :-) Note what David says above.
I'm of two minds about this, so it's the kind of thing we'll iterate on with users and anyone else who wants to work with this approach.

On the one hand, using a container/graph to deploy the topology of the host model in one swoop means that you can deploy that host model singly -- as in fire-and-forget edge functions, for example -- and deploy precisely the same thing, without any modification, in a container/graph to k8s. This, mind you, only so long as the shim is in sync with that version of the host model, right? Upside: the k8s manifest/chart stays very clean and the workload is treated as an immutable unit, making deployments far, far easier for pretty much everyone, dev and deployer alike.

The odd thing is that I could see enabling both approaches as well. It may be that small clusters in constrained, heterogeneous environments -- single-node clusters on RISC-V SoCs, for example -- might well want a topology stamp. No operator is really going to want a bunch of separate things there; it's too much, and those clusters will not have ongoing management. (If they're attached to an edge cluster management service, there's every chance that touching thousands or millions of those clusters is, while theoretically possible, realistically a mess to avoid.)

We'll have our thoughts here, but I'm happy to take ideas!
The spin shim has a convention to look for the `spin.toml` file in the image. The slight shim, however, has a different convention for now: it reads the slightfile instead. If you'd like to see how they are implemented, the relevant lines are here for the spin shim and the slight shim, respectively.
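For a rough sense of what that kind of convention-based lookup could look like, here is a minimal Rust sketch. It is not the shims' actual code; the rootfs path and the `slightfile.toml` filename are assumptions made purely for illustration.

```rust
use std::path::{Path, PathBuf};

// Illustrative sketch only: probe a container's rootfs for a well-known
// manifest filename, mirroring the convention-based lookup described above.
// The filenames and rootfs path are assumptions, not the shims' real code.
fn find_manifest(rootfs: &Path) -> Option<PathBuf> {
    // A spin-style shim looks for its manifest at the image root...
    let spin_manifest = rootfs.join("spin.toml");
    if spin_manifest.is_file() {
        return Some(spin_manifest);
    }
    // ...while a slight-style shim would look for its own config file
    // (the exact filename here is an assumption).
    let slight_manifest = rootfs.join("slightfile.toml");
    if slight_manifest.is_file() {
        return Some(slight_manifest);
    }
    None
}

fn main() {
    // Hypothetical rootfs path, purely for the example.
    match find_manifest(Path::new("/run/container/rootfs")) {
        Some(path) => println!("would configure the runtime from {}", path.display()),
        None => eprintln!("no known manifest found at the image root"),
    }
}
```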
The spin shim can, because all the paths to wasm modules are defined in a single `spin.toml`.
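To see why one manifest is enough, here is a hedged sketch of deserializing a Spin-style manifest with multiple `[[component]]` entries. The struct fields are simplified assumptions rather than the real `spin.toml` schema, and it assumes the `serde` (with the derive feature) and `toml` crates are available.

```rust
use serde::Deserialize;

// Simplified, assumed view of a Spin-style manifest: each [[component]]
// names the wasm module it runs. Not the real spin.toml schema.
#[derive(Deserialize)]
struct Manifest {
    #[serde(rename = "component")]
    components: Vec<Component>,
}

#[derive(Deserialize)]
struct Component {
    id: String,
    source: String, // path to a .wasm file inside the image
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Two components, two wasm modules, one manifest -- so the shim only
    // ever has to find the single .toml file by convention.
    let manifest: Manifest = toml::from_str(
        r#"
        [[component]]
        id = "hello"
        source = "hello.wasm"

        [[component]]
        id = "goodbye"
        source = "goodbye.wasm"
        "#,
    )?;

    for c in &manifest.components {
        println!("component {} -> {}", c.id, c.source);
    }
    Ok(())
}
```

Because every component's module path lives in that single file, the shim only needs to locate the manifest by convention and can then load however many `.wasm` files it lists.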
Noted: this means that there must always be a mapping between shim and host model version as well. For now that's fine; as we move forward, something a tad more "structured" should probably exist.
Hey 👋 I wasn't sure where to ask, and this seemed like a better place than on the #krustlet channel in the K8s slack.

How do the shims decide what `.wasm` file(s?) to load when they get an image?

I see the `spin.toml` in the example `ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello` image specifies ... which leads me to wonder how the shims know to look for `spin.toml` in the image layer contents, and not `credenza.toml` or `pteradactyl.toml`. If I had all three, would only `spin.toml` be read by convention?

In wasimg I'm setting the `.wasm` module to run in the OCI Image Config's `Entrypoint`, but that doesn't seem to be read anywhere, and you may end up moving to OCI Artifacts anyway, where there isn't necessarily a config (#43).

Can an Image/Artifact used by these shims contain multiple `[[component]]`s, and therefore multiple `.wasm` files? Can the shims read and consider multiple `.toml` files, potentially of different variants?

Trying to get an idea of how these images are expected to be laid out, so I can be sure I'm able to build images that are "conformant" with the "spec" (heavy quotes there 😆).