
Requirements for multi-interface #211

Open
dabernie opened this issue Feb 18, 2022 · 9 comments

@dabernie

I believe it is long overdue to have a clear documented understanding behind the requirements for multiple interfaces within a cloud-native network function, independent from the fact that technical implementations already exist to support them.

Is multi-interface required for:

a) traffic segmentation
b) isolation, security
c) performance
d) hardware dependencies
e) because this is how we did PNF, VNFs
f) all of the above

Understanding the real justification behind this will help us better understand how CNFs and infrastructure might evolve, and where they could be simplified (i.e. avoiding too much toil).

@electrocucaracha
Collaborator

Agree, supporting multiple networks per Pod is still relevant. A discussion about this topic was started before, but I don't think we ever produced a best practice or use case from it.

@dabernie
Author

Thanks @electrocucaracha. I remember this one but forgot it never ended with recommendations, a call to arms, or actions. Let's revive it.

@dabernie
Author

dabernie commented Feb 22, 2022

Ok, I re-read the above-mentioned thread and can extract four main themes from it:

a) A lack of an abstracted API structure to clearly define network configurations and attachments beyond standard CNIs.

b) There was a notion of post-deployment interfaces in a Pod, but if we go truly cloud-native with immutability, what would be the use case for creating on-the-fly network attachments post-deployment? Normally, at creation time an app owner already knows where it needs to attach its workload.

c) Exotic protocols, acceleration or specific device drivers would also be a trigger.

d) Because for ages (PNFs, VNFs, now CNFs) we have been used to this model.

@electrocucaracha
Collaborator

b) There was a notion of post-deployment interfaces in a Pod, but if we go truly cloud-native with immutability, what would be the use case for creating on-the-fly network attachments post-deployment? Normally, at creation time an app owner already knows where it needs to attach its workload.

Regarding this point, if I understand correctly there are two different approaches:

  1. Pre-created network definitions (Multus, DANM, etc.): CNF vendors must specify host requirements, and those resources have to be created before the CNF on-boarding process. The creation of these resources is explicit.
  2. On-demand network definitions (NSM): resources are created on the fly during the CNF on-boarding process. The creation of these resources is implicit.
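A minimal sketch of the first, explicit approach, assuming Multus with a hypothetical macvlan network (the name, namespace, master interface, and IPAM range are illustrative, not taken from this thread):

```yaml
# Pre-created network definition (Multus): the operator creates this resource
# before the CNF on-boarding process, and CNF Pods later reference it by name.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net-a        # hypothetical name
  namespace: cnf-namespace   # hypothetical namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
    }
```

In the on-demand model (NSM), there is no equivalent pre-created object: the connection is requested by the workload at run time, so the resource creation stays implicit.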

@dabernie
Author

Yes, but what about post-onboarding? In a normal Kubernetes Pod, when do we change the networking requirements post-deployment? Basic immutability would have you restart with the refreshed specs.

@electrocucaracha
Collaborator

I have created this draft to start the discussion on this topic. All comments and feedback are welcome.

@tomkivlin
Collaborator

On hold while SIG Network discusses how to standardise the API across all implementations of a multi-interface network plugin.

@electrocucaracha
Collaborator

electrocucaracha commented Dec 5, 2022

These are the efforts made by the Kubernetes community related to this topic:

@iawells
Collaborator

iawells commented Dec 5, 2022

I happened to see this ping earlier and I thought I'd add a comment or two to see if it helps. I was looking at the comments on dynamically changing the interfaces.

I want you to think about 'resources' (a loaded word) as coming in two types:

One: a network address somewhere else in the network (not necessarily in k8s). I can produce a network address out of thin air at any time; any other service can hand over an address and say 'here you go'. NSM is a bit like this, in that it can produce pipes that will spit out and receive packets at any point in time. This kind of 'resource' is obviously not immutable.

Two: a CNI network interface provided by k8s. One (originally) or a fixed number (Multus, DANM) are given to a pod, and it is not changeable without recreating the pod. This is arguably not because the CNI interface itself is immutable; it's because it's listed in the pod config, and that is immutable. Pods are made to be created and destroyed, not changed on the fly in this way, because if you can change a pod the whole question of orchestrating the parts of an app becomes 1000% more complicated.
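To make the second type concrete, a sketch under the Multus conventions (the Pod and network names are hypothetical): the extra interfaces are named in the Pod's annotations, and since that part of the spec cannot be edited on a running Pod, changing the list means recreating the Pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-pod                       # hypothetical
  annotations:
    # The fixed set of secondary interfaces is part of the pod config;
    # adding or removing a network here requires recreating the pod.
    k8s.v1.cni.cncf.io/networks: net-a,net-b
spec:
  containers:
    - name: app
      image: example/cnf:latest       # hypothetical image
```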

On use cases: are there use cases that require giving a CNF a new interface it didn't originally have? Do we have them noted down? I don't recall this being a major concern with VNFs, so am I missing something, or is this a new ask? If it's a new ask, is it a high or low priority for CNFs and their users?

On technology: given CNF != pod, are there ways, using container start/stop, that a CNF could consume a new interface it didn't have, even if a given existing pod cannot take on that network interface (for instance, restarting a pod within the broader application in such a way that service is not disrupted)? Or does that have so many shortcomings that it doesn't deliver on the use case?
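One possible sketch of that idea, assuming the CNF runs as a Deployment (all names hypothetical): editing the network annotation in the pod template never mutates a running pod; it triggers a rolling replacement, so the CNF as a whole takes on the new interface while each individual pod stays immutable.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnf                            # hypothetical
spec:
  replicas: 3
  selector:
    matchLabels: { app: cnf }
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                # keep service up during the roll
      maxSurge: 1
  template:
    metadata:
      labels: { app: cnf }
      annotations:
        # Was "net-a"; adding "net-b" rolls out new pods with both interfaces.
        k8s.v1.cni.cncf.io/networks: net-a,net-b
    spec:
      containers:
        - name: app
          image: example/cnf:latest    # hypothetical image
```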

The answers to those determine whether the packet-based interfaces required for CNFs are part of a pod's immutable config or dynamically requested and consumed by the application. The only thing I'll add is that it is possible to make these interfaces on demand and pass them around, even if it isn't what we currently do, and you shouldn't rule out options that require it.
