API: reduce priority range from 1000 to 500 #319
Conversation
This seemingly minor change means that a full policy tier can be packed into a 16-bit integer, because 500 priorities × 100 rules per policy = 50,000 < 2^16. As written, the spec allows up to 100,000 rules per tier (1000 × 100), which needs 17 bits in the worst case, since 2^16 = 65,536 < 100,000.

Signed-off-by: Casey Callendrello <[email protected]>
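For illustration, a minimal Go sketch of the arithmetic behind this claim (the `priority*100 + ruleIndex` encoding is an example, not something the API mandates):

```go
package main

import "fmt"

func main() {
	const (
		maxPriority       = 500 // the cap proposed in this PR (currently 1000)
		maxRulesPerPolicy = 100
	)

	// One hypothetical dense encoding: value = priority*100 + ruleIndex.
	// The highest encoded value is then 499*100 + 99 = 49,999.
	highest := (maxPriority-1)*maxRulesPerPolicy + (maxRulesPerPolicy - 1)
	fmt.Println(highest, highest < 1<<16) // 49999 true -> fits in a uint16

	// With the current cap of 1000, the worst case is 99,999,
	// which needs 17 bits because 2^16 = 65,536 < 100,000.
	oldWorst := 1000*maxRulesPerPolicy - 1
	fmt.Println(oldWorst, oldWorst < 1<<16) // 99999 false
}
```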
(Branch force-pushed from a0e9406 to 3a4866d.)
/ok-to-test

LGTM

/assign @danwinship @bowei
/hold

So, from my perspective as "someone who has not actually implemented ANP myself, and is not currently responsible for maintaining an ANP implementation"... this is wrong. The numerical limits are not hard API guarantees that implementations can be designed around. The goal is "ensure that the total number of rules is bounded", not "ensure that the total number of rules has a specific bound". That said, ovn-kubernetes also currently implements prioritization this way (by multiplying "max policies" times "max rules per policy"), and in fact it forces [...]

So, what is the argument for why 2^16 total rules is reasonable, but 2^14 is too few and 2^17 is too many? Or, why is 2^16 rules per tier the right tradeoff, as opposed to 2^16 total? (Indeed, ovn-k's implementation actually only allows for 2^14 total, not 2^14 per tier... not sure what they're planning to do with BANP priorities... let alone the possibility of DNP in the future...)

Anyway, if we want to say that implementations are allowed to assume that "total number of policies" times "total number of rules per policy" times "total number of tiers" is less than a specific value, and that it is reasonable for implementations to be designed such that they would need a complete rearchitecting and rewrite if we increased that value, then we need to decide now what the total maximum number of tiers will ever be (e.g., including "DNP", etc.). Because once we hit GA, we can't change validation in a way that would reject previously-valid objects, so we can't lower the range of valid priority levels or the maximum length of the rule arrays.

My impression is that we did not actually intend to say that implementations must support 100,000 rules. Rather, the "1000 policies" and "100 rules per policy" were limits that independently made sense, and we assumed that the 1000×100 space would be filled in sparsely.

Or alternatively, if we did intend to say that implementations must support 100,000 rules (and no more), then we should just make that the requirement, and not try to enforce it in terms of a specific division of max policies / max rules-per-policy. (That is, say that implementations can assume it's possible to assign a unique priority from 0 to 99,999 to each rule, but that you can't derive that priority just by multiplying.)
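To make the numbers in that question concrete, here is a small Go sketch (an illustration, not from the PR) computing how many bits a dense per-rule priority space needs under a few candidate limits:

```go
package main

import (
	"fmt"
	"math/bits"
)

// bitsNeeded returns how many bits are required to represent n
// distinct priority values (0 .. n-1).
func bitsNeeded(n uint) int { return bits.Len(n - 1) }

func main() {
	cases := []struct {
		name            string
		policies, rules uint
	}{
		{"current spec: 1000 policies x 100 rules", 1000, 100},
		{"this PR: 500 policies x 100 rules", 500, 100},
		{"ovn-kubernetes today: 2^14 total", 1 << 14, 1},
	}
	for _, c := range cases {
		total := c.policies * c.rules
		fmt.Printf("%-40s %6d rules -> %2d bits\n", c.name, total, bitsNeeded(total))
	}
	// Output:
	// current spec: 1000 policies x 100 rules  100000 rules -> 17 bits
	// this PR: 500 policies x 100 rules         50000 rules -> 16 bits
	// ovn-kubernetes today: 2^14 total          16384 rules -> 14 bits
}
```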
Oh, I understood this differently. I assumed that implementations would benefit from having a range of at most 500 priorities, because that allows encoding the priority in a 16-bit integer. If that is the case, it sounds reasonable to move from 1000 to 500, because 1000 looks like an arbitrary number, maybe picked from the existing Endpoints cap? I did not participate, so I assumed a lot of things here... and I agree with Dan that the limits are per field, and this sounds like part of the API contract too. Dan, as you comment later, you can not make validation stricter later, since you'll invalidate existing objects.
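For reference, per-field limits like these are expressed as validation markers on the API types. A hedged sketch of roughly what the relevant fields look like (simplified; not the exact network-policy-api source):

```go
// Sketch only: simplified from the AdminNetworkPolicy types to show
// where the two per-field limits live; names are illustrative.
package v1alpha1

type AdminNetworkPolicySpec struct {
	// Lowering Maximum from 1000 to 500 is the change this PR
	// proposes. After GA, validation can be loosened but not
	// tightened, since tightening would reject existing objects.
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=1000
	Priority int32 `json:"priority"`

	// Each policy carries at most 100 ingress rules.
	// +kubebuilder:validation:MaxItems=100
	Ingress []AdminNetworkPolicyIngressRule `json:"ingress,omitempty"`
}

// Stub so the sketch is self-contained.
type AdminNetworkPolicyIngressRule struct{}
```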
OK, consensus in yesterday's meeting was that we expect the "rule space" to be very sparse (that is, we assume that most clusters will not use all 1000 priority values, most policies will not have 100 rules, most rules will not have 100 peers, and most [...]).

The [...]

However, the [...]

We also generally agreed that the max priority should probably remain a power of 10, and 100 feels too small, so we plan to stick with 1000.
(so implementations that need to have distinct priority numbers for every policy/rule/peer/whatever should dynamically map them into whatever range they need, and not try to statically map them) |
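A minimal sketch of that dynamic approach, assuming an implementation keyed on (policy priority, rule index); the names are illustrative, not from any real implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// ruleKey identifies one rule by its policy's priority and its index
// within the policy.
type ruleKey struct {
	policyPriority int32
	ruleIndex      int
}

// densePriorities maps only the rules that actually exist onto a
// compact 0..n-1 range, instead of statically reserving a
// priority*100+index slot for every possible rule.
func densePriorities(keys []ruleKey) map[ruleKey]uint16 {
	sort.Slice(keys, func(i, j int) bool {
		if keys[i].policyPriority != keys[j].policyPriority {
			return keys[i].policyPriority < keys[j].policyPriority
		}
		return keys[i].ruleIndex < keys[j].ruleIndex
	})
	dense := make(map[ruleKey]uint16, len(keys))
	for i, k := range keys {
		dense[k] = uint16(i)
	}
	return dense
}

func main() {
	// A sparse cluster: three policies and a handful of rules.
	rules := []ruleKey{{900, 0}, {10, 1}, {10, 0}, {500, 0}}
	fmt.Println(densePriorities(rules))
	// map[{10 0}:0 {10 1}:1 {500 0}:2 {900 0}:3]
}
```

The point of the sketch: when policies are added or removed, the implementation re-derives the dense mapping, so it never depends on the product of the API's per-field maximums.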