KEP-5278: update KEP for NominatedNodeName, narrowing down the scope of the feature and moving it to beta #5618
Conversation
/hold
PRR shadow review
simply be rejected by the scheduler (and the `NominatedNodeName` will be cleared before
moving the rejected pod to unschedulable).

#### Increasing the load to kube-apiserver
I don't understand scheduling all that well, so correct me if my assumptions are incorrect.
Is the assumption here that if `PreBind()` plugins skip, the binding operation will never take too much time, and so we don't need to expose `NominatedNodeName`? This is important for the KEP to not be in conflict with "User story 1".
It might be worth noting that updating `NominatedNodeName` for every pod would only double the API requests per pod in the happy path. If I understand the docs at https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/#pre-bind correctly, if there were some good-looking (from a scheduling perspective) nodes that would nevertheless cause the PreBind plugins to fail often, that might increase the API requests N times, where N>=2.
Correct, the assumption is that all tasks related to binding that may take long to complete (e.g. creating volumes, attaching DRA devices) are executed in `PreBind()`, and `Bind()` should not take too much time.
As far as I know, increasing the number of API requests by 2x is not acceptable, as this would happen for every pod being scheduled (or re-scheduled), so it might add up to a huge number.
Also, adding an extra API call before binding makes the entire procedure a bit longer and the scheduling throughput a bit lower - so if we assume that `Bind()` will be quick, we should avoid that extra cost.
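For illustration, here is a minimal client-go sketch of the single extra call per pod being weighed here: a status patch of `nominatedNodeName`. The function name and shape are assumptions for illustration, not the scheduler's actual code path.

```go
// Illustrative only: the extra per-pod API request discussed above is a PATCH
// of status.nominatedNodeName issued before a potentially long PreBind/Bind.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// nominate records the chosen node on the pod's status so that external
// components (e.g. Cluster Autoscaler) can see the pending placement.
func nominate(ctx context.Context, cs kubernetes.Interface, ns, podName, nodeName string) error {
	patch := []byte(fmt.Sprintf(`{"status":{"nominatedNodeName":%q}}`, nodeName))
	// This is the additional request per scheduled pod that the comment above
	// weighs against scheduling throughput.
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, podName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}
```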
The feature can be disabled in Beta version by restarting the kube-scheduler and kube-apiserver with the feature-gate off.

###### What happens if we reenable the feature if it was previously rolled back?
In ###### Are there any tests for feature enablement/disablement?
This feature is only changing when a NominatedNodeName field will be set - it doesn't introduce a new API.
Is that correct? `NominatedNodeName` sounds like a new field in the Pod API.
In ###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
We will do the following manual test after implementing the feature:
What was the result of the test?
`NominatedNodeName` field wasn't added by this KEP (it was added a long time ago). The KEP's purpose is to extend the usage of this field.
`NominatedNodeName` field wasn't added by this KEP (it was added a long time ago). The KEP's purpose is to extend the usage of this field.

The idea behind enablement/disablement tests is that, depending on the FG, the functionality is or is not working. So the question is more about ensuring that when you turn off the appropriate FG, the functionality doesn't set NNN (or, in the case of kube-apiserver, doesn't clear it), and vice versa when it's on. Especially at beta stage, where we need to ensure that users can safely turn off this on-by-default (beta) functionality.
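As an illustration of the kind of enablement/disablement check described above, a minimal Go sketch follows. The `schedulePodWithSlowPreBind` helper is hypothetical and the feature constant is assumed to match the gate name used in this KEP; this is not the actual integration test.

```go
// A hedged sketch of an enablement/disablement check: with the gate on, a pod
// with a slow PreBind should get NominatedNodeName set; with the gate off, it
// should stay empty. The helper below is a stand-in for the real test harness.
package example

import (
	"fmt"
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestNominatedNodeNameGate(t *testing.T) {
	for _, enabled := range []bool{true, false} {
		t.Run(fmt.Sprintf("gate=%v", enabled), func(t *testing.T) {
			featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate,
				features.NominatedNodeNameForExpectation, enabled)

			// Hypothetical helper: schedules a pod whose PreBind plugin blocks
			// long enough for the test to observe the pod before binding.
			pod := schedulePodWithSlowPreBind(t)

			set := pod.Status.NominatedNodeName != ""
			if set != enabled {
				t.Fatalf("gate enabled=%v, but NominatedNodeName set=%v", enabled, set)
			}
		})
	}
}
```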
In ###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
We will do the following manual test after implementing the feature:
As @stlaz pointed out, this description is required for beta promotion.
Below in ##### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
I assume the functionality has already been tested, can you update that section with findings for the throughput?
@soltysh
Addressing only the comments about upgrade->downgrade->upgrade testing (I will address the rest later today):
The idea behind enablement/disablement tests is that, depending on the FG, the functionality is or is not working. So the question is more about ensuring that when you turn off the appropriate FG, the functionality doesn't set NNN (or, in the case of kube-apiserver, doesn't clear it), and vice versa when it's on. Especially at beta stage, where we need to ensure that users can safely turn off this on-by-default (beta) functionality.
Thank you for bringing my attention to this point, I missed this earlier when editing the KEP.
And now I'm not really sure what this test should look like.
So the trouble is, this is not a typical promotion from alpha to beta.
The scope of this KEP in alpha allowed NNN to be set by components other than kube-scheduler and established the semantics for this field. The designed behavior was that the NNN field would be cleared in some situations, but not after a failed scheduling attempt.
But after the v1.34 release there was a change of plans in sig-scheduling, and with the idea of Gang Scheduling coming up (and with that - ideas for new approaches to resource reservation), it seems that NNN might not be the mechanism we want to invest in right now, as a means for other components to suggest pod placement to kube-scheduler.
At the same time using NNN as "set in kube-scheduler, read-only in CA" seems like a good and worthwhile approach to solve the buggy scenario "If pod P is scheduled to bind on node N, but binding P takes a long time, and N is otherwise empty, CA might turn down N before P gets bound".
So the decision was to narrow down the scope of this KEP significantly, and get it to beta.
Please note that before the alpha KEP the scheduler's code would clear NNN after a failed scheduling attempt. So what this hoping-to-be-beta KEP does vs pre-alpha is:
- introduces setting NNN in PreBinding phase, i.e. when scheduler expects that the entire prebinding + binding process may take a significant amount of time (gated by nominatedNodeNameForExpectationEnabled)
- makes kube-apiserver clear NNN when the pod gets bound (gated by ClearingNominatedNodeNameAfterBinding)
And what this beta-KEP does vs alpha-KEP is:
- reverts the logic that does not clear NNN upon a failed scheduling attempt (that logic was gated by nominatedNodeNameForExpectationEnabled)
With all that, in the beta-KEP the NNN should be set when a pod is either waiting for preemption to complete (which had been the case before the alpha-KEP), or during the prebinding/binding phases. And it should be cleared by the kube-apiserver after binding.
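A purely illustrative sketch of that end state (the function name and flag are hypothetical; this is not the real Binding-subresource code in kube-apiserver):

```go
// Sketch of the "cleared after binding in api-server" behaviour summarized
// above, under the assumption that a gate equivalent to
// ClearingNominatedNodeNameAfterBinding controls it.
package example

import v1 "k8s.io/api/core/v1"

// applyBinding mimics what handling a Binding request means for the pod object.
func applyBinding(pod *v1.Pod, binding *v1.Binding, clearNNNAfterBinding bool) {
	pod.Spec.NodeName = binding.Target.Name // the pod is now bound
	if clearNNNAfterBinding {
		// The nomination has served its purpose; NodeName is now the single
		// source of truth, so the stale hint is dropped.
		pod.Status.NominatedNodeName = ""
	}
}
```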
Can you please help me with the following questions?
- Since the implementation has yet to change, is it ok if I run the test after implementing the new version?
- IIUC the upgrade->downgrade->upgrade test scenario should be as follows, can you verify that?
  - upgrade
  - request scheduling of a pod that will need a long preBinding phase
  - check that NNN gets set for that pod
  - before binding completes, restart the scheduler with nominatedNodeNameForExpectationEnabled = false
  - check that the pod gets scheduled and bound successfully to the same node
  - and upgrade again?

  Or:
  6a. request scheduling another pod with expected long preBind
  7a. check that NNN does not get set in PreBind
  8a. restart the scheduler with nominatedNodeNameForExpectationEnabled = true
  9a. check that the pod gets scheduled and bound somewhere
Thank you!
Below in ##### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
I assume the functionality has already been tested, can you update that section with findings for the throughput?
As it turns out, there are no tests running with this feature enabled (probably because the original plan was to launch in beta in 1.34, when the FG would be on by default and all tests would run with it).
I can run perf tests now and update the numbers here, but since the implementation is going to change after the KEP update, perhaps I can run the test on the actual implementation when it's ready? WDYT?
1. Since the implementation has yet to change, is it ok if I run the test after implementing the new version?
Yes, you'll want to update the doc afterwards.
2. IIUC the upgrade->downgrade->upgrade test scenario should be as follows, can you verify that? 3. upgrade 4. request scheduling of a pod that will need a long preBinding phase 5. check that NNN gets set for that pod 6. before binding completes, restart the scheduler with nominatedNodeNameForExpectationEnabled = false 7. check that the pod gets scheduled and bound successfully to the same node 8. and upgrade again? Or: 6a. request scheduling another pod with expected long preBind 7a. check that NNN does not get set in PreBind 8a. restart the scheduler with nominatedNodeNameForExpectationEnabled = true 9a. check that the pod gets scheduled and bound somewhere
Both options are fine by me. But it seems the versions with (a) will be easier to perform.
As it turns out, there are no tests running with this feature enabled (probably because the original plan was to launch in beta in 1.34, when the FG would be on by default and all tests would run with it).
I can run perf tests now and update the numbers here, but since the implementation is going to change after the KEP update, perhaps I can run the test on the actual implementation when it's ready? WDYT?
Yes, it can be updated in a follow-up.
Thank you! I will make sure to update the doc with all the results
During the beta period, the feature gates `NominatedNodeNameForExpectation` and `ClearingNominatedNodeNameAfterBinding` are enabled by default, no action is needed.

**Downgrade**
In ### Versions Skew Strategy:
What happens to the pods that already have `NominatedNodeName` set in a cluster with a kube-apiserver that does not understand that field?
What happens if a scheduler tries to set `NominatedNodeName` on all of its scheduled pods while contacting an older kube-apiserver that does not know the field?
These questions are related to the rollout/rollback section of the PRR questionnaire.
kube-apiserver has known this field for a long time, but does not interpret it - setting / using the `NominatedNodeName` field in components other than kube-scheduler is out of scope of this KEP.
This field was introduced in 2018 (kubernetes/kubernetes@384a86c) - I assume that if we tried using a pre-2018 kube-apiserver with kube-scheduler v1.35, this would cause much bigger problems than just trouble with handling `NominatedNodeName`.
I didn't know we had the field for such a long time; we don't need to worry about it not being present, then 👍
/lgtm
I believe it's important to cover DRA resource accounting as well, to completely address the scheduler->CA resource accounting problem for pods in delayed-binding cases.
@ania-borowiec Sorry that I suggested wording for entire sections, but explaining what I suggest adding would be almost identical to what I wrote anyway. Feel free to reword and rearrange it.
the cluster autoscaler cannot understand the pod is going to be bound there,
misunderstands the node is low-utilized (because the scheduler keeps the place of the pod), and deletes the node.

We can expose those internal reservations with `NominatedNodeName` so that external components can take a more appropriate action
You can add another paragraph on how DRA interacts with NNN, as it's important in scheduler->CA resource accounting:
Please note that `NominatedNodeName` can only express the reservation of node resources, but some resources may be managed by a DRA plugin and expressed in the form of a ResourceClaim allocation. To correctly account for all the resources that a pod needs, both the nomination and the ResourceClaim status update need to be reflected in the api-server.
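To illustrate that point, a hedged sketch follows; the helper function and the `resource.k8s.io` API version are assumptions, not any component's actual accounting code.

```go
// A pod's pending placement is only fully visible to external components once
// both the node nomination and the ResourceClaim allocations are reflected in
// the API server; this stand-in check combines the two signals.
package example

import (
	v1 "k8s.io/api/core/v1"
	resourcev1beta1 "k8s.io/api/resource/v1beta1"
)

// placementFullyReserved reports whether everything the pod needs is already
// accounted for in the API: a node (bound or nominated) plus any DRA devices
// via allocated ResourceClaims.
func placementFullyReserved(pod *v1.Pod, claims []*resourcev1beta1.ResourceClaim) bool {
	if pod.Spec.NodeName == "" && pod.Status.NominatedNodeName == "" {
		return false // no node reservation visible yet
	}
	for _, claim := range claims {
		if claim.Status.Allocation == nil {
			return false // DRA devices not yet reserved in the API
		}
	}
	return true
}
```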
@x13n Can you confirm my understanding for the Cluster Autoscaler part? Would the DRA resources be correctly accounted as in-use as soon as the ResourceClaim allocation is reflected in the api-server?
IIUC the accounting of a ResourceClaim allocation does not depend on the pod that uses it being either bound (NodeName) or nominated (NNN).
Yes, that's my understanding, though @towca please keep me honest here.
@johnbelamaric FYI
I'm not sure I get the question, but I would love for this KEP to clarify how Cluster Autoscaler should interact with Pods that have `nominatedNodeName` set.
The current behavior is that if CA sees `nominatedNodeName` on a Pod, it adds the Pod onto the nominated Node in its simulation, without checking kube-scheduler Filters. In particular, if the preempted Pod(s) are still on the Node, the Node is effectively "overscheduled" in CA simulations. This ensures that CA doesn't trigger an unnecessary scale-up for the preemptor Pod.
The above logic worked well enough before DRA, but it's not correct for Pods that reference ResourceClaims. `nominatedNodeName` can be set before the ResourceClaims are allocated, and CA won't allocate the claims in its simulation because it's not running scheduler Filters before adding the Pod to the Node. So when that happens, the DRA Devices needed by the claims are effectively free to be taken by other Pods in CA simulations. If there are other pending Pods referencing claims that can be satisfied by these Devices, CA will not scale up for them until the preemption is completed. I described an example problematic scenario in detail here: kubernetes/autoscaler#7683 (comment).
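To make the described behavior concrete, a rough sketch follows; the interface and method names are hypothetical stand-ins, not Cluster Autoscaler's actual code.

```go
// Sketch of the current CA simulation behaviour described above: a pod with a
// nominated node is force-added without running scheduler Filters, which is
// also why unallocated ResourceClaims stay invisible to the simulation.
package example

import v1 "k8s.io/api/core/v1"

// snapshot stands in for CA's in-memory cluster snapshot.
type snapshot interface {
	// ForceAddPod places the pod on the node without running scheduler
	// Filters, so the node may be temporarily "overscheduled".
	ForceAddPod(pod *v1.Pod, nodeName string)
	// SimulateScheduling runs the normal Filter-based simulation.
	SimulateScheduling(pod *v1.Pod)
}

func addPendingPod(s snapshot, pod *v1.Pod) {
	if nnn := pod.Status.NominatedNodeName; nnn != "" {
		// Force-add avoids a spurious scale-up for the preemptor pod, but
		// because Filters (including the DRA plugin) are skipped, any
		// ResourceClaims the pod references remain unallocated in the
		// simulation, so their devices look free to other pending pods.
		s.ForceAddPod(pod, nnn)
		return
	}
	s.SimulateScheduling(pod)
}
```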
CA could start running the Filters for Pods with `nominatedNodeName` set, but then, if the preemption isn't completed yet, they won't pass and the Pod won't fit - which would trigger an unnecessary scale-up. So this doesn't sound like a good option either.
This wouldn't be an issue if kube-scheduler persisted the claim allocations in the API before setting `nominatedNodeName`. But if some claims need to be deallocated as part of the preemption, that doesn't seem possible.
WDYT we should do here? It doesn't have to be a part of this KEP of course, but we'll need to figure it out at some point (kubernetes/autoscaler#7683).
This wouldn't be an issue if kube-scheduler persisted the claim allocations in the API before setting nominatedNodeName.
Currently the order is different, but in a follow-up we can change it, as we need to export the ResourceClaim allocation before WaitOnPermit anyway. That would require introducing a new "Reserve in api-server" phase in the scheduler, so currently plugins cannot do that earlier than in PreBind.
Is the order that important, though? CA would get both updates, so eventually its state should be consistent.
As discussed at [Confusion if `NominatedNodeName` is different from `NodeName` after all](#confusion-if-nominatednodename-is-different-from-nodename-after-all),
we update kube-apiserver so that it clears `NominatedNodeName` when receiving binding requests.
Handling ResourceClaim status updates
Since the ResourceClaim status update is complementary to the node nomination (it reserves resources in a similar way), it's desirable that it be set at the beginning of the PreBind phase (before it waits). The order of actions in the devicemanagement plugin is correct; however, the scheduler performs the binding actions of the different plugins sequentially, so for instance long-lasting PVC provisioning may delay exporting the ResourceClaim allocation status. This is not desired, as it leaves a gap of unreserved DRA resources, causing problems similar to the ones originally fixed by this KEP - kubernetes/kubernetes#125491
FYI @x13n
Would executing these binding plugins in parallel (not necessarily here, I'm thinking of a separate effort) be a feasible improvement? Are there going to be independent long-running allocations? Or is it just slow PVC provisioning? In the latter case, would it suffice to ensure the PVC plugin is executed last?
I don't see a reason why PVC provisioning should delay DRA provisioning and vice versa, so I'd say it should be a pretty straightforward (at least theoretically) change. Yes, ensuring the right order is the minimum.
Regarding exporting the ResourceClaim allocation, we most likely need to move it into a new phase anyway, since pre-binding currently happens after WaitOnPermit, but the node nomination is exported before it. We most likely would like to make those two mechanisms consistent and more tightly coupled, but that requires wider agreement and common understanding among different components - not only CA but also the kubelets.
The main bits missing for PRR are tests:
- units - minor updates
- integration - missing links
- enable/disable feature gate
- upgrade->downgrade tests
Additionally, an updated SLO (based on tests in alpha) would be nice, but most importantly the read-only nature of the NNN field and its impact.
Also, with [scheduler-perf](https://github.com/kubernetes/kubernetes/tree/master/test/integration/scheduler_perf), we'll make sure the scheduling throughputs for pods that go through Permit or PreBind don't regress too much.
We need to accept a small regression to some extent since there'll be a new API call to set NominatedNodeName.
But, as discussed, assuming PreBind already makes some API calls for the pods, the regression there should be small.
I assume the intention to add these tests has materialized in alpha - ideally all, but hopefully most 😅 Can you please link them per the template?
Especially, as mentioned below, all functionality will be covered with integration tests.
Updated
#### Allow NominatedNodeName to be set by other components

In v1.35 this feature is being narrowed down to one-way communication: only kube-scheduler is allowed to set `NominatedNodeName`,
while for other components this field should be read-only.
Given this is one of the main goals for 1.35: I haven't seen anywhere in the document a description of what happens if another actor sets this field. Alternatively, how can you ensure this field is not set by external actors inside kube-apiserver?
Sure, I added a section about this in Risks and mitigations
So with the narrowed scope there are several things we need to agree on:
- Some tests (FG on/off, upgrade->downgrade and performance) can wait until after implementation.
- My question regarding [other actors acting on NNN and how this will be handled in the updated version] is a must. Furthermore, I'd like to see a description of how rollback to previous versions (pre-1.35, with the new, narrower functionality) will impact the system. I'm especially interested in identifying and eliminating (or warning cluster admins about) erroneous situations arising from an in-progress upgrade/downgrade where one component runs the older (more feature-rich) solution, whereas another runs the new version (with narrowed functionality). I don't think such a risk exists, but maybe I'm missing something?
/approve
the PRR section
Great! Thanks Ania, it should mitigate the important race between the scheduler and CA. We should soon have proposals on how to expand this feature further. /lgtm
/approve /unhold
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ania-borowiec, dom4ha, macsko, soltysh