Description
CF for VMs specifies memory and disk requests/limits (CF does not distinguish between requests and limits) for app staging tasks on Diego. The specific values come from several potential sources:
The staging_memory_in_mb and staging_disk_in_mb properties can be set when creating a Build via POST /v3/builds (see the request sketch after this list)
If nothing is specifically requested on the Build, then the staging task uses the memory and disk_quota fields associated with the app's web Process, unless those values are lower than a globally configured minimum. The logic for this lives here in Cloud Controller: https://github.com/cloudfoundry/cloud_controller_ng/blob/640009b846362fbd443e79ad559700a9e1999846/app/actions/build_create.rb#L147-L161
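For illustration, a request of that shape might look like the following sketch; the API host, token, and package GUID are placeholders:

```sh
# Hypothetical request: create a build with explicit staging memory/disk.
# The host, bearer token, and package GUID below are placeholders.
curl "https://api.example.org/v3/builds" \
  -X POST \
  -H "Authorization: bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "package": { "guid": "<package-guid>" },
    "staging_memory_in_mb": 2048,
    "staging_disk_in_mb": 4096
  }'
```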
Korifi does not currently specify any requests/limits on the kpack Image by default, but it does allow the platform operator to optionally specify global minimum requests as of #2685.
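For context, kpack's Image resource exposes a spec.build.resources field where such requests/limits could be set. A minimal sketch, with illustrative names, registry, and values:

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app-image                  # illustrative name
spec:
  tag: registry.example.com/my-app    # illustrative registry/tag
  serviceAccountName: default
  builder:
    kind: ClusterBuilder
    name: default
  source:
    git:
      url: https://github.com/example/my-app
      revision: main
  build:
    resources:                        # requests/limits applied to the build pod
      requests:
        memory: 1Gi
        ephemeral-storage: 2Gi
      limits:
        memory: 2Gi
```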
Stories
BuildWorkloads use the memory/disk associated with the web (or default) process type to specify resource requests
App Developer can specify memory/disk requests when creating a CFBuild (this takes precedence over process-level config)
Story to opt-in to this behavior?
Stories that provide guardrails?
Considerations
Currently Korifi does not set memory/disk requests by default when staging apps. Changing the default behavior may be surprising to operators since it could impact pod scheduling on their clusters.
CF for VMs supports quotas at the org and space level that allow the operator to limit how much memory/disk apps in a given org or space can use. This is an important guardrail to prevent someone from requesting an absurd amount of resources. Korifi does not support quotas at this time, but we may consider putting ResourceQuotas in place to prevent abuse (or at least documenting that this could be done), or putting some other sort of limits in place.
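As one hypothetical "other sort of limits", a Kubernetes LimitRange in a space namespace could cap what any single container may request; the namespace and values below are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: staging-limits
  namespace: cf-space-guid      # illustrative space namespace
spec:
  limits:
    - type: Container
      max:                      # upper bound on a single container's limits
        memory: 4Gi
        ephemeral-storage: 8Gi
      default:                  # applied when a container sets no limits
        memory: 1Gi
      defaultRequest:           # applied when a container sets no requests
        memory: 512Mi
```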
Hi @denispeyrusaubes, Korifi doesn't currently support the Cloud Foundry-style quotas and their associated endpoints. In the meantime I recommend using Kubernetes ResourceQuotas in your namespaces to put up some guardrails around resource consumption.
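As a rough sketch of what that could look like (the namespace and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: cf-space-guid            # illustrative space namespace
spec:
  hard:
    requests.memory: 8Gi              # cap on the sum of memory requests
    requests.ephemeral-storage: 16Gi  # cap on the sum of disk requests
    limits.memory: 16Gi               # cap on the sum of memory limits
```

One caveat: once a namespace has a quota on a compute resource, Kubernetes rejects pods that don't set requests/limits for that resource, which is worth keeping in mind given that Korifi doesn't set them by default.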
We might eventually support those endpoints, but nothing is planned in the near-term.
Ok, thanks for your response.
Is there a place where all unsupported endpoints are listed?
I am migrating my training material from the old KubeCF to Korifi, and some exercises cannot be run anymore :(
Is there a place where all unsupported endpoints are listed? I am migrating my training material from the old KubeCF to Korifi, and some exercises cannot be run anymore :(
The closest thing we've got is these docs, which list the endpoints that are supported. We don't have anything that lists the inverse.
and some exercises cannot be run anymore :(
If you're willing, we'd appreciate it if you created issues for anything missing that you feel is important. We're always looking for feedback from the community to help us prioritize what to work on next.