Add resource reqs to driver and executor containers, corresponding to spark resource settings for cores and memory #6
base: k8s-support
Conversation
* Use images with spark pre-installed
* simplify staging for client.jar
* Remove some tarball-uri code. Fix kube client URI in scheduler backend. Number executors default to 1
* tweak client again, works across my testing environments
* use executor.sh shim
* allow configuration of service account name for driver pod
* spark image as a configuration setting instead of env var
* namespace from spark.kubernetes.namespace
* configure client with namespace; smooths out cases when not logged in as admin
* Assume a download jar to /opt/spark/kubernetes to avoid dropping protections on /opt
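The commit list above introduces several client-side configuration settings. A minimal sketch of how a client might set them follows; only `spark.kubernetes.namespace` is named in the commits, so the other keys here are illustrative assumptions, not the fork's actual property names:

```scala
import org.apache.spark.SparkConf

// Hypothetical client configuration reflecting the settings the commits
// describe. Only spark.kubernetes.namespace appears in the commit
// messages; the other keys are placeholders for illustration.
val conf = new SparkConf()
  .setAppName("resource-reqs-demo")
  .set("spark.kubernetes.namespace", "spark-jobs")             // namespace for driver/executor pods
  .set("spark.kubernetes.driver.serviceAccountName", "spark")  // assumed key: service account for the driver pod
  .set("spark.kubernetes.sparkImage", "spark:latest")          // assumed key: image with Spark pre-installed
  .set("spark.executor.instances", "1")                        // commits default the executor count to 1
```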
* Add support for dynamic executors
* fill in some sane logic for doKillExecutors
* doRequestTotalExecutors signals graceful executor shutdown, and favors idle executors
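A rough sketch of the shape these dynamic-executor hooks could take. Everything here is hypothetical (the trait, `idleExecutors`, `signalGracefulShutdown`, `deletePodForExecutor` are invented names, not the PR's code); it only illustrates the stated behavior of scaling down by signaling idle executors first:

```scala
import scala.concurrent.Future

// Hypothetical sketch of the dynamic-executor hooks described above;
// none of these names are taken from the PR itself.
trait DynamicExecutorHooks {
  @volatile private var requestedTotal = 1       // the commits default executor count to 1

  def idleExecutors: Seq[String]                 // executors with no running tasks
  def allExecutors: Seq[String]
  def signalGracefulShutdown(id: String): Unit   // ask an executor to drain and exit
  def deletePodForExecutor(id: String): Unit     // remove the backing pod outright

  // Record the new target; when scaling down, favor idle executors
  // as shutdown candidates and let them exit gracefully.
  def doRequestTotalExecutors(total: Int): Future[Boolean] = {
    val excess = requestedTotal - total
    requestedTotal = total
    if (excess > 0) {
      val victims = (idleExecutors ++ allExecutors).distinct.take(excess)
      victims.foreach(signalGracefulShutdown)
    }
    Future.successful(true)
  }

  // Kill specific executors by deleting their backing pods.
  def doKillExecutors(executorIds: Seq[String]): Future[Boolean] = {
    executorIds.foreach(deletePodForExecutor)
    Future.successful(true)
  }
}
```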
This initial push contains the logic to correctly align resource requests for the containers with the spark resource settings. However, when a pod fails to schedule due to insufficient resources, it remains in `Pending` indefinitely. Before this merges, I want to add some logic for detecting when new executors are stuck in `Pending`.
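A minimal sketch of what that stuck-in-`Pending` detection might look like, using the fabric8 Kubernetes client for illustration; the timeout, the `spark-role` label, and the bookkeeping map are assumptions, not the logic the author ultimately added:

```scala
import io.fabric8.kubernetes.client.KubernetesClient
import scala.jdk.CollectionConverters._

// Illustrative only: flag executor pods that have sat in the Pending
// phase longer than a threshold, e.g. because no node has enough
// cpu/memory to satisfy their resource requests.
def stuckPendingExecutors(client: KubernetesClient,
                          namespace: String,
                          pendingSinceMillis: Map[String, Long], // pod name -> first seen Pending
                          timeoutMillis: Long = 120000L): Seq[String] = {
  val now = System.currentTimeMillis()
  client.pods().inNamespace(namespace)
    .withLabel("spark-role", "executor")  // assumed label; the PR does not name one
    .list().getItems.asScala.toSeq
    .filter(_.getStatus.getPhase == "Pending")
    .map(_.getMetadata.getName)
    .filter(name => pendingSinceMillis.get(name).exists(now - _ > timeoutMillis))
}
```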
Commit: Add resource reqs to driver and executor containers … spark resource settings for cores and memory
Force-pushed from 166f31e to 97c2b86
Force-pushed from 7566d27 to f6ccb54
@erikerlandson - close this in favor of what we have on https://github.com/apache-spark-on-k8s/spark/?
Add resource reqs to driver and executor containers, corresponding to spark resource settings for cores and memory.
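For reference, a hedged sketch of what aligning container resource requests with `spark.executor.cores` and `spark.executor.memory` could look like using the fabric8 model classes; this illustrates the idea, not the PR's actual diff, and the `spark.kubernetes.sparkImage` key plus the crude memory parsing are assumptions:

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Quantity}
import org.apache.spark.SparkConf

// Sketch: translate Spark's cores/memory settings into Kubernetes
// resource requests on the executor container.
def executorContainer(conf: SparkConf): Container = {
  val cores = conf.get("spark.executor.cores", "1")
  // Crude "Ng" -> Mi conversion, for illustration only; assumes the
  // memory setting is given in gigabytes like "1g".
  val memMiB = conf.get("spark.executor.memory", "1g")
    .toLowerCase.stripSuffix("g").toInt * 1024
  new ContainerBuilder()
    .withName("executor")
    .withImage(conf.get("spark.kubernetes.sparkImage", "spark:latest")) // assumed config key
    .withNewResources()
      .addToRequests("cpu", new Quantity(cores))
      .addToRequests("memory", new Quantity(s"${memMiB}Mi"))
    .endResources()
    .build()
}
```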