
Document better the resources to allocate to the Spark executors #200

Merged: 3 commits merged into scylladb:master from improve-documentation on Aug 27, 2024

Conversation

@julienrf (Collaborator) commented Aug 15, 2024

Emphasize the importance of properly invoking spark-submit to allocate the right amount of resources to the Spark executors.

  • Create a new page dedicated to the arguments that should be supplied to `spark-submit`
  • Link to this page from every page that explains how to submit the Spark job
  • Stress the importance of setting `--executor-cores` and `--executor-memory`
  • Add a suggestion regarding the cluster size

Fixes #191
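For illustration, an invocation that sets the executor resources explicitly might look like the sketch below. The class name, jar path, master URL, and resource values are placeholders chosen for the example, not the migrator's actual settings; `--executor-cores` and `--executor-memory` are standard `spark-submit` options.

```shell
# Hypothetical spark-submit invocation. All values here are illustrative;
# tune --executor-cores and --executor-memory to the resources actually
# available on each Spark worker node.
spark-submit \
  --class com.example.Migrator \
  --master spark://spark-master:7077 \
  --executor-cores 4 \
  --executor-memory 8G \
  path/to/job.jar
```

Without these two flags, Spark's defaults may leave most of a worker's CPU and memory unused, which is the failure mode this documentation change warns about.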

@julienrf force-pushed the improve-documentation branch from 30ed94e to 3331e7d on August 19, 2024 06:32
@julienrf marked this pull request as ready for review on August 19, 2024 06:32
@julienrf requested a review from tarzanek on August 19, 2024 06:34
@guy9 (Collaborator) commented Aug 25, 2024

@tarzanek please review

@julienrf (Collaborator, Author)

Moving forward and merging. We can revert or amend if needed.

@julienrf merged commit a77ea75 into scylladb:master on Aug 27, 2024
3 checks passed
@julienrf deleted the improve-documentation branch on August 27, 2024 15:40
Successfully merging this pull request may close these issues.

Improve documentation regarding the correct way to submit the Spark job
2 participants