[WIP][Scheduling] Add worker node failover with lineage #3307
Open
zhongchun wants to merge 4 commits into mars-project:master from zhongchun:worker-node-fo-with-lineage
Conversation
Merge branch 'worker-node-fo-092 of [email protected]:ray-project/mars.git into 0.9-dev https://code.alipay.com/ray-project/mars/pull_requests/379 Signed-off-by: 慕白 <[email protected]>
* PullRequest: 366 Add request timeout argument for web api. Merge branch add-request-timeout-for-web-api of [email protected]:ray-project/mars.git into 0.9-dev https://code.linke.alipay.com/ray-project/mars/pull_requests/366?tab=check Signed-off-by: 天苍 <[email protected]>
* Add request timeout argument for web api
* Add log for IsolatedSession timeout
* Add __repr__ for SyncSession
* Add log
* Add log
* Add DATA_PREPARE_TIMEOUT
* Remove unused log
* Restore default value of http request timeout to 20
* Add lineage info for subtasks
* Add log for monitor_sub_pools
* Detect error
* Add scheduling to recover failed subtasks
* Fix rescheduling and add some log
* Fix
* Fix
* Remove unused method
* Fix log
* Debug scheduling
* Add log for subtask infos
* Add log
* Add log
* Fix scheduling
* Fix scheduling
* Fix duplicated subtasks and add some log
* Fix scheduler
* Enable set_subtask_result when cur_stage_processor is None
* Disable speculation set_subtask_result
* Debug rescheduler
* Add log
* Workaround to trigger scheduling
* Fix dictionary changed
* Fix duplicated scheduling
* Fix duplicated scheduling
* Fix hang
* Add log
* Add log
* Fix rescheduling
* Refactor set_subtask_result of stage
* Fix stage
* Fix stage
* Fix stage finished state
* Fix stage finished state
* Fix stage error state
* Fix stage state
* Disable schedule next when finished
* Fix rescheduling during error recovery
* Fix rescheduling during error recovery
* Add errored state for TaskStatus
* Fix rescheduling
* Add balancing switch
* Add rescheduling state
* Revert "Add rescheduling state". This reverts commit fa80e2b62126a35ae9c381c0c16613236dc7d939.
* Add log
* Fix scheduling
* Remove some log
* Update node info when server closed
* Reassign subtasks of closed node
* Add get_bands for NodeinfoCollectorActor
* Fix get_bands
* Fix reassign
* Add debug log for reassigning subtasks
* Fix reassign_subtasks
* Fix repush item
* Fix nonexistent band in band_queues
* Optimize balancing
* Remove some debug log
* Add SUBMIT_INTERVAL_SECS
* Optimize log
* Remove balance switch
* Decrease SUBMIT_INTERVAL_SECS
* Restore speculation
* Do not cancel subtask when not finished
* Fix set_subtask_result of executor
* Add debug log
* Fix speculation
* Fix speculation
* Add debug log
* Add some debug log
* Add RepeatedExecutionError
* Remove log
* Simplify update of SubtaskResult
* Add DataNotExist recovery
* Fix lineage store
* Fix duplicated error of recovery
* Fix trigger to dependency subtasks
* Fix set_subtask_result of executor
* Remove debug log
* Fix tests
* Pin sqlalchemy
* Fix tests
* Fix test_optimization
* Reorganize tests and configs
* Remove some log
* Fix and lint
* Optimize failover context

…fails. Merge branch 'fix-worker-node-fo-context-cleanup-090dev of [email protected]:ray-project/mars.git into 0.9-dev https://code.alipay.com/ray-project/mars/pull_requests/392 Signed-off-by: 慕白 <[email protected]>
* Add cleanup for FailOverContext when error recovery fails
zhongchun force-pushed the worker-node-fo-with-lineage branch 4 times, most recently from 7cccf56 to 70f8786 on January 6, 2023 at 09:08
zhongchun force-pushed the worker-node-fo-with-lineage branch from 70f8786 to 5d39c03 on January 6, 2023 at 09:32
What do these changes do?
Currently in Mars, when a MainPool process exits, the supervisor cancels execution of the entire stage as soon as it receives the error report for a failed subtask. For traditional batch processing this is acceptable: just rerun the job. For large jobs, however, it is very costly, because they contain many subtasks and run for a long time, so a rerun after failure is expensive. Large jobs also tend to occupy many more nodes, which raises the probability of a node failure; once a node failure causes its MainPool to exit, the data on that node is lost and every subtask that depends on it will fail. For example, a job may have been running for more than 20 hours and be 90% complete, yet the exit of a single MainPool fails the whole job. Such large jobs are common in modern data processing: a job can occupy 1200 nodes or more and run for about 40 hours.
To solve the node failure problem and keep jobs running stably, a complete node failover solution is required: this PR adds worker node failover based on subtask lineage, as sketched below.
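The commit history above points at the mechanism: subtasks record lineage (which subtask produced which data chunk), and when a node's data is lost, only the producers of the lost chunks are rescheduled instead of the whole stage being cancelled. Below is a minimal, hypothetical Python sketch of that idea; the names LineageStore, subtasks_to_recover, on_data_not_exist, and the scheduler methods are illustrative assumptions, not Mars's actual API.

```python
# Hypothetical sketch of lineage-based subtask recovery. All names here
# (LineageStore, resubmit, retry_after_dependencies) are illustrative
# assumptions, not Mars's real interfaces.
from collections import defaultdict


class LineageStore:
    """Remembers which subtask produced each data chunk."""

    def __init__(self):
        self._producer = {}  # chunk key -> subtask that produced it
        self._dependents = defaultdict(set)  # chunk key -> consuming subtasks

    def record(self, subtask, output_keys, input_keys):
        # Called when a subtask is submitted, before any failure happens.
        for key in output_keys:
            self._producer[key] = subtask
        for key in input_keys:
            self._dependents[key].add(subtask)

    def subtasks_to_recover(self, lost_keys):
        """Return the producer subtasks that must rerun to rebuild lost data."""
        return {self._producer[k] for k in lost_keys if k in self._producer}


def on_data_not_exist(lineage, scheduler, failed_subtask, lost_keys):
    # Instead of cancelling the stage, resubmit the producers of the lost
    # chunks, then retry the failed subtask once its inputs exist again.
    for producer in lineage.subtasks_to_recover(lost_keys):
        scheduler.resubmit(producer)
    scheduler.retry_after_dependencies(failed_subtask)
```

The key property of this design is that recovery work is proportional to the data actually lost: a single MainPool exit triggers recomputation of that node's outputs (and, transitively, any of their missing inputs), rather than a rerun of the entire 40-hour job.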
Related issue number
Issue #3308
Check code requirements