[rollout] partial rollout #4349
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
mhh111 seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you already signed the CLA but the status is still pending? Let us recheck it.
Code Review
This pull request introduces a partial rollout feature, which is a significant addition. The implementation involves creating a new recipe with extensive changes, including new trainer logic, configuration, and monkey-patching of core components. While the feature is valuable, I've identified several critical issues that need to be addressed. These include incorrect configuration access paths, a blocking call within an async loop, an incorrect super() call, and a faulty import. Additionally, there are some high-severity concerns regarding brittle logic and fragile monkey-patching that could impact maintainability. Addressing these points will be crucial for the stability and correctness of this new feature.
```python
finished_mask = (new_batch.non_tensor_batch[
    "age"] == self.config.algorithm.partial_rollout_max_split) | finished_mask
```
The configuration partial_rollout_max_split is being accessed from self.config.algorithm, but it is defined under actor_rollout_ref.rollout in the YAML configuration file. This will likely lead to a ConfigKeyError at runtime or incorrect behavior if the key happens to exist under algorithm with a different value.
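To make the failure mode concrete, here is a minimal, self-contained sketch of why the wrong access path fails; the nested layout is a hypothetical mirror of the YAML structure built with `SimpleNamespace`, not verl's actual config object:

```python
from types import SimpleNamespace

# Hypothetical mirror of the YAML layout: the key lives under
# actor_rollout_ref.rollout, not under algorithm.
config = SimpleNamespace(
    algorithm=SimpleNamespace(),  # no partial_rollout_max_split here
    actor_rollout_ref=SimpleNamespace(
        rollout=SimpleNamespace(partial_rollout_max_split=4)
    ),
)

# Reading from the wrong section fails loudly rather than silently.
try:
    _ = config.algorithm.partial_rollout_max_split
except AttributeError as e:
    print("wrong path:", e)

# The correct path matches where the YAML defines the key.
print(config.actor_rollout_ref.rollout.partial_rollout_max_split)  # 4
```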
"age"] == self.config.actor_rollout_ref.rollout.partial_rollout_max_split) | finished_mask| model_config: HFModelConfig, | ||
| device_mesh: DeviceMesh, | ||
| ): | ||
| super(vLLMRollout,self).__init__(self,config, model_config, device_mesh) |
The call to the parent class constructor super() is incorrect. The current syntax super(vLLMRollout, self).__init__(self, ...) passes self as an explicit argument, leading to __init__ receiving two self arguments, which will cause a TypeError. The correct Python 3 syntax is super().__init__(config, model_config, device_mesh).
Suggested change:

```diff
- super(vLLMRollout, self).__init__(self, config, model_config, device_mesh)
+ super().__init__(config, model_config, device_mesh)
```
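A small self-contained repro of the double-`self` bug (the `Base`/`Broken`/`Fixed` classes are illustrative stand-ins, not the PR's actual classes):

```python
class Base:
    def __init__(self, config):
        self.config = config

class Broken(Base):
    def __init__(self, config):
        # Mirrors the PR's bug: `self` is passed explicitly on top of the
        # implicit binding, so Base.__init__ receives two positional args.
        super(Broken, self).__init__(self, config)

class Fixed(Base):
    def __init__(self, config):
        super().__init__(config)  # correct Python 3 form

try:
    Broken("cfg")
except TypeError as e:
    print("TypeError:", e)

print(Fixed("cfg").config)  # prints: cfg
```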
```python
t = t + 1
step_outputs = engine.step()
if self.aggregator:
    if ray.get(self.aggregator.is_stopped.remote()) and count >= aged_sample_num and t % 20 == 0:
```
The use of ray.get() within an async function's loop is a blocking call that will freeze the event loop. This negates the benefits of asyncio and can lead to severe performance degradation or deadlocks, especially under load. The remote call should be awaited using await to maintain asynchronicity.
Suggested change:

```diff
- if ray.get(self.aggregator.is_stopped.remote()) and count >= aged_sample_num and t % 20 == 0:
+ if await self.aggregator.is_stopped.remote() and count >= aged_sample_num and t % 20 == 0:
```
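To illustrate why awaiting matters, here is a minimal asyncio sketch; `is_stopped_remote` is a hypothetical stand-in for the Ray actor call (in Ray, a remote call on an async actor returns an awaitable `ObjectRef`, so it can be awaited directly instead of going through `ray.get()`):

```python
import asyncio
import time

async def is_stopped_remote():
    # Stand-in for aggregator.is_stopped.remote(): resolves after a delay.
    await asyncio.sleep(0.05)
    return True

async def heartbeat(ticks):
    # Runs concurrently; it only makes progress if the loop is not blocked.
    for _ in range(3):
        await asyncio.sleep(0.01)
        ticks.append(time.monotonic())

async def main():
    ticks = []
    hb = asyncio.create_task(heartbeat(ticks))
    # Non-blocking check: the event loop keeps servicing other tasks
    # (the heartbeat fires) while we wait for the remote result. A
    # blocking ray.get() here would freeze the loop instead.
    stopped = await is_stopped_remote()
    await hb
    print("stopped:", stopped, "heartbeats:", len(ticks))

asyncio.run(main())
```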
```python
for key in ("input_ids", "attention_mask", "position_ids"):
    tmp = partial_batch.batch.pop(key, None)
    partial_batch.batch[key] = tmp[:,
        : self.config.data.max_prompt_length]  # TODO ?
```
This logic to recover the original prompt by truncating the sequence to max_prompt_length is brittle. It assumes that the original prompt's length is exactly self.config.data.max_prompt_length, which may not always be true, especially with padding or variable-length prompts. This could lead to incorrect prompts being used in subsequent partial generation steps. A more robust approach would be to explicitly carry over the original prompt tensors instead of trying to recover them via truncation. The TODO ? comment also suggests uncertainty about this implementation.
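One possible shape of the "carry over explicitly" approach, sketched with plain Python lists instead of tensors; the helper names are hypothetical, not part of verl:

```python
# Hypothetical sketch: instead of truncating input_ids back to
# max_prompt_length, stash the original prompt tensors when the batch
# is first built and restore them verbatim for the next partial step.
def stash_prompts(batch, keys=("input_ids", "attention_mask", "position_ids")):
    return {k: [row[:] for row in batch[k]] for k in keys}

def restore_prompts(batch, stashed):
    for k, rows in stashed.items():
        batch[k] = [row[:] for row in rows]
    return batch

# Variable-length prompts (plain lists for illustration): truncating at
# a fixed max_prompt_length would corrupt the shorter one.
batch = {"input_ids": [[1, 2, 3], [4, 5]],
         "attention_mask": [[1, 1, 1], [1, 1]],
         "position_ids": [[0, 1, 2], [0, 1]]}
saved = stash_prompts(batch)

# ...generation appends response tokens in place...
batch["input_ids"] = [[1, 2, 3, 9, 9], [4, 5, 9]]

restore_prompts(batch, saved)
print(batch["input_ids"])  # [[1, 2, 3], [4, 5]]
```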
verl/workers/rollout/base.py
```python
if rollout_name == "vllm" and mode == "partial_rollout":
    fqdn = _ROLLOUT_REGISTRY[("vllm", "sync")]
    module_name, class_name = fqdn.rsplit(".", 1)
    rollout_module = importlib.import_module(module_name)
    from recipe.partial_rollout.vllm_rollout_spmd import vLLMRolloutPatch
    # from verl.workers.rollout.vllm_rollout.vllm_rollout_spmd import vLLMRollout
    rollout_module_class = getattr(rollout_module, class_name)
    for name, attr in rollout_module_class.__dict__.items():
        if name == "__init__":
            print("name1 :", name)
            attr = MethodType(vLLMRolloutPatch.__init__, rollout_module_class)
            setattr(rollout_module_class, name, attr)
        if name == "generate_sequences":
            attr = MethodType(vLLMRolloutPatch.vllmrollout_generate_sequences, rollout_module_class)
            setattr(rollout_module_class, name, attr)
    attr = MethodType(vLLMRolloutPatch.async_generate_sequences, rollout_module_class)
    setattr(rollout_module_class, "async_generate_sequences", attr)
    return rollout_module_class
```
This implementation uses extensive monkey-patching to modify the vLLMRollout class at runtime. This approach is fragile, hard to debug, and difficult to maintain, as any change in the original vLLMRollout class could silently break this logic. A more robust and standard software engineering practice would be to use inheritance. Consider creating a PartialRolloutVLLMRollout class that inherits from vLLMRollout, overrides the necessary methods, and is then registered as a new, distinct rollout mode.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
What does this PR do?
We conducted tests on the partial rollout feature using the DAPO algorithm with the qwen3-0.6B and qwen2.5-7B models. The blue curve represents the scenario where the partial rollout feature is enabled, while the red curve represents it being disabled. As shown in the results, there is a significant improvement in throughput and a reduction in inference latency. Meanwhile, the trend of the rewards curve remains consistent with that of the scenario where partial rollout is disabled.
(Figures: training curves for qwen2.5-7B and qwen3-0.6B)
Checklist Before Starting
- Title format: [{modules}] {type}: {description} (this will be checked by the CI)
- {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, like [megatron, fsdp, doc]
- {type} is in feat, fix, refactor, chore, test
- If the PR is breaking, prepend [BREAKING] to the beginning of the title, e.g. [BREAKING][fsdp, megatron] feat: dynamic batching

Test
API and Usage Example
# Add code snippet or script demonstrating how to use this

Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Run pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always
- Send a message in the ci-request channel in the verl Slack workspace. (If not accessible, please try the Feishu group (飞书群).)