
Conversation

@HelloWorldBeginner
Contributor

@HelloWorldBeginner HelloWorldBeginner commented Nov 29, 2025

What does this PR do?

We tested the partial rollout feature using the DAPO algorithm with the qwen3-0.6B and qwen2.5-7B models. The blue curves show partial rollout enabled; the red curves show it disabled. As the results show, enabling partial rollout significantly improves throughput and reduces inference latency, while the reward curve stays consistent with the disabled baseline.

qwen2.5-7B

(training curves for qwen2.5-7B: throughput, latency, reward)

qwen3-0.6B

(training curves for qwen3-0.6B: throughput, latency, reward)

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
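The template placeholder above was left unfilled. Based on the config keys and rollout mode visible in the review snippets further down (`partial_rollout_max_split`, `mode == "partial_rollout"`), enabling the feature presumably looks something like the following config fragment; the key names come from the diff, the values are illustrative:

```yaml
actor_rollout_ref:
  rollout:
    mode: partial_rollout          # routes to the partial-rollout vLLM path
    partial_rollout_max_split: 4   # max "age" a sample may reach before it must finish
```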

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@CLAassistant

CLAassistant commented Nov 29, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
0 out of 2 committers have signed the CLA.

❌ mhh111
❌ HelloWorldBeginner


mhh111 does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a partial rollout feature, which is a significant addition. The implementation involves creating a new recipe with extensive changes, including new trainer logic, configuration, and monkey-patching of core components. While the feature is valuable, I've identified several critical issues that need to be addressed. These include incorrect configuration access paths, a blocking call within an async loop, an incorrect super() call, and a faulty import. Additionally, there are some high-severity concerns regarding brittle logic and fragile monkey-patching that could impact maintainability. Addressing these points will be crucial for the stability and correctness of this new feature.

Comment on lines 151 to 152
finished_mask = (new_batch.non_tensor_batch["age"] == self.config.algorithm.partial_rollout_max_split) | finished_mask

critical

The configuration partial_rollout_max_split is being accessed from self.config.algorithm, but it is defined under actor_rollout_ref.rollout in the YAML configuration file. This will likely lead to a ConfigKeyError at runtime or incorrect behavior if the key happens to exist under algorithm with a different value.

                                             "age"] == self.config.actor_rollout_ref.rollout.partial_rollout_max_split) | finished_mask

    model_config: HFModelConfig,
    device_mesh: DeviceMesh,
):
    super(vLLMRollout, self).__init__(self, config, model_config, device_mesh)

critical

The call to the parent class constructor super() is incorrect. The current syntax super(vLLMRollout, self).__init__(self, ...) passes self as an explicit argument, leading to __init__ receiving two self arguments, which will cause a TypeError. The correct Python 3 syntax is super().__init__(config, model_config, device_mesh).

Suggested change
super(vLLMRollout,self).__init__(self,config, model_config, device_mesh)
super().__init__(config, model_config, device_mesh)
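The failure mode is easy to reproduce with a minimal class hierarchy (class names here are illustrative, not verl's):

```python
class Base:
    def __init__(self, config):
        self.config = config

class BadChild(Base):
    def __init__(self, config):
        # super(BadChild, self) already binds self; passing it again means
        # Base.__init__ receives two "self" arguments -> TypeError.
        super(BadChild, self).__init__(self, config)

class GoodChild(Base):
    def __init__(self, config):
        # Correct Python 3 form: no explicit self argument.
        super().__init__(config)
```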

t = t + 1
step_outputs = engine.step()
if self.aggregator:
    if ray.get(self.aggregator.is_stopped.remote()) and count >= aged_sample_num and t % 20 == 0:

critical

The use of ray.get() within an async function's loop is a blocking call that will freeze the event loop. This negates the benefits of asyncio and can lead to severe performance degradation or deadlocks, especially under load. The remote call should be awaited using await to maintain asynchronicity.

Suggested change
if ray.get(self.aggregator.is_stopped.remote()) and count >= aged_sample_num and t % 20 == 0:
if await self.aggregator.is_stopped.remote() and count >= aged_sample_num and t % 20 == 0:
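A minimal asyncio sketch (no Ray involved; `time.sleep` stands in for the blocking `ray.get`, and `asyncio.sleep` for the awaited remote call) demonstrates the event-loop stall:

```python
import asyncio
import time

async def heartbeat():
    # Other work that should keep making progress during the wait.
    for _ in range(10):
        await asyncio.sleep(0.02)

async def poll_blocking():
    # Blocking call inside a coroutine (stand-in for ray.get):
    # freezes the whole event loop for its duration.
    time.sleep(0.2)

async def poll_awaited():
    # Awaited call (stand-in for `await actor.method.remote()`):
    # yields to the loop, so other coroutines run during the wait.
    await asyncio.sleep(0.2)

async def run_both(poller):
    await asyncio.gather(poller(), heartbeat())

def elapsed(poller):
    start = time.monotonic()
    asyncio.run(run_both(poller))
    return time.monotonic() - start
```

With the blocking poller, the heartbeat cannot start until the sleep finishes, so the total runtime is roughly the sum of both; with the awaited poller, the two overlap.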

Comment on lines 182 to 185
for key in ("input_ids", "attention_mask", "position_ids"):
    tmp = partial_batch.batch.pop(key, None)
    partial_batch.batch[key] = tmp[:, : self.config.data.max_prompt_length]  # TODO ?

high

This logic to recover the original prompt by truncating the sequence to max_prompt_length is brittle. It assumes that the original prompt's length is exactly self.config.data.max_prompt_length, which may not always be true, especially with padding or variable-length prompts. This could lead to incorrect prompts being used in subsequent partial generation steps. A more robust approach would be to explicitly carry over the original prompt tensors instead of trying to recover them via truncation. The TODO ? comment also suggests uncertainty about this implementation.
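One way to implement the carry-over approach the comment suggests, sketched with plain dicts standing in for `partial_batch.batch` (the helper names are hypothetical, not part of verl's API):

```python
# Keys follow the snippet under review.
PROMPT_KEYS = ("input_ids", "attention_mask", "position_ids")

def snapshot_prompts(batch):
    # Keep references to the untouched prompt tensors before generation.
    return {key: batch[key] for key in PROMPT_KEYS}

def restore_prompts(batch, snapshot):
    # Overwrite whatever the partial generation step left in place,
    # regardless of the actual prompt length or padding.
    for key, value in snapshot.items():
        batch[key] = value
```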

Comment on lines 416 to 418
for key in ("input_ids", "attention_mask", "position_ids"):
    tmp = partial_batch.batch.pop(key, None)
    partial_batch.batch[key] = tmp[:, : self.config.data.max_prompt_length]

high

This logic to recover the original prompt by truncating the sequence to max_prompt_length is brittle. It assumes that the original prompt's length is exactly self.config.data.max_prompt_length, which may not always be true. This could lead to incorrect prompts being used in subsequent partial generation steps. A more robust approach would be to explicitly carry over the original prompt tensors instead of trying to recover them via truncation.

Comment on lines 102 to 120
if rollout_name == "vllm" and mode == "partial_rollout":
    fqdn = _ROLLOUT_REGISTRY[("vllm", "sync")]
    module_name, class_name = fqdn.rsplit(".", 1)
    rollout_module = importlib.import_module(module_name)
    from recipe.partial_rollout.vllm_rollout_spmd import vLLMRolloutPatch
    # from verl.workers.rollout.vllm_rollout.vllm_rollout_spmd import vLLMRollout
    rollout_module_class = getattr(rollout_module, class_name)
    for name, attr in rollout_module_class.__dict__.items():
        if name == "__init__":
            print("name1 :", name)
            attr = MethodType(vLLMRolloutPatch.__init__, rollout_module_class)
            setattr(rollout_module_class, name, attr)
        if name == "generate_sequences":
            attr = MethodType(vLLMRolloutPatch.vllmrollout_generate_sequences, rollout_module_class)
            setattr(rollout_module_class, name, attr)
            attr = MethodType(vLLMRolloutPatch.async_generate_sequences, rollout_module_class)
            setattr(rollout_module_class, "async_generate_sequences", attr)
    return rollout_module_class

high

This implementation uses extensive monkey-patching to modify the vLLMRollout class at runtime. This approach is fragile, hard to debug, and difficult to maintain, as any change in the original vLLMRollout class could silently break this logic. A more robust and standard software engineering practice would be to use inheritance. Consider creating a PartialRolloutVLLMRollout class that inherits from vLLMRollout, overrides the necessary methods, and is then registered as a new, distinct rollout mode.
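The inheritance approach the comment recommends can be sketched as follows; `VLLMRolloutBase` is a stand-in for verl's `vLLMRollout`, and the method bodies are illustrative, not verl's actual API:

```python
class VLLMRolloutBase:
    """Stand-in for verl's vLLMRollout."""

    def __init__(self, config):
        self.config = config

    def generate_sequences(self, prompts):
        # Placeholder for the full-generation path.
        return [f"full:{p}" for p in prompts]

class PartialRolloutVLLMRollout(VLLMRolloutBase):
    """Overrides only what partial rollout changes; no monkey-patching."""

    def generate_sequences(self, prompts):
        # Placeholder for the partial-generation path.
        return [f"partial:{p}" for p in prompts]

# The subclass would then be registered as its own rollout mode, e.g. under a
# hypothetical key ("vllm", "partial_rollout") in _ROLLOUT_REGISTRY, instead of
# rewriting the original class's methods at runtime.
```

Because the subclass only overrides the methods it changes, any update to the base class is picked up automatically, and the patched behavior is discoverable through normal class inspection.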

HelloWorldBeginner and others added 6 commits November 29, 2025 10:02
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
