51 changes: 51 additions & 0 deletions examples/ppo_countdown_exp_replay/README.md
@@ -0,0 +1,51 @@
# Example: PPO on Countdown dataset with experience replay

In this example, we follow the main settings in [`ppo_countdown`](../ppo_countdown/README.md),
and demonstrate the **experience replay** mechanisms in Trinity-RFT.


### Motivations

One motivation for experience replay is to improve learning efficiency by reusing rollout samples for multiple training steps, which is especially valuable in scenarios where rollout (with agent-environment interaction) is slow or expensive.
Moreover, experience replay offers a straightforward way to fill pipeline bubbles in the trainer (caused by discrepancies between the explorer's and trainer's speeds) with useful computation, improving hardware utilization in the disaggregated architecture adopted by Trinity (and many other RL systems).

### Implementation and configuration

The priority queue buffer in Trinity offers seamless support for experience replay.
Whenever a batch of highest-priority samples is retrieved from the buffer,
a **priority function** updates their priority scores and decides which ones should be put back into the buffer (after `reuse_cooldown_time` seconds have passed) for replay.
Users of Trinity can implement and register their own customized priority functions,
which can then be called by setting the `priority_fn` field in the yaml config.
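The registration mechanism can be sketched roughly as follows; the registry and decorator names here are illustrative assumptions, not Trinity-RFT's actual API — consult the framework's source for the real registration interface:

```python
# Hypothetical sketch of registering a custom priority function.
# PRIORITY_FN_REGISTRY and register_priority_fn are assumed names,
# not part of Trinity-RFT's public API.
from typing import Tuple

PRIORITY_FN_REGISTRY = {}  # maps a name (used in `priority_fn`) to a function


def register_priority_fn(name: str):
    """Register a priority function under `name`."""
    def decorator(fn):
        PRIORITY_FN_REGISTRY[name] = fn
        return fn
    return decorator


@register_priority_fn("freshness_only")
def freshness_only(
    model_version: int, use_count: int, use_count_limit: int = 3
) -> Tuple[float, bool]:
    """Priority is just the model version; replay until the use-count limit."""
    priority = float(model_version)
    put_back = use_count <= use_count_limit  # keep replaying within the limit
    return priority, put_back
```

Setting `priority_fn: freshness_only` in the yaml config would then (under this assumed scheme) select the registered function by name.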

We present an example config file in [`countdown.yaml`](./countdown.yaml),
where 1 GPU is allocated to the explorer and 6 GPUs to the trainer,
simulating a scenario where agent-environment interaction is slow and rollout data is scarce.
Important config parameters for experience replay include:
* `buffer.trainer_input.experience_buffer.storage_type`: set to `queue`
* `buffer.trainer_input.experience_buffer.replay_buffer`
* `enable`: set to `true` for enabling priority queue buffer
* `reuse_cooldown_time`: delay time (in seconds) before putting sample back into the buffer; must be set explicitly
* `priority_fn`: name of the priority function
* `priority_fn_args`: additional args for the priority function
* `synchronizer.sync_style`: set to `dynamic_by_explorer`, which allows the trainer to run more training steps as long as the priority queue buffer is non-empty

The priority function used in this example is named `decay_limit_randomization`.
The logic behind it:
* Priority score is calculated as `model_version - decay * use_count`, i.e., fresher and less used samples are prioritized;
* If `sigma` is non-zero, priority score is further perturbed by random Gaussian noise with standard deviation `sigma`;
* A retrieved sample will be put back into the buffer if and only if its use count has not exceeded `use_count_limit`.
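In plain Python, the bullet points above can be sketched as follows; the function signature, default values, and return convention are assumptions for illustration, not the exact implementation:

```python
# Sketch of the decay_limit_randomization logic described above.
# Defaults for decay, sigma, and use_count_limit are assumed values.
import random
from typing import Tuple


def decay_limit_randomization(
    model_version: int,
    use_count: int,
    decay: float = 0.1,
    sigma: float = 0.0,
    use_count_limit: int = 3,
) -> Tuple[float, bool]:
    """Return (priority, put_back) for a retrieved sample."""
    # Fresher (higher model_version) and less-used samples get higher priority.
    priority = model_version - decay * use_count
    # Optional Gaussian perturbation of the score.
    if sigma > 0:
        priority += random.gauss(0.0, sigma)
    # Put back iff the use count has not exceeded the limit.
    put_back = use_count <= use_count_limit
    return priority, put_back
```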


### Experimental results

We conduct an experiment with this config and compare it against a baseline config that uses each rollout sample exactly once for training.
The first and second figures below (using rollout step and wall-clock time as the X-axis, respectively) confirm the benefits brought by experience replay (with default hyperparameters).
This is partly because more training steps can be taken, as shown in the third figure (where the X-axis represents rollout step).
<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_explore_step.png" alt="score-vs-explore-step" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_time.png" alt="score-vs-wall-clock-time" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_model_version.png" alt="model-version" width="600" />
76 changes: 76 additions & 0 deletions examples/ppo_countdown_exp_replay/countdown.yaml
@@ -0,0 +1,76 @@
project: "Trinity-RFT-countdown-experience-replay"
name: "qwen2.5-1.5B-countdown"
checkpoint_root_dir: ${oc.env:TRINITY_CHECKPOINT_ROOT_DIR,./checkpoints}
algorithm:
  algorithm_type: ppo
  repeat_times: 5
  optimizer:
    lr: 1e-6
model:
  model_path: ${oc.env:TRINITY_MODEL_PATH,Qwen/Qwen2.5-1.5B-Instruct}
  max_response_tokens: 1024
  max_model_len: 2048
cluster:
  node_num: 1
  gpu_per_node: 7
buffer:
  total_epochs: 20
  batch_size: 96
  explorer_input:
    taskset:
      name: countdown
      storage_type: file
      path: 'countdown_dataset/oneshot-split'
      format:
        prompt_key: 'question'
        response_key: 'answer'
      rollout_args:
        temperature: 1.0
        logprobs: 0
    default_workflow_type: 'math_workflow'
    default_reward_fn_type: 'countdown_reward'
  trainer_input:
    experience_buffer:
      name: experience_buffer
      storage_type: queue
      replay_buffer:
        enable: true
        reuse_cooldown_time: 40
        priority_fn: decay_limit_randomization
        # priority_fn_args: use default values
explorer:
  eval_interval: 100
  runner_per_model: 8
  rollout_model:
    engine_num: 1  # allocate 1 GPU for explorer
    tensor_parallel_size: 1
    enable_prefix_caching: false
    enforce_eager: true
    dtype: bfloat16
    seed: 42
synchronizer:
  sync_method: 'nccl'
  sync_style: dynamic_by_explorer  # set to "fixed" for baseline
  sync_interval: 10
  sync_timeout: 1200
trainer:
  save_interval: 100
  grad_clip: 1.0
  use_dynamic_bsz: true
  max_token_len_per_gpu: 10240
  ulysses_sequence_parallel_size: 1
  trainer_config:
    actor_rollout_ref:
      actor:
        checkpoint:
          load_contents: ['model', 'hf_model', 'optimizer', 'extra']
          save_contents: ['model', 'hf_model', 'optimizer', 'extra']
    critic:
      optim:
        lr: 1e-5
      ppo_max_token_len_per_gpu: 20480
      forward_max_token_len_per_gpu: 20480
      cliprange_value: 0.5
    trainer:
      max_actor_ckpt_to_keep: 5
      max_critic_ckpt_to_keep: 5