Add example for experience replay #345
**Merged**: chenyushuo merged 4 commits into `agentscope-ai:main` from `yanxi-chen:doc/add_exp_replay_example` on Oct 27, 2025.
Binary files added:

- `docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_explore_step.png` (+184 KB)
- `docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_time.png` (+184 KB)
- `docs/sphinx_doc/assets/example_experience_replay/exp_replay_model_version.png` (+138 KB)
---

**New file** (the example's README, +51 lines):

# Example: PPO on Countdown dataset with experience replay
In this example, we follow the main settings in [`ppo_countdown`](../ppo_countdown/README.md) and demonstrate the **experience replay** mechanism in Trinity-RFT.
### Motivations
One motivation for experience replay is to improve learning efficiency by reusing rollout samples across multiple training steps, which is especially valuable when rollout (with agent-environment interaction) is slow or expensive.
Moreover, experience replay offers a straightforward way to fill pipeline bubbles in the trainer (caused by discrepancies between the explorer's and trainer's speeds) with useful computation, improving hardware utilization for the disaggregated architecture adopted by Trinity (and many other RL systems).
### Implementation and configuration
The priority queue buffer in Trinity offers seamless support for experience replay.
Whenever a batch of highest-priority samples is retrieved from the buffer,
a **priority function** updates their priority scores and decides which ones should be put back into the buffer (after `reuse_cooldown_time` seconds have passed) for replay.
Users of Trinity can implement and register their own customized priority functions,
which can then be selected by setting the `priority_fn` field in the YAML config.
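To illustrate, here is a minimal sketch of what a custom priority function and its registration might look like. The registry, decorator, function signature, and `(priority, put_back)` return convention below are assumptions for illustration only, not Trinity-RFT's actual API; consult the Trinity-RFT source for the real interface.

```python
# Hypothetical sketch: the registry, decorator, signature, and return
# convention are assumptions, not Trinity-RFT's actual API.
from typing import Callable, Dict, Tuple

PRIORITY_FN_REGISTRY: Dict[str, Callable] = {}  # assumed registry


def register_priority_fn(name: str) -> Callable:
    """Register a priority function under the name referenced by `priority_fn`."""
    def decorator(fn: Callable) -> Callable:
        PRIORITY_FN_REGISTRY[name] = fn
        return fn
    return decorator


@register_priority_fn("freshness_only")
def freshness_only(model_version: int, use_count: int) -> Tuple[float, bool]:
    """Prioritize fresher samples and replay each sample at most twice."""
    priority = float(model_version)  # newer model versions score higher
    put_back = use_count < 2         # stop replaying after two uses
    return priority, put_back
```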
We present an example config file in [`countdown.yaml`](./countdown.yaml),
where 1 GPU is allocated to the explorer and 6 GPUs to the trainer,
simulating a scenario where agent-environment interaction is slow and rollout data is scarce.
Important config parameters for experience replay include:
* `buffer.trainer_input.experience_buffer.storage_type`: set to `queue`
* `buffer.trainer_input.experience_buffer.replay_buffer`:
  * `enable`: set to `true` to enable the priority queue buffer
  * `reuse_cooldown_time`: delay (in seconds) before a sample is put back into the buffer; must be set explicitly
  * `priority_fn`: name of the priority function
  * `priority_fn_args`: additional arguments for the priority function (see the excerpt below)
* `synchronizer.sync_style`: set to `dynamic_by_explorer`, which allows the trainer to run extra training steps as long as the priority queue buffer is non-empty
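For concreteness, the corresponding section of [`countdown.yaml`](./countdown.yaml) (shown in full below) looks like this; the commented `priority_fn_args` keys and values are assumptions inferred from the hyperparameters described next, since the example relies on their defaults:

```yaml
buffer:
  trainer_input:
    experience_buffer:
      name: experience_buffer
      storage_type: queue
      replay_buffer:
        enable: true
        reuse_cooldown_time: 40  # seconds before a sample re-enters the buffer
        priority_fn: decay_limit_randomization
        # priority_fn_args:      # keys and values assumed from the description
        #   decay: 0.1
        #   sigma: 0.0
        #   use_count_limit: 3
synchronizer:
  sync_style: dynamic_by_explorer
```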
The priority function used in this example is named `decay_limit_randomization`.
The logic behind it (sketched in code below):
* The priority score is calculated as `model_version - decay * use_count`, i.e., fresher and less-used samples are prioritized;
* If `sigma` is non-zero, the priority score is further perturbed by random Gaussian noise with standard deviation `sigma`;
* A retrieved sample is put back into the buffer if and only if its use count has not exceeded `use_count_limit`.
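Below is a minimal sketch of this logic, assuming the same `(priority, put_back)` return convention as the earlier sketch; the argument names mirror the hyperparameters described above, while the default values are assumptions, not the actual Trinity-RFT defaults.

```python
import random
from typing import Tuple


def decay_limit_randomization(
    model_version: int,
    use_count: int,
    decay: float = 0.1,        # assumed default
    sigma: float = 0.0,        # assumed default
    use_count_limit: int = 3,  # assumed default
) -> Tuple[float, bool]:
    """Hypothetical sketch of the described logic, not the actual implementation."""
    # Fresher (higher model version) and less-used samples score higher.
    priority = model_version - decay * use_count
    # Optional Gaussian perturbation for randomized tie-breaking.
    if sigma != 0.0:
        priority += random.gauss(0.0, sigma)
    # Replay only while the use count has not exceeded the limit.
    put_back = use_count <= use_count_limit
    return priority, put_back
```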
### Experimental results
We run an experiment with this config and compare it against a baseline config that uses each rollout sample exactly once for training.
The first and second figures below, which use rollout step and wall-clock time as the X-axis respectively, confirm the benefits brought by experience replay (with default hyperparameters).
This is partly because more training steps can be taken, as shown in the third figure (where the X-axis represents rollout step).
<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_explore_step.png" alt="score-vs-explore-step" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_time.png" alt="score-vs-wall-clock-time" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_model_version.png" alt="model-version" width="600" />
---

**New file** `countdown.yaml` (+76 lines):
```yaml
project: "Trinity-RFT-countdown-experience-replay"
name: "qwen2.5-1.5B-countdown"
checkpoint_root_dir: ${oc.env:TRINITY_CHECKPOINT_ROOT_DIR,./checkpoints}
algorithm:
  algorithm_type: ppo
  repeat_times: 5
  optimizer:
    lr: 1e-6
model:
  model_path: ${oc.env:TRINITY_MODEL_PATH,Qwen/Qwen2.5-1.5B-Instruct}
  max_response_tokens: 1024
  max_model_len: 2048
cluster:
  node_num: 1
  gpu_per_node: 7
buffer:
  total_epochs: 20
  batch_size: 96
  explorer_input:
    taskset:
      name: countdown
      storage_type: file
      path: 'countdown_dataset/oneshot-split'
      format:
        prompt_key: 'question'
        response_key: 'answer'
      rollout_args:
        temperature: 1.0
        logprobs: 0
    default_workflow_type: 'math_workflow'
    default_reward_fn_type: 'countdown_reward'
  trainer_input:
    experience_buffer:
      name: experience_buffer
      storage_type: queue
      replay_buffer:
        enable: true
        reuse_cooldown_time: 40
        priority_fn: decay_limit_randomization
        # priority_fn_args: use default values
explorer:
  eval_interval: 100
  runner_per_model: 8
  rollout_model:
    engine_num: 1  # allocate 1 GPU for explorer
    tensor_parallel_size: 1
    enable_prefix_caching: false
    enforce_eager: true
    dtype: bfloat16
    seed: 42
synchronizer:
  sync_method: 'nccl'
  sync_style: dynamic_by_explorer  # set to "fixed" for baseline
  sync_interval: 10
  sync_timeout: 1200
trainer:
  save_interval: 100
  grad_clip: 1.0
  use_dynamic_bsz: true
  max_token_len_per_gpu: 10240
  ulysses_sequence_parallel_size: 1
  trainer_config:
    actor_rollout_ref:
      actor:
        checkpoint:
          load_contents: ['model', 'hf_model', 'optimizer', 'extra']
          save_contents: ['model', 'hf_model', 'optimizer', 'extra']
    critic:
      optim:
        lr: 1e-5
      ppo_max_token_len_per_gpu: 20480
      forward_max_token_len_per_gpu: 20480
      cliprange_value: 0.5
    trainer:
      max_actor_ckpt_to_keep: 5
      max_critic_ckpt_to_keep: 5
```
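Assuming the standard Trinity-RFT entry point, the example can then presumably be launched with `trinity run --config countdown.yaml` (after preparing the Countdown dataset as in `ppo_countdown`).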