Thank you for your contribution to the RL community. I have some questions about the replay buffer in both the shared and separated buffer settings. When training, I don't want parallel envs, so my `n_rollout_threads` is 1, and the batch size is `batch_size = n_rollout_threads * episode_length * num_agents`. In my understanding, this keeps only one trajectory in the buffer, so my training ratio is always around 1. If I want to enlarge my batch size, should I set `n_rollout_threads` to match the buffer size I want?
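A minimal sketch of the batch-size arithmetic being asked about, assuming an on-policy (PPO-style) shared buffer that is refilled from scratch each update; the function name is illustrative, not the repo's actual API:

```python
def ppo_batch_size(n_rollout_threads: int, episode_length: int, num_agents: int) -> int:
    """Transitions collected per update in a shared on-policy buffer."""
    return n_rollout_threads * episode_length * num_agents

# With a single env (n_rollout_threads = 1), one episode of data per update:
print(ppo_batch_size(1, 200, 3))   # 600

# Because an on-policy buffer holds only the latest rollout (not a growing
# history), enlarging the batch by a factor k means collecting k rollouts in
# parallel, i.e. setting n_rollout_threads = k:
print(ppo_batch_size(4, 200, 3))   # 2400
```

Under this assumption, the answer to the question would be yes: with everything else fixed, `n_rollout_threads` is the only knob that scales the batch, since `episode_length` and `num_agents` are set by the environment.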