Is it possible to continue training from checkpoints in MARLlib?
A bit of context: I'm experimenting with policy transfer/reuse across multi-agent teams and would like to test a scenario in which a MARL team starts from a model learned on a related task in the same environment and adapts it to the target task (e.g., in MPE, reusing the model learned for simple_spread to solve simple_adversary).
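To make the question concrete, here is a sketch of the workflow I have in mind, written against MARLlib's quick-start style API. The `restore_path` argument and its keys are my guess at how checkpoint loading might be exposed; the names may well be wrong, which is exactly what I'm hoping to clarify:

```python
# Hypothetical sketch -- parameter names are assumptions, not confirmed API.
from marllib import marl

# 1) Train on the source task and checkpoint the result.
source_env = marl.make_env(environment_name="mpe", map_name="simple_spread")
algo = marl.algos.mappo(hyperparam_source="mpe")
model = marl.build_model(source_env, algo, {"core_arch": "mlp"})
algo.fit(source_env, model, stop={"timesteps_total": 1_000_000})

# 2) Continue training on the target task, initialized from the checkpoint.
target_env = marl.make_env(environment_name="mpe", map_name="simple_adversary")
algo.fit(
    target_env,
    model,
    restore_path={  # <-- does something like this exist?
        "params_path": "<path to params.json>",
        "model_path": "<path to checkpoint file>",
    },
    stop={"timesteps_total": 1_000_000},
)
```

If checkpoint restoration is supported, I'd also like to know whether it tolerates the source and target tasks having different agent sets, or whether the policy mapping must match exactly.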