Hi, I'm using FSDP through Accelerate and would like to improve checkpoint saving performance. I noticed that PyTorch's torch.distributed.checkpoint (DCP) supports asynchronous saving with dcp.async_save, which can help overlap checkpoint I/O with training.
My question is: Does Accelerate's FSDP integration currently support async checkpointing via DCP or any similar mechanism?
If not directly exposed, is it safe to manually use torch.distributed.checkpoint.async_save with models and optimizers wrapped by Accelerate's FSDP? Are there any known compatibility issues or best practices (e.g., handling sharded state dicts, optimizer state reconstruction)? A rough sketch of what I have in mind is below.
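For context, this is roughly the manual approach I'm considering (a minimal sketch, assuming PyTorch >= 2.3 for dcp.async_save, that model and optimizer come from accelerator.prepare() with FSDP enabled, and that checkpoint_dir is just a placeholder path):

```python
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import StateDictOptions, get_state_dict

# Collect sharded (per-rank) model and optimizer state dicts,
# without materializing the full weights on rank 0.
model_sd, optim_sd = get_state_dict(
    model,
    optimizer,
    options=StateDictOptions(full_state_dict=False),
)
state = {"model": model_sd, "optimizer": optim_sd}

# Kick off the save in the background so training can keep running
# while checkpoint I/O proceeds.
save_future = dcp.async_save(state, checkpoint_id=checkpoint_dir)

# ... continue training ...

# Before starting the next save (or at shutdown), wait for the
# previous one to finish.
save_future.result()
```

I'm mainly unsure whether doing this outside of accelerator.save_state interferes with Accelerate's own FSDP state dict handling.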
It would be great if Accelerate could expose async save as an option in save_state or provide utilities to integrate with DCP more seamlessly.