Is your request related to a problem?
When running training jobs, especially on HPC clusters, I often find it difficult to monitor progress, visualize results, or inspect model parameters without writing custom logging code. This adds overhead and makes it harder to keep experiment tracking consistent across projects.
Describe the solution you'd like
It would be great to have built-in logger classes for popular logging platforms (e.g., Weights & Biases, TensorBoard), similar to the approach used in torchrl. These loggers could track training progress, stdout output, model weights, and custom visualizations, reducing boilerplate for users.
I'm particularly interested in WandB integration and would be happy to contribute a first implementation.
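To make the proposal concrete, here is a minimal sketch of what such a logger API could look like. Everything below is a hypothetical illustration, not a proposed final design: the class and method names (`ExperimentLogger`, `WandbLogger`, `log_scalar`, `log_hparams`) are made up for this issue, loosely modeled on torchrl's loggers; only the `wandb` calls themselves are real API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class ExperimentLogger(ABC):
    """Minimal common interface that concrete backends would implement."""

    @abstractmethod
    def log_scalar(self, name: str, value: float, step: Optional[int] = None) -> None:
        """Log a single scalar metric (e.g., loss, accuracy)."""

    @abstractmethod
    def log_hparams(self, params: Dict[str, Any]) -> None:
        """Record the hyperparameters of the current run."""

    def close(self) -> None:
        """Flush and close the underlying backend, if needed."""


class WandbLogger(ExperimentLogger):
    """Logger backed by Weights & Biases."""

    def __init__(self, project: str, run_name: Optional[str] = None, **kwargs: Any) -> None:
        import wandb  # imported lazily so the dependency stays optional

        self._wandb = wandb
        self._run = wandb.init(project=project, name=run_name, **kwargs)

    def log_scalar(self, name: str, value: float, step: Optional[int] = None) -> None:
        self._wandb.log({name: value}, step=step)

    def log_hparams(self, params: Dict[str, Any]) -> None:
        self._run.config.update(params)

    def close(self) -> None:
        self._wandb.finish()
```

A training loop could then call `logger.log_scalar("loss", loss, step=epoch)` without caring which backend is active, and a TensorBoard-backed class could implement the same interface later. Happy to adjust this to whatever structure the maintainers prefer.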