
Lost hp_metric when calling self.trainer.test on fit end #21247

@razgzy

Description

Bug description

In the __init__ of my L.LightningModule, I use

super().__init__()
self.save_hyperparameters()

to log hparams, and in on_validation_end I use

self.logger.experiment.add_scalar('hp_metric', hp_metric, global_step=self.current_epoch)

to log hp_metric. After fit, I call test once:

def on_fit_end(self):
    self.trainer.test(ckpt_path="last", datamodule=self.trainer.datamodule)

During training, everything is alright: the hp_metric is logged correctly in TensorBoard. But after fit, the hp_metric in TensorBoard becomes (0, -1), and all previously logged hp_metric values are lost.
The code works well in Lightning v2.2 but fails in v2.5.

What version are you seeing the problem on?

v2.5

Reproduced in studio

No response

How to reproduce the bug

Error messages and logs


Environment

Current environment

- PyTorch Lightning Version: 2.5.3
- PyTorch Version: 2.7.1
- Python version: 3.11.7
- OS: Ubuntu 22.04
- CUDA/cuDNN version: 11.8
- GPU models and configuration:
- How you installed Lightning (`conda`, `pip`, source): pip

More info

No response

cc @lantiga
