I found the `history.collect` behavior slightly surprising when a `test_step` runs less frequently than the `train_step`. Concretely, I was surprised that `collect` now returns train metrics at the same frequency as the test metrics.

In a toy setup, if we collect both a train metric and a test metric in a single call, the steps and the train metric (`a`) are subsampled down to the test frequency. Compare this to collecting the train and test metrics separately, where the train steps and metric are not subsampled.

I wouldn't say this is a bug, but perhaps the behavior should be documented somewhere or included in some examples?
Yeah, this should be documented. `collect` returns rows where all the requested keys appear, so some keys might be subsampled down to the least frequent one, and if the keys have no overlap at all you get empty lists. I've mainly used `collect` for plotting, so the behavior makes sense in that context. If there are other use cases, maybe we can generalize the behavior with a flag.
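To make the join semantics concrete, here is a minimal stand-in sketch (not the library's actual implementation; the list-of-dicts `history` layout and the `collect` helper below are assumptions made purely for illustration). It reproduces the three behaviors described above: collecting keys together subsamples everything to the least frequent key, collecting them separately keeps each key's own frequency, and keys that never co-occur give empty lists.

```python
# Illustrative stand-in for the described behavior; not the library's real code.
history = [
    {"steps": 0, "a": 0.9},                 # train-only log entry
    {"steps": 1, "a": 0.8},                 # train-only log entry
    {"steps": 2, "a": 0.7, "a_test": 0.5},  # entry where test_step also ran
    {"steps": 3, "a": 0.6},                 # train-only log entry
    {"steps": 4, "a": 0.5, "a_test": 0.4},  # entry where test_step also ran
]

def collect(logs, *keys):
    """Keep only the entries that contain every requested key (an inner join)."""
    rows = [entry for entry in logs if all(k in entry for k in keys)]
    return tuple([entry[k] for entry in rows] for k in keys)

# Collecting train and test metrics together: everything is subsampled
# to the rows where the less frequent test metric exists.
print(collect(history, "steps", "a", "a_test"))
# ([2, 4], [0.7, 0.5], [0.5, 0.4])

# Collecting the train metric separately keeps every train step.
print(collect(history, "steps", "a"))
# ([0, 1, 2, 3, 4], [0.9, 0.8, 0.7, 0.6, 0.5])

# Keys that never co-occur in any entry produce empty lists.
print(collect(history, "a_test", "lr"))
# ([], [])
```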
In terms of use cases, I'm not sure it is necessary to add a flag – I was trying to use collect to help me debug by grabbing all of the metrics conveniently. I was slightly confused that things didn't match the Keras logger output during training.
I imagine that simply calling collect twice would be enough of a workaround for most trivial cases like mine.