Releases: Lightning-AI/torchmetrics

Minor patch release

03 Sep 13:54

[1.8.2] - 2025-09-03

Fixed

  • Fixed BinaryPrecisionRecallCurve now returns NaN for precision when no predictions meet a threshold (#3227)
  • Fixed precision_at_fixed_recall and recall_at_fixed_precision to correctly return NaN thresholds when recall/precision conditions are not met (#3226)
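
To illustrate the convention behind both fixes, here is a minimal plain-Python sketch (not the torchmetrics implementation; the helper name is hypothetical). Precision is TP / (TP + FP), so when no prediction reaches a threshold the predicted-positive set is empty and the ratio is undefined; NaN is the honest answer rather than 0 or 1:

```python
import math

def precision_at_threshold(preds, target, thr):
    """Precision of binary predictions at a given probability threshold.

    Returns NaN when no prediction reaches the threshold, since
    precision = TP / (TP + FP) is undefined for an empty positive set.
    """
    predicted_pos = [t for p, t in zip(preds, target) if p >= thr]
    if not predicted_pos:
        return float("nan")
    return sum(predicted_pos) / len(predicted_pos)

# No score reaches 0.9, so precision is undefined rather than 0 or 1
print(precision_at_threshold([0.2, 0.4, 0.6], [0, 1, 1], 0.9))  # nan
print(precision_at_threshold([0.2, 0.4, 0.6], [0, 1, 1], 0.5))  # 1.0
```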

Key Contributors

@iamkulbhushansingh

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.8.1...v1.8.2

Minor patch release

07 Aug 20:38

[1.8.1] - 2025-08-07

Changed

  • Added reduction='none' to vif metric (#3196)
  • Float input support for segmentation metrics (#3198)

Fixed

  • Fixed unintended sigmoid normalization in BinaryPrecisionRecallCurve (#3182)
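
The sigmoid fix hinges on a detail worth spelling out: logits should be squashed into probabilities exactly once. A minimal sketch, assuming the common convention that values outside [0, 1] are treated as logits (the helper name is hypothetical, not the torchmetrics internals):

```python
import math

def normalize_preds(preds):
    """Treat inputs outside [0, 1] as logits and map them through a sigmoid;
    leave values that already look like probabilities untouched.
    Applying sigmoid unconditionally would distort genuine probabilities,
    e.g. mapping 0.5 to ~0.62."""
    if any(p < 0 or p > 1 for p in preds):
        return [1 / (1 + math.exp(-p)) for p in preds]
    return list(preds)

print(normalize_preds([0.1, 0.5, 0.9]))   # unchanged probabilities
print(normalize_preds([-2.0, 0.0, 3.0]))  # logits squashed into (0, 1)
```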

Key Contributors

@iamkulbhushansingh, @PussyCat0700, @simonreise

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.8.0...v1.8.1

First video and vertex metrics

23 Jul 17:33

The TorchMetrics v1.8.0 release introduces three flagship metrics, each designed to address a critical evaluation need in real-world applications.

Video Multi-Method Assessment Fusion (VMAF) brings a perceptual video-quality score that closely mirrors human judgment. Streaming services such as Netflix and YouTube use it to optimize encoding ladders for consistent viewer experiences, and video-restoration labs use it to quantify the improvements achieved by denoising and super-resolution algorithms.

Continuous Ranked Probability Score (CRPS) evaluates full predictive distributions rather than point estimates. Meteorological centers leverage CRPS to benchmark probabilistic precipitation and temperature forecasts, improving public weather alerts, while energy companies apply it to assess uncertainty in load-demand predictions and to refine grid management and trading strategies.
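
For intuition, the empirical CRPS of an ensemble forecast has a simple closed form: CRPS = mean|x_i − y| − ½ · mean|x_i − x_j|, where the x_i are ensemble members and y is the observation. A minimal sketch under that formulation (hypothetical helper name, not the torchmetrics API):

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for an ensemble forecast:
    CRPS = mean(|x_i - y|) - 0.5 * mean(|x_i - x_j|) over all member pairs.
    Lower is better; it rewards sharp distributions centred on the observation.
    """
    n = len(members)
    term1 = sum(abs(x - obs) for x in members) / n
    term2 = sum(abs(xi - xj) for xi in members for xj in members) / (n * n)
    return term1 - 0.5 * term2

# A tight ensemble around the observation scores better than a spread-out one
print(crps_ensemble([9.8, 10.0, 10.2], obs=10.0))
print(crps_ensemble([5.0, 10.0, 15.0], obs=10.0))
```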

Lip Vertex Error (LVE) measures the discrepancy between predicted and ground-truth lip landmarks to quantify audio-visual synchronization. Localization studios use LVE to validate lip-sync accuracy during film dubbing, while AR/VR developers integrate it into avatar pipelines to ensure natural mouth movements in real-time virtual meetings and social experiences.
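
Exact LVE formulations vary between papers; a common variant takes, per frame, the maximal squared L2 error over the lip vertices and averages it across frames. A minimal sketch under that assumption (helper name hypothetical, not the torchmetrics implementation):

```python
def lip_vertex_error(pred_frames, gt_frames):
    """One common LVE formulation: for each frame, take the maximal squared
    L2 distance over the lip vertices, then average over all frames."""
    def max_sq_err(pred, gt):
        return max(
            sum((pc - gc) ** 2 for pc, gc in zip(pv, gv))
            for pv, gv in zip(pred, gt)
        )
    errors = [max_sq_err(p, g) for p, g in zip(pred_frames, gt_frames)]
    return sum(errors) / len(errors)

# Two frames, two 3-D lip vertices each (toy coordinates)
pred = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]]
gt   = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]
print(lip_vertex_error(pred, gt))  # frame errors 0 and 0.01 average to 0.005
```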


[1.8.0] - 2025-07-23

Added

  • Added VMAF metric to new video domain (#2991)
  • Added CRPS in regression domain (#3024)
  • Added aggregation_level argument to DiceScore (#3018)
  • Added support for reduction="none" to LearnedPerceptualImagePatchSimilarity (#3053)
  • Added support for single str input in the functional interface of bert_score (#3056)
  • Enhanced BERTScore to evaluate hypotheses against multiple references (#3069)
  • Added Lip Vertex Error (LVE) in multimodal domain (#3090)
  • Added antialias argument to FID metric (#3177)
  • Added mixed input format to segmentation metrics (#3176)

Changed

  • Changed data_range argument in PSNR metric to be a required argument (#3178)

Removed

  • Removed zero_division argument from DiceScore (#3018)

Key Contributors

@nkaenzig, @rittik9, @simonreise, @SkafteNicki

New Contributors

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.7.0...v1.8.0

Minor patch release

05 Jul 12:22

[1.7.4] - 2025-07-04

Changed

  • Improved numerical stability of Pearson's correlation coefficient (#3152)
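
The stability issue here is a classic one: the textbook single-pass formula sum(xy) − n·mx·my cancels catastrophically when values are large relative to their spread. A mean-centred computation, sketched below in plain Python (not the torchmetrics implementation), avoids it:

```python
def pearson(xs, ys):
    """Mean-centred Pearson correlation. Centring before accumulating avoids
    the catastrophic cancellation that the single-pass textbook formula
    (sum xy - n*mx*my) suffers when values are large relative to their spread.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Large offset, tiny spread: still +1 for a perfectly linear relation
xs = [1e8 + i for i in range(5)]
ys = [2 * x + 3 for x in xs]
print(pearson(xs, ys))
```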

Fixed

  • Fixed retrieval metrics to ignore zero and negative predictions (#3160)
  • Fixed SSIM dist_reduce_fx when reduction=None for distributed training (#3162, #3166)
  • Fixed attribute error (#3154)
  • Fixed incorrect shape in _pearson_corrcoef_update (#3168)

Key Contributors

@AymenKallala, @gratus907, @Isalia20, @rittik9

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.7.3...v1.7.4

Minor patch release

13 Jun 15:33

[1.7.3] - 2025-06-13

Fixed

  • Fixed WrapperMetric to ensure it resets the wrapped_metric state (#3123)
  • Fixed top_k in multiclass_accuracy (#3117)
  • Fixed compatibility to COCO format for pycocotools 2.0.10 (#3131)

Key Contributors

@rittik9

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.7.2...v1.7.3

Minor patch release

28 May 20:20

[1.7.2] - 2025-05-27

Changed

  • Enhanced the performance of _rank_data (#3103)

Fixed

  • Fixed UnboundLocalError in MatthewsCorrCoef (#3059)
  • Fixed MIFID incorrectly converting inputs to byte dtype with custom encoders (#3064)
  • Fixed ignore_index in MultilabelExactMatch (#3085)
  • Fixed: disabled non-blocking transfers on MPS (#3101)

Key Contributors

@ahmedhshahin, @gratus907, @rittik9, @ZhiyuanChen

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.7.1...v1.7.2

Minor patch release

07 Apr 19:33

[1.7.1] - 2025-04-06

Changed

  • Enhanced the add_metrics function to support adding a MetricCollection to another MetricCollection (#3032)

Fixed

  • Fixed the missing MeanIOU class (#2892)
  • Fixed detection IoU ignoring predictions without ground truth (#3025)
  • Fixed error raised in MulticlassAccuracy when top_k>1 (#3039)

Key Contributors

@Isalia20, @rittik9, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.7.0...v1.7.1

More image metrics

20 Mar 19:05

TorchMetrics v1.7.0 delivers a range of new features and enhancements across multiple domains. In the image domain, notable additions include the ARNIQA and DeepImageStructureAndTextureSimilarity metrics, which provide new ways to assess image quality and structural similarity. Additionally, the CLIPScore metric now supports more models and processors, expanding its versatility in image-text alignment tasks.

Beyond image analysis, the regression package welcomes the JensenShannonDivergence metric, offering a powerful tool for comparing probability distributions. The clustering package also sees a notable update with the introduction of the ClusterAccuracy metric, which helps evaluate the performance of clustering algorithms more effectively.

In the realm of classification, the Equal Error Rate (EER) metric has been added: the operating point at which the false-acceptance and false-rejection rates coincide, a standard summary measure in biometric and speaker-verification systems. Furthermore, the MeanAveragePrecision metric now includes a functional interface, enhancing its usability and flexibility.
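
As a quick illustration of EER (a plain-Python sketch, not the torchmetrics API; real implementations handle ties and interpolate between thresholds more carefully): sweep the score thresholds and report the point where the false-positive and false-negative rates meet.

```python
def equal_error_rate(scores, labels):
    """Sweep each score as a decision threshold and return the point where
    the false-positive and false-negative rates are closest, reporting
    their mean at that point."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best = None
    for thr in sorted(set(scores)):
        fnr = sum(s < thr for s in pos) / len(pos)   # misses
        fpr = sum(s >= thr for s in neg) / len(neg)  # false alarms
        gap = abs(fpr - fnr)
        if best is None or gap < best[0]:
            best = (gap, (fpr + fnr) / 2)
    return best[1]

# Perfectly separable scores: the error curves cross at 0
print(equal_error_rate([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # 0.0
```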

Together, these updates make TorchMetrics an even more comprehensive resource for machine learning practitioners and researchers.

[1.7.0] - 2025-03-20

Added

  • Additions to image domain:
    • Added ARNIQA metric (#2953)
    • Added DeepImageStructureAndTextureSimilarity (#2993)
    • Added support for more models and processors in CLIPScore (#2978)
  • Added JensenShannonDivergence metric to regression package (#2992)
  • Added ClusterAccuracy metric to cluster package (#2777)
  • Added Equal Error Rate (EER) to classification package (#3013)
  • Added functional interface to MeanAveragePrecision metric (#3011)

Changed

  • Made num_classes optional for one-hot inputs in MeanIoU (#3012)

Removed

  • Removed Dice from classification (#3017)

Fixed

  • Fixed edge case in integration between class-wise wrapper and metric tracker (#3008)
  • Fixed IndexError in MultiClassAccuracy when using top_k with single sample (#3021)

Key Contributors

@Isalia20, @LorenzoAgnolucci, @nathanpainchaud, @rittik9, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.6.0...v1.7.0

Minor patch release

14 Mar 06:57

[1.6.3] - 2025-03-13

Fixed

  • Fixed logic in how metric states referencing is handled in MetricCollection (#2990)
  • Fixed integration between class-wise wrapper and metric tracker (#3004)

Key Contributors

@SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.6.2...v1.6.3

Minor patch release

03 Mar 11:25

[1.6.2] - 2025-02-28

Added

  • Added zero_division argument to DiceScore in segmentation package (#2860)
  • Added cache_session to DNSMOS metric to control caching behavior (#2974)
  • Added disable option to nan_strategy in basic aggregation metrics (#2943)

Changed

  • Made num_classes optional for classification metrics in case of micro averaging (#2841)
  • Enhanced CLIPScore to calculate similarities between inputs of the same modality (#2875)

Fixed

  • Fixed DiceScore when there is zero overlap between predictions and targets (#2860)
  • Fixed MeanAveragePrecision for average="micro" when 0 label is not present (#2968)
  • Fixed corner-case in PearsonCorrCoef when input is constant (#2975)
  • Fixed MetricCollection.update giving identical results (#2944)
  • Fixed missing kwargs in PIT metric for permutation-wise mode (#2977)
  • Fixed multiple errors in the _final_aggregation function for PearsonCorrCoef (#2980)
  • Fixed incorrect CLIP-IQA type hints (#2952)

Key Contributors

@baskrahmer, @czmrand, @rbedyakin, @rittik9, @SkafteNicki, @wooseopkim

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.6.1...v1.6.2