
Releases: cleanlab/cleanlab-tlm

v1.1.30

09 Sep 17:53
e6469df

Added

  • Add get_explanation() API for TLM, TrustworthyRAG, and TLMChatCompletions
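As a rough sketch of how the new API might be called (the method signature and return types here are assumptions for illustration, not confirmed by this changelog, and a stub stands in for the real client):

```python
# Illustrative stub standing in for cleanlab_tlm's TLM client; the
# get_explanation() signature below is an assumption, not the documented API.
class StubTLM:
    def prompt(self, text: str) -> dict:
        # The real client returns a response along with a trustworthiness score.
        return {"response": "Paris", "trustworthiness_score": 0.98}

    def get_explanation(self, prompt: str, response: str) -> str:
        # The real client would return a rationale behind the trust score.
        return "The response is consistent across resampled generations."

tlm = StubTLM()  # real code would construct the TLM client from cleanlab_tlm
result = tlm.prompt("What is the capital of France?")
explanation = tlm.get_explanation(
    "What is the capital of France?", result["response"]
)
print(explanation)
```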

v1.1.29

04 Sep 02:19
e245464

Added

  • Improve exception handling of HTTP errors for the VPC ChatCompletion module
  • Add a custom VPCTLMOptions class that defines the model provider option

v1.1.28

25 Aug 22:06
1d39849

Added

  • Support model_provider in TLMOptions for the VPC ChatCompletion module

v1.1.27

21 Aug 18:52
bd23f58

Added

  • TLMOptions now includes a disable_persistence option

v1.1.26

20 Aug 15:34
34b5198

Added

  • TrustworthyRAG now skips response-based evaluations when tool calls are detected in the response text

v1.1.25

12 Aug 21:20
91c0a75

Added

  • Add support for explanations in the VPC ChatCompletion module

Fixed

  • Unit-test logic for quality preset changes
  • Typing in chat.py for newer openai versions

v1.1.24

08 Aug 00:32
8b39578

Added

  • Add new OpenAI models: gpt-5, gpt-5-mini, gpt-5-nano

v1.1.23

06 Aug 01:46
b9763f9

Changed

  • Updated TLMOptions to support a disable_trustworthiness parameter
    • Trustworthiness scoring is skipped when disable_trustworthiness is True; this assumes either custom evaluation criteria (TLM) or RAG Evals (TrustworthyRAG) are provided
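In practice the flag would be passed through the options dict; a minimal sketch, where every key other than disable_trustworthiness (which this entry names) is an assumption about the TLMOptions shape:

```python
# Hypothetical TLMOptions payload: disable_trustworthiness comes from this
# changelog entry; the custom_eval_criteria key and its shape are assumptions.
options = {
    "disable_trustworthiness": True,  # skip trustworthiness scoring entirely...
    "custom_eval_criteria": [         # ...relying on a custom evaluation instead
        {"name": "conciseness", "criteria": "Is the response concise and direct?"}
    ],
}
print(options["disable_trustworthiness"])
```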

v1.1.22

29 Jul 22:33
2816b08

Added

  • Added a TLMResponses module, providing support for trust scoring with OpenAI Responses objects

v1.1.21

28 Jul 22:59
149fc88

Changed

  • Updated the VPC version of TLMChatCompletion to accept a request_headers parameter, which is forwarded to the TLM app as part of API requests