Releases: cleanlab/cleanlab-tlm
v1.1.30
v1.1.29
Added
- Improve exception handling of HTTP errors for VPC ChatCompletion module
- Add custom `VPCTLMOptions` class that defines the model provider option
v1.1.28
Added
- Support `model_provider` in `TLMOptions` for VPC ChatCompletion module
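A minimal sketch of the new option. This assumes `TLMOptions` behaves like a plain dict of option keys and that `model_provider` takes a provider name string; the key name comes from the note above, but the value and the `model` entry are illustrative, not verified against the library.

```python
# Hypothetical options for the VPC ChatCompletion module.
# "model_provider" is the key added in v1.1.28; the values shown are
# illustrative placeholders, not documented defaults.
vpc_options = {
    "model": "gpt-4.1-mini",     # assumed model name, for illustration only
    "model_provider": "openai",  # new in v1.1.28
}
```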
v1.1.27
Added
- `TLMOptions` includes a `disable_persistence` option.
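Since `TLMOptions` is dict-like, the new flag can be set as a plain key. The exact semantics of "persistence" are not stated in the release note, so the comment below is an assumption based on the option's name:

```python
# Illustrative only: add the v1.1.27 flag to an existing options dict.
# What persistence covers (e.g. whether requests are stored server-side)
# is an assumption from the name; consult the library docs for specifics.
options = {"model": "gpt-4.1-mini"}    # illustrative base options
options["disable_persistence"] = True  # new in v1.1.27
```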
v1.1.26
Added
- `TrustworthyRAG` now skips response-based evaluations when tool calls are detected in the response text.
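The behavior above can be sketched in pure Python. This is an illustrative model only, not the library's implementation: how `TrustworthyRAG` actually detects tool calls is internal, and the JSON heuristic and eval names below are assumptions.

```python
import json

def contains_tool_calls(response_text: str) -> bool:
    """Heuristic sketch: treat a JSON payload with a non-empty "tool_calls"
    field as a tool call. The real detection logic may differ."""
    try:
        payload = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return bool(isinstance(payload, dict) and payload.get("tool_calls"))

def select_evals(response_text: str, evals: dict) -> dict:
    """Drop response-based evaluations (values of True in this sketch)
    when the response is a tool call rather than a final answer."""
    if contains_tool_calls(response_text):
        return {name: uses_resp for name, uses_resp in evals.items()
                if not uses_resp}
    return evals
```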
v1.1.25
Added
- Add support for explanations for VPC ChatCompletion module
Fixed
- Unit test logic for quality preset changes
- Typing in `chat.py` for new `openai` versions
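For the explanations support added in v1.1.25, a hedged sketch of how explanations are typically requested in cleanlab-tlm, via the `log` option. The key name and result shape here are assumptions, not taken from the release note:

```python
# Assumed shape (not verified for the VPC module specifically):
# explanations are requested by including "explanation" in the "log"
# option; a result would then carry it under result["log"]["explanation"].
options = {"log": ["explanation"]}
```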
v1.1.24
Added
- Add new OpenAI models: `gpt-5`, `gpt-5-mini`, `gpt-5-nano`
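The new model names would be selected through the `model` option. The option key follows the library's dict-like options; which name suits a given workload is left to the reader:

```python
# The three model names added in v1.1.24 (from the note above):
new_models = ["gpt-5", "gpt-5-mini", "gpt-5-nano"]

# Illustrative: pick one via the "model" option key.
options = {"model": new_models[1]}
```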
v1.1.23
Changed
- Updated `TLMOptions` to support `disable_trustworthiness` parameter
- Skips trustworthiness scoring when `disable_trustworthiness` is True, assuming either custom evaluation criteria (TLM) or RAG Evals (TrustworthyRAG) are provided
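The precondition stated above can be sketched as a small validation step. This is an illustrative model of the constraint, not library code; the function name and error message are invented for the example:

```python
# Hypothetical check: disabling trustworthiness scoring only makes sense
# when some other evaluation is configured (custom criteria for TLM, or
# RAG Evals for TrustworthyRAG).
def validate_options(options: dict, has_other_evals: bool) -> None:
    if options.get("disable_trustworthiness") and not has_other_evals:
        raise ValueError(
            "disable_trustworthiness=True requires custom evaluation "
            "criteria (TLM) or RAG Evals (TrustworthyRAG)"
        )
```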
v1.1.22
Added
- Added `TLMResponses` module, providing support for trust scoring with the OpenAI Responses object
v1.1.21
Changed
- Updated the VPC version of `TLMChatCompletion` to accept a `request_headers` parameter, which is forwarded to the TLM app as part of API requests
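A minimal sketch of what would be passed as `request_headers`. The header names below are illustrative placeholders; which headers the TLM app expects is not stated in the release note:

```python
# Hypothetical headers forwarded to the TLM app via the new
# request_headers parameter (names and values are illustrative only).
request_headers = {
    "Authorization": "Bearer <token>",  # placeholder, supply a real token
    "X-Request-Id": "abc123",
}
```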