Conversation

Sameerlite
Collaborator
Title

Add Cohere v2/chat API support

Relevant issues

Fixes #13311

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

I reused the approach already used for Bedrock models, where the API to use for chat is detected from the suffix after the llm_provider. I also renamed a file to CohereV2ChatPassthroughConfig, as it was being used by the passthrough code.
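
For context, a rough sketch of that routing idea follows (illustrative only; the actual prefix strings, helper names, and location in the codebase may differ from what the PR implements):

# Illustrative sketch of suffix-based route detection, in the spirit of the
# bedrock-style handling described above. Names and strings here are
# assumptions, not the PR's actual implementation.
def detect_cohere_chat_route(model: str) -> str:
    """Return "v2" if the model string opts into Cohere's /v2/chat API,
    e.g. "cohere_chat/v2/command-a-03-2025"; otherwise fall back to "v1"."""
    _provider, _, remainder = model.partition("/")
    if remainder == "v2" or remainder.startswith("v2/"):
        return "v2"
    return "v1"


print(detect_cohere_chat_route("cohere_chat/v2/command-a-03-2025"))  # v2
print(detect_cohere_chat_route("cohere_chat/command-r"))             # v1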

Test

[Screenshot: new test passing locally, 2025-10-20 at 11:47 AM]

@vercel

vercel bot commented Oct 20, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm | Deployment: Ready | Preview | Comment | Updated (UTC): Oct 22, 2025 5:46pm

if extra_headers is not None:
    headers.update(extra_headers)

verbose_logger.debug(f"Model: {model}, API Base: {api_base}")

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (password) as clear text. CodeQL reports this finding repeatedly for the same debug statement, flagging both password and secret values logged as clear text.

Copilot Autofix

AI 3 days ago

The correct fix is to ensure that sensitive values such as api_base and model are not logged if they may contain or be derived from user-provided secrets such as API keys/tokens. We should:

  • Scrub/obfuscate potentially sensitive data before logging, or
  • Avoid logging these fields altogether unless we're confident (through sanitization) that they cannot contain secrets.

Best approach:

  • Implement a helper function (sanitize_url or similar) for logging which strips query strings, authentication info, and sensitive path segments from api_base and model before logging.
  • Update the verbose logger call on line 2461 to use sanitized versions.
  • This ensures no sensitive data is leaked to logs while preserving helpful debug information for troubleshooting.

Edits needed:

  • Define a sanitize_for_logging function in litellm/main.py, e.g., near the imports or top-of-file utility region.
  • Use sanitize_for_logging(model) and sanitize_for_logging(api_base) in the debug log at line 2461.

Suggested changeset 1
litellm/main.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/litellm/main.py b/litellm/main.py
--- a/litellm/main.py
+++ b/litellm/main.py
@@ -41,6 +41,45 @@
 
 from litellm._uuid import uuid
 
+
+
+def sanitize_for_logging(val):
+    """Remove sensitive data from strings (like URLs or tokens).
+    - For URLs, removes query parameters and userinfo.
+    - For other str, if looks like an API key, redacts.
+    """
+    import re
+    from urllib.parse import urlparse, urlunparse
+
+    if not isinstance(val, str):
+        return val
+
+    # If the string looks like a URL, strip userinfo and query
+    try:
+        p = urlparse(val)
+        netloc = p.hostname or ""
+        if p.port:
+            netloc += f":{p.port}"
+        sanitized = urlunparse((
+            p.scheme,
+            netloc,  # omit user:pass@
+            p.path,
+            p.params,
+            '',      # strip query
+            '',      # strip fragment
+        ))
+        if p.scheme and p.netloc:
+            return sanitized
+    except Exception:
+        pass
+
+    # Redact likely API keys/tokens (crude: long alphanum strings)
+    if re.fullmatch(r"[A-Za-z0-9_\-\.=]{20,}", val):
+        return "[REDACTED]"
+    
+    # Redact if the string contains 'key=' or 'token='
+    return re.sub(r'((key|token)=)[^&;]+', r'\1[REDACTED]', val, flags=re.IGNORECASE)
+
 if TYPE_CHECKING:
     from aiohttp import ClientSession
 
@@ -2458,7 +2497,7 @@
             if extra_headers is not None:
                 headers.update(extra_headers)
 
-            verbose_logger.debug(f"Model: {model}, API Base: {api_base}")
+            verbose_logger.debug(f"Model: {sanitize_for_logging(model)}, API Base: {sanitize_for_logging(api_base)}")
             verbose_logger.debug(f"Provider Config: {provider_config}")
             response = base_llm_http_handler.completion(
                 model=model,
EOF
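
For illustration, the helper above would behave roughly like this (a quick sketch that assumes the patch has been applied to litellm/main.py; the inputs are made up):

# Sanity-check sketch for the proposed sanitize_for_logging helper.
# Assumes the autofix patch above has been applied to litellm/main.py.
from litellm.main import sanitize_for_logging

# URLs: userinfo and query string are stripped before logging.
assert (
    sanitize_for_logging("https://user:pass@api.cohere.com/v2/chat?key=abc")
    == "https://api.cohere.com/v2/chat"
)

# Ordinary model names pass through unchanged.
assert sanitize_for_logging("command-r-plus") == "command-r-plus"

# Long opaque token-like strings are redacted outright.
assert sanitize_for_logging("sk_" + "a" * 40) == "[REDACTED]"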
Contributor

@ishaan-jaff ishaan-jaff left a comment


lgtm, small edit to testing

except Exception as e:
    pytest.fail(f"Error occurred: {e}")


Contributor

can you add a test that "documents" and "citation_options" are getting sent in the request body

Collaborator Author

Added test_cohere_documents_citation_options_in_request_body
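
For reference, a minimal sketch of what such a test could look like. This is not the test added in the PR: it assumes litellm sends the request over httpx (so respx can intercept it), and the endpoint URL, model string, and stubbed response shape are guesses.

import json

import httpx
import respx

import litellm


@respx.mock
def test_documents_and_citation_options_sent_in_request_body():
    # Intercept the outgoing Cohere v2 chat call (URL is an assumption).
    route = respx.post("https://api.cohere.com/v2/chat").mock(
        return_value=httpx.Response(
            200, json={"message": {"content": [{"type": "text", "text": "hi"}]}}
        )
    )

    try:
        litellm.completion(
            model="cohere/command-r",  # exact model string for the v2 route may differ
            messages=[{"role": "user", "content": "What does the doc say?"}],
            documents=[{"id": "doc-1", "data": {"text": "LiteLLM supports Cohere v2."}}],
            citation_options={"mode": "accurate"},
            api_key="fake-key",
        )
    except Exception:
        # The stubbed response may not transform cleanly; we only care
        # about what was sent in the request body.
        pass

    assert route.called
    sent = json.loads(route.calls.last.request.content)
    assert "documents" in sent
    assert "citation_options" in sent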

@Sameerlite Sameerlite changed the base branch from main to litellm_sameer_oct_staging October 20, 2025 17:32
os.environ["COHERE_API_KEY"] = "cohere key"

# cohere call
# cohere v1 call
Contributor

i believe /v2 should be the default @Sameerlite

it's likely /v1 will be deprecated soon, so it's safer to move to v2 at this point (it's been around for quite a while, so i would assume it's stable)

Contributor

@krrishdholakia krrishdholakia left a comment

make default /v2/ API

@Sameerlite
Collaborator Author

Sameerlite commented Oct 22, 2025

make default /v2/ API

@ishaan-jaff and I discussed this. It might break things for people relying on the v1 API, so this should be done gradually.

Contributor

@ishaan-jaff ishaan-jaff left a comment

on 2nd thought let's use v2 as the default now

@Sameerlite
Collaborator Author

on 2nd thought let's use v2 as the default now

Yes, made v2 the default.
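
For users, the practical effect is that a plain Cohere chat call should now be routed to /v2/chat by default (a sketch; exact model strings follow the Cohere provider docs):

import litellm

# Assumes COHERE_API_KEY is set in the environment.
# Per the discussion above, this call is now routed to Cohere's /v2/chat.
response = litellm.completion(
    model="cohere/command-r",
    messages=[{"role": "user", "content": "Hello from the v2 route"}],
)
print(response.choices[0].message.content)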

@Sameerlite Sameerlite merged commit c74d5c9 into litellm_sameer_oct_staging Oct 23, 2025
36 of 43 checks passed


Development

Successfully merging this pull request may close these issues.

[Feature]: Support for routing requests to the Cohere v2/chat api when submitted through OpenAI SDK

3 participants