Commit
SDK regeneration
fern-api[bot] committed Oct 18, 2024
1 parent 56d48b0 commit 247ae60
Showing 18 changed files with 2,094 additions and 447 deletions.
2 changes: 0 additions & 2 deletions mypy.ini

This file was deleted.

2,264 changes: 1,925 additions & 339 deletions poetry.lock

Large diffs are not rendered by default.

9 changes: 6 additions & 3 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "cohere"
version = "5.11.0"
version = "5.11.1"
description = ""
readme = "README.md"
authors = []
@@ -32,18 +32,18 @@ Repository = 'https://github.com/cohere-ai/cohere-python'

[tool.poetry.dependencies]
python = "^3.8"
boto3 = "^1.34.0"
boto3 = { version="^1.34.0", optional = true}
fastavro = "^1.9.4"
httpx = ">=0.21.2"
httpx-sse = "0.4.0"
parameterized = "^0.9.0"
pydantic = ">= 1.9.2"
pydantic-core = "^2.18.2"
requests = "^2.0.0"
sagemaker = { version="^2.232.1", optional = true}
tokenizers = ">=0.15,<1"
types-requests = "^2.0.0"
typing_extensions = ">= 4.0.0"
sagemaker = "^2.232.1"

[tool.poetry.dev-dependencies]
mypy = "1.0.1"
@@ -68,3 +68,6 @@ line-length = 120
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[tool.poetry.extras]
aws=["sagemaker", "boto3"]
63 changes: 37 additions & 26 deletions reference.md
@@ -2319,7 +2319,9 @@ client.check_api_key()
<dl>
<dd>

Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).

Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
</dd>
</dl>
</dd>
@@ -2396,7 +2398,7 @@ for chunk in response:
<dl>
<dd>

**model:** `str` — The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
**model:** `str` — The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model.

</dd>
</dl>
@@ -2452,14 +2454,12 @@ When `tools` is passed (without `tool_results`), the `text` content in the respo

**safety_mode:** `typing.Optional[V2ChatStreamRequestSafetyMode]`

Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
When `OFF` is specified, the safety instruction will be omitted.

Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.

**Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.

Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
**Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.


</dd>
@@ -2468,7 +2468,11 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
<dl>
<dd>

**max_tokens:** `typing.Optional[int]` — The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
**max_tokens:** `typing.Optional[int]`

The maximum number of tokens the model will generate as part of the response.

**Note**: Setting a low value may result in incomplete generations.


</dd>
@@ -2595,7 +2599,9 @@ Defaults to `0.75`. min value of `0.01`, max value of `0.99`.
<dl>
<dd>

Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).

Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
</dd>
</dl>
</dd>
@@ -2621,6 +2627,7 @@ client.v2.chat(
messages=[
ToolChatMessageV2(
tool_call_id="messages",
content="messages",
)
],
)
@@ -2639,7 +2646,7 @@
<dl>
<dd>

**model:** `str` — The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
**model:** `str` — The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model.

</dd>
</dl>
@@ -2695,14 +2702,12 @@ When `tools` is passed (without `tool_results`), the `text` content in the respo

**safety_mode:** `typing.Optional[V2ChatRequestSafetyMode]`

Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
When `OFF` is specified, the safety instruction will be omitted.

Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.

**Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.

Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
**Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.


</dd>
@@ -2711,7 +2716,11 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
<dl>
<dd>

**max_tokens:** `typing.Optional[int]` — The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
**max_tokens:** `typing.Optional[int]`

The maximum number of tokens the model will generate as part of the response.

**Note**: Setting a low value may result in incomplete generations.


</dd>
@@ -2865,6 +2874,8 @@ client = Client(
)
client.v2.embed(
model="model",
input_type="search_document",
embedding_types=["float"],
)

```
@@ -2904,43 +2915,43 @@ Available models and corresponding embedding dimensions:
<dl>
<dd>

**texts:** `typing.Optional[typing.Sequence[str]]` — An array of strings for the model to embed. Maximum number of texts per call is `96`. We recommend reducing the length of each text to be under `512` tokens for optimal quality.
**input_type:** `EmbedInputType`

</dd>
</dl>

<dl>
<dd>

**images:** `typing.Optional[typing.Sequence[str]]`
**embedding_types:** `typing.Sequence[EmbeddingType]`

An array of image data URIs for the model to embed. Maximum number of images per call is `1`.
Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.

The image must be a valid [data URI](https://developer.mozilla.org/en-US/docs/Web/URI/Schemes/data). The image must be in either `image/jpeg` or `image/png` format and has a maximum size of 5MB.
* `"float"`: Use this when you want to get back the default float embeddings. Valid for all models.
* `"int8"`: Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
* `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
* `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
* `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.

</dd>
</dl>
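
Per the updated signatures above, `input_type` and `embedding_types` are now required on the v2 embed call (they were `Optional` before). A hedged sketch of the client-side payload this implies — not the SDK's actual validation code, and the set of valid input types is an assumption based on the embed v3 docs:

```python
from typing import List, Optional

# Assumed valid values; the real source of truth is the Cohere API reference.
VALID_INPUT_TYPES = {"search_document", "search_query", "classification", "clustering", "image"}
VALID_EMBEDDING_TYPES = {"float", "int8", "uint8", "binary", "ubinary"}

def build_embed_payload(
    model: str,
    input_type: str,            # now required
    embedding_types: List[str], # now required
    texts: Optional[List[str]] = None,
) -> dict:
    """Build a v2 embed request body (hypothetical helper for illustration)."""
    if input_type not in VALID_INPUT_TYPES:
        raise ValueError(f"unknown input_type: {input_type!r}")
    unknown = set(embedding_types) - VALID_EMBEDDING_TYPES
    if unknown:
        raise ValueError(f"unknown embedding_types: {sorted(unknown)}")
    payload = {"model": model, "input_type": input_type, "embedding_types": embedding_types}
    if texts is not None:
        payload["texts"] = texts
    return payload

payload = build_embed_payload("embed-english-v3.0", "search_document", ["float"], texts=["hello"])
```

This matches the reordering in the docs above: the required parameters (`input_type`, `embedding_types`) now lead, and the optional ones (`texts`, `images`) follow.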

<dl>
<dd>

**input_type:** `typing.Optional[EmbedInputType]`
**texts:** `typing.Optional[typing.Sequence[str]]` — An array of strings for the model to embed. Maximum number of texts per call is `96`. We recommend reducing the length of each text to be under `512` tokens for optimal quality.

</dd>
</dl>

<dl>
<dd>

**embedding_types:** `typing.Optional[typing.Sequence[EmbeddingType]]`
**images:** `typing.Optional[typing.Sequence[str]]`

Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
An array of image data URIs for the model to embed. Maximum number of images per call is `1`.

* `"float"`: Use this when you want to get back the default float embeddings. Valid for all models.
* `"int8"`: Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
* `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
* `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
* `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
The image must be a valid [data URI](https://developer.mozilla.org/en-US/docs/Web/URI/Schemes/data). The image must be in either `image/jpeg` or `image/png` format and has a maximum size of 5MB.

</dd>
</dl>
2 changes: 1 addition & 1 deletion src/cohere/core/client_wrapper.py
@@ -24,7 +24,7 @@ def get_headers(self) -> typing.Dict[str, str]:
headers: typing.Dict[str, str] = {
"X-Fern-Language": "Python",
"X-Fern-SDK-Name": "cohere",
"X-Fern-SDK-Version": "5.11.0",
"X-Fern-SDK-Version": "5.11.1",
}
if self._client_name is not None:
headers["X-Client-Name"] = self._client_name
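
The only change here is the version stamp: every request now advertises SDK version 5.11.1 via `X-Fern-SDK-Version`. A standalone sketch of this header-assembly logic (a simplified function rather than the wrapper class):

```python
import typing

def get_headers(client_name: typing.Optional[str] = None) -> typing.Dict[str, str]:
    """Assemble the telemetry headers the SDK attaches to each request."""
    headers: typing.Dict[str, str] = {
        "X-Fern-Language": "Python",
        "X-Fern-SDK-Name": "cohere",
        "X-Fern-SDK-Version": "5.11.1",  # bumped from 5.11.0 in this commit
    }
    if client_name is not None:
        headers["X-Client-Name"] = client_name
    return headers

assert get_headers()["X-Fern-SDK-Version"] == "5.11.1"
```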
8 changes: 8 additions & 0 deletions src/cohere/types/__init__.py
@@ -110,6 +110,7 @@
from .detokenize_response import DetokenizeResponse
from .document import Document
from .document_content import DocumentContent
from .document_source import DocumentSource
from .embed_by_type_response import EmbedByTypeResponse
from .embed_by_type_response_embeddings import EmbedByTypeResponseEmbeddings
from .embed_floats_response import EmbedFloatsResponse
@@ -196,8 +197,12 @@
from .summarize_request_format import SummarizeRequestFormat
from .summarize_request_length import SummarizeRequestLength
from .summarize_response import SummarizeResponse
from .system_message import SystemMessage
from .system_message_content import SystemMessageContent
from .system_message_content_item import SystemMessageContentItem, TextSystemMessageContentItem
from .text_content import TextContent
from .text_response_format import TextResponseFormat
from .text_response_format_v2 import TextResponseFormatV2
from .tokenize_response import TokenizeResponse
from .too_many_requests_error_body import TooManyRequestsErrorBody
from .tool import Tool
@@ -206,17 +211,20 @@
from .tool_call_v2 import ToolCallV2
from .tool_call_v2function import ToolCallV2Function
from .tool_content import DocumentToolContent, TextToolContent, ToolContent
from .tool_message import ToolMessage
from .tool_message_v2 import ToolMessageV2
from .tool_message_v2content import ToolMessageV2Content
from .tool_parameter_definitions_value import ToolParameterDefinitionsValue
from .tool_result import ToolResult
from .tool_source import ToolSource
from .tool_v2 import ToolV2
from .tool_v2function import ToolV2Function
from .unprocessable_entity_error_body import UnprocessableEntityErrorBody
from .update_connector_response import UpdateConnectorResponse
from .usage import Usage
from .usage_billed_units import UsageBilledUnits
from .usage_tokens import UsageTokens
from .user_message import UserMessage
from .user_message_content import UserMessageContent

__all__ = [
8 changes: 6 additions & 2 deletions src/cohere/types/assistant_message.py
@@ -3,10 +3,10 @@
from ..core.unchecked_base_model import UncheckedBaseModel
import typing
from .tool_call_v2 import ToolCallV2
import pydantic
from .assistant_message_content import AssistantMessageContent
from .citation import Citation
from ..core.pydantic_utilities import IS_PYDANTIC_V2
import pydantic


class AssistantMessage(UncheckedBaseModel):
@@ -15,7 +15,11 @@ class AssistantMessage(UncheckedBaseModel):
"""

tool_calls: typing.Optional[typing.List[ToolCallV2]] = None
tool_plan: typing.Optional[str] = None
tool_plan: typing.Optional[str] = pydantic.Field(default=None)
"""
A chain-of-thought style reflection and plan that the model generates when working with Tools.
"""

content: typing.Optional[AssistantMessageContent] = None
citations: typing.Optional[typing.List[Citation]] = None

8 changes: 6 additions & 2 deletions src/cohere/types/assistant_message_response.py
@@ -3,10 +3,10 @@
from ..core.unchecked_base_model import UncheckedBaseModel
import typing
from .tool_call_v2 import ToolCallV2
import pydantic
from .assistant_message_response_content_item import AssistantMessageResponseContentItem
from .citation import Citation
from ..core.pydantic_utilities import IS_PYDANTIC_V2
import pydantic


class AssistantMessageResponse(UncheckedBaseModel):
@@ -16,7 +16,11 @@

role: typing.Literal["assistant"] = "assistant"
tool_calls: typing.Optional[typing.List[ToolCallV2]] = None
tool_plan: typing.Optional[str] = None
tool_plan: typing.Optional[str] = pydantic.Field(default=None)
"""
A chain-of-thought style reflection and plan that the model generates when working with Tools.
"""

content: typing.Optional[typing.List[AssistantMessageResponseContentItem]] = None
citations: typing.Optional[typing.List[Citation]] = None

3 changes: 1 addition & 2 deletions src/cohere/types/chat_finish_reason.py
@@ -3,6 +3,5 @@
import typing

ChatFinishReason = typing.Union[
typing.Literal["complete", "stop_sequence", "max_tokens", "tool_call", "error", "content_blocked", "error_limit"],
typing.Any,
typing.Literal["COMPLETE", "STOP_SEQUENCE", "MAX_TOKENS", "TOOL_CALL", "ERROR"], typing.Any
]
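
The regenerated union keeps Fern's "open enum" pattern: known finish reasons (now uppercase, and with `content_blocked`/`error_limit` dropped) are enumerated as `Literal` members, while `typing.Any` keeps the union forward-compatible with values the server may add later. A sketch of how that pattern behaves at runtime:

```python
import typing

# Same shape as the regenerated type: Literal for known values, Any as escape hatch.
ChatFinishReason = typing.Union[
    typing.Literal["COMPLETE", "STOP_SEQUENCE", "MAX_TOKENS", "TOOL_CALL", "ERROR"], typing.Any
]

# At runtime the union keeps both members, so the known values are introspectable.
literal_type = typing.get_args(ChatFinishReason)[0]
known_values = typing.get_args(literal_type)

assert "COMPLETE" in known_values
assert "complete" not in known_values  # the old lowercase spellings were removed
```

Because `Any` is part of the union, a type checker will accept unrecognized server values instead of rejecting the response, which is why generated SDKs favor this over a closed `Enum`.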
2 changes: 1 addition & 1 deletion src/cohere/types/chat_message_v2.py
@@ -79,7 +79,7 @@ class ToolChatMessageV2(UncheckedBaseModel):

role: typing.Literal["tool"] = "tool"
tool_call_id: str
content: typing.Optional[ToolMessageV2Content] = None
content: ToolMessageV2Content

if IS_PYDANTIC_V2:
model_config: typing.ClassVar[pydantic.ConfigDict] = pydantic.ConfigDict(extra="allow", frozen=True) # type: ignore # Pydantic v2
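
This one-line change makes `content` mandatory on tool messages: dropping `typing.Optional[...] = None` means constructing a `ToolChatMessageV2` without content is now a validation error. A stdlib stand-in for the generated pydantic model (a `dataclass` sketch, not the real class) showing the required-field behavior:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolChatMessageSketch:
    """Hypothetical stand-in for ToolChatMessageV2 after this change."""
    tool_call_id: str
    content: str          # required: no default any more
    role: str = "tool"

# Providing content works as before.
msg = ToolChatMessageSketch(tool_call_id="call_0", content="result text")
assert msg.content == "result text"

# Omitting the now-required field fails at construction time.
try:
    ToolChatMessageSketch(tool_call_id="call_0")  # type: ignore[call-arg]
    raise AssertionError("should have failed")
except TypeError:
    pass
```

This matches the updated `reference.md` example above, where the `client.v2.chat(...)` snippet now passes `content="messages"` alongside `tool_call_id`.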
3 changes: 2 additions & 1 deletion src/cohere/types/chat_tool_calls_chunk_event.py
@@ -2,13 +2,14 @@

from .chat_stream_event import ChatStreamEvent
from .tool_call_delta import ToolCallDelta
from ..core.pydantic_utilities import IS_PYDANTIC_V2
import typing
from ..core.pydantic_utilities import IS_PYDANTIC_V2
import pydantic


class ChatToolCallsChunkEvent(ChatStreamEvent):
tool_call_delta: ToolCallDelta
text: typing.Optional[str] = None

if IS_PYDANTIC_V2:
model_config: typing.ClassVar[pydantic.ConfigDict] = pydantic.ConfigDict(extra="allow", frozen=True) # type: ignore # Pydantic v2
20 changes: 16 additions & 4 deletions src/cohere/types/citation.py
@@ -2,19 +2,31 @@

from ..core.unchecked_base_model import UncheckedBaseModel
import typing
import pydantic
from .source import Source
from ..core.pydantic_utilities import IS_PYDANTIC_V2
import pydantic


class Citation(UncheckedBaseModel):
"""
Citation information containing sources and the text cited.
"""

start: typing.Optional[int] = None
end: typing.Optional[int] = None
text: typing.Optional[str] = None
start: typing.Optional[int] = pydantic.Field(default=None)
"""
Start index of the cited snippet in the original source text.
"""

end: typing.Optional[int] = pydantic.Field(default=None)
"""
End index of the cited snippet in the original source text.
"""

text: typing.Optional[str] = pydantic.Field(default=None)
"""
Text snippet that is being cited.
"""

sources: typing.Optional[typing.List[Source]] = None

if IS_PYDANTIC_V2:
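
The new field docstrings say `start` and `end` locate the cited snippet inside the original source text. A small sketch of what those indices mean in practice — assuming half-open Python slice semantics, which the docstrings do not explicitly state:

```python
# Hypothetical citation payload; indices assumed to be a half-open range
# over the source text, so text[start:end] recovers the cited snippet.
source_text = "Cohere models support retrieval-augmented generation."
citation = {"start": 0, "end": 6, "text": "Cohere"}

snippet = source_text[citation["start"]:citation["end"]]
assert snippet == citation["text"]  # the indices and the text field agree
```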
2 changes: 1 addition & 1 deletion src/cohere/types/json_response_format_v2.py
@@ -9,7 +9,7 @@
class JsonResponseFormatV2(UncheckedBaseModel):
json_schema: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] = pydantic.Field(default=None)
"""
[BETA] A JSON schema object that the output will adhere to. There are some restrictions we have on the schema, refer to [our guide](/docs/structured-outputs-json#schema-constraints) for more information.
A [JSON schema](https://json-schema.org/overview/what-is-jsonschema) object that the output will adhere to. There are some restrictions we have on the schema, refer to [our guide](/docs/structured-outputs-json#schema-constraints) for more information.
Example (required name and age object):