**`docs/source/en/chat_templating_multimodal.md`** (6 additions, 42 deletions)

Multimodal chat models accept inputs like images, audio or video, in addition to text. The `content` key in a multimodal chat history is a list containing multiple items of different types. This is unlike text-only chat models whose `content` key is a single string.
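
For illustration, here is a minimal sketch of the difference (the image URL is just a placeholder):

```py
# Text-only chat: `content` is a single string
text_message = {"role": "user", "content": "Describe your favorite animal."}

# Multimodal chat: `content` is a list of typed items (text, image, audio, video, ...)
multimodal_message = {
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/cats.png"},  # placeholder URL
        {"type": "text", "text": "Describe the animals in this image."},
    ],
}
```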

In the same way the [Tokenizer](./fast_tokenizer) class handles chat templates and tokenization for text-only models, the [Processor](./processors) class handles preprocessing, tokenization and chat templates for multimodal models. Their [`~ProcessorMixin.apply_chat_template`] methods are almost identical.

This guide will show you how to chat with multimodal models with the high-level [`ImageTextToTextPipeline`] and at a lower level using the [`~ProcessorMixin.apply_chat_template`] and [`~GenerationMixin.generate`] methods.
## ImageTextToTextPipeline
[`ImageTextToTextPipeline`] is a high-level image and text generation class with a “chat mode”. Chat mode is enabled when a conversational model is detected and the chat prompt is [properly formatted](./llm_tutorial#wrong-prompt-format).
Add image and text blocks to the `content` key in the chat history.

```py
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a friendly chatbot who always responds in the style of a pirate"}],
    },
    {
        # The user turn below is reconstructed to match the response shown further down;
        # the image is the example photo of two cats used elsewhere in the Transformers docs
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "What are these?"},
        ],
    },
]
```

Create an [`ImageTextToTextPipeline`] and pass the chat to it. For large models, setting [device_map="auto"](./models#big-model-inference) helps load the model more quickly and automatically places it on the fastest device available. Setting the data type to [auto](./models#model-data-type) also helps save memory and improve speed.
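
A rough sketch of that step, reusing the `messages` list above (the checkpoint name is illustrative; any conversational image-text-to-text model should work):

```py
from transformers import pipeline

# Illustrative checkpoint; device_map="auto" and dtype="auto" are the settings described above
pipe = pipeline(
    "image-text-to-text",
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    device_map="auto",
    dtype="auto",
)
response = pipe(text=messages, max_new_tokens=128, return_full_text=False)
print(response[0]["generated_text"])
```

The generated reply comes back in the style requested by the system prompt: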

```txt
Ahoy, me hearty! These be two feline friends, likely some tabby cats, taking a siesta on a cozy pink blanket. They're resting near remote controls, perhaps after watching some TV or just enjoying some quiet time together. Cats sure know how to find comfort and relaxation, don't they?
```
Aside from the gradual descent from pirate-speak into modern American English (it **is** only a 3B model, after all), this is correct!

The rest of this guide covers chats with image and video models at a lower level using the [`~ProcessorMixin.apply_chat_template`] and [`~GenerationMixin.generate`] methods, and is intended for more advanced users. If you just want to quickly chat with a VLM, you can use the [`ImageTextToTextPipeline`] class, which is covered in the section above.
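
The lower-level flow looks roughly like the following sketch (the checkpoint is illustrative, and the exact processor arguments can vary between models):

```py
from transformers import AutoModelForImageTextToText, AutoProcessor

checkpoint = "Qwen/Qwen2.5-VL-3B-Instruct"  # illustrative checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForImageTextToText.from_pretrained(checkpoint, device_map="auto", dtype="auto")

# Render the chat template, tokenize, and move the inputs to the model's device
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```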
The decoded output contains the full conversation so far, including the user message and the placeholder tokens that contain the image information. You may need to trim the previous conversation from the output before displaying it to the user.
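
One common way to do that, continuing from the sketch above, is to decode only the newly generated tokens:

```py
# Keep only the tokens generated after the prompt, so the user doesn't see the
# earlier turns or the image placeholder tokens
new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```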
## Response parsing
TODO section on response parsing with a processor here

**`docs/source/en/conversations.md`** (45 additions, 2 deletions)

The chat is implemented on top of the [AutoClass](./model_doc/auto), using tooling from [text generation](./llm_tutorial) and [chat](./chat_templating). It uses the `transformers serve` CLI under the hood ([docs](./serving.md#serve-cli)).

## Using pipelines to chat

[`TextGenerationPipeline`] is a high-level text generation class with a "chat mode". Chat mode is enabled when a conversational model is detected and the chat prompt is [properly formatted](./llm_tutorial#wrong-prompt-format).
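
A minimal sketch of that chat mode (the checkpoint name is illustrative): pass a list of messages to the pipeline, then append its reply plus your next message and call it again.

```py
from transformers import pipeline

# Illustrative checkpoint; any conversational text-generation model should work
pipe = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct", device_map="auto", dtype="auto")

messages = [{"role": "user", "content": "Tell me a joke about parrots."}]
response = pipe(messages, max_new_tokens=128)
reply = response[0]["generated_text"][-1]  # the assistant's newly generated message

# Continue the chat: append the reply and your next user turn, then call the pipeline again
messages.append(reply)
messages.append({"role": "user", "content": "Now explain why it's funny."})
response = pipe(messages, max_new_tokens=128)
```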

By repeating this process, you can continue the conversation as long as you like, at least until the model runs out of context window or you run out of memory.
## Including images in chats

Some models, known as vision-language models (VLMs), can accept images as part of the chat input. When loading a VLM, you should use the `ImageTextToTextPipeline`, which you can load by setting the `task` argument of `pipeline` to `image-text-to-text`. It works very similarly to the `TextGenerationPipeline` above, but we can add `image` keys to our messages:

```py
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a friendly chatbot who always responds in the style of a pirate"}],
    },
    {
        # The user turn below is reconstructed; the image URL is the example photo
        # of two cats used elsewhere in the Transformers docs
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "What are these?"},
        ],
    },
]
```

And as above, you can continue the conversation by appending your reply to the `messages` list. It's okay for some messages to VLMs to be text-only - you don't need to include an image every time!

## Chatting with "reasoning" models

Since late 2024, we have started to see the appearance of "reasoning" models, also known as "chain of thought" models. These models write a step-by-step reasoning process before their final answer.

TODO example and show response parsing
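
As a rough illustration in the meantime (the exact format is model-specific; many open reasoning checkpoints, such as DeepSeek-R1-style models, wrap their reasoning in `<think>...</think>` tags):

```py
import re

# Example raw model output with the chain of thought wrapped in <think> tags (illustrative)
raw_output = "<think>The user wants a short answer, so I should be concise...</think>Paris is the capital of France."

# Separate the reasoning from the final answer; adjust the tags for your model
match = re.match(r"<think>(.*?)</think>(.*)", raw_output, flags=re.DOTALL)
reasoning, answer = (match.group(1).strip(), match.group(2).strip()) if match else ("", raw_output.strip())
print(answer)  # "Paris is the capital of France."
```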
## Performance and memory usage

Transformers loads models in full `float32` precision by default, and for an 8B model, this requires ~32GB of memory! Use the `dtype="auto"` argument, which generally uses `bfloat16` for models that were trained with it, to reduce your memory usage.
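
For example (the checkpoint name is illustrative):

```py
from transformers import AutoModelForCausalLM

# dtype="auto" loads the checkpoint in the dtype it was saved in, often bfloat16
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", dtype="auto", device_map="auto")
```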
> [!TIP]
> Refer to the [Quantization](./quantization/overview) docs for more information about the different quantization backends available.