2 changes: 1 addition & 1 deletion site/lib/src/style_hash.dart
@@ -2,4 +2,4 @@
// dart format off

/// The generated hash of the `main.css` file.
-const generatedStylesHash = '0lnBUTa5o0lF';
+const generatedStylesHash = 'lWAnsjm6RjR2';
12 changes: 5 additions & 7 deletions src/content/ai-toolkit/chat-client-sample.md
@@ -7,13 +7,11 @@ prev:
path: /ai-toolkit/custom-llm-providers
---

-The AI Chat sample is meant to be a full-fledged chat app
-built using the Flutter AI Toolkit and Vertex AI for Firebase.
-In addition to all of the multi-shot, multi-media,
-streaming features that it gets from the AI Toolkit,
-the AI Chat sample shows how to store and manage
-multiple chats at once in your own apps.
-On desktop form-factors, the AI Chat sample looks like the following:
+The AI Chat sample is meant to be a full-fledged chat app built using the
+Flutter AI Toolkit and Vertex AI for Firebase. In addition to all of the
+multi-shot, multi-media, streaming features that it gets from the AI Toolkit,
+the AI Chat sample shows how to store and manage multiple chats at once in your
+own apps. On desktop form-factors, the AI Chat sample looks like the following:

![Desktop app UI](/assets/images/docs/ai-toolkit/desktop-pluto-convo.png)

151 changes: 66 additions & 85 deletions src/content/ai-toolkit/custom-llm-providers.md
@@ -10,8 +10,8 @@ next:
path: /ai-toolkit/chat-client-sample
---

-The protocol connecting an LLM and the `LlmChatView`
-is expressed in the [`LlmProvider` interface][]:
+The protocol connecting an LLM and the `LlmChatView` is expressed in the
+[`LlmProvider` interface][]:

```dart
abstract class LlmProvider implements Listenable {
@@ -22,48 +22,42 @@ abstract class LlmProvider implements Listenable {
}
```
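Because the interface body is folded in the diff above, here's a rough pure-Dart analogue of the protocol, based only on the members this page discusses later (`generateStream`, `sendMessageStream`, and history management). The `SimpleLlmProvider` and `EchoLikeProvider` names and the simplified signatures are illustrative sketches, not the toolkit's actual API:

```dart
import 'dart:async';

// A simplified, illustrative analogue of the LlmProvider protocol.
// The real interface also implements Listenable; this sketch omits it.
abstract class SimpleLlmProvider {
  // One-off generation that doesn't touch chat history.
  Stream<String> generateStream(String prompt);

  // Chat-style generation that appends to history.
  Stream<String> sendMessageStream(String prompt);

  // The chat history so far.
  List<String> get history;
}

// A trivial implementation that echoes the prompt back,
// in the spirit of the toolkit's Echo provider.
class EchoLikeProvider implements SimpleLlmProvider {
  final List<String> _history = [];

  @override
  List<String> get history => List.unmodifiable(_history);

  @override
  Stream<String> generateStream(String prompt) async* {
    yield 'echo: $prompt';
  }

  @override
  Stream<String> sendMessageStream(String prompt) async* {
    _history.add(prompt);
    final response = 'echo: $prompt';
    _history.add(response);
    yield response;
  }
}

Future<void> main() async {
  final provider = EchoLikeProvider();
  final reply = await provider.sendMessageStream('hello').join();
  print(reply); // echo: hello
  print(provider.history.length); // 2
}
```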

-The LLM could be in the cloud or local,
-it could be hosted in the Google Cloud Platform
-or on some other cloud provider,
-it could be a proprietary LLM or open source.
-Any LLM or LLM-like endpoint that can be used
-to implement this interface can be plugged into
-the chat view as an LLM provider. The AI Toolkit
-comes with three providers out of the box,
-all of which implement the `LlmProvider` interface
-that is required to plug the provider into the following:
-
-* The [Gemini provider][],
-which wraps the `google_generative_ai` package
-* The [Vertex provider][],
-which wraps the `firebase_vertexai` package
-* The [Echo provider][],
-which is useful as a minimal provider example
-
-[Echo provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html
-[Gemini provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/GeminiProvider-class.html
-[`LlmProvider` interface]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html
-[Vertex provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/VertexProvider-class.html
+The LLM could be in the cloud or local; it could be hosted on Google Cloud
+Platform or on some other cloud provider; it could be a proprietary LLM or open
+source. Any LLM or LLM-like endpoint that can be used to implement this
+interface can be plugged into the chat view as an LLM provider. The AI Toolkit
+comes with two providers out of the box, both of which implement the
+`LlmProvider` interface:
+
+* The [Firebase provider][], which wraps the `firebase_ai` package
+* The [Echo provider][], which is useful as a minimal provider example
+
+[Echo provider]:
+{{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html
+[Firebase provider]:
+{{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/FirebaseProvider-class.html
+[`LlmProvider` interface]:
+{{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html

## Implementation

-To build your own provider, you need to implement
-the `LlmProvider` interface with these things in mind:
+To build your own provider, you need to implement the `LlmProvider` interface
+with these things in mind:

1. Providing for full configuration support
1. Handling history
1. Translating messages and attachments to the underlying LLM
1. Calling the underlying LLM

-1. Configuration
-To support full configurability in your custom provider,
-you should allow the user to create the underlying model
-and pass that in as a parameter, as the Gemini provider does:
+1. Configuration
+
+To support full configurability in your custom provider, you should allow the
+user to create the underlying model and pass that in as a parameter, as the
+Firebase provider does:

```dart
-class GeminiProvider extends LlmProvider ... {
+class FirebaseProvider extends LlmProvider ... {
  @immutable
-  GeminiProvider({
+  FirebaseProvider({
    required GenerativeModel model,
    ...
  }) : _model = model,
Expand All @@ -74,25 +68,22 @@ class GeminiProvider extends LlmProvider ... {
}
```

-In this way, no matter what changes come
-to the underlying model in the future,
-the configuration knobs will all be available
-to the user of your custom provider.
+In this way, no matter what changes come to the underlying model in the future,
+the configuration knobs will all be available to the user of your custom
+provider.
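The pass-the-model-in pattern described above can be sketched in plain Dart without any SDK dependency. `FakeModel` here is a hypothetical stand-in for an SDK model type such as `GenerativeModel`:

```dart
// Hypothetical stand-in for an SDK model type such as GenerativeModel.
class FakeModel {
  FakeModel({required this.name, this.temperature = 1.0});
  final String name;
  final double temperature;
}

// The provider takes the model as a constructor parameter instead of
// constructing it internally, so every model knob stays available to callers.
class ConfigurableProvider {
  ConfigurableProvider({required FakeModel model}) : _model = model;
  final FakeModel _model;

  String describe() => '${_model.name} (temperature: ${_model.temperature})';
}

void main() {
  // The caller, not the provider, decides how the model is configured.
  final provider = ConfigurableProvider(
    model: FakeModel(name: 'test-model', temperature: 0.2),
  );
  print(provider.describe()); // test-model (temperature: 0.2)
}
```

If the SDK later adds new model options, callers pick them up by configuring the model themselves; the provider's constructor doesn't change.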

-2. History
-History is a big part of any provider—not only
-does the provider need to allow history to be
-manipulated directly, but it has to notify listeners
-as it changes. In addition, to support serialization
-and changing provider parameters, it must also support
-saving history as part of the construction process.
+2. History
+
+History is a big part of any provider: not only does the provider need to allow
+history to be manipulated directly, but it also has to notify listeners as it
+changes. In addition, to support serialization and changing provider
+parameters, it must support taking history as part of the construction process.

-The Gemini provider handles this as shown:
+The Firebase provider handles this as shown:

```dart
-class GeminiProvider extends LlmProvider with ChangeNotifier {
+class FirebaseProvider extends LlmProvider with ChangeNotifier {
  @immutable
-  GeminiProvider({
+  FirebaseProvider({
    required GenerativeModel model,
    Iterable<ChatMessage>? history,
    ...
@@ -143,37 +134,32 @@ class GeminiProvider extends LlmProvider with ChangeNotifier {
```

You'll notice several things in this code:
-* The use of `ChangeNotifier` to implement the `Listenable`
-method requirements from the `LlmProvider` interface
+* The use of `ChangeNotifier` to implement the `Listenable` method requirements
+from the `LlmProvider` interface
* The ability to pass initial history in as a constructor parameter
-* Notifying listeners when there's a new user
-prompt/LLM response pair
+* Notifying listeners when there's a new user prompt/LLM response pair
* Notifying listeners when the history is changed manually
* Creating a new chat when the history changes, using the new history

-Essentially, a custom provider manages the history
-for a single chat session with the underlying LLM.
-As the history changes, the underlying chat either
-needs to be kept up to date automatically
-(as the Gemini AI SDK for Dart does when you call
-the underlying chat-specific methods) or manually recreated
-(as the Gemini provider does whenever the history is set manually).
+Essentially, a custom provider manages the history for a single chat session
+with the underlying LLM. As the history changes, the underlying chat either
+needs to be kept up to date automatically (as the Gemini AI SDK for Dart does
+when you call the underlying chat-specific methods) or manually recreated (as
+the Firebase provider does whenever the history is set manually).
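The notify-and-recreate behavior described above can be sketched without any Flutter dependency. A plain callback list stands in for `ChangeNotifier`, and chat recreation is reduced to a counter; all the names here are illustrative:

```dart
// A minimal sketch of history management with change notification.
// A callback list stands in for Flutter's ChangeNotifier.
class HistoryHoldingProvider {
  HistoryHoldingProvider({Iterable<String>? history})
      : _history = [...?history] {
    _recreateChat();
  }

  final List<String> _history;
  final List<void Function()> _listeners = [];
  int chatRecreations = 0;

  void addListener(void Function() listener) => _listeners.add(listener);

  void _notify() {
    for (final listener in _listeners) {
      listener();
    }
  }

  // Stands in for creating a fresh chat session from the current history.
  void _recreateChat() => chatRecreations++;

  List<String> get history => List.unmodifiable(_history);

  // Setting history manually recreates the chat and notifies listeners.
  set history(List<String> value) {
    _history
      ..clear()
      ..addAll(value);
    _recreateChat();
    _notify();
  }

  // A new prompt/response pair also notifies listeners.
  void addExchange(String prompt, String response) {
    _history
      ..add(prompt)
      ..add(response);
    _notify();
  }
}

void main() {
  var notifications = 0;
  final provider = HistoryHoldingProvider(history: ['hi', 'hello!']);
  provider.addListener(() => notifications++);

  provider.addExchange('how are you?', 'fine, thanks');
  provider.history = []; // manual edit: recreates the chat
  print(notifications); // 2
  print(provider.chatRecreations); // 2 (constructor + manual set)
}
```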

3. Messages and attachments

-Attachments must be mapped from the standard
-`ChatMessage` class exposed by the `LlmProvider`
-type to whatever is handled by the underlying LLM.
-For example, the Gemini provider maps from the
-`ChatMessage` class from the AI Toolkit to the
-`Content` type provided by the Gemini AI SDK for Dart,
-as shown in the following example:
+Attachments must be mapped from the standard `ChatMessage` class exposed by the
+`LlmProvider` type to whatever is handled by the underlying LLM. For example,
+the Firebase provider maps the AI Toolkit's `ChatMessage` class to the
+`Content` type provided by the Gemini AI SDK for Dart, as shown in the
+following example:

```dart
import 'package:google_generative_ai/google_generative_ai.dart';
...

-class GeminiProvider extends LlmProvider with ChangeNotifier {
+class FirebaseProvider extends LlmProvider with ChangeNotifier {
...
static Part _partFrom(Attachment attachment) => switch (attachment) {
(final FileAttachment a) => DataPart(a.mimeType, a.bytes),
@@ -190,22 +176,19 @@ class GeminiProvider extends LlmProvider with ChangeNotifier {
}
```

-The `_contentFrom` method is called whenever a user prompt
-needs to be sent to the underlying LLM.
-Every provider needs to provide for its own mapping.
+The `_contentFrom` method is called whenever a user prompt needs to be sent to
+the underlying LLM. Every provider must supply its own mapping.
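The per-attachment-type dispatch in `_partFrom` can be sketched in plain Dart with a sealed hierarchy and a switch expression. The `Attachment` subclasses and the string "part" representation here are hypothetical stand-ins for the toolkit's attachment types and the SDK's `Part` types:

```dart
import 'dart:typed_data';

// Hypothetical stand-ins for the toolkit's attachment types.
sealed class Attachment {}

class FileAttachment extends Attachment {
  FileAttachment(this.mimeType, this.bytes);
  final String mimeType;
  final Uint8List bytes;
}

class LinkAttachment extends Attachment {
  LinkAttachment(this.url);
  final Uri url;
}

// Each attachment kind maps to a different wire representation,
// mirroring how a provider maps toolkit attachments to SDK Part types.
// The sealed class makes the switch exhaustive at compile time.
String partFrom(Attachment attachment) => switch (attachment) {
      FileAttachment(:final mimeType, :final bytes) =>
        'data($mimeType, ${bytes.length} bytes)',
      LinkAttachment(:final url) => 'link($url)',
    };

void main() {
  final file = FileAttachment('image/png', Uint8List.fromList([1, 2, 3]));
  final link = LinkAttachment(Uri.parse('https://example.com/doc.pdf'));
  print(partFrom(file)); // data(image/png, 3 bytes)
  print(partFrom(link)); // link(https://example.com/doc.pdf)
}
```

Using a sealed hierarchy means that if a new attachment type is added later, the compiler flags every mapping switch that doesn't yet handle it.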

4. Calling the LLM

-How you call the underlying LLM to implement
-`generateStream` and `sendMessageStream` methods
-depends on the protocol it exposes.
-The Gemini provider in the AI Toolkit
-handles configuration and history but calls to
-`generateStream` and `sendMessageStream` each
-end up in a call to an API from the Gemini AI SDK for Dart:
+How you call the underlying LLM to implement the `generateStream` and
+`sendMessageStream` methods depends on the protocol it exposes. The Firebase
+provider in the AI Toolkit handles configuration and history, but calls to
+`generateStream` and `sendMessageStream` each end up in a call to an API from
+the Firebase AI Logic SDK:

```dart
-class GeminiProvider extends LlmProvider with ChangeNotifier {
+class FirebaseProvider extends LlmProvider with ChangeNotifier {
...

@override
```
@@ -275,13 +258,11 @@ class GeminiProvider extends LlmProvider with ChangeNotifier {

## Examples

-The [Gemini provider][] and [Vertex provider][]
-implementations are nearly identical and provide
-a good starting point for your own custom provider.
-If you'd like to see an example provider implementation with
-all of the calls to the underlying LLM stripped away,
-check out the [Echo example app][], which simply formats
-the user's prompt and attachments as Markdown
-to send back to the user as its response.
+The [Firebase provider][] implementation provides a good starting point for
+your own custom provider. If you'd like to see an example provider
+implementation with all of the calls to the underlying LLM stripped away, check
+out the [Echo example app][], which simply formats the user's prompt and
+attachments as Markdown to send back to the user as its response.

-[Echo example app]: {{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart
+[Echo example app]:
+{{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart
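To give a rough idea of what an echo-style provider produces, this pure-Dart sketch formats a prompt and attachment names as Markdown and streams the result back in chunks. The exact formatting here is illustrative, not the Echo example's actual output:

```dart
import 'dart:async';

// Sketch of an echo-style responder: format the user's prompt and
// attachment names as Markdown and stream the result back chunk by chunk.
Stream<String> echoResponse(
  String prompt,
  List<String> attachmentNames,
) async* {
  yield '# Prompt\n\n';
  yield '$prompt\n\n';
  if (attachmentNames.isNotEmpty) {
    yield '# Attachments\n\n';
    for (final name in attachmentNames) {
      yield '* $name\n';
    }
  }
}

Future<void> main() async {
  // Joining the stream yields the complete Markdown response.
  final full = await echoResponse('Hello there!', ['photo.png']).join();
  print(full);
}
```

Because the response is a `Stream<String>`, the chat view can render it incrementally, just as it would a real LLM's streamed tokens.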