From 27c29a32a3831fb28de8649541a5639054d3b067 Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Tue, 25 Nov 2025 23:30:56 -0800 Subject: [PATCH 1/6] Enhance AI Toolkit documentation with function calling, custom speech-to-text features and moving to firebase_ai. --- src/content/ai-toolkit/feature-integration.md | 42 ++++- src/content/ai-toolkit/index.md | 148 ++++++------------ src/content/ai-toolkit/user-experience.md | 11 +- 3 files changed, 97 insertions(+), 104 deletions(-) diff --git a/src/content/ai-toolkit/feature-integration.md b/src/content/ai-toolkit/feature-integration.md index bf7725a18c1..08ff3b5fc67 100644 --- a/src/content/ai-toolkit/feature-integration.md +++ b/src/content/ai-toolkit/feature-integration.md @@ -176,6 +176,22 @@ class _HomePageState extends State { } ``` +## Function calling + +To enable the LLM to perform actions on behalf of the user, +you can provide a set of tools (functions) that the LLM can call. +The `FirebaseProvider` supports function calling out of the box. +It handles the loop of sending the user's prompt, +receiving a function call request from the LLM, +executing the function, and sending the result back to the LLM +until a final text response is generated. + +To use function calling, you need to define your tools +and pass them to the `FirebaseProvider`. +Check out the [function calling example][] for details. + +[function calling example]: {{site.github}}/flutter/ai/tree/main/example/lib/function_calls + ## Disable attachments and audio input If you'd like to disable attachments (the **+** button) or audio input (the mic button), @@ -204,6 +220,25 @@ class ChatPage extends StatelessWidget { Both of these flags default to `true`. +## Custom speech-to-text + +By default, the AI Toolkit uses the `LlmProvider` to pass to the `LlmChatView` to provide the speech-to-text implementation. +If you'd like to provide your own implementation, +for example to use a device-specific service, +you can do so by implementing the `SpeechToText` interface +and passing it to the `LlmChatView` constructor: + +```dart +LlmChatView( + // ... + speechToText: MyCustomSpeechToText(), +) +``` + +Check out the [custom STT example][] for details. + +[custom STT example]: {{site.github}}/flutter/ai/tree/main/example/lib/custom_stt + ## Manage cancel or error behavior By default, when the user cancels an LLM request, the LLM's response will be @@ -569,11 +604,16 @@ uses this feature to implement an app with a Halloween theme: For a complete list of the styles available in the `LlmChatViewStyle` class, check out the [reference documentation][]. +You can also customize the appearance of the voice recorder +using the `voiceNoteRecorderStyle` parameter of the `LlmChatViewStyle` class, +which is demonstrated in the [styles example][styles-ex]. + To see custom styles in action, -in addition to the [custom styles example][custom-ex], +in addition to the [custom styles example][custom-ex] and the [styles example][styles-ex], check out the [dark mode example][] and the [demo app][]. 
[custom-ex]: {{site.github}}/flutter/ai/blob/main/example/lib/custom_styles/custom_styles.dart +[styles-ex]: {{site.github}}/flutter/ai/blob/main/example/lib/styles/styles.dart [dark mode example]: {{site.github}}/flutter/ai/blob/main/example/lib/dark_mode/dark_mode.dart [demo app]: {{site.github}}/flutter/ai#online-demo [reference documentation]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatViewStyle-class.html diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index 3a52c2834b0..8af7abaa45e 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -10,22 +10,14 @@ next: Hello and welcome to the Flutter AI Toolkit! -:::note -These pages are now out of date. They will be -updated soon but, in the meantime, be aware that the -`google_generative_ai` and `vertexai_firebase` packages -are deprecated and replaced with [`package:firebase_ai`][]. -::: - -[`package:firebase_ai`]: {{site.pub-pkg}}/firebase_ai - The AI Toolkit is a set of AI chat-related widgets that make it easy to add an AI chat window to your Flutter app. The AI Toolkit is organized around an abstract LLM provider API to make it easy to swap out the LLM provider that you'd like your chat provider to use. -Out of the box, it comes with support for two LLM provider -integrations: Google Gemini AI and Firebase Vertex AI. +Out of the box, it comes with support for [Firebase Vertex AI][]. + +[Firebase Vertex AI]: https://firebase.google.com/docs/vertex-ai ## Key features @@ -36,6 +28,7 @@ integrations: Google Gemini AI and Firebase Vertex AI. * **Voice input**: Allows users to input prompts using speech. * **Multimedia attachments**: Enables sending and receiving various media types. +* **Function calling**: Supports tool calls to the LLM provider. * **Custom styling**: Offers extensive customization to match your app's design. * **Chat serialization/deserialization**: Store and retrieve conversations @@ -88,75 +81,16 @@ Add the following dependencies to your `pubspec.yaml` file: ```yaml dependencies: flutter_ai_toolkit: ^latest_version - google_generative_ai: ^latest_version # you might choose to use Gemini, - firebase_core: ^latest_version # or Vertex AI or both + firebase_ai: ^latest_version + firebase_core: ^latest_version ``` -
  • Gemini AI configuration - -The toolkit supports both Google Gemini AI and -Firebase Vertex AI as LLM providers. -To use Google Gemini AI, -[obtain an API key][] from Gemini AI Studio. -Be careful not to check this key into your source code -repository to prevent unauthorized access. - -[obtain an API key]: https://aistudio.google.com/app/apikey - -You'll also need to choose a specific Gemini model name -to use in creating an instance of the Gemini model. -The following example uses `gemini-2.0-flash`, -but you can choose from an [ever-expanding set of models][models]. - -[models]: https://ai.google.dev/gemini-api/docs/models/gemini - - -```dart -import 'package:google_generative_ai/google_generative_ai.dart'; -import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart'; - -// ... app stuff here - -class ChatPage extends StatelessWidget { - const ChatPage({super.key}); - - @override - Widget build(BuildContext context) => Scaffold( - appBar: AppBar(title: const Text(App.title)), - body: LlmChatView( - provider: GeminiProvider( - model: GenerativeModel( - model: 'gemini-2.0-flash', - apiKey: 'GEMINI-API-KEY', - ), - ), - ), - ); -} -``` - -The `GenerativeModel` class comes from the -`google_generative_ai` package. -The AI Toolkit builds on top of this package with -the `GeminiProvider`, which plugs Gemini AI into the -`LlmChatView`, the top-level widget that provides an -LLM-based chat conversation with your users. - -For a complete example, check out [`gemini.dart`][] on GitHub. +
  • Configuration -[`gemini.dart`]: {{site.github}}/flutter/ai/blob/main/example/lib/gemini/gemini.dart -
  • Vertex AI configuration - -While Gemini AI is useful for quick prototyping, -the recommended solution for production apps is -Vertex AI in Firebase. This eliminates the need -for an API key in your client app and replaces it -with a more secure Firebase project. -To use Vertex AI in your project, -follow the steps described in the +The AI Toolkit supports both Google Gemini AI (for prototyping) and +Firebase Vertex AI (for production). Both require a Firebase project and +the `firebase_core` package to be initialized, as described in the [Get started with the Gemini API using the Vertex AI in Firebase SDKs][vertex] docs. [vertex]: https://firebase.google.com/docs/vertex-ai/get-started?platform=flutter @@ -168,12 +102,12 @@ as described in the [Add Firebase to your Flutter app][firebase] docs. [firebase]: https://firebase.google.com/docs/flutter/setup After following these instructions, -you're ready to use Firebase Vertex AI in your Flutter app. +you're ready to use Firebase to integrate AI in your Flutter app. Start by initializing Firebase: ```dart import 'package:firebase_core/firebase_core.dart'; -import 'package:firebase_vertexai/firebase_vertexai.dart'; +import 'package:firebase_ai/firebase_ai.dart'; import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart'; // ... other imports @@ -190,20 +124,27 @@ void main() async { ``` With Firebase properly initialized in your Flutter app, -you're now ready to create an instance of the Vertex provider: +you're now ready to create an instance of the Firebase provider. You can do this +in two ways. For prototyping, consider the Gemini AI endpoint: ```dart +import 'package:firebase_ai/firebase_ai.dart'; +import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart'; + +// ... app stuff here + class ChatPage extends StatelessWidget { const ChatPage({super.key}); @override Widget build(BuildContext context) => Scaffold( appBar: AppBar(title: const Text(App.title)), - // create the chat view, passing in the Vertex provider + // create the chat view, passing in the Firebase provider body: LlmChatView( - provider: VertexProvider( - chatModel: FirebaseVertexAI.instance.generativeModel( - model: 'gemini-2.0-flash', + provider: FirebaseProvider( + // Use the Google AI endpoint + model: FirebaseAI.googleAI().generativeModel( + model: 'gemini-2.5-flash', ), ), ), @@ -211,19 +152,39 @@ class ChatPage extends StatelessWidget { } ``` - -The `FirebaseVertexAI` class comes from the -`firebase_vertexai` package. The AI Toolkit -builds the `VertexProvider` class to expose +The `FirebaseProvider` class exposes Vertex AI to the `LlmChatView`. Note that you provide a model name ([you have several options][options] from which to choose), but you do not provide an API key. All of that is handled as part of the Firebase project. -For a complete example, check out [vertex.dart][] on GitHub. +For production workloads, it's easy to swap in the Vertex AI endpoint: + +```dart +class ChatPage extends StatelessWidget { + const ChatPage({super.key}); + + @override + Widget build(BuildContext context) => Scaffold( + appBar: AppBar(title: const Text(App.title)), + body: LlmChatView( + provider: FirebaseProvider( + // Use the Vertex AI endpoint + model: FirebaseAI.vertexAI().generativeModel( + model: 'gemini-2.5-flash', + ), + ), + ), + ); +} +``` + + +For a complete example, check out the [gemini.dart] and [vertex.dart][] examples. 
[options]: https://firebase.google.com/docs/vertex-ai/gemini-models#available-model-names
[gemini.dart]: {{site.github}}/flutter/ai/blob/main/example/lib/gemini/gemini.dart
[vertex.dart]: {{site.github}}/flutter/ai/blob/main/example/lib/vertex/vertex.dart
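Because both endpoints produce the same `GenerativeModel` type, one way to keep
the choice in a single place is to build the model behind a flag. The following
sketch assumes Firebase has already been initialized as shown above; the
`useVertexBackend` constant is a hypothetical switch that you define for your
own app:

```dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

// Hypothetical app-level switch; flip it when moving from prototyping to
// production, or drive it from a --dart-define value at build time.
const useVertexBackend = bool.fromEnvironment('USE_VERTEX');

/// Builds the same Gemini model against either the Google AI endpoint
/// (prototyping) or the Vertex AI endpoint (production).
GenerativeModel createModel() => useVertexBackend
    ? FirebaseAI.vertexAI().generativeModel(model: 'gemini-2.5-flash')
    : FirebaseAI.googleAI().generativeModel(model: 'gemini-2.5-flash');

// The provider wraps whichever model was built, so the rest of the app,
// including the LlmChatView, doesn't need to change.
LlmProvider createProvider() => FirebaseProvider(model: createModel());
```

Because the swap happens where the model is created, switching backends doesn't
require any changes to the widgets that consume the provider.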
  • @@ -280,17 +241,6 @@ and `example/lib/firebase_options.dart` files, both of which are just placeholders. They're needed to enable the example projects in the `example/lib` folder. -**gemini_api_key.dart** - -Most of the example apps rely on a Gemini API key, -so for those to work, you'll need to plug your API key -in the `example/lib/gemini_api_key.dart` file. -You can get an API key in [Gemini AI Studio][]. - -:::note -**Be careful not to check the `gemini_api_key.dart` file into your git repo.** -::: - **firebase_options.dart** To use the [Vertex AI example app][vertex-ex], diff --git a/src/content/ai-toolkit/user-experience.md b/src/content/ai-toolkit/user-experience.md index e1196aa548a..ea5fd528622 100644 --- a/src/content/ai-toolkit/user-experience.md +++ b/src/content/ai-toolkit/user-experience.md @@ -20,8 +20,8 @@ any additional code to use: input or insert new lines into their text as they enter it. * **Voice input**: Allows users to input prompts using speech for ease of use. -* **Multimedia input**: Enables users to take pictures and - send images and other file types. +* **Multimedia input**: Enables users to take pictures, send images and other file types + and attach URLs as link to online resources. * **Image zoom**: Enables users to zoom into image thumbnails. * **Copy to clipboard**: Allows the user to copy the text of a message or a LLM response to the clipboard. @@ -89,7 +89,7 @@ This text can then be edited, augmented and submitted as normal. The chat view can also take images and files as input to pass along to the underlying LLM. The user can press the **Plus** button to the left of the text input and choose from the **Take Photo**, **Image Gallery**, -and **Attach File** icons: +**Attach File** and **Attach Link** icons: ![Screenshot of the 4 icons](/assets/images/docs/ai-toolkit/multi-media-icons.png) @@ -105,7 +105,10 @@ from their device's image gallery: Pressing the **Attach File** button lets the user select a file of any type available on their device, like a PDF or TXT file. -Once a photo, image, or file has been selected, it becomes an attachment and shows up as a thumbnail associated with the currently active prompt: +Pressing the **Attach Link** button lets the user enter a link to a web page or +an online file. + +Once a photo, image, file, or link has been selected, it becomes an attachment and shows up as a thumbnail associated with the currently active prompt: ![Thumbnails of images](/assets/images/docs/ai-toolkit/image-thumbnails.png) From 7eff3caeee2ea13457d7dfe222792c6622b7115a Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Tue, 25 Nov 2025 23:44:09 -0800 Subject: [PATCH 2/6] appled google style guidelines --- src/content/ai-toolkit/feature-integration.md | 2 +- src/content/ai-toolkit/index.md | 6 ++--- src/content/ai-toolkit/user-experience.md | 24 +++++++++---------- 3 files changed, 16 insertions(+), 16 deletions(-) diff --git a/src/content/ai-toolkit/feature-integration.md b/src/content/ai-toolkit/feature-integration.md index 08ff3b5fc67..9dd303422ce 100644 --- a/src/content/ai-toolkit/feature-integration.md +++ b/src/content/ai-toolkit/feature-integration.md @@ -30,7 +30,7 @@ additional functionality: to present LLM responses. * **Custom styling**: Define unique visual styles to match the chat appearance to the overall app. -* **Chat w/o UI**: Interact directly with the LLM providers without +* **Chat without UI**: Interact directly with the LLM providers without affecting the user's current chat session. 
* **Custom LLM providers**: Build your own LLM provider for integration of chat with your own model backend. diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index 8af7abaa45e..5b64081e80f 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -21,7 +21,7 @@ Out of the box, it comes with support for [Firebase Vertex AI][]. ## Key features -* **Multi-turn chat**: Maintains context across multiple interactions. +* **Multiturn chat**: Maintains context across multiple interactions. * **Streaming responses**: Displays AI responses in real-time as they are generated. * **Rich text display**: Supports formatted text in chat messages. @@ -40,7 +40,7 @@ Out of the box, it comes with support for [Firebase Vertex AI][]. * **Cross-platform support**: Compatible with Android, iOS, web, and macOS platforms. -## Online Demo +## Online demo Here's the online demo hosting the AI Toolkit: @@ -255,7 +255,7 @@ in the [Add Firebase to your Flutter app][add-fb] docs file into your git repo.** ::: -## Feedback! +## Feedback Along the way, as you use this package, please [log issues and feature requests][file-issues] as well as diff --git a/src/content/ai-toolkit/user-experience.md b/src/content/ai-toolkit/user-experience.md index ea5fd528622..0fe85139561 100644 --- a/src/content/ai-toolkit/user-experience.md +++ b/src/content/ai-toolkit/user-experience.md @@ -16,7 +16,7 @@ Hosting an instance of the `LlmChatView` enables a number of user experience features that don't require any additional code to use: -* **Multi-line text input**: Allows users to paste long text +* **Multiline text input**: Allows users to paste long text input or insert new lines into their text as they enter it. * **Voice input**: Allows users to input prompts using speech for ease of use. @@ -32,7 +32,7 @@ any additional code to use: [`LlmChatView`]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatView-class.html -## Multi-line text input +## Multiline text input The user has options when it comes to submitting their prompt once they've finished composing it, @@ -72,22 +72,22 @@ In addition to text input the chat view can take an audio recording as input by tapping the Mic button, which is visible when no text has yet been entered. -Tapping the **Mic** button starts the recording: +Tap the **Mic** button to start the recording: ![Screenshot of entering text](/assets/images/docs/ai-toolkit/enter-textfield.png) -Pressing the **Stop** button translates the user's voice input into text: +Select the **Stop** button to translate the user's voice input into text: This text can then be edited, augmented and submitted as normal. ![Screenshot of entered voice](/assets/images/docs/ai-toolkit/enter-voice-into-textfield.png) -## Multi-media Input +## Multimedia input ![Textfield containing "Testing, testing, one, two, three"](/assets/images/docs/ai-toolkit/multi-media-testing-testing.png) The chat view can also take images and files as input to pass along -to the underlying LLM. The user can press the **Plus** button to the +to the underlying LLM. 
The user can select the **Plus** button to the left of the text input and choose from the **Take Photo**, **Image Gallery**, **Attach File** and **Attach Link** icons: @@ -97,15 +97,15 @@ The **Take Photo** button allows the user to use their device's camera to take a ![Selfie image](/assets/images/docs/ai-toolkit/selfie.png) -Pressing the **Image Gallery** button lets the user upload +Select the **Image Gallery** button to let the user upload from their device's image gallery: ![Download image from gallery](/assets/images/docs/ai-toolkit/download-from-gallery.png) -Pressing the **Attach File** button lets the user select +Select the **Attach File** button to let the user select a file of any type available on their device, like a PDF or TXT file. -Pressing the **Attach Link** button lets the user enter a link to a web page or +Select the **Attach Link** button to let the user enter a link to a web page or an online file. Once a photo, image, file, or link has been selected, it becomes an attachment and shows up as a thumbnail associated with the currently active prompt: @@ -121,7 +121,7 @@ The user can zoom into an image thumbnail by tapping it: ![Zoomed image](/assets/images/docs/ai-toolkit/image-zoom.png) -Pressing the **ESC** key or tapping anywhere outside the +Pressing the **Esc** key or tapping anywhere outside the image dismisses the zoomed image. ## Copy to clipboard @@ -135,10 +135,10 @@ copy it to the clipboard as normal: ![Copy to clipboard](/assets/images/docs/ai-toolkit/copy-to-clipboard.png) In addition, at the bottom of each prompt or response, -the user can press the **Copy** button that pops up +the user can select the **Copy** button that pops up when they hover their mouse: -![Press the copy button](/assets/images/docs/ai-toolkit/chatbot-prompt.png) +![Select the copy button](/assets/images/docs/ai-toolkit/chatbot-prompt.png) On mobile platforms, the user can long-tap a prompt or response and choose the Copy option: From 22819bb6e882c3a338bc0920b7dc0d8edffb3cf5 Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Tue, 25 Nov 2025 23:45:50 -0800 Subject: [PATCH 3/6] spacing updates --- site/lib/src/style_hash.dart | 2 +- src/content/ai-toolkit/chat-client-sample.md | 12 +- .../ai-toolkit/custom-llm-providers.md | 141 +++--- src/content/ai-toolkit/feature-integration.md | 403 +++++++++--------- src/content/ai-toolkit/index.md | 169 ++++---- src/content/ai-toolkit/user-experience.md | 184 ++++---- 6 files changed, 432 insertions(+), 479 deletions(-) diff --git a/site/lib/src/style_hash.dart b/site/lib/src/style_hash.dart index bafc0188de9..b4d0aecfda2 100644 --- a/site/lib/src/style_hash.dart +++ b/site/lib/src/style_hash.dart @@ -2,4 +2,4 @@ // dart format off /// The generated hash of the `main.css` file. -const generatedStylesHash = '0lnBUTa5o0lF'; +const generatedStylesHash = 'lWAnsjm6RjR2'; diff --git a/src/content/ai-toolkit/chat-client-sample.md b/src/content/ai-toolkit/chat-client-sample.md index 911d4bb3027..8cad3cb7035 100644 --- a/src/content/ai-toolkit/chat-client-sample.md +++ b/src/content/ai-toolkit/chat-client-sample.md @@ -7,13 +7,11 @@ prev: path: /ai-toolkit/custom-llm-providers --- -The AI Chat sample is meant to be a full-fledged chat app -built using the Flutter AI Toolkit and Vertex AI for Firebase. -In addition to all of the multi-shot, multi-media, -streaming features that it gets from the AI Toolkit, -the AI Chat sample shows how to store and manage -multiple chats at once in your own apps. 
-On desktop form-factors, the AI Chat sample looks like the following: +The AI Chat sample is meant to be a full-fledged chat app built using the +Flutter AI Toolkit and Vertex AI for Firebase. In addition to all of the +multi-shot, multi-media, streaming features that it gets from the AI Toolkit, +the AI Chat sample shows how to store and manage multiple chats at once in your +own apps. On desktop form-factors, the AI Chat sample looks like the following: ![Desktop app UI](/assets/images/docs/ai-toolkit/desktop-pluto-convo.png) diff --git a/src/content/ai-toolkit/custom-llm-providers.md b/src/content/ai-toolkit/custom-llm-providers.md index b64c23c71e8..78cf444e5b0 100644 --- a/src/content/ai-toolkit/custom-llm-providers.md +++ b/src/content/ai-toolkit/custom-llm-providers.md @@ -10,8 +10,8 @@ next: path: /ai-toolkit/chat-client-sample --- -The protocol connecting an LLM and the `LlmChatView` -is expressed in the [`LlmProvider` interface][]: +The protocol connecting an LLM and the `LlmChatView` is expressed in the +[`LlmProvider` interface][]: ```dart abstract class LlmProvider implements Listenable { @@ -22,43 +22,40 @@ abstract class LlmProvider implements Listenable { } ``` -The LLM could be in the cloud or local, -it could be hosted in the Google Cloud Platform -or on some other cloud provider, -it could be a proprietary LLM or open source. -Any LLM or LLM-like endpoint that can be used -to implement this interface can be plugged into -the chat view as an LLM provider. The AI Toolkit -comes with three providers out of the box, -all of which implement the `LlmProvider` interface -that is required to plug the provider into the following: - -* The [Gemini provider][], - which wraps the `google_generative_ai` package -* The [Vertex provider][], - which wraps the `firebase_vertexai` package -* The [Echo provider][], - which is useful as a minimal provider example - -[Echo provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html -[Gemini provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/GeminiProvider-class.html -[`LlmProvider` interface]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html -[Vertex provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/VertexProvider-class.html +The LLM could be in the cloud or local, it could be hosted in the Google Cloud +Platform or on some other cloud provider, it could be a proprietary LLM or open +source. Any LLM or LLM-like endpoint that can be used to implement this +interface can be plugged into the chat view as an LLM provider. 
The AI Toolkit +comes with three providers out of the box, all of which implement the +`LlmProvider` interface that is required to plug the provider into the +following: + +* The [Gemini provider][], which wraps the `google_generative_ai` package +* The [Vertex provider][], which wraps the `firebase_vertexai` package +* The [Echo provider][], which is useful as a minimal provider example + +[Echo provider]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html +[Gemini provider]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/GeminiProvider-class.html +[`LlmProvider` interface]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html +[Vertex provider]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/VertexProvider-class.html ## Implementation -To build your own provider, you need to implement -the `LlmProvider` interface with these things in mind: +To build your own provider, you need to implement the `LlmProvider` interface +with these things in mind: 1. Providing for full configuration support 1. Handling history 1. Translating messages and attachments to the underlying LLM 1. Calling the underlying LLM -1. Configuration - To support full configurability in your custom provider, - you should allow the user to create the underlying model - and pass that in as a parameter, as the Gemini provider does: +1. Configuration To support full configurability in your custom provider, you + should allow the user to create the underlying model and pass that in as a + parameter, as the Gemini provider does: ```dart class GeminiProvider extends LlmProvider ... { @@ -74,18 +71,15 @@ class GeminiProvider extends LlmProvider ... { } ``` -In this way, no matter what changes come -to the underlying model in the future, -the configuration knobs will all be available -to the user of your custom provider. +In this way, no matter what changes come to the underlying model in the future, +the configuration knobs will all be available to the user of your custom +provider. -2. History - History is a big part of any provider—not only - does the provider need to allow history to be - manipulated directly, but it has to notify listeners - as it changes. In addition, to support serialization - and changing provider parameters, it must also support - saving history as part of the construction process. +2. History History is a big part of any provider—not only does the provider need + to allow history to be manipulated directly, but it has to notify listeners as + it changes. In addition, to support serialization and changing provider + parameters, it must also support saving history as part of the construction + process. 
The Gemini provider handles this as shown: @@ -143,31 +137,26 @@ class GeminiProvider extends LlmProvider with ChangeNotifier { ``` You'll notice several things in this code: -* The use of `ChangeNotifier` to implement the `Listenable` - method requirements from the `LlmProvider` interface +* The use of `ChangeNotifier` to implement the `Listenable` method requirements + from the `LlmProvider` interface * The ability to pass initial history in as a constructor parameter -* Notifying listeners when there's a new user - prompt/LLM response pair +* Notifying listeners when there's a new user prompt/LLM response pair * Notifying listeners when the history is changed manually * Creating a new chat when the history changes, using the new history -Essentially, a custom provider manages the history -for a single chat session with the underlying LLM. -As the history changes, the underlying chat either -needs to be kept up to date automatically -(as the Gemini AI SDK for Dart does when you call -the underlying chat-specific methods) or manually recreated -(as the Gemini provider does whenever the history is set manually). +Essentially, a custom provider manages the history for a single chat session +with the underlying LLM. As the history changes, the underlying chat either +needs to be kept up to date automatically (as the Gemini AI SDK for Dart does +when you call the underlying chat-specific methods) or manually recreated (as +the Gemini provider does whenever the history is set manually). 3. Messages and attachments -Attachments must be mapped from the standard -`ChatMessage` class exposed by the `LlmProvider` -type to whatever is handled by the underlying LLM. -For example, the Gemini provider maps from the -`ChatMessage` class from the AI Toolkit to the -`Content` type provided by the Gemini AI SDK for Dart, -as shown in the following example: +Attachments must be mapped from the standard `ChatMessage` class exposed by the +`LlmProvider` type to whatever is handled by the underlying LLM. For example, +the Gemini provider maps from the `ChatMessage` class from the AI Toolkit to the +`Content` type provided by the Gemini AI SDK for Dart, as shown in the following +example: ```dart import 'package:google_generative_ai/google_generative_ai.dart'; @@ -190,19 +179,16 @@ class GeminiProvider extends LlmProvider with ChangeNotifier { } ``` -The `_contentFrom` method is called whenever a user prompt -needs to be sent to the underlying LLM. -Every provider needs to provide for its own mapping. +The `_contentFrom` method is called whenever a user prompt needs to be sent to +the underlying LLM. Every provider needs to provide for its own mapping. 4. Calling the LLM -How you call the underlying LLM to implement -`generateStream` and `sendMessageStream` methods -depends on the protocol it exposes. -The Gemini provider in the AI Toolkit -handles configuration and history but calls to -`generateStream` and `sendMessageStream` each -end up in a call to an API from the Gemini AI SDK for Dart: +How you call the underlying LLM to implement `generateStream` and +`sendMessageStream` methods depends on the protocol it exposes. 
The Gemini +provider in the AI Toolkit handles configuration and history but calls to +`generateStream` and `sendMessageStream` each end up in a call to an API from +the Gemini AI SDK for Dart: ```dart class GeminiProvider extends LlmProvider with ChangeNotifier { @@ -275,13 +261,12 @@ class GeminiProvider extends LlmProvider with ChangeNotifier { ## Examples -The [Gemini provider][] and [Vertex provider][] -implementations are nearly identical and provide -a good starting point for your own custom provider. -If you'd like to see an example provider implementation with -all of the calls to the underlying LLM stripped away, -check out the [Echo example app][], which simply formats -the user's prompt and attachments as Markdown -to send back to the user as its response. +The [Gemini provider][] and [Vertex provider][] implementations are nearly +identical and provide a good starting point for your own custom provider. If +you'd like to see an example provider implementation with all of the calls to +the underlying LLM stripped away, check out the [Echo example app][], which +simply formats the user's prompt and attachments as Markdown to send back to the +user as its response. -[Echo example app]: {{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart +[Echo example app]: + {{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart diff --git a/src/content/ai-toolkit/feature-integration.md b/src/content/ai-toolkit/feature-integration.md index 9dd303422ce..f6d13818e81 100644 --- a/src/content/ai-toolkit/feature-integration.md +++ b/src/content/ai-toolkit/feature-integration.md @@ -10,26 +10,26 @@ next: path: /ai-toolkit/custom-llm-providers --- -In addition to the features that are provided -automatically by the [`LlmChatView`][], -a number of integration points allow your app to -blend seamlessly with other features to provide -additional functionality: +In addition to the features that are provided automatically by the +[`LlmChatView`][], a number of integration points allow your app to blend +seamlessly with other features to provide additional functionality: * **Welcome messages**: Display an initial greeting to users. * **Suggested prompts**: Offer users predefined prompts to guide interactions. -* **System instructions**: Provide the LLM with specific input to influence its responses. +* **System instructions**: Provide the LLM with specific input to influence its + responses. * **Disable attachments and audio input**: Remove optional parts of the chat UI. -* **Manage cancel or error behavior**: Change the user cancellation or LLM error behavior. -* **Manage history**: Every LLM provider allows for managing chat history, - which is useful for clearing it, - changing it dynamically and storing it between sessions. +* **Manage cancel or error behavior**: Change the user cancellation or LLM error + behavior. +* **Manage history**: Every LLM provider allows for managing chat history, which + is useful for clearing it, changing it dynamically and storing it between + sessions. * **Chat serialization/deserialization**: Store and retrieve conversations between app sessions. -* **Custom response widgets**: Introduce specialized UI components - to present LLM responses. -* **Custom styling**: Define unique visual styles to match the chat - appearance to the overall app. +* **Custom response widgets**: Introduce specialized UI components to present + LLM responses. 
+* **Custom styling**: Define unique visual styles to match the chat appearance + to the overall app. * **Chat without UI**: Interact directly with the LLM providers without affecting the user's current chat session. * **Custom LLM providers**: Build your own LLM provider for integration of chat @@ -37,17 +37,19 @@ additional functionality: * **Rerouting prompts**: Debug, log, or reroute messages meant for the provider to track down issues or route prompts dynamically. -[`LlmChatView`]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatView-class.html +[`LlmChatView`]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatView-class.html ## Welcome messages -The chat view allows you to provide a custom welcome message -to set context for the user: +The chat view allows you to provide a custom welcome message to set context for +the user: -![Example welcome message](/assets/images/docs/ai-toolkit/example-of-welcome-message.png) +![Example welcome +message](/assets/images/docs/ai-toolkit/example-of-welcome-message.png) -You can initialize the `LlmChatView` with a welcome message -by setting the `welcomeMessage` parameter: +You can initialize the `LlmChatView` with a welcome message by setting the +`welcomeMessage` parameter: ```dart class ChatPage extends StatelessWidget { @@ -69,22 +71,23 @@ class ChatPage extends StatelessWidget { } ``` -To see a complete example of setting the welcome message, -check out the [welcome example][]. +To see a complete example of setting the welcome message, check out the [welcome +example][]. -[welcome example]: {{site.github}}/flutter/ai/blob/main/example/lib/welcome/welcome.dart +[welcome example]: + {{site.github}}/flutter/ai/blob/main/example/lib/welcome/welcome.dart ## Suggested prompts -You can provide a set of suggested prompts to give -the user some idea of what the chat session has been optimized for: +You can provide a set of suggested prompts to give the user some idea of what +the chat session has been optimized for: -![Example suggested prompts](/assets/images/docs/ai-toolkit/example-of-suggested-prompts.png) +![Example suggested +prompts](/assets/images/docs/ai-toolkit/example-of-suggested-prompts.png) -The suggestions are only shown when there is no existing -chat history. Clicking one copies the text into the -user's prompt editing area. To set the list of suggestions, -construct the `LlmChatView` with the `suggestions` parameter: +The suggestions are only shown when there is no existing chat history. Clicking +one copies the text into the user's prompt editing area. To set the list of +suggestions, construct the `LlmChatView` with the `suggestions` parameter: ```dart class ChatPage extends StatelessWidget { @@ -110,19 +113,18 @@ class ChatPage extends StatelessWidget { } ``` -To see a complete example of setting up suggestions for the user, -take a look at the [suggestions example][]. +To see a complete example of setting up suggestions for the user, take a look at +the [suggestions example][]. -[suggestions example]: {{site.github}}/flutter/ai/blob/main/example/lib/suggestions/suggestions.dart +[suggestions example]: + {{site.github}}/flutter/ai/blob/main/example/lib/suggestions/suggestions.dart ## LLM instructions -To optimize an LLM's responses based on the needs -of your app, you'll want to give it instructions. 
-For example, the [recipes example app][] uses the -`systemInstructions` parameter of the `GenerativeModel` -class to tailor the LLM to focus on delivering recipes -based on the user's instructions: +To optimize an LLM's responses based on the needs of your app, you'll want to +give it instructions. For example, the [recipes example app][] uses the +`systemInstructions` parameter of the `GenerativeModel` class to tailor the LLM +to focus on delivering recipes based on the user's instructions: ```dart class _HomePageState extends State { @@ -149,21 +151,21 @@ You should keep things casual and friendly. You may generate multiple recipes in } ``` -Setting system instructions is unique to each provider; -both the `GeminiProvider` and the `VertexProvider` -allow you to provide them through the `systemInstruction` parameter. +Setting system instructions is unique to each provider; both the +`GeminiProvider` and the `VertexProvider` allow you to provide them through the +`systemInstruction` parameter. -Notice that, in this case, we're bringing in user preferences -as part of the creation of the LLM provider passed to the -`LlmChatView` constructor. We set the instructions as part -of the creation process each time the user changes their preferences. -The recipes app allows the user to change their food preferences +Notice that, in this case, we're bringing in user preferences as part of the +creation of the LLM provider passed to the `LlmChatView` constructor. We set the +instructions as part of the creation process each time the user changes their +preferences. The recipes app allows the user to change their food preferences using a drawer on the scaffold: -![Example of refining prompt](/assets/images/docs/ai-toolkit/setting-food-preferences.png) +![Example of refining +prompt](/assets/images/docs/ai-toolkit/setting-food-preferences.png) -Whenever the user changes their food preferences, -the recipes app creates a new model to use the new preferences: +Whenever the user changes their food preferences, the recipes app creates a new +model to use the new preferences: ```dart class _HomePageState extends State { @@ -178,25 +180,23 @@ class _HomePageState extends State { ## Function calling -To enable the LLM to perform actions on behalf of the user, -you can provide a set of tools (functions) that the LLM can call. -The `FirebaseProvider` supports function calling out of the box. -It handles the loop of sending the user's prompt, -receiving a function call request from the LLM, -executing the function, and sending the result back to the LLM -until a final text response is generated. +To enable the LLM to perform actions on behalf of the user, you can provide a +set of tools (functions) that the LLM can call. The `FirebaseProvider` supports +function calling out of the box. It handles the loop of sending the user's +prompt, receiving a function call request from the LLM, executing the function, +and sending the result back to the LLM until a final text response is generated. -To use function calling, you need to define your tools -and pass them to the `FirebaseProvider`. -Check out the [function calling example][] for details. +To use function calling, you need to define your tools and pass them to the +`FirebaseProvider`. Check out the [function calling example][] for details. 
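As a rough sketch, declaring a tool on the underlying `firebase_ai` model looks
something like the following. The tool name, description, and parameters here
are hypothetical, and the way the matching Dart function is registered with the
provider can vary by toolkit version, so treat the linked example as the source
of truth:

```dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

// A hypothetical tool declaration; the LLM sees the name, description, and
// parameter schema and decides when to request a call.
final currentTemperatureTool = FunctionDeclaration(
  'currentTemperature',
  'Returns the current temperature for a given city.',
  parameters: {
    'city': Schema.string(description: 'The name of the city.'),
  },
);

LlmProvider createProvider() => FirebaseProvider(
  model: FirebaseAI.googleAI().generativeModel(
    model: 'gemini-2.5-flash',
    // Declaring the tool makes it available to the LLM; the provider then
    // drives the call-and-respond loop described above.
    tools: [
      Tool.functionDeclarations([currentTemperatureTool]),
    ],
  ),
);
```

The linked example also shows how to connect each declaration to the Dart code
that actually runs when the LLM requests it.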
-[function calling example]: {{site.github}}/flutter/ai/tree/main/example/lib/function_calls +[function calling example]: + {{site.github}}/flutter/ai/tree/main/example/lib/function_calls ## Disable attachments and audio input -If you'd like to disable attachments (the **+** button) or audio input (the mic button), -you can do so with the `enableAttachments` and `enableVoiceNotes` parameters to -the `LlmChatView` constructor: +If you'd like to disable attachments (the **+** button) or audio input (the mic +button), you can do so with the `enableAttachments` and `enableVoiceNotes` +parameters to the `LlmChatView` constructor: ```dart class ChatPage extends StatelessWidget { @@ -222,11 +222,11 @@ Both of these flags default to `true`. ## Custom speech-to-text -By default, the AI Toolkit uses the `LlmProvider` to pass to the `LlmChatView` to provide the speech-to-text implementation. -If you'd like to provide your own implementation, -for example to use a device-specific service, -you can do so by implementing the `SpeechToText` interface -and passing it to the `LlmChatView` constructor: +By default, the AI Toolkit uses the `LlmProvider` to pass to the `LlmChatView` +to provide the speech-to-text implementation. If you'd like to provide your own +implementation, for example to use a device-specific service, you can do so by +implementing the `SpeechToText` interface and passing it to the `LlmChatView` +constructor: ```dart LlmChatView( @@ -237,15 +237,16 @@ LlmChatView( Check out the [custom STT example][] for details. -[custom STT example]: {{site.github}}/flutter/ai/tree/main/example/lib/custom_stt +[custom STT example]: + {{site.github}}/flutter/ai/tree/main/example/lib/custom_stt ## Manage cancel or error behavior By default, when the user cancels an LLM request, the LLM's response will be appended with the string "CANCEL" and a message will pop up that the user has canceled the request. Likewise, in the event of an LLM error, like a dropped -network connection, the LLM's response will be appended with the -string "ERROR" and an alert dialog will pop up with the details of the error. +network connection, the LLM's response will be appended with the string "ERROR" +and an alert dialog will pop up with the details of the error. You can override the cancel and error behavior with the `cancelMessage`, `errorMessage`, `onCancelCallback` and `onErrorCallback` parameters of the @@ -279,9 +280,9 @@ its defaults for anything you don't override. ## Manage history -The [standard interface that defines all LLM providers][providerIF] -that can plug into the chat view includes the ability to -get and set history for the provider: +The [standard interface that defines all LLM providers][providerIF] that can +plug into the chat view includes the ability to get and set history for the +provider: ```dart abstract class LlmProvider implements Listenable { @@ -300,21 +301,20 @@ abstract class LlmProvider implements Listenable { } ``` -[providerIF]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html +[providerIF]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html -When the history for a provider changes, -it calls the `notifyListener` method exposed by the -`Listenable` base class. This means that you manually -subscribe/unsubscribe with the `add` and `remove` methods -or use it to construct an instance of the `ListenableBuilder` class. 
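For example, a minimal sketch that keeps a small piece of UI in sync with the
chat (assuming a `provider` you've already created) looks like this:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

class HistoryBadge extends StatelessWidget {
  const HistoryBadge({required this.provider, super.key});

  final LlmProvider provider;

  @override
  Widget build(BuildContext context) => ListenableBuilder(
    // The provider is a Listenable, so this rebuilds whenever it notifies
    // its listeners, including whenever the history changes.
    listenable: provider,
    builder: (context, _) => Text('${provider.history.length} messages'),
  );
}
```

The same `Listenable` can also be observed directly, with `addListener` and
`removeListener`, if you need to react outside of the widget tree.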
+When the history for a provider changes, it calls the `notifyListener` method +exposed by the `Listenable` base class. This means that you manually +subscribe/unsubscribe with the `add` and `remove` methods or use it to construct +an instance of the `ListenableBuilder` class. -The `generateStream` method calls into the underlying LLM -without affecting the history. Calling the `sendMessageStream` -method changes the history by adding two new messages to the -provider's history—one for the user message and one for the LLM -response—when the response is completed. The chat view uses -`sendMessageStream` when it processes a user's chat prompt and -`generateStream` when it's processing the user's voice input. +The `generateStream` method calls into the underlying LLM without affecting the +history. Calling the `sendMessageStream` method changes the history by adding +two new messages to the provider's history—one for the user message and one for +the LLM response—when the response is completed. The chat view uses +`sendMessageStream` when it processes a user's chat prompt and `generateStream` +when it's processing the user's voice input. To see or set the history, you can access the `history` property: @@ -322,8 +322,8 @@ To see or set the history, you can access the `history` property: void _clearHistory() => _provider.history = []; ``` -The ability to access a provider's history is also useful -when it comes to recreating a provider while maintaining the history: +The ability to access a provider's history is also useful when it comes to +recreating a provider while maintaining the history: ```dart class _HomePageState extends State { @@ -336,14 +336,10 @@ class _HomePageState extends State { } ``` -The `_createProvider` method -creates a new provider with the history from -the previous provider _and_ the new user -preferences. -It's seamless for the user; they can keep chatting away -but now the LLM gives them responses taking their -new food preferences into account. -For example: +The `_createProvider` method creates a new provider with the history from the +previous provider _and_ the new user preferences. It's seamless for the user; +they can keep chatting away but now the LLM gives them responses taking their +new food preferences into account. For example: ```dart @@ -359,22 +355,20 @@ class _HomePageState extends State { } ``` -To see history in action, -check out the [recipes example app][] and the [history example app][]. +To see history in action, check out the [recipes example app][] and the [history +example app][]. -[history example app]: {{site.github}}/flutter/ai/blob/main/example/lib/history/history.dart +[history example app]: + {{site.github}}/flutter/ai/blob/main/example/lib/history/history.dart [recipes example app]: {{site.github}}/flutter/ai/tree/main/example/lib/recipes ## Chat serialization/deserialization -To save and restore chat history between sessions -of an app requires the ability to serialize and -deserialize each user prompt, including the attachments, -and each LLM response. Both kinds of messages -(the user prompts and LLM responses), -are exposed in the `ChatMessage` class. -Serialization can be accomplished by using the `toJson` -method of each `ChatMessage` instance. +To save and restore chat history between sessions of an app requires the ability +to serialize and deserialize each user prompt, including the attachments, and +each LLM response. Both kinds of messages (the user prompts and LLM responses), +are exposed in the `ChatMessage` class. 
Serialization can be accomplished by +using the `toJson` method of each `ChatMessage` instance. ```dart Future _saveHistory() async { @@ -395,8 +389,8 @@ Future _saveHistory() async { } ``` -Likewise, to deserialize, use the static `fromJson` -method of the `ChatMessage` class: +Likewise, to deserialize, use the static `fromJson` method of the `ChatMessage` +class: ```dart Future _loadHistory() async { @@ -415,34 +409,29 @@ Future _loadHistory() async { } ``` -To ensure fast turnaround when serializing, -we recommend only writing each user message once. -Otherwise, the user must wait for your app to -write every message every time and, -in the face of binary attachments, -that could take a while. +To ensure fast turnaround when serializing, we recommend only writing each user +message once. Otherwise, the user must wait for your app to write every message +every time and, in the face of binary attachments, that could take a while. To see this in action, check out the [history example app][]. -[history example app]: {{site.github}}/flutter/ai/blob/main/example/lib/history/history.dart +[history example app]: + {{site.github}}/flutter/ai/blob/main/example/lib/history/history.dart ## Custom response widgets -By default, the LLM response shown by the chat view is -formatted Markdown. However, in some cases, -you want to create a custom widget to show the -LLM response that's specific to and integrated with your app. -For example, when the user requests a recipe in the -[recipes example app][], the LLM response is used -to create a widget that's specific to showing recipes -just like the rest of the app does and to provide for an -**Add** button in case the user would like to add +By default, the LLM response shown by the chat view is formatted Markdown. +However, in some cases, you want to create a custom widget to show the LLM +response that's specific to and integrated with your app. For example, when the +user requests a recipe in the [recipes example app][], the LLM response is used +to create a widget that's specific to showing recipes just like the rest of the +app does and to provide for an **Add** button in case the user would like to add the recipe to their database: ![Add recipe button](/assets/images/docs/ai-toolkit/add-recipe-button.png) -This is accomplished by setting the `responseBuilder` -parameter of the `LlmChatView` constructor: +This is accomplished by setting the `responseBuilder` parameter of the +`LlmChatView` constructor: ```dart LlmChatView( @@ -454,9 +443,8 @@ LlmChatView( ), ``` -In this particular example, the `RecipeReponseView` -widget is constructed with the LLM provider's response text -and uses that to implement its `build` method: +In this particular example, the `RecipeReponseView` widget is constructed with +the LLM provider's response text and uses that to implement its `build` method: ```dart class RecipeResponseView extends StatelessWidget { @@ -517,17 +505,15 @@ class RecipeResponseView extends StatelessWidget { } ``` -This code parses the text to extract introductory text -and the recipe from the LLM, bundling them together -with an **Add Recipe** button to show in place of the Markdown. - -Notice that we're parsing the LLM response as JSON. -It's common to set the provider into JSON mode and -to provide a schema to restrict the format of its responses -to ensure that we've got something we can parse. 
-Each provider exposes this functionality in its own way, -but both the `GeminiProvider` and `VertexProvider` classes -enable this with a `GenerationConfig` object that the +This code parses the text to extract introductory text and the recipe from the +LLM, bundling them together with an **Add Recipe** button to show in place of +the Markdown. + +Notice that we're parsing the LLM response as JSON. It's common to set the +provider into JSON mode and to provide a schema to restrict the format of its +responses to ensure that we've got something we can parse. Each provider exposes +this functionality in its own way, but both the `GeminiProvider` and +`VertexProvider` classes enable this with a `GenerationConfig` object that the recipes example uses as follows: ```dart @@ -571,24 +557,21 @@ well as any trailing text commentary you care to provide: } ``` -This code initializes the `GenerationConfig` object -by setting the `responseMimeType` parameter to `'application/json'` -and the `responseSchema` parameter to an instance of the -`Schema` class that defines the structure of the JSON -that you're prepared to parse. In addition, -it's good practice to also ask for JSON and to provide -a description of that JSON schema in the system instructions, -which we've done here. +This code initializes the `GenerationConfig` object by setting the +`responseMimeType` parameter to `'application/json'` and the `responseSchema` +parameter to an instance of the `Schema` class that defines the structure of the +JSON that you're prepared to parse. In addition, it's good practice to also ask +for JSON and to provide a description of that JSON schema in the system +instructions, which we've done here. To see this in action, check out the [recipes example app][]. ## Custom styling -The chat view comes out of the box with a set of default styles -for the background, the text field, the buttons, the icons, -the suggestions, and so on. You can fully customize those -styles by setting your own by using the `style` parameter to the -`LlmChatView` constructor: +The chat view comes out of the box with a set of default styles for the +background, the text field, the buttons, the icons, the suggestions, and so on. +You can fully customize those styles by setting your own by using the `style` +parameter to the `LlmChatView` constructor: ```dart LlmChatView( @@ -597,51 +580,53 @@ LlmChatView( ), ``` -For example, the [custom styles example app][custom-ex] -uses this feature to implement an app with a Halloween theme: +For example, the [custom styles example app][custom-ex] uses this feature to +implement an app with a Halloween theme: ![Halloween-themed demo app](/assets/images/docs/ai-toolkit/demo-app.png) -For a complete list of the styles available in the -`LlmChatViewStyle` class, check out the [reference documentation][]. -You can also customize the appearance of the voice recorder -using the `voiceNoteRecorderStyle` parameter of the `LlmChatViewStyle` class, -which is demonstrated in the [styles example][styles-ex]. +For a complete list of the styles available in the `LlmChatViewStyle` class, +check out the [reference documentation][]. You can also customize the appearance +of the voice recorder using the `voiceNoteRecorderStyle` parameter of the +`LlmChatViewStyle` class, which is demonstrated in the [styles +example][styles-ex]. -To see custom styles in action, -in addition to the [custom styles example][custom-ex] and the [styles example][styles-ex], -check out the [dark mode example][] and the [demo app][]. 
+To see custom styles in action, in addition to the [custom styles +example][custom-ex] and the [styles example][styles-ex], check out the [dark +mode example][] and the [demo app][]. -[custom-ex]: {{site.github}}/flutter/ai/blob/main/example/lib/custom_styles/custom_styles.dart +[custom-ex]: + {{site.github}}/flutter/ai/blob/main/example/lib/custom_styles/custom_styles.dart [styles-ex]: {{site.github}}/flutter/ai/blob/main/example/lib/styles/styles.dart -[dark mode example]: {{site.github}}/flutter/ai/blob/main/example/lib/dark_mode/dark_mode.dart +[dark mode example]: + {{site.github}}/flutter/ai/blob/main/example/lib/dark_mode/dark_mode.dart [demo app]: {{site.github}}/flutter/ai#online-demo -[reference documentation]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatViewStyle-class.html +[reference documentation]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatViewStyle-class.html ## Chat without UI -You don't have to use the chat view to access the -functionality of the underlying provider. -In addition to being able to simply call it with -whatever proprietary interface it provides, -you can also use it with the [LlmProvider interface][]. +You don't have to use the chat view to access the functionality of the +underlying provider. In addition to being able to simply call it with whatever +proprietary interface it provides, you can also use it with the [LlmProvider +interface][]. -[LlmProvider interface]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html +[LlmProvider interface]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html -As an example, the recipes example app provides a -Magic button on the page for editing recipes. -The purpose of that button is to update an existing recipe -in your database with your current food preferences. -Pressing the button allows you to preview the recommended changes and -decide whether you'd like to apply them or not: +As an example, the recipes example app provides a Magic button on the page for +editing recipes. The purpose of that button is to update an existing recipe in +your database with your current food preferences. Pressing the button allows you +to preview the recommended changes and decide whether you'd like to apply them +or not: -![User decides whether to update recipe in database](/assets/images/docs/ai-toolkit/apply-changes-decision.png) +![User decides whether to update recipe in +database](/assets/images/docs/ai-toolkit/apply-changes-decision.png) -Instead of using the same provider that the chat portion -of the app uses, which would insert spurious user messages -and LLM responses into the user's chat history, -the Edit Recipe page instead creates its own provider -and uses it directly: +Instead of using the same provider that the chat portion of the app uses, which +would insert spurious user messages and LLM responses into the user's chat +history, the Edit Recipe page instead creates its own provider and uses it +directly: ```dart class _EditRecipePageState extends State { @@ -694,27 +679,27 @@ class _EditRecipePageState extends State { } ``` -The call to `sendMessageStream` creates entries in the -provider's history, but since it's not associated with a chat view, -they won't be shown. If it's convenient, -you can also accomplish the same thing by calling `generateStream`, -which allows you to reuse an existing provider without affecting -the chat history. 
+The call to `sendMessageStream` creates entries in the provider's history, but +since it's not associated with a chat view, they won't be shown. If it's +convenient, you can also accomplish the same thing by calling `generateStream`, +which allows you to reuse an existing provider without affecting the chat +history. -To see this in action, -check out the [Edit Recipe page][] of the recipes example. +To see this in action, check out the [Edit Recipe page][] of the recipes +example. -[Edit Recipe page]: {{site.github}}/flutter/ai/blob/main/example/lib/recipes/pages/edit_recipe_page.dart +[Edit Recipe page]: + {{site.github}}/flutter/ai/blob/main/example/lib/recipes/pages/edit_recipe_page.dart ## Rerouting prompts -If you'd like to debug, log, or manipulate the connection -between the chat view and the underlying provider, -you can do so with an implementation of an [`LlmStreamGenerator`][] function. -You then pass that function to the `LlmChatView` in the -`messageSender` parameter: +If you'd like to debug, log, or manipulate the connection between the chat view +and the underlying provider, you can do so with an implementation of an +[`LlmStreamGenerator`][] function. You then pass that function to the +`LlmChatView` in the `messageSender` parameter: -[`LlmStreamGenerator`]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmStreamGenerator.html +[`LlmStreamGenerator`]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmStreamGenerator.html ```dart class ChatPage extends StatelessWidget { @@ -754,13 +739,13 @@ class ChatPage extends StatelessWidget { } ``` -This example logs the user prompts and LLM responses -as they go back and forth. When providing a function -as a `messageSender`, it's your responsibility to call -the underlying provider. If you don't, it won't get the message. -This capability allows you to do advanced things like routing to -a provider dynamically or Retrieval Augmented Generation (RAG). +This example logs the user prompts and LLM responses as they go back and forth. +When providing a function as a `messageSender`, it's your responsibility to call +the underlying provider. If you don't, it won't get the message. This capability +allows you to do advanced things like routing to a provider dynamically or +Retrieval Augmented Generation (RAG). To see this in action, check out the [logging example app][]. -[logging example app]: {{site.github}}/flutter/ai/blob/main/example/lib/logging/logging.dart +[logging example app]: + {{site.github}}/flutter/ai/blob/main/example/lib/logging/logging.dart diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index 5b64081e80f..182d7aecd90 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -10,63 +10,53 @@ next: Hello and welcome to the Flutter AI Toolkit! -The AI Toolkit is a set of AI chat-related widgets that make -it easy to add an AI chat window to your Flutter app. -The AI Toolkit is organized around an abstract -LLM provider API to make it easy to swap out the -LLM provider that you'd like your chat provider to use. -Out of the box, it comes with support for [Firebase Vertex AI][]. +The AI Toolkit is a set of AI chat-related widgets that make it easy to add an +AI chat window to your Flutter app. The AI Toolkit is organized around an +abstract LLM provider API to make it easy to swap out the LLM provider that +you'd like your chat provider to use. Out of the box, it comes with support for +[Firebase Vertex AI][]. 
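Circling back to the rerouting discussion above, here's a hedged sketch of a
logging `messageSender`. It assumes the `LlmStreamGenerator` signature is the
prompt plus a required `attachments` parameter — check the typedef's reference
documentation before relying on it:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

/// Wraps [provider] in a message sender that logs prompts and response
/// chunks before forwarding them. Forgetting to forward to the provider
/// means it never sees the message, as noted above.
LlmStreamGenerator loggingSenderFor(LlmProvider provider) =>
    (String prompt, {required Iterable<Attachment> attachments}) async* {
      debugPrint('PROMPT: $prompt');
      await for (final chunk
          in provider.sendMessageStream(prompt, attachments: attachments)) {
        debugPrint('CHUNK: $chunk');
        yield chunk;
      }
    };
```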
[Firebase Vertex AI]: https://firebase.google.com/docs/vertex-ai ## Key features * **Multiturn chat**: Maintains context across multiple interactions. -* **Streaming responses**: Displays AI responses in - real-time as they are generated. +* **Streaming responses**: Displays AI responses in real-time as they are + generated. * **Rich text display**: Supports formatted text in chat messages. * **Voice input**: Allows users to input prompts using speech. -* **Multimedia attachments**: Enables sending and - receiving various media types. +* **Multimedia attachments**: Enables sending and receiving various media types. * **Function calling**: Supports tool calls to the LLM provider. -* **Custom styling**: Offers extensive customization to - match your app's design. +* **Custom styling**: Offers extensive customization to match your app's design. * **Chat serialization/deserialization**: Store and retrieve conversations between app sessions. -* **Custom response widgets**: Introduce specialized UI components - to present LLM responses. -* **Pluggable LLM support**: Implement a simple interface to plug - in your own LLM. -* **Cross-platform support**: Compatible with Android, iOS, web, - and macOS platforms. +* **Custom response widgets**: Introduce specialized UI components to present + LLM responses. +* **Pluggable LLM support**: Implement a simple interface to plug in your own + LLM. +* **Cross-platform support**: Compatible with Android, iOS, web, and macOS + platforms. ## Online demo Here's the online demo hosting the AI Toolkit: - -AI demo app - + AI demo app The [source code for this demo][src-code] is available in the repo on GitHub. -Or, you can open it in [Firebase Studio][], -Google's full-stack AI workspace and IDE that runs in the cloud: +Or, you can open it in [Firebase Studio][], Google's full-stack AI workspace and +IDE that runs in the cloud: - - - - + - Try in Firebase Studio - - + srcset="https://cdn.firebasestudio.dev/btn/try_dark_32.svg"> Try in Firebase Studio [src-code]: {{site.github}}/flutter/ai/blob/main/example/lib/demo/demo.dart [Firebase Studio]: https://firebase.studio/ @@ -88,22 +78,22 @@ dependencies:
  • Configuration -The AI Toolkit supports both Google Gemini AI (for prototyping) and -Firebase Vertex AI (for production). Both require a Firebase project and -the `firebase_core` package to be initialized, as described in the -[Get started with the Gemini API using the Vertex AI in Firebase SDKs][vertex] docs. +The AI Toolkit supports both Google Gemini AI (for prototyping) and Firebase +Vertex AI (for production). Both require a Firebase project and the +`firebase_core` package to be initialized, as described in the [Get started with +the Gemini API using the Vertex AI in Firebase SDKs][vertex] docs. -[vertex]: https://firebase.google.com/docs/vertex-ai/get-started?platform=flutter +[vertex]: + https://firebase.google.com/docs/vertex-ai/get-started?platform=flutter -Once that's complete, integrate the new Firebase project -into your Flutter app using the `flutterfire CLI` tool, -as described in the [Add Firebase to your Flutter app][firebase] docs. +Once that's complete, integrate the new Firebase project into your Flutter app +using the `flutterfire CLI` tool, as described in the [Add Firebase to your +Flutter app][firebase] docs. [firebase]: https://firebase.google.com/docs/flutter/setup -After following these instructions, -you're ready to use Firebase to integrate AI in your Flutter app. -Start by initializing Firebase: +After following these instructions, you're ready to use Firebase to integrate AI +in your Flutter app. Start by initializing Firebase: ```dart import 'package:firebase_core/firebase_core.dart'; @@ -123,9 +113,9 @@ void main() async { // ...app stuff here ``` -With Firebase properly initialized in your Flutter app, -you're now ready to create an instance of the Firebase provider. You can do this -in two ways. For prototyping, consider the Gemini AI endpoint: +With Firebase properly initialized in your Flutter app, you're now ready to +create an instance of the Firebase provider. You can do this in two ways. For +prototyping, consider the Gemini AI endpoint: ```dart import 'package:firebase_ai/firebase_ai.dart'; @@ -152,12 +142,10 @@ class ChatPage extends StatelessWidget { } ``` -The `FirebaseProvider` class exposes -Vertex AI to the `LlmChatView`. -Note that you provide a model name -([you have several options][options] from which to choose), -but you do not provide an API key. -All of that is handled as part of the Firebase project. +The `FirebaseProvider` class exposes Vertex AI to the `LlmChatView`. Note that +you provide a model name ([you have several options][options] from which to +choose), but you do not provide an API key. All of that is handled as part of +the Firebase project. For production workloads, it's easy to swap in the Vertex AI endpoint: @@ -181,11 +169,15 @@ class ChatPage extends StatelessWidget { ``` -For a complete example, check out the [gemini.dart] and [vertex.dart][] examples. +For a complete example, check out the [gemini.dart] and [vertex.dart][] +examples. -[options]: https://firebase.google.com/docs/vertex-ai/gemini-models#available-model-names -[gemini.dart]: {{site.github}}/flutter/ai/blob/main/example/lib/gemini/gemini.dart -[vertex.dart]: {{site.github}}/flutter/ai/blob/main/example/lib/vertex/vertex.dart +[options]: + https://firebase.google.com/docs/vertex-ai/gemini-models#available-model-names +[gemini.dart]: + {{site.github}}/flutter/ai/blob/main/example/lib/gemini/gemini.dart +[vertex.dart]: + {{site.github}}/flutter/ai/blob/main/example/lib/vertex/vertex.dart
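One way to keep the prototype-versus-production switch in a single place is to
pick the endpoint when the model is created. The following is a sketch under the
assumption that both endpoints accept the same model name, which is illustrative
here:

```dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

/// Uses the Gemini Developer API endpoint while prototyping and the
/// Vertex AI endpoint in release builds. The model name is illustrative;
/// pick one from the list linked above.
LlmProvider createProvider() => FirebaseProvider(
      model: (kReleaseMode ? FirebaseAI.vertexAI() : FirebaseAI.googleAI())
          .generativeModel(model: 'gemini-2.0-flash'),
    );
```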
  • Set up device permissions @@ -193,9 +185,8 @@ For a complete example, check out the [gemini.dart] and [vertex.dart][] examples To enable your users to take advantage of features like voice input and media attachments, ensure that your app has the necessary permissions: -* **Network access:** - To enable network access on macOS, - add the following to your `*.entitlements` files: +* **Network access:** To enable network access on macOS, add the following to + your `*.entitlements` files: ```xml @@ -207,8 +198,8 @@ attachments, ensure that your app has the necessary permissions: ``` - To enable network access on Android, - ensure that your `AndroidManifest.xml` file contains the following: + To enable network access on Android, ensure that your `AndroidManifest.xml` + file contains the following: ```xml @@ -217,14 +208,14 @@ attachments, ensure that your app has the necessary permissions: ``` -* **Microphone access**: Configure according to the - [record package's permission setup instructions][record]. +* **Microphone access**: Configure according to the [record package's permission + setup instructions][record]. * **File selection**: Follow the [file_selector plugin's instructions][file]. * **Image selection**: To take a picture on _or_ select a picture from their - device, refer to the - [image_picker plugin's installation instructions][image_picker]. -* **Web photo**: To take a picture on the web, configure the app - according to the [camera plugin's setup instructions][camera]. + device, refer to the [image_picker plugin's installation + instructions][image_picker]. +* **Web photo**: To take a picture on the web, configure the app according to + the [camera plugin's setup instructions][camera]. [camera]: {{site.pub-pkg}}/camera#setup [file]: {{site.pub-pkg}}/file_selector#usage @@ -235,34 +226,28 @@ attachments, ensure that your app has the necessary permissions: ## Examples -To execute the [example apps][] in the repo, -you'll need to replace the `example/lib/gemini_api_key.dart` -and `example/lib/firebase_options.dart` files, -both of which are just placeholders. They're needed -to enable the example projects in the `example/lib` folder. +To execute the [example apps][] in the repo, you'll need to replace the +`example/lib/gemini_api_key.dart` and `example/lib/firebase_options.dart` files, +both of which are just placeholders. They're needed to enable the example +projects in the `example/lib` folder. **firebase_options.dart** -To use the [Vertex AI example app][vertex-ex], -place your Firebase configuration details -into the `example/lib/firebase_options.dart` file. -You can do this with the `flutterfire CLI` tool as described -in the [Add Firebase to your Flutter app][add-fb] docs -**from within the `example` directory**. +To use the [Vertex AI example app][vertex-ex], place your Firebase configuration +details into the `example/lib/firebase_options.dart` file. You can do this with +the `flutterfire CLI` tool as described in the [Add Firebase to your Flutter +app][add-fb] docs **from within the `example` directory**. -:::note -**Be careful not to check the `firebase_options.dart` -file into your git repo.** -::: +:::note **Be careful not to check the `firebase_options.dart` file into your git +repo.** ::: ## Feedback -Along the way, as you use this package, -please [log issues and feature requests][file-issues] as well as -submit any [code you'd like to contribute][submit]. 
-We want your feedback and your contributions -to ensure that the AI Toolkit is just as robust and useful -as it can be for your real-world apps. +Along the way, as you use this package, please [log issues and feature +requests][file-issues] as well as submit any [code you'd like to +contribute][submit]. We want your feedback and your contributions to ensure that +the AI Toolkit is just as robust and useful as it can be for your real-world +apps. [add-fb]: https://firebase.google.com/docs/flutter/setup [example apps]: {{site.github}}/flutter/ai/tree/main/example/lib diff --git a/src/content/ai-toolkit/user-experience.md b/src/content/ai-toolkit/user-experience.md index 0fe85139561..ed3af8fa1ca 100644 --- a/src/content/ai-toolkit/user-experience.md +++ b/src/content/ai-toolkit/user-experience.md @@ -10,47 +10,43 @@ next: path: /ai-toolkit/feature-integration --- -The [`LlmChatView`][] widget is the entry point for the -interactive chat experience that AI Toolkit provides. -Hosting an instance of the `LlmChatView` enables a -number of user experience features that don't require -any additional code to use: - -* **Multiline text input**: Allows users to paste long text - input or insert new lines into their text as they enter it. -* **Voice input**: Allows users to input prompts using speech - for ease of use. -* **Multimedia input**: Enables users to take pictures, send images and other file types - and attach URLs as link to online resources. +The [`LlmChatView`][] widget is the entry point for the interactive chat +experience that AI Toolkit provides. Hosting an instance of the `LlmChatView` +enables a number of user experience features that don't require any additional +code to use: + +* **Multiline text input**: Allows users to paste long text input or insert new + lines into their text as they enter it. +* **Voice input**: Allows users to input prompts using speech for ease of use. +* **Multimedia input**: Enables users to take pictures, send images and other + file types and attach URLs as link to online resources. * **Image zoom**: Enables users to zoom into image thumbnails. -* **Copy to clipboard**: Allows the user to copy the text of - a message or a LLM response to the clipboard. -* **Message editing**: Allows the user to edit the most recent - message for resubmission to the LLM. -* **Material and Cupertino**: Adapts to the best practices of - both design languages. +* **Copy to clipboard**: Allows the user to copy the text of a message or a LLM + response to the clipboard. +* **Message editing**: Allows the user to edit the most recent message for + resubmission to the LLM. +* **Material and Cupertino**: Adapts to the best practices of both design + languages. -[`LlmChatView`]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatView-class.html +[`LlmChatView`]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmChatView-class.html ## Multiline text input -The user has options when it comes to submitting -their prompt once they've finished composing it, -which again differs depending on their platform: +The user has options when it comes to submitting their prompt once they've +finished composing it, which again differs depending on their platform: * **Mobile**: Tap the **Submit** button * **Web**: Press **Enter** or tap the **Submit** button * **Desktop**: Press **Enter** or tap the **Submit** button -In addition, the chat view supports text prompts -with embedded newlines in them. 
If the user has existing -text with newlines, they can paste them into the -prompt text field as normal. +In addition, the chat view supports text prompts with embedded newlines in them. +If the user has existing text with newlines, they can paste them into the prompt +text field as normal. -If they'd like to embed newlines into their prompt -manually as they enter it, they can do so. -The gesture for that activity differs based on the -platform they're using: +If they'd like to embed newlines into their prompt manually as they enter it, +they can do so. The gesture for that activity differs based on the platform +they're using: * **Mobile**: Tap Return key on the virtual keyboard * **Web**: Unsupported @@ -60,60 +56,67 @@ These options look like the following: **Desktop**: -![Screenshot of entering text on desktop](/assets/images/docs/ai-toolkit/desktop-enter-text.png) +![Screenshot of entering text on +desktop](/assets/images/docs/ai-toolkit/desktop-enter-text.png) **Mobile**: -![Screenshot of entering text on mobile](/assets/images/docs/ai-toolkit/mobile-enter-text.png) +![Screenshot of entering text on +mobile](/assets/images/docs/ai-toolkit/mobile-enter-text.png) ## Voice input -In addition to text input the chat view can take an -audio recording as input by tapping the Mic button, -which is visible when no text has yet been entered. +In addition to text input the chat view can take an audio recording as input by +tapping the Mic button, which is visible when no text has yet been entered. Tap the **Mic** button to start the recording: -![Screenshot of entering text](/assets/images/docs/ai-toolkit/enter-textfield.png) +![Screenshot of entering +text](/assets/images/docs/ai-toolkit/enter-textfield.png) Select the **Stop** button to translate the user's voice input into text: This text can then be edited, augmented and submitted as normal. -![Screenshot of entered voice](/assets/images/docs/ai-toolkit/enter-voice-into-textfield.png) +![Screenshot of entered +voice](/assets/images/docs/ai-toolkit/enter-voice-into-textfield.png) ## Multimedia input -![Textfield containing "Testing, testing, one, two, three"](/assets/images/docs/ai-toolkit/multi-media-testing-testing.png) +![Textfield containing "Testing, testing, one, two, +three"](/assets/images/docs/ai-toolkit/multi-media-testing-testing.png) -The chat view can also take images and files as input to pass along -to the underlying LLM. The user can select the **Plus** button to the -left of the text input and choose from the **Take Photo**, **Image Gallery**, -**Attach File** and **Attach Link** icons: +The chat view can also take images and files as input to pass along to the +underlying LLM. 
The user can select the **Plus** button to the left of the text +input and choose from the **Take Photo**, **Image Gallery**, **Attach File** and +**Attach Link** icons: -![Screenshot of the 4 icons](/assets/images/docs/ai-toolkit/multi-media-icons.png) +![Screenshot of the 4 +icons](/assets/images/docs/ai-toolkit/multi-media-icons.png) -The **Take Photo** button allows the user to use their device's camera to take a photo: +The **Take Photo** button allows the user to use their device's camera to take a +photo: ![Selfie image](/assets/images/docs/ai-toolkit/selfie.png) -Select the **Image Gallery** button to let the user upload -from their device's image gallery: +Select the **Image Gallery** button to let the user upload from their device's +image gallery: -![Download image from gallery](/assets/images/docs/ai-toolkit/download-from-gallery.png) +![Download image from +gallery](/assets/images/docs/ai-toolkit/download-from-gallery.png) -Select the **Attach File** button to let the user select -a file of any type available on their device, like a PDF or TXT file. +Select the **Attach File** button to let the user select a file of any type +available on their device, like a PDF or TXT file. Select the **Attach Link** button to let the user enter a link to a web page or an online file. -Once a photo, image, file, or link has been selected, it becomes an attachment and shows up as a thumbnail associated with the currently active prompt: +Once a photo, image, file, or link has been selected, it becomes an attachment +and shows up as a thumbnail associated with the currently active prompt: ![Thumbnails of images](/assets/images/docs/ai-toolkit/image-thumbnails.png) -The user can remove an attachment by clicking the -**X** button on the thumbnail. +The user can remove an attachment by clicking the **X** button on the thumbnail. ## Image zoom @@ -121,75 +124,72 @@ The user can zoom into an image thumbnail by tapping it: ![Zoomed image](/assets/images/docs/ai-toolkit/image-zoom.png) -Pressing the **Esc** key or tapping anywhere outside the -image dismisses the zoomed image. +Pressing the **Esc** key or tapping anywhere outside the image dismisses the +zoomed image. ## Copy to clipboard -The user can copy any text prompt or LLM response -in their current chat in a variety of ways. -On the desktop or the web, the user can mouse -to select the text on their screen and -copy it to the clipboard as normal: +The user can copy any text prompt or LLM response in their current chat in a +variety of ways. 
On the desktop or the web, the user can mouse to select the +text on their screen and copy it to the clipboard as normal: ![Copy to clipboard](/assets/images/docs/ai-toolkit/copy-to-clipboard.png) -In addition, at the bottom of each prompt or response, -the user can select the **Copy** button that pops up -when they hover their mouse: +In addition, at the bottom of each prompt or response, the user can select the +**Copy** button that pops up when they hover their mouse: ![Select the copy button](/assets/images/docs/ai-toolkit/chatbot-prompt.png) -On mobile platforms, the user can long-tap a prompt or response and choose the Copy option: +On mobile platforms, the user can long-tap a prompt or response and choose the +Copy option: -![Long tap to see the copy button](/assets/images/docs/ai-toolkit/long-tap-choose-copy.png) +![Long tap to see the copy +button](/assets/images/docs/ai-toolkit/long-tap-choose-copy.png) ## Message editing -If the user would like to edit their last prompt -and cause the LLM to take another run at it, -they can do so. On the desktop, -the user can tap the **Edit** button alongside the -**Copy** button for their most recent prompt: +If the user would like to edit their last prompt and cause the LLM to take +another run at it, they can do so. On the desktop, the user can tap the **Edit** +button alongside the **Copy** button for their most recent prompt: ![How to edit prompt](/assets/images/docs/ai-toolkit/how-to-edit-prompt.png) -On a mobile device, the user can long-tap and get access -to the **Edit** option on their most recent prompt: +On a mobile device, the user can long-tap and get access to the **Edit** option +on their most recent prompt: -![How to access edit menu](/assets/images/docs/ai-toolkit/accessing-edit-menu.png) +![How to access edit +menu](/assets/images/docs/ai-toolkit/accessing-edit-menu.png) -Once the user taps the **Edit** button, they enter Editing mode, -which removes both the user's last prompt and the LLM's -last response from the chat history, -puts the text of the prompt into the text field, and -provides an Editing indicator: +Once the user taps the **Edit** button, they enter Editing mode, which removes +both the user's last prompt and the LLM's last response from the chat history, +puts the text of the prompt into the text field, and provides an Editing +indicator: -![How to exit editing mode](/assets/images/docs/ai-toolkit/how-to-exit-editing-mode.png) +![How to exit editing +mode](/assets/images/docs/ai-toolkit/how-to-exit-editing-mode.png) -In Editing mode, the user can edit the prompt as they choose -and submit it to have the LLM produce a response as normal. -Or, if they change their mind, they can tap the **X** -near the Editing indicator to cancel their edit and restore +In Editing mode, the user can edit the prompt as they choose and submit it to +have the LLM produce a response as normal. Or, if they change their mind, they +can tap the **X** near the Editing indicator to cancel their edit and restore their previous LLM response. ## Material and Cupertino -When the `LlmChatView` widget is hosted in a [Material app][], -it uses facilities provided by the Material design language, -such as Material's [`TextField`][]. -Likewise, when hosted in a [Cupertino app][], -it uses those facilities, such as [`CupertinoTextField`][]. +When the `LlmChatView` widget is hosted in a [Material app][], it uses +facilities provided by the Material design language, such as Material's +[`TextField`][]. 
Likewise, when hosted in a [Cupertino app][], it uses those +facilities, such as [`CupertinoTextField`][]. ![Cupertino example app](/assets/images/docs/ai-toolkit/cupertino-chat-app.png) -However, while the chat view supports both the Material and -Cupertino app types, it doesn't automatically adopt the associated themes. -Instead, that's set by the `style` property of the `LlmChatView` -as described in the [Custom styling][] documentation. +However, while the chat view supports both the Material and Cupertino app types, +it doesn't automatically adopt the associated themes. Instead, that's set by the +`style` property of the `LlmChatView` as described in the [Custom styling][] +documentation. [Cupertino app]: {{site.api}}/flutter/cupertino/CupertinoApp-class.html -[`CupertinoTextField`]: {{site.api}}/flutter/cupertino/CupertinoTextField-class.html +[`CupertinoTextField`]: + {{site.api}}/flutter/cupertino/CupertinoTextField-class.html [Custom styling]: /ai-toolkit/feature-integration#custom-styling [Material app]: {{site.api}}/flutter/material/MaterialApp-class.html [`TextField`]: {{site.api}}/flutter/material/TextField-class.html From 99f8700e64e48f16c27e73672107d31edae16036 Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Wed, 26 Nov 2025 00:06:08 -0800 Subject: [PATCH 4/6] firebase_ai updates --- .../ai-toolkit/custom-llm-providers.md | 46 +++++++++---------- src/content/ai-toolkit/feature-integration.md | 6 +-- src/content/ai-toolkit/index.md | 2 + src/content/ai-toolkit/user-experience.md | 6 +-- 4 files changed, 29 insertions(+), 31 deletions(-) diff --git a/src/content/ai-toolkit/custom-llm-providers.md b/src/content/ai-toolkit/custom-llm-providers.md index 78cf444e5b0..1dad31f6d7d 100644 --- a/src/content/ai-toolkit/custom-llm-providers.md +++ b/src/content/ai-toolkit/custom-llm-providers.md @@ -26,22 +26,19 @@ The LLM could be in the cloud or local, it could be hosted in the Google Cloud Platform or on some other cloud provider, it could be a proprietary LLM or open source. Any LLM or LLM-like endpoint that can be used to implement this interface can be plugged into the chat view as an LLM provider. The AI Toolkit -comes with three providers out of the box, all of which implement the +comes with two providers out of the box, both of which implement the `LlmProvider` interface that is required to plug the provider into the following: -* The [Gemini provider][], which wraps the `google_generative_ai` package -* The [Vertex provider][], which wraps the `firebase_vertexai` package +* The [Firebase provider][], which wraps the `firebase_ai` package * The [Echo provider][], which is useful as a minimal provider example [Echo provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html -[Gemini provider]: - {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/GeminiProvider-class.html +[Firebase provider]: + {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/FirebaseProvider-class.html [`LlmProvider` interface]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html -[Vertex provider]: - {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/VertexProvider-class.html ## Implementation @@ -55,12 +52,12 @@ with these things in mind: 1. 
Configuration To support full configurability in your custom provider, you should allow the user to create the underlying model and pass that in as a - parameter, as the Gemini provider does: + parameter, as the Firebase provider does: ```dart -class GeminiProvider extends LlmProvider ... { +class FirebaseProvider extends LlmProvider ... { @immutable - GeminiProvider({ + FirebaseProvider({ required GenerativeModel model, ... }) : _model = model, @@ -81,12 +78,12 @@ provider. parameters, it must also support saving history as part of the construction process. - The Gemini provider handles this as shown: + The Firebase provider handles this as shown: ```dart -class GeminiProvider extends LlmProvider with ChangeNotifier { +class FirebaseProvider extends LlmProvider with ChangeNotifier { @immutable - GeminiProvider({ + FirebaseProvider({ required GenerativeModel model, Iterable? history, ... @@ -148,13 +145,13 @@ Essentially, a custom provider manages the history for a single chat session with the underlying LLM. As the history changes, the underlying chat either needs to be kept up to date automatically (as the Gemini AI SDK for Dart does when you call the underlying chat-specific methods) or manually recreated (as -the Gemini provider does whenever the history is set manually). +the Firebase provider does whenever the history is set manually). 3. Messages and attachments Attachments must be mapped from the standard `ChatMessage` class exposed by the `LlmProvider` type to whatever is handled by the underlying LLM. For example, -the Gemini provider maps from the `ChatMessage` class from the AI Toolkit to the +the Firebase provider maps from the `ChatMessage` class from the AI Toolkit to the `Content` type provided by the Gemini AI SDK for Dart, as shown in the following example: @@ -162,7 +159,7 @@ example: import 'package:google_generative_ai/google_generative_ai.dart'; ... -class GeminiProvider extends LlmProvider with ChangeNotifier { +class FirebaseProvider extends LlmProvider with ChangeNotifier { ... static Part _partFrom(Attachment attachment) => switch (attachment) { (final FileAttachment a) => DataPart(a.mimeType, a.bytes), @@ -185,13 +182,13 @@ the underlying LLM. Every provider needs to provide for its own mapping. 4. Calling the LLM How you call the underlying LLM to implement `generateStream` and -`sendMessageStream` methods depends on the protocol it exposes. The Gemini +`sendMessageStream` methods depends on the protocol it exposes. The Firebase provider in the AI Toolkit handles configuration and history but calls to `generateStream` and `sendMessageStream` each end up in a call to an API from -the Gemini AI SDK for Dart: +the Firebase AI Logic SDK: ```dart -class GeminiProvider extends LlmProvider with ChangeNotifier { +class FirebaseProvider extends LlmProvider with ChangeNotifier { ... @override @@ -261,12 +258,11 @@ class GeminiProvider extends LlmProvider with ChangeNotifier { ## Examples -The [Gemini provider][] and [Vertex provider][] implementations are nearly -identical and provide a good starting point for your own custom provider. If -you'd like to see an example provider implementation with all of the calls to -the underlying LLM stripped away, check out the [Echo example app][], which -simply formats the user's prompt and attachments as Markdown to send back to the -user as its response. +The [Firebase provider][] implementation provides a good starting point for your +own custom provider. 
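To make the moving parts above concrete before looking at the real
implementations, here's a deliberately tiny, hypothetical provider that satisfies
the interface without calling any LLM at all. The `ChatMessage.user`,
`ChatMessage.llm`, and `append` helpers are assumptions modeled on the toolkit's
own providers — the Echo provider linked below is the authoritative version:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

/// A toy provider that parrots the prompt back, showing the pieces a real
/// provider needs: history storage, change notification, and the two
/// streaming methods.
class ParrotProvider extends LlmProvider with ChangeNotifier {
  final _history = <ChatMessage>[];

  @override
  Stream<String> generateStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    // Stateless generation: nothing is recorded in the history.
    yield 'You said: $prompt';
  }

  @override
  Stream<String> sendMessageStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    // Assumed ChatMessage helpers; see the Echo provider for the real API.
    final llmMessage = ChatMessage.llm();
    _history
      ..add(ChatMessage.user(prompt, attachments))
      ..add(llmMessage);

    for (final chunk in ['You ', 'said: ', prompt]) {
      llmMessage.append(chunk);
      yield chunk;
    }

    notifyListeners();
  }

  @override
  Iterable<ChatMessage> get history => _history;

  @override
  set history(Iterable<ChatMessage> history) {
    _history
      ..clear()
      ..addAll(history);
    notifyListeners();
  }
}
```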
If you'd like to see an example provider implementation +with all of the calls to the underlying LLM stripped away, check out the [Echo +example app][], which simply formats the user's prompt and attachments as +Markdown to send back to the user as its response. [Echo example app]: {{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart diff --git a/src/content/ai-toolkit/feature-integration.md b/src/content/ai-toolkit/feature-integration.md index f6d13818e81..8909cac628a 100644 --- a/src/content/ai-toolkit/feature-integration.md +++ b/src/content/ai-toolkit/feature-integration.md @@ -86,8 +86,8 @@ the chat session has been optimized for: prompts](/assets/images/docs/ai-toolkit/example-of-suggested-prompts.png) The suggestions are only shown when there is no existing chat history. Clicking -one copies the text into the user's prompt editing area. To set the list of -suggestions, construct the `LlmChatView` with the `suggestions` parameter: +one sends the prompt to the LLM. To set the list of suggestions, construct the +`LlmChatView` with the `suggestions` parameter: ```dart class ChatPage extends StatelessWidget { @@ -190,7 +190,7 @@ To use function calling, you need to define your tools and pass them to the `FirebaseProvider`. Check out the [function calling example][] for details. [function calling example]: - {{site.github}}/flutter/ai/tree/main/example/lib/function_calls + {{site.github}}/flutter/ai/tree/main/example/lib/function_calls/function_calls.dart ## Disable attachments and audio input diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index 182d7aecd90..7a795b78188 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -238,9 +238,11 @@ details into the `example/lib/firebase_options.dart` file. You can do this with the `flutterfire CLI` tool as described in the [Add Firebase to your Flutter app][add-fb] docs **from within the `example` directory**. + :::note **Be careful not to check the `firebase_options.dart` file into your git repo.** ::: + ## Feedback Along the way, as you use this package, please [log issues and feature diff --git a/src/content/ai-toolkit/user-experience.md b/src/content/ai-toolkit/user-experience.md index ed3af8fa1ca..763f69d52c6 100644 --- a/src/content/ai-toolkit/user-experience.md +++ b/src/content/ai-toolkit/user-experience.md @@ -49,12 +49,12 @@ they can do so. 
The gesture for that activity differs based on the platform they're using: * **Mobile**: Tap Return key on the virtual keyboard -* **Web**: Unsupported -* **Desktop**: Press `Ctrl+Enter` or `Opt/Alt+Enter` +* **Web**: Press `Shift+Enter` +* **Desktop**: Press `Shift+Enter` These options look like the following: -**Desktop**: +**Web and Desktop**: ![Screenshot of entering text on desktop](/assets/images/docs/ai-toolkit/desktop-enter-text.png) From fd010d549c9023cd6f6cf4c7dd47d4f278411020 Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Tue, 25 Nov 2025 23:45:50 -0800 Subject: [PATCH 5/6] spacing updates --- src/content/ai-toolkit/chat-client-sample.md | 12 ++- .../ai-toolkit/custom-llm-providers.md | 96 +++++++++++-------- src/content/ai-toolkit/feature-integration.md | 10 +- src/content/ai-toolkit/index.md | 55 ++++++----- 4 files changed, 96 insertions(+), 77 deletions(-) diff --git a/src/content/ai-toolkit/chat-client-sample.md b/src/content/ai-toolkit/chat-client-sample.md index 5720f2e171b..f537a8e2b8d 100644 --- a/src/content/ai-toolkit/chat-client-sample.md +++ b/src/content/ai-toolkit/chat-client-sample.md @@ -7,11 +7,13 @@ prev: path: /ai-toolkit/custom-llm-providers --- -The AI Chat sample is meant to be a full-fledged chat app built using the -Flutter AI Toolkit and Firebase AI for Firebase. In addition to all of the -multi-shot, multi-media, streaming features that it gets from the AI Toolkit, -the AI Chat sample shows how to store and manage multiple chats at once in your -own apps. On desktop form-factors, the AI Chat sample looks like the following: +The AI Chat sample is meant to be a full-fledged chat app +built using the Flutter AI Toolkit and the Firebase AI Logic SDK. +In addition to all of the multi-shot, multi-media, +streaming features that it gets from the AI Toolkit, +the AI Chat sample shows how to store and manage +multiple chats at once in your own apps. +On desktop form-factors, the AI Chat sample looks like the following: ![Desktop app UI](/assets/images/docs/ai-toolkit/desktop-pluto-convo.png) diff --git a/src/content/ai-toolkit/custom-llm-providers.md b/src/content/ai-toolkit/custom-llm-providers.md index 1dad31f6d7d..45223e70bf1 100644 --- a/src/content/ai-toolkit/custom-llm-providers.md +++ b/src/content/ai-toolkit/custom-llm-providers.md @@ -22,23 +22,25 @@ abstract class LlmProvider implements Listenable { } ``` -The LLM could be in the cloud or local, it could be hosted in the Google Cloud -Platform or on some other cloud provider, it could be a proprietary LLM or open -source. Any LLM or LLM-like endpoint that can be used to implement this -interface can be plugged into the chat view as an LLM provider. The AI Toolkit -comes with two providers out of the box, both of which implement the -`LlmProvider` interface that is required to plug the provider into the -following: - -* The [Firebase provider][], which wraps the `firebase_ai` package -* The [Echo provider][], which is useful as a minimal provider example - -[Echo provider]: - {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html -[Firebase provider]: - {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/FirebaseProvider-class.html -[`LlmProvider` interface]: - {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html +The LLM could be in the cloud or local, +it could be hosted in the Google Cloud Platform +or on some other cloud provider, +it could be a proprietary LLM or open source. 
+Any LLM or LLM-like endpoint that can be used +to implement this interface can be plugged into +the chat view as an LLM provider. The AI Toolkit +comes with two providers out of the box, +both of which implement the `LlmProvider` interface +that is required to plug the provider into the following: + +* The [Firebase AI Logic provider][], + which wraps the `firebase_ai` package +* The [Echo provider][], + which is useful as a minimal provider example + +[Echo provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/EchoProvider-class.html +[`LlmProvider` interface]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/LlmProvider-class.html +[Firebase AI Logic provider]: {{site.pub-api}}/flutter_ai_toolkit/latest/flutter_ai_toolkit/FirebaseProvider-class.html ## Implementation @@ -50,9 +52,10 @@ with these things in mind: 1. Translating messages and attachments to the underlying LLM 1. Calling the underlying LLM -1. Configuration To support full configurability in your custom provider, you - should allow the user to create the underlying model and pass that in as a - parameter, as the Firebase provider does: +1. Configuration + To support full configurability in your custom provider, + you should allow the user to create the underlying model + and pass that in as a parameter, as the Firebase provider does: ```dart class FirebaseProvider extends LlmProvider ... { @@ -141,19 +144,23 @@ You'll notice several things in this code: * Notifying listeners when the history is changed manually * Creating a new chat when the history changes, using the new history -Essentially, a custom provider manages the history for a single chat session -with the underlying LLM. As the history changes, the underlying chat either -needs to be kept up to date automatically (as the Gemini AI SDK for Dart does -when you call the underlying chat-specific methods) or manually recreated (as -the Firebase provider does whenever the history is set manually). +Essentially, a custom provider manages the history +for a single chat session with the underlying LLM. +As the history changes, the underlying chat either +needs to be kept up to date automatically +(as the Firebase provider does when you call +the underlying chat-specific methods) or manually recreated +(as the Firebase provider does whenever the history is set manually). -3. Messages and attachments +1. Messages and attachments -Attachments must be mapped from the standard `ChatMessage` class exposed by the -`LlmProvider` type to whatever is handled by the underlying LLM. For example, -the Firebase provider maps from the `ChatMessage` class from the AI Toolkit to the -`Content` type provided by the Gemini AI SDK for Dart, as shown in the following -example: +Attachments must be mapped from the standard +`ChatMessage` class exposed by the `LlmProvider` +type to whatever is handled by the underlying LLM. +For example, the Firebase provider maps from the +`ChatMessage` class from the AI Toolkit to the +`Content` type provided by the Firebase Logic AI SDK, +as shown in the following example: ```dart import 'package:google_generative_ai/google_generative_ai.dart'; @@ -179,13 +186,15 @@ class FirebaseProvider extends LlmProvider with ChangeNotifier { The `_contentFrom` method is called whenever a user prompt needs to be sent to the underlying LLM. Every provider needs to provide for its own mapping. -4. Calling the LLM +1. 
Calling the LLM -How you call the underlying LLM to implement `generateStream` and -`sendMessageStream` methods depends on the protocol it exposes. The Firebase -provider in the AI Toolkit handles configuration and history but calls to -`generateStream` and `sendMessageStream` each end up in a call to an API from -the Firebase AI Logic SDK: +How you call the underlying LLM to implement +`generateStream` and `sendMessageStream` methods +depends on the protocol it exposes. +The Firebase provider in the AI Toolkit +handles configuration and history but calls to +`generateStream` and `sendMessageStream` each +end up in a call to an API from the Firebase Logic AI SDK: ```dart class FirebaseProvider extends LlmProvider with ChangeNotifier { @@ -258,11 +267,14 @@ class FirebaseProvider extends LlmProvider with ChangeNotifier { ## Examples -The [Firebase provider][] implementation provides a good starting point for your -own custom provider. If you'd like to see an example provider implementation -with all of the calls to the underlying LLM stripped away, check out the [Echo -example app][], which simply formats the user's prompt and attachments as -Markdown to send back to the user as its response. +The [Firebase provider][] +implementation provides +a good starting point for your own custom provider. +If you'd like to see an example provider implementation with +all of the calls to the underlying LLM stripped away, +check out the [Echo example app][], which simply formats +the user's prompt and attachments as Markdown +to send back to the user as its response. [Echo example app]: {{site.github}}/flutter/ai/blob/main/lib/src/providers/implementations/echo_provider.dart diff --git a/src/content/ai-toolkit/feature-integration.md b/src/content/ai-toolkit/feature-integration.md index 8909cac628a..bdc4c0d3a57 100644 --- a/src/content/ai-toolkit/feature-integration.md +++ b/src/content/ai-toolkit/feature-integration.md @@ -85,9 +85,10 @@ the chat session has been optimized for: ![Example suggested prompts](/assets/images/docs/ai-toolkit/example-of-suggested-prompts.png) -The suggestions are only shown when there is no existing chat history. Clicking -one sends the prompt to the LLM. To set the list of suggestions, construct the -`LlmChatView` with the `suggestions` parameter: +The suggestions are only shown when there is no existing +chat history. Clicking one sends it immediately as a request to the underlying LLM. + To set the list of suggestions, +construct the `LlmChatView` with the `suggestions` parameter: ```dart class ChatPage extends StatelessWidget { @@ -189,8 +190,7 @@ and sending the result back to the LLM until a final text response is generated. To use function calling, you need to define your tools and pass them to the `FirebaseProvider`. Check out the [function calling example][] for details. -[function calling example]: - {{site.github}}/flutter/ai/tree/main/example/lib/function_calls/function_calls.dart +[function calling example]: {{site.github}}/flutter/ai/blob/main/example/lib/function_calls/function_calls.dart ## Disable attachments and audio input diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index c5ec1ef8852..cbae5cb1033 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -10,13 +10,14 @@ next: Hello and welcome to the Flutter AI Toolkit! -The AI Toolkit is a set of AI chat-related widgets that make it easy to add an -AI chat window to your Flutter app. 
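Before moving on, here's one last hedged sketch of what the
`generateStream`/`sendMessageStream` plumbing described above boils down to when
the underlying model is a `firebase_ai` `GenerativeModel`. The shape is an
assumption, not the toolkit's exact code:

```dart
import 'package:firebase_ai/firebase_ai.dart';

/// Streams text chunks from [model] for a single prompt. The attachment
/// parts are produced by a mapping like the `_partFrom` helper shown earlier.
Stream<String> streamResponse(
  GenerativeModel model,
  String prompt,
  Iterable<Part> attachmentParts,
) async* {
  final content = Content('user', [TextPart(prompt), ...attachmentParts]);
  await for (final response in model.generateContentStream([content])) {
    final text = response.text;
    if (text != null && text.isNotEmpty) yield text;
  }
}
```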
The AI Toolkit is organized around an -abstract LLM provider API to make it easy to swap out the LLM provider that -you'd like your chat provider to use. Out of the box, it comes with support for -[Firebase Logic AI][]. +The AI Toolkit is a set of AI chat-related widgets that make +it easy to add an AI chat window to your Flutter app. +The AI Toolkit is organized around an abstract +LLM provider API to make it easy to swap out the +LLM provider that you'd like your chat provider to use. +Out of the box, it comes with support for [Firebase AI Logic][]. -[Firebase Logic AI]: https://firebase.google.com/docs/vertex-ai +[Firebase AI Logic]: https://firebase.google.com/docs/ai-logic ## Key features @@ -78,13 +79,13 @@ dependencies:
  • Configuration -The AI Toolkit supports both Google Gemini AI (for prototyping) and Firebase -Firebase Logic AI (for production). Both require a Firebase project and the -`firebase_core` package to be initialized, as described in the [Get started with -the Gemini API using the Firebase Logic AI in Firebase SDKs][vertex] docs. +The AI Toolkit supports both Google Gemini endpoint (for prototyping) and +the Vertex endpoint (for production). Both require a Firebase project and +the `firebase_core` package to be initialized, as described in the +[Get started with the Gemini API using the Firebase AI Logic SDKs][firebase_ai] docs. -[vertex]: - https://firebase.google.com/docs/vertex-ai/get-started?platform=flutter +[firebase_ai]: + https://firebase.google.com/docs/ai-logic/get-started?platform=flutter Once that's complete, integrate the new Firebase project into your Flutter app using the `flutterfire CLI` tool, as described in the [Add Firebase to your @@ -142,10 +143,12 @@ class ChatPage extends StatelessWidget { } ``` -The `FirebaseProvider` class exposes Firebase Logic AI to the `LlmChatView`. Note that -you provide a model name ([you have several options][options] from which to -choose), but you do not provide an API key. All of that is handled as part of -the Firebase project. +The `FirebaseProvider` class exposes +the Firebase AI Logic SDK to the `LlmChatView`. +Note that you provide a model name +([you have several options][options] from which to choose), +but you do not provide an API key. +All of that is handled as part of the Firebase project. For production workloads, it's easy to swap in the Firebase Logic AI endpoint: @@ -233,15 +236,17 @@ projects in the `example/lib` folder. **firebase_options.dart** -To use the [Vertex AI example app][vertex-ex], place your Firebase configuration -details into the `example/lib/firebase_options.dart` file. You can do this with -the `flutterfire CLI` tool as described in the [Add Firebase to your Flutter -app][add-fb] docs **from within the `example` directory**. - - -:::note **Be careful not to check the `firebase_options.dart` file into your git -repo.** ::: - +To use the [Vertex AI example app][vertex-ex], +place your Firebase configuration details +into the `example/lib/firebase_options.dart` file. +You can do this with the `flutterfire CLI` tool as described +in the [Add Firebase to your Flutter app][add-fb] docs +**from within the `example` directory**. + +:::note +**Be careful not to check the `firebase_options.dart` +file into your git repo.** +::: ## Feedback From e418fca9b476d2eed8c3261870a76efe3c353554 Mon Sep 17 00:00:00 2001 From: Chris Sells Date: Fri, 5 Dec 2025 16:51:36 -0800 Subject: [PATCH 6/6] remove demo link --- src/content/ai-toolkit/index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/content/ai-toolkit/index.md b/src/content/ai-toolkit/index.md index cbae5cb1033..73d586c6957 100644 --- a/src/content/ai-toolkit/index.md +++ b/src/content/ai-toolkit/index.md @@ -38,12 +38,12 @@ Out of the box, it comes with support for [Firebase AI Logic][]. * **Cross-platform support**: Compatible with Android, iOS, web, and macOS platforms. -## Online demo +## Demo -Here's the online demo hosting the AI Toolkit: +Here's what the demo example looks like hosting the AI Toolkit: - AI demo app +AI demo app The [source code for this demo][src-code] is available in the repo on GitHub.
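Finally, to pull the pieces touched by this patch together, here's roughly what a
minimal end-to-end setup looks like. It assumes a `firebase_options.dart`
generated by `flutterfire configure`, and the model name is illustrative:

```dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/material.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

import 'firebase_options.dart'; // generated by `flutterfire configure`

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
    options: DefaultFirebaseOptions.currentPlatform,
  );
  runApp(const App());
}

class App extends StatelessWidget {
  const App({super.key});

  @override
  Widget build(BuildContext context) => MaterialApp(
        home: Scaffold(
          appBar: AppBar(title: const Text('AI Chat')),
          body: LlmChatView(
            provider: FirebaseProvider(
              model: FirebaseAI.googleAI()
                  .generativeModel(model: 'gemini-2.0-flash'),
            ),
          ),
        ),
      );
}
```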