feat: expose Web SDK streaming APIs and describeImage in wrapper libraries #20
**`API_PARITY.md`** (new file)

@@ -0,0 +1,79 @@
# API Parity Reference

Cross-platform availability matrix for Locanara wrapper libraries.

## Core AI Features

| API | expo-ondevice-ai | react-native-ondevice-ai | flutter_ondevice_ai | Notes |
| --- | :---: | :---: | :---: | --- |
| `initialize()` | ✅ | ✅ | ✅ | |
| `getDeviceCapability()` | ✅ | ✅ | ✅ | |
| `summarize(text, options?)` | ✅ | ✅ | ✅ | |
| `classify(text, options?)` | ✅ | ✅ | ✅ | |
| `extract(text, options?)` | ✅ | ✅ | ✅ | |
| `chat(message, options?)` | ✅ | ✅ | ✅ | |
| `chatStream(message, options?)` | ✅ | ✅ | ✅ | Streaming via `onChunk` callback |
| `translate(text, options)` | ✅ | ✅ | ✅ | |
| `rewrite(text, options)` | ✅ | ✅ | ✅ | |
| `proofread(text, options?)` | ✅ | ✅ | ✅ | |

## Streaming Variants

Streaming is supported for all text-generation features via callback-based APIs. Each
streaming function accepts an `onChunk` callback that delivers tokens progressively.

| API | expo-ondevice-ai | react-native-ondevice-ai | flutter_ondevice_ai | Notes |
| --- | :---: | :---: | :---: | --- |
| `summarizeStreaming(text, options?)` | ✅ | ✅ | 🚧 | `onChunk` callback |
| `translateStreaming(text, options)` | ✅ | ✅ | 🚧 | `onChunk` callback |
| `rewriteStreaming(text, options)` | ✅ | ✅ | 🚧 | `onChunk` callback |
| `chatStream(message, options?)` | ✅ | ✅ | ✅ | Already supported |
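The callback contract described above can be illustrated with a self-contained stand-in. The chunk and option shapes here are assumptions made for the sketch, not the library's confirmed types; the real functions are backed by a native module.

```typescript
// Stand-in for a native-backed streaming call, illustrating the onChunk
// contract: tokens arrive progressively, then the full result resolves.
// TextStreamChunk's fields (text, isFinal) are assumed for this sketch.
type TextStreamChunk = { text: string; isFinal: boolean };
type StreamOptions = { onChunk?: (chunk: TextStreamChunk) => void };

async function summarizeStreamingSketch(
  text: string,
  options?: StreamOptions,
): Promise<string> {
  const tokens = text.split(/\s+/); // pretend each word is a model token
  tokens.forEach((t, i) => {
    options?.onChunk?.({ text: t, isFinal: i === tokens.length - 1 });
  });
  return tokens.join(' '); // the aggregated final result
}

// Usage: render chunks as they arrive, keep the final result for storage.
const received: string[] = [];
const finalPromise = summarizeStreamingSketch('alpha beta gamma', {
  onChunk: (c) => received.push(c.text),
});
```

The same shape applies to `translateStreaming` and `rewriteStreaming`; only the required options differ.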
## Image Features

| API | expo-ondevice-ai | react-native-ondevice-ai | flutter_ondevice_ai | Notes |
| --- | :---: | :---: | :---: | --- |
| `describeImage(imageUri, options?)` | ✅ | ✅ | 🚧 | iOS (Foundation Models Vision) + Android (describeImageAndroid) |
| `describeImageStreaming(imageUri, options?)` | 🚧 | 🚧 | 🚧 | Planned |

## Chrome-Only Features

These APIs are available exclusively in the **Web SDK** (`@locanara/web`) because they
rely on Chrome's Built-in AI APIs. They are **not available** in the Expo, React Native, or
Flutter wrapper libraries.

| API | Web SDK | Wrappers | Reason |
| --- | :---: | :---: | --- |
| `detectLanguage(text)` | ✅ | ❌ | Chrome Language Detection API only |
| `write(prompt, options?)` | ✅ | ❌ | Chrome Writer API only |
| `writeStreaming(prompt, options?)` | ✅ | ❌ | Chrome Writer API only |

If you need language detection on mobile, consider using a third-party library like
`react-native-mlkit` or the device's built-in locale detection.
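On the web side, `detectLanguage` depends on Chrome's built-in AI being present, so callers may want to feature-detect it. A minimal sketch, assuming a global `LanguageDetector` with `create()`/`detect()` roughly as in Chrome's built-in AI documentation (verify the exact shape before relying on it); outside Chrome it degrades to `null` instead of throwing:

```typescript
// Hedged sketch: feature-detect Chrome's built-in Language Detection API.
// The `LanguageDetector` global and result shape are assumptions based on
// Chrome's built-in AI docs; non-Chrome runtimes simply get null back.
async function detectLanguageSafe(text: string): Promise<string | null> {
  const g = globalThis as Record<string, any>;
  if (typeof g.LanguageDetector?.create !== 'function') {
    return null; // not a Chrome build with built-in AI available
  }
  const detector = await g.LanguageDetector.create();
  const results = await detector.detect(text);
  return results?.[0]?.detectedLanguage ?? null;
}
```

In Node or any non-Chrome runtime, `detectLanguageSafe('hola')` resolves to `null`, which the caller must treat as "unknown".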
**Review comment on lines +51 to +52:** The device locale only reflects user/device preference, not the language of arbitrary input text. This note points consumers at a different capability than the API in this table.

**Author (Member) reply:** Correct — locale is not a replacement for `detectLanguage`. The API_PARITY.md notes this as a known gap. Language detection from text is tracked separately.
## Model Management

| API | expo-ondevice-ai | react-native-ondevice-ai | flutter_ondevice_ai | Notes |
| --- | :---: | :---: | :---: | --- |
| `getAvailableModels()` | ✅ | ✅ | ✅ | iOS only — returns `[]` on Android |
| `getDownloadedModels()` | ✅ | ✅ | ✅ | iOS only — returns `[]` on Android |
| `getLoadedModel()` | ✅ | ✅ | ✅ | |
| `getCurrentEngine()` | ✅ | ✅ | ✅ | |
| `downloadModel(id, onProgress?)` | ✅ | ✅ | ✅ | iOS only |
| `loadModel(id)` | ✅ | ✅ | ✅ | iOS only |
| `deleteModel(id)` | ✅ | ✅ | ✅ | iOS only |

## Android-Only Features

| API | expo-ondevice-ai | react-native-ondevice-ai | flutter_ondevice_ai | Notes |
| --- | :---: | :---: | :---: | --- |
| `getPromptApiStatus()` | ✅ | ✅ | ✅ | Android only — Gemini Nano Prompt API status |
| `downloadPromptApiModel(onProgress?)` | ✅ | ✅ | ✅ | Android only — download Gemini Nano |

## Legend

| Symbol | Meaning |
| --- | --- |
| ✅ | Available |
| 🚧 | Planned / In Progress |
| ❌ | Not available on this platform |
**`libraries/expo-ondevice-ai/src/index.ts`**

@@ -4,6 +4,7 @@ import type {
  DeviceCapability,
  SummarizeOptions,
  SummarizeResult,
  SummarizeStreamOptions,
  ClassifyOptions,
  ClassifyResult,
  ExtractOptions,
@@ -12,16 +13,21 @@ import type {
  ChatResult,
  ChatStreamChunk,
  ChatStreamOptions,
  TextStreamChunk,
  TranslateOptions,
  TranslateResult,
  TranslateStreamOptions,
  RewriteOptions,
  RewriteResult,
  RewriteStreamOptions,
  ProofreadOptions,
  ProofreadResult,
  InitializeResult,
  DownloadableModelInfo,
  ModelDownloadProgress,
  InferenceEngine,
  DescribeImageOptions,
  DescribeImageResult,
} from './types';
import {ExpoOndeviceAiLog as Log} from './log';

@@ -172,6 +178,140 @@ export async function proofread(
  return ExpoOndeviceAiModule.proofread(text, options);
}
// MARK: - Streaming Variants

/**
 * Summarize text with streaming — tokens delivered progressively via onChunk.
 * @param text - The text to summarize
 * @param options - Options including onChunk callback
 * @returns Promise resolving to final SummarizeResult
 */
export async function summarizeStreaming(
  text: string,
  options?: SummarizeStreamOptions,
): Promise<SummarizeResult> {
  let subscription: EventSubscription | undefined;

  try {
    if (options?.onChunk) {
      subscription = (
        ExpoOndeviceAiModule as unknown as {
          addListener: (
            name: string,
            listener: (chunk: TextStreamChunk) => void,
          ) => EventSubscription;
        }
      ).addListener('onSummarizeStreamChunk', (chunk: TextStreamChunk) => {
        options.onChunk!(chunk);
**Review comment on lines +197 to +205:** Verification scripts against `libraries/expo-ondevice-ai` (grepping `ExpoOndeviceAiModule.kt` and `ExpoOndeviceAiModule.swift` for `summarizeStreaming`, `translateStreaming`, `rewriteStreaming`, `describeImage`, and the corresponding stream-chunk events) found that the wrapper calls native methods and listens to events that are not defined in either the iOS or Android module. These methods and events must be added to both native implementations to match the wrapper's expectations, or the wrapper must be revised to use only the methods and events that actually exist in the native modules.

**Author (Member) reply:** The native implementations are in progress. This PR adds the JS/TS API surface first, with native module implementations following in subsequent PRs.

**Reviewer follow-up:** One suggestion worth considering: to prevent silent runtime failures if anyone calls these new APIs before the native side is ready, you could add a lightweight guard in each stub that throws a descriptive error.
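The suggested guard might look like the following sketch. The `assertNativeMethod` helper and error message are hypothetical, not part of this PR:

```typescript
// Hypothetical guard (not in the PR): fail fast with a descriptive error
// when a wrapper stub is called before the native module implements it.
function assertNativeMethod(mod: Record<string, unknown>, name: string): void {
  if (typeof mod[name] !== 'function') {
    throw new Error(
      `expo-ondevice-ai: native method "${name}" is not implemented on this ` +
        'platform yet; update the native module before calling this API.',
    );
  }
}

// Usage sketch with a stand-in module object missing the new method:
const fakeModule = { proofread: () => ({}) };
let message = '';
try {
  assertNativeMethod(fakeModule, 'summarizeStreaming');
} catch (e) {
  message = (e as Error).message;
}
```

Each streaming wrapper would call the guard once at entry, so a missing native method surfaces immediately instead of failing deep inside the bridge.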
      });
**Review comment on lines +196 to +206:** These stream listeners are not request-scoped. If two streaming calls run concurrently, each listener receives chunks from both requests. Also applies to lines 235-245 and 274-284.

**Author (Member) reply:** Valid concern for concurrent calls. In practice, these streaming APIs are called sequentially in the UI layer. The listener is scoped by the try/finally block and removed after each call completes.
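One way to enforce the sequential-use assumption mechanically is to serialize streaming calls behind a shared promise chain. A sketch, not part of the PR; the `enqueueStream` helper name is hypothetical:

```typescript
// Hypothetical helper (not in the PR): serialize streaming calls so two
// onChunk listeners are never active at once, matching the sequential-use
// assumption stated in the review thread.
let streamQueue: Promise<unknown> = Promise.resolve();

function enqueueStream<T>(run: () => Promise<T>): Promise<T> {
  const next = streamQueue.then(run, run); // run regardless of prior failure
  streamQueue = next.catch(() => undefined); // keep the chain alive
  return next;
}

// Usage sketch: both calls share the queue, so the second body starts only
// after the first settles.
const order: string[] = [];
const a = enqueueStream(async () => {
  order.push('a-start');
  order.push('a-end');
  return 'A';
});
const b = enqueueStream(async () => {
  order.push('b-start');
  return 'B';
});
```

Wrapping each `*Streaming` call in such a queue would prevent interleaved chunks without introducing request IDs into the native event payloads.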
    }

    const {onChunk: _, ...nativeOptions} = options ?? {};
    const result: SummarizeResult = await ExpoOndeviceAiModule.summarizeStreaming(
      text,
      Object.keys(nativeOptions).length > 0 ? nativeOptions : undefined,
    );

    await new Promise<void>((resolve) => setTimeout(resolve, 0));
    return result;
  } finally {
    subscription?.remove();
  }
}
/**
 * Translate text with streaming — tokens delivered progressively via onChunk.
 * @param text - The text to translate
 * @param options - Options including targetLanguage and onChunk callback
 * @returns Promise resolving to final TranslateResult
 */
export async function translateStreaming(
  text: string,
  options: TranslateStreamOptions,
): Promise<TranslateResult> {
  let subscription: EventSubscription | undefined;

  try {
    if (options.onChunk) {
      subscription = (
        ExpoOndeviceAiModule as unknown as {
          addListener: (
            name: string,
            listener: (chunk: TextStreamChunk) => void,
          ) => EventSubscription;
        }
      ).addListener('onTranslateStreamChunk', (chunk: TextStreamChunk) => {
        options.onChunk!(chunk);
      });
    }

    const {onChunk: _, ...nativeOptions} = options;
    const result: TranslateResult = await ExpoOndeviceAiModule.translateStreaming(
      text,
      nativeOptions,
    );

    await new Promise<void>((resolve) => setTimeout(resolve, 0));
    return result;
  } finally {
    subscription?.remove();
  }
}
/**
 * Rewrite text with streaming — tokens delivered progressively via onChunk.
 * @param text - The text to rewrite
 * @param options - Options including outputType and onChunk callback
 * @returns Promise resolving to final RewriteResult
 */
export async function rewriteStreaming(
  text: string,
  options: RewriteStreamOptions,
): Promise<RewriteResult> {
  let subscription: EventSubscription | undefined;

  try {
    if (options.onChunk) {
      subscription = (
        ExpoOndeviceAiModule as unknown as {
          addListener: (
            name: string,
            listener: (chunk: TextStreamChunk) => void,
          ) => EventSubscription;
        }
      ).addListener('onRewriteStreamChunk', (chunk: TextStreamChunk) => {
        options.onChunk!(chunk);
      });
    }

    const {onChunk: _, ...nativeOptions} = options;
    const result: RewriteResult = await ExpoOndeviceAiModule.rewriteStreaming(
      text,
      nativeOptions,
    );

    await new Promise<void>((resolve) => setTimeout(resolve, 0));
    return result;
  } finally {
    subscription?.remove();
  }
}
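The three streaming wrappers above differ only in event name, native method, and result type, so the subscribe/call/flush/cleanup sequence could be factored into one helper. A sketch with stand-in types so it is self-contained; in the real module, `EventSubscription` and the module object come from Expo:

```typescript
// Sketch of a shared helper for the repeated pattern above: attach a chunk
// listener, invoke the native call, let queued chunk events land, clean up.
type Chunk = { text: string };
type Subscription = { remove: () => void };
type StreamModule = {
  addListener: (name: string, cb: (c: Chunk) => void) => Subscription;
};

async function withStreamListener<T>(
  mod: StreamModule,
  eventName: string,
  onChunk: ((c: Chunk) => void) | undefined,
  call: () => Promise<T>,
): Promise<T> {
  let sub: Subscription | undefined;
  try {
    if (onChunk) {
      sub = mod.addListener(eventName, onChunk);
    }
    const result = await call();
    await new Promise<void>((r) => setTimeout(r, 0)); // flush pending chunks
    return result;
  } finally {
    sub?.remove(); // listener always cleaned up, even on failure
  }
}

// Usage with a fake module that emits one chunk during the "native" call:
const listeners = new Map<string, (c: Chunk) => void>();
const fakeMod: StreamModule = {
  addListener: (name, cb) => {
    listeners.set(name, cb);
    return { remove: () => listeners.delete(name) };
  },
};
const got: string[] = [];
const done = withStreamListener(
  fakeMod,
  'onDemoChunk',
  (c) => got.push(c.text),
  async () => {
    listeners.get('onDemoChunk')?.({ text: 'hi' });
    return 'final';
  },
);
```

Each public wrapper would then reduce to one `withStreamListener` call, keeping event-name/method pairs in a single place.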
*hyochan marked this conversation as resolved.*
// MARK: - Image Description

/**
 * Describe the contents of an image using on-device AI.
 * Supported on iOS (Foundation Models Vision API) and Android.
 * @param imageUri - URI or file path to the image
 * @param options - Optional prompt and other options
 */
export async function describeImage(
  imageUri: string,
  options?: DescribeImageOptions,
): Promise<DescribeImageResult> {
  return ExpoOndeviceAiModule.describeImage(imageUri, options);
}

// MARK: - Model Management

/**
**Review comment:** This matrix still documents wrapper API drift. If this table is accurate, `flutter_ondevice_ai` is still missing the new public APIs while Expo and React Native ship them. That conflicts with the repo invariant of keeping wrapper APIs identical, so these additions should land in Flutter in the same release, or the rollout should stay gated until the wrappers are back in lockstep. Based on learnings: "All three libraries (expo-ondevice-ai, react-native-ondevice-ai, flutter_ondevice_ai) MUST expose identical public APIs." Also applies to lines 36-37.
**Author (Member) reply:** Flutter is intentionally behind in this PR — streaming and describeImage will be added in a follow-up PR. The API_PARITY.md table accurately reflects the current state.
**Reviewer follow-up:** @hyochan, understood — the 🚧 markers in the table clearly convey the planned-but-not-yet-landed status for Flutter, so the current state is accurately documented. I'll update my understanding of the API parity policy accordingly.