diff --git a/.claude/guides/09-expo-ondevice-ai.md b/.claude/guides/09-expo-ondevice-ai.md index b777e4d..70f232c 100644 --- a/.claude/guides/09-expo-ondevice-ai.md +++ b/.claude/guides/09-expo-ondevice-ai.md @@ -228,6 +228,38 @@ bun web - AI Status Banner → opens Model Selection Sheet for engine/model management - Model Selection Sheet: download, load, delete GGUF models; switch engines +## Cross-Library iOS llama.cpp Bridge Comparison + +All three libraries (expo, react-native, flutter) use the same `LocanaraLlamaBridge` isolation pattern for llama.cpp, but Flutter requires extra steps due to its framework linking model. + +### Why the Bridge Exists + +C++ interop is **viral** in Swift — any module importing a C++ interop module must also enable it. Expo's generated `ExpoModulesCore-Swift.h` header declares a `GenericTypedArray` class that collides with llama.cpp types. The bridge pod isolates C++ interop from framework headers. + +### Key Differences by Library + +| Aspect | Expo | React Native (bare) | Flutter | +| --------------------------------- | ------------------------------- | --------------------------- | ------------------------------------- | +| `use_frameworks!` | Not used (static libs) | Not used (static libs) | **Required** (`:linkage => :static`) | +| SPM `llama.framework` type | Static (linked into binary) | Static (linked into binary) | **Dynamic** (must be embedded) | +| Extra embed phase | Config plugin handles it | Not needed | **Required** (`embed_spm_frameworks`) | +| Bridge pod generation | Auto-generated by config plugin | Manual in example | Manual in example | +| Bridge podspec `static_framework` | Not needed | Not needed | **Required** (`true`) | + +### Common Components (All Libraries) + +1. **`LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`** — identical bridge engine implementing `InferenceEngine` + `LlamaCppBridgeProvider` +2. 
**`configure_llama_bridge(installer)`** — post_install hook adding SPM packages + C++ interop to bridge target +3. **`user_target_xcconfig`** — `-framework "llama"` linker flag + framework search paths + +### Flutter-Only Requirements + +1. **`use_frameworks! :linkage => :static`** — without this, hundreds of `Undefined symbol: _ggml_*` errors +2. **`embed_spm_frameworks`** — copies `llama.framework` from `BUILT_PRODUCTS_DIR` to `Runner.app/Frameworks/` (must run BEFORE "Thin Binary" phase) +3. **`static_framework = true`** in bridge podspec + +See `12-flutter-ondevice-ai.md` for complete Flutter-specific details. + ## Notes - `enableLocalDev` requires `localPath` pointing to the monorepo root diff --git a/.claude/guides/11-react-native-ondevice-ai.md b/.claude/guides/11-react-native-ondevice-ai.md new file mode 100644 index 0000000..d9c6e7d --- /dev/null +++ b/.claude/guides/11-react-native-ondevice-ai.md @@ -0,0 +1,171 @@ +# react-native-ondevice-ai (React Native Library) + +## Overview + +Location: `libraries/react-native-ondevice-ai/` + +React Native module using **Nitro Modules** for bare React Native apps. Wraps the Locanara native SDKs with auto-generated JNI/C++ bridges. Expo users should use `expo-ondevice-ai` instead. + +**Does NOT support web** — Nitro is native-only. 
+ +## Requirements + +- React Native 0.76+ +- Nitro Modules 0.22+ +- iOS 17+ / Android API 26+ +- Bun 1.1+ + +## Build Commands + +```bash +cd libraries/react-native-ondevice-ai + +bun install # Install dependencies +bun run nitrogen # Generate Nitro bridge code +bun run lint:tsc # TypeScript type check +bun run test # Run tests +``` + +## Project Structure + +```text +libraries/react-native-ondevice-ai/ +├── src/ +│ ├── index.ts # Public API wrapper (type conversion, listener mgmt) +│ ├── types.ts # Public TypeScript type definitions +│ ├── specs/ +│ │ └── OndeviceAi.nitro.ts # Nitro spec (SOURCE OF TRUTH for bridge codegen) +│ └── __tests__/ # Unit tests +│ └── __mocks__/ # Nitro module mocks +├── ios/ +│ ├── HybridOndeviceAi.swift # iOS native implementation (uses Locanara chains) +│ ├── OndeviceAiHelper.swift # Option extractors, PrefilledMemory adapter +│ └── OndeviceAiSerialization.swift # Chain result conversion +├── android/ +│ └── src/main/java/com/margelo/nitro/ondeviceai/ +│ ├── HybridOndeviceAi.kt # Android native implementation +│ ├── OndeviceAiHelper.kt # Option extractors +│ └── OndeviceAiSerialization.kt +├── nitrogen/generated/ # Auto-generated bridge code (DO NOT EDIT) +├── NitroOndeviceAi.podspec # CocoaPods spec (depends on Locanara) +├── nitro.json # Nitro module configuration +├── example/ # Example React Native app +│ ├── src/screens/ # Feature/Device/Settings screens +│ ├── src/components/ # Feature demos, shared components +│ └── ios/ +│ ├── LocanaraLlamaBridge/ # Bridge pod (C++ interop isolation) +│ └── Podfile +└── package.json +``` + +## How It Works + +### Nitro Module Architecture + +The library uses Nitro Modules for a **spec-first** native bridge: + +```text +OndeviceAi.nitro.ts (Spec — source of truth) + ↓ npx nitrogen +nitrogen/generated/ (C++ / JNI bridge code) + ↓ +HybridOndeviceAi.swift (iOS) HybridOndeviceAi.kt (Android) + ↓ ↓ +src/index.ts (JS wrapper — converts types, manages listeners) +``` + +### TypeScript → Native Chain 
Mapping + +Same mapping as `expo-ondevice-ai`: + +| TypeScript API | iOS Chain | Android | +| --------------------------- | ------------------------------------------ | -------------------- | +| `summarize(text, opts)` | `SummarizeChain(bulletCount:).run(text)` | ML Kit Summarization | +| `classify(text, opts)` | `ClassifyChain(categories:).run(text)` | Prompt API | +| `extract(text, opts)` | `ExtractChain(entityTypes:).run(text)` | Prompt API | +| `chat(message, opts)` | `ChatChain(memory:).run(message)` | Prompt API | +| `chatStream(message, opts)` | `ChatChain(memory:).streamRun(message)` | Prompt API | +| `translate(text, opts)` | `TranslateChain(source:target:).run(text)` | Prompt API | +| `rewrite(text, opts)` | `RewriteChain(style:).run(text)` | ML Kit Rewriting | +| `proofread(text, opts)` | `ProofreadChain().run(text)` | ML Kit Proofreading | + +### Streaming (Listener Pattern) + +Nitro uses explicit listener add/remove instead of EventEmitter: + +```typescript +// JS wrapper manages listener lifecycle +if (onChunk) { + listener = (chunk) => onChunk(convertChunk(chunk)); + AI.instance.addChatStreamListener(listener); +} +try { + /* call stream API */ +} finally { + AI.instance.removeChatStreamListener(listener); +} +``` + +### Nitro Constraints + +- **Union types**: Must have 2+ values (single-value union = codegen error) +- **No `Record`**: Use flat fields, convert in JS layer +- **All types in spec file**: Nitro codegen only reads the `.nitro.ts` file +- **Optional fields**: Use `field?: Type | null` pattern + +## iOS llama.cpp Bridge + +Same `LocanaraLlamaBridge` pattern as `expo-ondevice-ai` — see the **Cross-Library iOS llama.cpp Bridge** section in `09-expo-ondevice-ai.md`. + +**Key difference**: React Native (without Expo) does NOT use `use_frameworks!` by default, so static linking works naturally. The bridge pod and SPM integration follow the same `configure_llama_bridge` post_install pattern. 
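The "No `Record`" constraint in the Nitro list above is easiest to see with a concrete conversion. Here is a minimal hypothetical sketch of the JS-layer flattening; `metadata`, `metadataKeys`, and `metadataValues` are illustrative names, not the actual spec fields:

```typescript
// Hypothetical sketch: the public API accepts a Record, but the Nitro
// spec layer cannot, so the JS wrapper flattens it into parallel arrays.
interface PublicOptions {
  metadata?: Record<string, string>;
}

// Shape the .nitro.ts spec could declare instead (flat fields only).
interface SpecOptions {
  metadataKeys: string[];
  metadataValues: string[];
}

function toSpecOptions(opts: PublicOptions): SpecOptions {
  const entries = Object.entries(opts.metadata ?? {});
  return {
    metadataKeys: entries.map(([key]) => key),
    metadataValues: entries.map(([, value]) => value),
  };
}
```

The native side re-pairs the arrays, and the wrapper performs the reverse conversion on results coming back.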
+ +### CocoaPods Configuration + +```ruby +# NitroOndeviceAi.podspec +s.dependency 'React-Core' +s.dependency 'React-jsi' +s.dependency 'React-callinvoker' +s.dependency 'Locanara' +``` + +## Spec-First Development Workflow + +**When adding or modifying an API, follow this exact order:** + +1. **Update Nitro spec** (`src/specs/OndeviceAi.nitro.ts`) +2. **Run nitrogen**: `npx nitrogen` +3. **Update native implementations** (iOS + Android) +4. **Update JS wrapper** (`src/index.ts`) +5. **Update public types** (`src/types.ts`) +6. **Update tests** + mocks +7. **Verify**: `npx nitrogen && npx tsc --noEmit && bun run test` + +## Example App + +```bash +cd libraries/react-native-ondevice-ai/example + +# iOS +bun ios --device + +# Android +bun android +``` + +### App Structure + +- 3-tab navigation: Features, Device, Settings +- Feature list → tappable demo screens for each AI feature +- AI Status Banner → Model Selection Sheet for engine/model management + +## API Parity + +The `react-native-ondevice-ai` public API **MUST** be identical to `expo-ondevice-ai`. When modifying either library, update both. + +## Notes + +- Nitro-generated files in `nitrogen/generated/` must NEVER be edited manually +- The bridge pod is set up in the example app's `ios/LocanaraLlamaBridge/` +- No web support — Nitro is native-only +- Test on real devices (simulators have limited AI support) diff --git a/.claude/guides/12-flutter-ondevice-ai.md b/.claude/guides/12-flutter-ondevice-ai.md new file mode 100644 index 0000000..d0c084e --- /dev/null +++ b/.claude/guides/12-flutter-ondevice-ai.md @@ -0,0 +1,242 @@ +# flutter_ondevice_ai (Flutter Library) + +## Overview + +Location: `libraries/flutter_ondevice_ai/` + +Flutter plugin wrapping the Locanara native SDKs using **MethodChannel** + **EventChannel** pattern (same as `flutter_inapp_purchase`). Supports iOS, Android, and Web (Chrome Built-in AI). 
+ +## Requirements + +- Flutter SDK 3.3.0+ +- Dart SDK 3.3.0+ +- iOS 17+ (for llama.cpp engine) +- Android API 24+ (library minSdk), API 34+ (example app, Gemini Nano requirement) +- Web: Chrome 138+ (Chrome Built-in AI) + +## Build Commands + +```bash +cd libraries/flutter_ondevice_ai + +flutter pub get # Install dependencies +flutter analyze # Static analysis +flutter test # Run tests + +# Example app +cd example +flutter run # Run on connected device +flutter build ios # Build iOS +``` + +## Project Structure + +```text +libraries/flutter_ondevice_ai/ +├── lib/ +│ ├── flutter_ondevice_ai.dart # Barrel export +│ └── src/ +│ ├── flutter_ondevice_ai_plugin.dart # Main Dart API (singleton, MethodChannel) +│ ├── flutter_ondevice_ai_web.dart # Web implementation (dart:js_interop) +│ ├── types.dart # All Dart types (enums, options, results) +│ └── errors.dart # OndeviceAiException +├── ios/ +│ ├── flutter_ondevice_ai.podspec # CocoaPods spec (depends on Locanara) +│ └── Classes/ +│ ├── FlutterOndeviceAiPlugin.swift # FlutterPlugin (MethodCall dispatch) +│ ├── FlutterOndeviceAiHelper.swift # Options decoding, PrefilledMemory adapter +│ └── FlutterOndeviceAiSerialization.swift # Chain result → Flutter dictionary +├── android/ +│ ├── build.gradle +│ └── src/main/kotlin/dev/hyodot/flutter_ondevice_ai/ +│ ├── FlutterOndeviceAiPlugin.kt # FlutterPlugin (MethodChannel + EventChannel) +│ ├── FlutterOndeviceAiHelper.kt # Options decoding +│ └── FlutterOndeviceAiSerialization.kt # Result serialization +├── test/ +│ ├── flutter_ondevice_ai_test.dart # MethodChannel mock tests +│ └── types_test.dart # Type serialization tests +└── example/ + ├── lib/ + │ ├── main.dart # App entry point + │ ├── app_state.dart # Provider state management + │ └── widgets/ + │ ├── pages/ # Feature demos, Device, Settings + │ └── shared/ # ModelSelectionSheet, FeatureRow, etc. 
+ └── ios/ + ├── Podfile # CocoaPods config with bridge + SPM embedding + └── LocanaraLlamaBridge/ # Bridge pod (C++ interop isolation) + ├── LocanaraLlamaBridge.podspec + └── Sources/LlamaCppBridgeEngine.swift +``` + +## How It Works + +### Dart → Native Chain Mapping + +Same chain mapping as all other libraries: + +| Dart API | iOS Chain | Android | Web (Chrome Built-in AI) | +| --------------------------- | ------------------------------------------ | -------------------- | --------------------------------- | +| `summarize(text, opts)` | `SummarizeChain(bulletCount:).run(text)` | ML Kit Summarization | `Summarizer` API | +| `classify(text, opts)` | `ClassifyChain(categories:).run(text)` | Prompt API | `LanguageModel` API | +| `chat(message, opts)` | `ChatChain(memory:).run(message)` | Prompt API | `LanguageModel` API | +| `chatStream(message, opts)` | `ChatChain(memory:).streamRun(message)` | Prompt API | `LanguageModel.promptStreaming()` | +| `translate(text, opts)` | `TranslateChain(source:target:).run(text)` | Prompt API | `Translator` API | +| `rewrite(text, opts)` | `RewriteChain(style:).run(text)` | ML Kit Rewriting | `Rewriter` API | +| `proofread(text, opts)` | `ProofreadChain().run(text)` | ML Kit Proofreading | `LanguageModel` API | + +### MethodChannel / EventChannel + +- **MethodChannel** `'flutter_ondevice_ai'` — request/response calls (all 20 API methods) +- **EventChannel** `'flutter_ondevice_ai/chat_stream'` — streaming chat chunks +- **EventChannel** `'flutter_ondevice_ai/model_download_progress'` — download progress + +### Web Implementation + +`flutter_ondevice_ai_web.dart` uses `dart:js_interop` + `package:web` for Chrome Built-in AI APIs (Summarizer, Translator, Rewriter, Writer, LanguageModel). Registered via `pubspec.yaml` platform plugin entry. + +## iOS llama.cpp Bridge (CRITICAL) + +Flutter's iOS integration requires special handling compared to Expo/React Native due to **framework linking differences**. 
+ +### Why Flutter Is Different + +| Aspect | Expo | React Native (bare) | Flutter | +| ------------------------- | --------------------------------- | ------------------------------ | ------------------------------------------------------ | +| `use_frameworks!` | Not used by default (static libs) | Not used by default | **Required** (dynamic or static) | +| CocoaPods linkage | Static libraries (.a) | Static libraries (.a) | Static frameworks (`:linkage => :static`) | +| SPM framework type | Static (linked into binary) | Static (linked into binary) | **Dynamic** (SPM decides independently) | +| llama.framework embedding | Not needed (statically linked) | Not needed (statically linked) | **Required** (dynamic framework must be in app bundle) | + +### Flutter-Specific Setup (3 pieces) + +Flutter needs **all three** of these that Expo/RN don't: + +#### 1. Static CocoaPods Linkage + +```ruby +# Podfile +use_frameworks! :linkage => :static # NOT just use_frameworks! +``` + +Without `:linkage => :static`, you get `Undefined symbol: _ggml_*` linker errors. + +#### 2. SPM Framework Embedding Script + +SPM builds `llama.framework` as a **dynamic** framework regardless of CocoaPods linkage settings. Flutter's build system doesn't embed SPM-produced frameworks. A custom build phase copies them: + +```ruby +# Podfile — embed_spm_frameworks function +# Copies llama.framework from BUILT_PRODUCTS_DIR to Runner.app/Frameworks/ +# MUST run BEFORE Flutter's "Thin Binary" phase (embed_and_thin) +``` + +Without this, you get `dyld: Library not loaded: @rpath/llama.framework/llama` crash at launch. + +#### 3. 
Bridge Podspec with Linker Flags + +```ruby +# LocanaraLlamaBridge.podspec +s.static_framework = true # Flutter-specific +s.user_target_xcconfig = { + 'OTHER_LDFLAGS' => '$(inherited) -framework "llama"', # REQUIRED for linking + 'FRAMEWORK_SEARCH_PATHS' => '$(inherited) "$(PODS_CONFIGURATION_BUILD_DIR)"', +} +``` + +Without `-framework "llama"`, the linker can't find ggml symbols. + +### Complete Podfile Structure + +The Flutter example Podfile has these components: + +1. **`configure_llama_bridge(installer)`** — Adds SPM package reference for LocalLLMClient to Pods project, adds SPM dependencies to bridge target, enables C++ interop (same as Expo) + +2. **`embed_spm_frameworks`** — **Flutter-only**. Opens `Runner.xcodeproj`, adds "Embed SPM Frameworks" shell script build phase that copies `llama.framework` into `Runner.app/Frameworks/` and re-signs it. Inserts the phase BEFORE Flutter's "Thin Binary" phase. + +3. **Pod declarations**: `pod 'Locanara'` (local), `pod 'LocanaraLlamaBridge'` (local bridge) + +4. **`post_install`**: Runs `configure_llama_bridge`, `embed_spm_frameworks`, then `flutter_additional_ios_build_settings` + +### Build Phase Order (Must Be Correct) + +```text +0: [CP] Check Pods Manifest.lock +1: Run Script (Flutter build) +2: Sources +3: Frameworks +4: Resources +5: Embed Frameworks (CocoaPods frameworks) +6: Embed SPM Frameworks ← copies llama.framework HERE +7: Thin Binary ← Flutter finalizes app HERE +``` + +If "Embed SPM Frameworks" runs AFTER "Thin Binary", it's too late and `llama.framework` won't be in the app bundle. 
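The ordering rule reduces to a small index computation. This is a plain-Ruby sketch of only that rule, using the phase names listed above; the real `embed_spm_frameworks` hook additionally edits `Runner.xcodeproj` (via the xcodeproj gem), which is not reproduced here:

```ruby
# Sketch of the ordering rule only: insert "Embed SPM Frameworks"
# immediately before Flutter's "Thin Binary" phase, so the copied
# llama.framework is already in place when Flutter finalizes the app.
def insert_embed_phase(phase_names)
  thin = phase_names.index { |name| name.include?('Thin Binary') }
  # If "Thin Binary" is absent, appending is safe: there is no
  # finalization step the embed phase must precede.
  position = thin.nil? ? phase_names.length : thin
  phase_names.insert(position, 'Embed SPM Frameworks')
  phase_names
end

phases = [
  '[CP] Check Pods Manifest.lock',
  'Run Script',       # Flutter build
  'Sources',
  'Frameworks',
  'Resources',
  'Embed Frameworks', # CocoaPods frameworks
  'Thin Binary',      # Flutter finalizes the app here
]
insert_embed_phase(phases)
```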
+ +### Common Build Errors + +| Error | Cause | Fix | +| -------------------------------------------------------- | ---------------------------------------------------- | -------------------------------------------------------------------------- | +| `Undefined symbol: _ggml_abort, _ggml_add` | Missing `-framework "llama"` in linker flags | Add `user_target_xcconfig` with `OTHER_LDFLAGS` to bridge podspec | +| `dyld: Library not loaded: @rpath/llama.framework/llama` | SPM dynamic framework not embedded in app | Add `embed_spm_frameworks` to Podfile, ensure it runs before "Thin Binary" | +| `Cannot find type 'Memory' in scope` (20+ errors) | Using Locanara from CocoaPods trunk (outdated 1.0.1) | Use `pod 'Locanara', :path => '../../../../packages/apple'` for local SDK | +| `LocalLLMClient is not configured` | No LocanaraLlamaBridge pod | Add bridge pod + `configure_llama_bridge` to Podfile | + +## LlamaCppBridge Isolation Architecture + +Same as Expo — C++ interop is viral in Swift. The bridge pod is compiled in isolation: + +```text +┌──────────────────────────────┐ ┌──────────────────────────┐ +│ flutter_ondevice_ai pod │ │ LocanaraLlamaBridge pod │ +│ (NO C++ interop) │ │ (C++ interop enabled) │ +│ │ │ │ +│ depends on: │ │ depends on: │ +│ - Flutter │ │ - Locanara (engine) │ +│ - Locanara (chains) │ │ - LocalLLMClient │ +│ │ │ - LocalLLMClientLlama │ +│ uses LocanaraClient for │ │ │ +│ chains (via RouterModel) │ │ implements: │ +│ │ │ - LlamaCppBridgeProvider│ +│ │ │ - InferenceEngine │ +└──────────────────────────────┘ └──────────────────────────┘ + │ │ + │ discovered at runtime via │ + │ NSClassFromString │ + └────────────────────────────────────┘ +``` + +## Example App + +```bash +cd libraries/flutter_ondevice_ai/example + +# iOS device +flutter run + +# Android device +flutter run -d + +# Web (Chrome 138+) +flutter run -d chrome +``` + +### App Features + +- Multi-tab navigation: Device, Features, Framework, Settings +- Feature list → demo screens for all 7 
AI features + chat +- AI Status Banner → Model Selection Sheet +- Model Selection Sheet: download, load, delete GGUF models; **switch back to Apple Intelligence** +- `switchToDeviceAI()` — reverts from llama.cpp engine to platform-native AI + +## API Parity + +The `flutter_ondevice_ai` public API follows the same contract as `expo-ondevice-ai` and `react-native-ondevice-ai`. + +## Notes + +- The bridge pod in `example/ios/LocanaraLlamaBridge/` is NOT auto-generated (unlike Expo's config plugin); it's checked into the repo +- Flutter requires `:linkage => :static` — do NOT use bare `use_frameworks!` +- The `embed_spm_frameworks` post_install hook modifies `Runner.xcodeproj` — this is expected +- Metal shader warnings from ggml during model loading are harmless +- Test on real devices for on-device AI features diff --git a/.github/workflows/ci-flutter.yml b/.github/workflows/ci-flutter.yml new file mode 100644 index 0000000..075842d --- /dev/null +++ b/.github/workflows/ci-flutter.yml @@ -0,0 +1,38 @@ +name: CI Flutter + +on: + push: + branches: [main] + paths: + - 'libraries/flutter_ondevice_ai/**' + - '.github/workflows/ci-flutter.yml' + pull_request: + branches: [main] + paths: + - 'libraries/flutter_ondevice_ai/**' + - '.github/workflows/ci-flutter.yml' + +jobs: + analyze-and-test: + name: Analyze & Test + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Setup Flutter + uses: subosito/flutter-action@v2 + with: + channel: stable + + - name: Install Dependencies + working-directory: libraries/flutter_ondevice_ai + run: flutter pub get + + - name: Analyze + working-directory: libraries/flutter_ondevice_ai + run: flutter analyze --no-fatal-infos + + - name: Run Tests + working-directory: libraries/flutter_ondevice_ai + run: flutter test diff --git a/.github/workflows/publish-flutter.yml b/.github/workflows/publish-flutter.yml new file mode 100644 index 0000000..00e2cca --- /dev/null +++ b/.github/workflows/publish-flutter.yml @@ -0,0 +1,43 
@@ +name: Publish Flutter + +on: + push: + tags: + - 'flutter-*' + +# Required for pub.dev OIDC authentication +permissions: + id-token: write + +jobs: + publish: + name: Publish to pub.dev + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Setup Flutter + uses: subosito/flutter-action@v2 + with: + channel: stable + + - name: Install Dependencies + working-directory: libraries/flutter_ondevice_ai + run: flutter pub get + + - name: Analyze + working-directory: libraries/flutter_ondevice_ai + run: flutter analyze --no-fatal-infos + + - name: Run Tests + working-directory: libraries/flutter_ondevice_ai + run: flutter test + + - name: Dry Run + working-directory: libraries/flutter_ondevice_ai + run: dart pub publish --dry-run + + - name: Publish + working-directory: libraries/flutter_ondevice_ai + run: dart pub publish --force diff --git a/.gitignore b/.gitignore index 87ffae4..f55894b 100644 --- a/.gitignore +++ b/.gitignore @@ -73,6 +73,12 @@ bun.lockb # Claude Code .claude/settings.local.json +# Flutter/Dart +.dart_tool/ +.packages +pubspec.lock +!libraries/flutter_ondevice_ai/example/pubspec.lock + # Firebase .firebase/ firebase-debug.log diff --git a/.vscode/launch.json b/.vscode/launch.json index 0df8b93..7564ed4 100644 --- a/.vscode/launch.json +++ b/.vscode/launch.json @@ -1,6 +1,13 @@ { "version": "0.2.0", "configurations": [ + { + "type": "node-terminal", + "request": "launch", + "name": "🌍 Run Site", + "command": "bunx convex dev & bun run dev", + "cwd": "${workspaceFolder}/packages/site" + }, { "type": "node-terminal", "request": "launch", @@ -20,16 +27,16 @@ { "type": "node-terminal", "request": "launch", - "name": "📝 GQL: Generate Types", - "command": "bun run generate", - "cwd": "${workspaceFolder}/packages/gql" + "name": "🌐 Web SDK: Dev Server", + "command": "bun run dev", + "cwd": "${workspaceFolder}/packages/web" }, { "type": "node-terminal", "request": "launch", - "name": "🌍 Run Site", - "command": "bunx convex dev & bun 
run dev", - "cwd": "${workspaceFolder}/packages/site" + "name": "📝 GQL: Generate Types", + "command": "bun run generate", + "cwd": "${workspaceFolder}/packages/gql" }, { "type": "node-terminal", @@ -72,6 +79,27 @@ "name": "📱 React Native: Android", "command": "bun android", "cwd": "${workspaceFolder}/libraries/react-native-ondevice-ai/example" + }, + { + "type": "node-terminal", + "request": "launch", + "name": "🦋 Flutter: Run (iOS)", + "command": "flutter run", + "cwd": "${workspaceFolder}/libraries/flutter_ondevice_ai/example" + }, + { + "type": "node-terminal", + "request": "launch", + "name": "🦋 Flutter: Run (Android)", + "command": "flutter run", + "cwd": "${workspaceFolder}/libraries/flutter_ondevice_ai/example" + }, + { + "type": "node-terminal", + "request": "launch", + "name": "🦋 Flutter: Run (Web)", + "command": "flutter run -d chrome", + "cwd": "${workspaceFolder}/libraries/flutter_ondevice_ai/example" } ] } diff --git a/AGENTS.md b/AGENTS.md index 7dec12c..ed45b02 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -65,8 +65,9 @@ locanara-community/ │ ├── gql/ # GraphQL schema definitions │ └── site/ # Website (landing + docs + community) ├── libraries/ # Third-party framework integrations -│ ├── expo-ondevice-ai/ # Expo module -│ └── react-native-ondevice-ai/ # React Native Nitro module +│ ├── expo-ondevice-ai/ # Expo module +│ ├── react-native-ondevice-ai/ # React Native Nitro module +│ └── flutter_ondevice_ai/ # Flutter plugin └── .claude/ ├── commands/ # Slash commands └── guides/ # Project guides @@ -298,7 +299,11 @@ cd packages/android ## Libraries -Third-party framework integrations that use Locanara SDK. +Third-party framework integrations that wrap the Locanara SDK (`packages/`). + +### Source of Truth + +**`packages/` is the source of truth.** Libraries in `libraries/` are thin wrappers that call the SDK. 
When modifying AI behavior (prompts, chains, model management), always change `packages/apple/`, `packages/android/`, or `packages/web/` first — libraries just forward calls to the SDK. ### Available Libraries @@ -306,6 +311,48 @@ Third-party framework integrations that use Locanara SDK. | -------------------------- | ----------- | ------------------------------------------ | | `expo-ondevice-ai` | In Progress | Expo module for on-device AI | | `react-native-ondevice-ai` | In Progress | React Native Nitro module for on-device AI | +| `flutter_ondevice_ai` | In Progress | Flutter plugin for on-device AI | + +### Local Development Workflow + +Libraries depend on the SDK via package managers. During local development: + +- **Android**: Libraries use `mavenLocal()` → user runs `publishToMavenLocal` when SDK changes +- **iOS**: Libraries reference local pod/SPM path +- **Web**: Libraries use local npm link + +**When SDK changes are needed:** + +1. Modify code in `packages/apple/`, `packages/android/`, or `packages/web/` +2. Bump version in `locanara-versions.json` if API changed +3. User handles local publishing (mavenLocal, etc.) — **AI agents must NEVER publish** +4. Rebuild library example to verify + +### API Parity Across Libraries + +All three libraries **MUST** expose identical public APIs. 
When modifying one library, update the others: + +| Function | All libraries must expose | +| ------------------------------------- | ----------------------------------- | +| `initialize()` | Initialization result | +| `getDeviceCapability()` | Device capability info | +| `summarize(text, options?)` | Summarize result | +| `classify(text, options?)` | Classify result | +| `extract(text, options?)` | Extract result | +| `chat(message, options?)` | Chat result | +| `chatStream(message, options?)` | Chat result with streaming callback | +| `translate(text, options)` | Translate result | +| `rewrite(text, options)` | Rewrite result | +| `proofread(text, options?)` | Proofread result | +| `getAvailableModels()` | List of downloadable models | +| `getDownloadedModels()` | List of downloaded model IDs | +| `getLoadedModel()` | Currently loaded model ID or null | +| `getCurrentEngine()` | Active inference engine | +| `downloadModel(id, onProgress?)` | Download result with progress | +| `loadModel(id)` | Load result | +| `deleteModel(id)` | Delete result | +| `getPromptApiStatus()` | Prompt API status string | +| `downloadPromptApiModel(onProgress?)` | Download result with progress | ### expo-ondevice-ai @@ -348,6 +395,24 @@ bun run test # Run tests - `nitrogen/generated/` - Auto-generated bridge code (do not edit) - `nitro.json` - Nitro module configuration +### flutter_ondevice_ai + +Flutter plugin wrapping Locanara SDK. Supports iOS, Android, and Web. 
+ +```bash +cd libraries/flutter_ondevice_ai +flutter pub get +flutter analyze +flutter test +``` + +**Structure follows flutter_inapp_purchase pattern:** + +- `lib/src/` - Dart source (plugin, types, web implementation) +- `android/` - Kotlin MethodChannel + EventChannel plugin +- `ios/` - Swift FlutterPlugin +- `example/` - Example Flutter app + ## Nitro Module Development (react-native-ondevice-ai) ### CRITICAL: Spec-First Development @@ -408,29 +473,7 @@ Follow this exact order — **never skip a step**: ### API Parity Checklist -The `react-native-ondevice-ai` public API **MUST** be identical to `expo-ondevice-ai`. When modifying either library, update both: - -| Function | Both libraries must expose | -| ------------------------------------- | --------------------------------------------- | -| `initialize()` | `Promise` | -| `getDeviceCapability()` | `Promise` | -| `summarize(text, options?)` | `Promise` | -| `classify(text, options?)` | `Promise` | -| `extract(text, options?)` | `Promise` | -| `chat(message, options?)` | `Promise` | -| `chatStream(message, options?)` | `Promise` with `onChunk` callback | -| `translate(text, options)` | `Promise` | -| `rewrite(text, options)` | `Promise` | -| `proofread(text, options?)` | `Promise` | -| `getAvailableModels()` | `Promise` | -| `getDownloadedModels()` | `Promise` | -| `getLoadedModel()` | `Promise` | -| `getCurrentEngine()` | `Promise` | -| `downloadModel(id, onProgress?)` | `Promise` | -| `loadModel(id)` | `Promise` | -| `deleteModel(id)` | `Promise` | -| `getPromptApiStatus()` | `Promise` | -| `downloadPromptApiModel(onProgress?)` | `Promise` | +All three libraries (`expo-ondevice-ai`, `react-native-ondevice-ai`, `flutter_ondevice_ai`) **MUST** expose identical APIs. See the **Libraries > API Parity Across Libraries** section for the full table. 
## Publishing & Deployment (STRICTLY FORBIDDEN) diff --git a/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiModule.kt b/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiModule.kt index 7c489eb..bc482fb 100644 --- a/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiModule.kt +++ b/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiModule.kt @@ -1,5 +1,7 @@ package expo.modules.ondeviceai +import android.app.ActivityManager +import android.content.Context import com.locanara.Locanara import com.locanara.Platform import com.locanara.builtin.ChatChain @@ -10,6 +12,7 @@ import com.locanara.builtin.RewriteChain import com.locanara.builtin.SummarizeChain import com.locanara.builtin.TranslateChain import com.locanara.core.LocanaraDefaults +import com.locanara.engine.ModelRegistry import com.locanara.mlkit.PromptApiStatus import com.locanara.platform.PromptApiModel import expo.modules.kotlin.Promise @@ -32,6 +35,17 @@ class ExpoOndeviceAiModule : Module() { private val job = SupervisorJob() private val scope = CoroutineScope(job + Dispatchers.Main) + // Simulated model state (matches native example behavior) + private val downloadedModelIds = mutableSetOf<String>() + private var loadedModelId: String?
= null + + private fun getDeviceMemoryMB(context: Context): Int { + val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager + val memInfo = ActivityManager.MemoryInfo() + am.getMemoryInfo(memInfo) + return (memInfo.totalMem / (1024 * 1024)).toInt() + } + override fun definition() = ModuleDefinition { Name("ExpoOndeviceAi") @@ -45,21 +59,29 @@ class ExpoOndeviceAiModule : Module() { // MARK: - Model Management AsyncFunction("getAvailableModels") { promise: Promise -> - // Android uses Prompt API (Gemini Nano) — no external downloadable models - promise.resolve(emptyList<Map<String, Any?>>()) + val context = appContext.reactContext?.applicationContext + ?: throw IllegalStateException("React context is not available") + val memoryMB = getDeviceMemoryMB(context) + val models = ModelRegistry.getCompatibleModels(memoryMB) + promise.resolve(models.map { ExpoOndeviceAiSerialization.modelInfo(it) }) } AsyncFunction("getDownloadedModels") { promise: Promise -> - promise.resolve(emptyList<String>()) + promise.resolve(downloadedModelIds.toList()) } AsyncFunction("getLoadedModel") { promise: Promise -> - promise.resolve(null) + promise.resolve(loadedModelId) } AsyncFunction("getCurrentEngine") { promise: Promise -> val status = locanara.getPromptApiStatus() - val engine = if (status is PromptApiStatus.Available) "prompt_api" else "none" + val engine = when (status) { + is PromptApiStatus.Available, + is PromptApiStatus.Downloadable, + is PromptApiStatus.Downloading -> "prompt_api" + else -> "none" + } promise.resolve(engine) } @@ -113,21 +135,32 @@ class ExpoOndeviceAiModule : Module() { } } - AsyncFunction("downloadModel") { _: String, promise: Promise -> - // Android doesn't support external model downloads - promise.reject( - "ERR_NOT_SUPPORTED", - "Model downloads are not supported on Android. 
Use downloadPromptApiModel() instead.", - null, - ) + AsyncFunction("downloadModel") { modelId: String, promise: Promise -> + val model = ModelRegistry.getModel(modelId) + if (model == null) { + promise.reject("ERR_NOT_FOUND", "Model not found: $modelId", null) + return@AsyncFunction + } + android.util.Log.d("ExpoOndeviceAi", "downloadModel: $modelId (${model.name}, ${model.sizeMB}MB) — simulated") + downloadedModelIds.add(modelId) + promise.resolve(true) } - AsyncFunction("loadModel") { _: String, promise: Promise -> - promise.reject("ERR_NOT_SUPPORTED", "Model loading is not supported on Android.", null) + AsyncFunction("loadModel") { modelId: String, promise: Promise -> + if (!downloadedModelIds.contains(modelId)) { + promise.reject("ERR_NOT_DOWNLOADED", "Model not downloaded: $modelId", null) + return@AsyncFunction + } + android.util.Log.d("ExpoOndeviceAi", "loadModel: $modelId — simulated") + loadedModelId = modelId + promise.resolve(null) } - AsyncFunction("deleteModel") { _: String, promise: Promise -> - promise.reject("ERR_NOT_SUPPORTED", "Model deletion is not supported on Android.", null) + AsyncFunction("deleteModel") { modelId: String, promise: Promise -> + android.util.Log.d("ExpoOndeviceAi", "deleteModel: $modelId — simulated") + downloadedModelIds.remove(modelId) + if (loadedModelId == modelId) loadedModelId = null + promise.resolve(null) } AsyncFunction("initialize") { promise: Promise -> diff --git a/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiSerialization.kt b/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiSerialization.kt index 6de755b..08d588b 100644 --- a/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiSerialization.kt +++ b/libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiSerialization.kt @@ -25,6 +25,22 @@ object ExpoOndeviceAiSerialization { // endregion + // region Model Info + + 
fun modelInfo(m: DownloadableModelInfo): Map = + mapOf( + "modelId" to m.modelId, + "name" to m.name, + "version" to m.version, + "sizeMB" to m.sizeMB, + "quantization" to m.quantization.name, + "contextLength" to m.contextLength, + "minMemoryMB" to m.minMemoryMB, + "isMultimodal" to false, + ) + + // endregion + // region Result Serializers fun summarize(r: SummarizeResult): Map = diff --git a/libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ClassifyDemo.tsx b/libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ClassifyDemo.tsx index 8377dec..84ee2bf 100644 --- a/libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ClassifyDemo.tsx +++ b/libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ClassifyDemo.tsx @@ -19,18 +19,41 @@ import {DebugLogPanel, type DebugLog} from '../../shared/DebugLogPanel'; const DEFAULT_INPUT = 'The new iPhone features an incredible camera system with advanced computational photography.'; -const DEFAULT_CATEGORIES = - 'Technology, Sports, Entertainment, Business, Health'; +const DEFAULT_CATEGORIES = [ + 'Technology', + 'Sports', + 'Entertainment', + 'Business', + 'Health', +]; export function ClassifyDemo() { const {isModelReady} = useAppState(); const [inputText, setInputText] = useState(DEFAULT_INPUT); - const [categories, setCategories] = useState(DEFAULT_CATEGORIES); + const [selectedCategories, setSelectedCategories] = useState([ + ...DEFAULT_CATEGORIES, + ]); + const [customCategory, setCustomCategory] = useState(''); const [result, setResult] = useState(null); const [isLoading, setIsLoading] = useState(false); const [errorMessage, setErrorMessage] = useState(null); const [debugLog, setDebugLog] = useState(null); + const toggleCategory = (category: string) => { + setSelectedCategories((prev) => + prev.includes(category) + ? 
prev.filter((c) => c !== category) + : [...prev, category], + ); + }; + + const addCustomCategory = () => { + const trimmed = customCategory.trim(); + if (!trimmed || selectedCategories.includes(trimmed)) return; + setSelectedCategories((prev) => [...prev, trimmed]); + setCustomCategory(''); + }; + const executeClassify = async () => { setIsLoading(true); setErrorMessage(null); @@ -38,12 +61,7 @@ export function ClassifyDemo() { const start = Date.now(); try { - const categoryList = categories - .split(',') - .map((c) => c.trim()) - .filter(Boolean); - - const options = {categories: categoryList}; + const options = {categories: selectedCategories}; console.log('[DEBUG] classify request:', JSON.stringify(options)); const classifyResult = await classify(inputText, options); console.log('[DEBUG] classify response:', JSON.stringify(classifyResult)); @@ -72,6 +90,57 @@ export function ClassifyDemo() { {!isModelReady && } + + Categories + + {DEFAULT_CATEGORIES.map((category) => { + const selected = selectedCategories.includes(category); + return ( + toggleCategory(category)} + > + {selected && } + + {category} + + + ); + })} + + + + + {customCategory.trim() ? ( + + Add + + ) : null} + + + {selectedCategories.length > 0 && ( + + Selected: {selectedCategories.join(', ')} + + )} + + Text to Classify - - Categories (comma-separated) - - - {isLoading ? 
: null} @@ -174,19 +241,69 @@ const styles = StyleSheet.create({ color: '#000', marginBottom: 8, }, - textInput: { + chipContainer: { + flexDirection: 'row', + flexWrap: 'wrap', + gap: 8, + }, + chip: { + flexDirection: 'row', + alignItems: 'center', + paddingHorizontal: 14, + paddingVertical: 8, + borderRadius: 20, + backgroundColor: 'rgba(0, 0, 0, 0.05)', + }, + chipSelected: { + backgroundColor: 'rgba(0, 122, 255, 0.12)', + }, + chipCheck: { + fontSize: 13, + color: '#007AFF', + fontWeight: '600', + }, + chipText: { + fontSize: 14, + color: '#333', + }, + chipTextSelected: { + color: '#007AFF', + fontWeight: '600', + }, + customRow: { + flexDirection: 'row', + alignItems: 'center', + marginTop: 12, + gap: 8, + }, + customInput: { + flex: 1, backgroundColor: 'rgba(0, 0, 0, 0.05)', borderRadius: 8, - padding: 12, + padding: 10, fontSize: 15, - minHeight: 100, color: '#000', }, - categoryInput: { + addButton: { + paddingHorizontal: 12, + paddingVertical: 10, + }, + addButtonText: { + fontSize: 15, + fontWeight: '600', + color: '#007AFF', + }, + selectedText: { + fontSize: 13, + color: '#007AFF', + marginTop: 8, + }, + textInput: { backgroundColor: 'rgba(0, 0, 0, 0.05)', borderRadius: 8, padding: 12, fontSize: 15, + minHeight: 100, color: '#000', }, button: { diff --git a/libraries/expo-ondevice-ai/example/components/shared/ModelSelectionSheet.tsx b/libraries/expo-ondevice-ai/example/components/shared/ModelSelectionSheet.tsx index c8a132b..d6b71da 100644 --- a/libraries/expo-ondevice-ai/example/components/shared/ModelSelectionSheet.tsx +++ b/libraries/expo-ondevice-ai/example/components/shared/ModelSelectionSheet.tsx @@ -178,8 +178,8 @@ export function ModelSelectionSheet({ - {/* Downloadable Models (iOS only) */} - {Platform.OS === 'ios' && modelState.availableModels.length > 0 && ( + {/* Downloadable Models */} + {modelState.availableModels.length > 0 && ( Available Models {modelState.availableModels.map((model) => ( diff --git 
a/libraries/flutter_ondevice_ai/CHANGELOG.md b/libraries/flutter_ondevice_ai/CHANGELOG.md
new file mode 100644
index 0000000..122072e
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/CHANGELOG.md
@@ -0,0 +1,7 @@
+## 0.1.0
+
+- Initial release
+- Support for iOS (Apple Intelligence), Android (Gemini Nano), Web (Chrome Built-in AI)
+- AI features: summarize, classify, extract, chat, chatStream, translate, rewrite, proofread
+- Model management: download, load, delete models (iOS)
+- Prompt API support (Android)
diff --git a/libraries/flutter_ondevice_ai/LICENSE b/libraries/flutter_ondevice_ai/LICENSE
new file mode 100644
index 0000000..3e2ad68
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 hyodotdev
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
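The `ModelSelectionSheet` change above drops the `Platform.OS === 'ios'` gate, so downloadable models now render on every platform. A minimal TypeScript sketch of the per-row state derivation such a sheet needs, combining `getAvailableModels`, `getDownloadedModels`, and `getLoadedModel` results (the `ModelInfo` shape mirrors the `modelInfo` serializer in this diff; `modelRowStates` and `ModelRowState` are illustrative names, not the library's API):

```typescript
// Illustrative sketch — derives a display state for each model row.
type ModelRowState = 'loaded' | 'downloaded' | 'available';

interface ModelInfo {
  modelId: string;
  name: string;
  sizeMB: number;
}

function modelRowStates(
  available: ModelInfo[],
  downloadedIds: string[],
  loadedId: string | null,
): Array<ModelInfo & {state: ModelRowState}> {
  return available.map((m) => ({
    ...m,
    state:
      m.modelId === loadedId
        ? 'loaded'
        : downloadedIds.includes(m.modelId)
          ? 'downloaded'
          : 'available',
  }));
}
```

A sheet can then render "Load" for `downloaded` rows, "Download" for `available` rows, and a checkmark for the `loaded` row, which matches the download → load → use flow the native handlers implement.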
diff --git a/libraries/flutter_ondevice_ai/README.md b/libraries/flutter_ondevice_ai/README.md
new file mode 100644
index 0000000..5077417
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/README.md
@@ -0,0 +1,34 @@
+# flutter_ondevice_ai
+
+Flutter plugin for on-device AI using [Locanara SDK](https://locanara.com).
+
+Supports iOS (Apple Intelligence / llama.cpp), Android (Gemini Nano), and Web (Chrome Built-in AI) from a single Dart API.
+
+## Installation
+
+```bash
+flutter pub add flutter_ondevice_ai
+```
+
+## Quick Start
+
+```dart
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+final ai = FlutterOndeviceAi.instance;
+await ai.initialize();
+
+final capability = await ai.getDeviceCapability();
+if (capability.isSupported) {
+  final result = await ai.summarize('Long text to summarize...');
+  print(result.summary);
+}
+```
+
+## Documentation
+
+Full documentation at [locanara.com/docs/libraries/flutter](https://locanara.com/docs/libraries/flutter)
+
+## License
+
+MIT
diff --git a/libraries/flutter_ondevice_ai/analysis_options.yaml b/libraries/flutter_ondevice_ai/analysis_options.yaml
new file mode 100644
index 0000000..0aac023
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/analysis_options.yaml
@@ -0,0 +1,8 @@
+include: package:flutter_lints/flutter.yaml
+
+linter:
+  rules:
+    prefer_const_constructors: true
+    prefer_const_declarations: true
+    avoid_print: true
+    prefer_single_quotes: true
diff --git a/libraries/flutter_ondevice_ai/android/build.gradle b/libraries/flutter_ondevice_ai/android/build.gradle
new file mode 100644
index 0000000..3db7c96
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/build.gradle
@@ -0,0 +1,67 @@
+group 'dev.hyodot.flutter_ondevice_ai'
+version '0.1.0'
+
+buildscript {
+    ext.kotlin_version = '2.0.21'
+    repositories {
+        google()
+        mavenCentral()
+    }
+
+    dependencies {
+        classpath 'com.android.tools.build:gradle:8.1.4'
+        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
+    }
+}
+
+// Read Locanara version from locanara-versions.json (Single Source of Truth)
+def getLocanaraVersion() {
+    def candidates = [
+        new File(project.projectDir, "../../../locanara-versions.json"),
+        new File(rootProject.projectDir, "../../../locanara-versions.json"),
+        new File(rootProject.projectDir, "../../../../locanara-versions.json"),
+    ]
+    for (candidate in candidates) {
+        if (candidate.exists()) {
+            def versionJson = candidate.text
+            def matcher = (versionJson =~ /"android"\s*:\s*"([^"]+)"/)
+            if (matcher) return matcher[0][1]
+        }
+    }
+    return "1.1.0"
+}
+
+apply plugin: 'com.android.library'
+apply plugin: 'kotlin-android'
+
+android {
+    namespace "dev.hyodot.flutter_ondevice_ai"
+    compileSdk 35
+
+    defaultConfig {
+        minSdk 31
+        targetSdk 35
+        versionCode 1
+        versionName "0.1.0"
+    }
+
+    compileOptions {
+        sourceCompatibility JavaVersion.VERSION_17
+        targetCompatibility JavaVersion.VERSION_17
+    }
+
+    kotlinOptions {
+        jvmTarget = "17"
+        freeCompilerArgs += ["-Xskip-metadata-version-check"]
+    }
+
+    lintOptions {
+        abortOnError false
+    }
+}
+
+dependencies {
+    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
+    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3"
+    implementation "com.locanara:locanara:${getLocanaraVersion()}"
+}
diff --git a/libraries/flutter_ondevice_ai/android/settings.gradle b/libraries/flutter_ondevice_ai/android/settings.gradle
new file mode 100644
index 0000000..2862f33
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/settings.gradle
@@ -0,0 +1 @@
+rootProject.name = 'flutter_ondevice_ai'
diff --git a/libraries/flutter_ondevice_ai/android/src/main/AndroidManifest.xml b/libraries/flutter_ondevice_ai/android/src/main/AndroidManifest.xml
new file mode 100644
index 0000000..0943cc7
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/src/main/AndroidManifest.xml
@@ -0,0 +1,4 @@
+
+
+
diff --git
a/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/ExecuTorchModelWrapper.kt b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/ExecuTorchModelWrapper.kt
new file mode 100644
index 0000000..a360fa5
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/ExecuTorchModelWrapper.kt
@@ -0,0 +1,69 @@
+package dev.hyodot.flutter_ondevice_ai
+
+import com.locanara.core.GenerationConfig
+import com.locanara.core.LocanaraModel
+import com.locanara.core.ModelResponse
+import com.locanara.engine.ExecuTorchEngine
+import com.locanara.engine.InferenceConfig
+import com.locanara.engine.PromptBuilder
+import kotlinx.coroutines.flow.Flow
+import kotlinx.coroutines.flow.flow
+
+/**
+ * Wraps [ExecuTorchEngine] as a [LocanaraModel] so the built-in chains
+ * (SummarizeChain, ChatChain, etc.) can use the loaded ExecuTorch model
+ * via [com.locanara.core.LocanaraDefaults.model].
+ */
+class ExecuTorchModelWrapper(
+    private val engine: ExecuTorchEngine
+) : LocanaraModel {
+
+    override val name: String = "ExecuTorch (${engine.engineName})"
+    override val isReady: Boolean get() = engine.isLoaded
+    override val maxContextTokens: Int = 8192
+
+    private val template: PromptBuilder.ModelTemplate = engine.getPromptTemplate()
+
+    override suspend fun generate(prompt: String, config: GenerationConfig?): ModelResponse {
+        val startTime = System.currentTimeMillis()
+
+        val formatted = PromptBuilder.buildChatPrompt(
+            listOf(PromptBuilder.ChatMessage(PromptBuilder.ChatRole.USER, prompt)),
+            template
+        )
+
+        val inferenceConfig = InferenceConfig(
+            temperature = config?.temperature ?: 0.7f,
+            topK = config?.topK ?: 40,
+            maxTokens = config?.maxTokens ?: 2048
+        )
+
+        val text = engine.generate(formatted, inferenceConfig)
+
+        return ModelResponse(
+            text = text,
+            processingTimeMs = (System.currentTimeMillis() - startTime).toInt()
+        )
+    }
+
+    override fun stream(prompt: String,
config: GenerationConfig?): Flow = flow { + val formatted = PromptBuilder.buildChatPrompt( + listOf(PromptBuilder.ChatMessage(PromptBuilder.ChatRole.USER, prompt)), + template + ) + + val inferenceConfig = InferenceConfig( + temperature = config?.temperature ?: 0.7f, + topK = config?.topK ?: 40, + maxTokens = config?.maxTokens ?: 2048 + ) + + engine.generateStreaming(formatted, inferenceConfig).collect { token -> + emit(token) + } + } + + fun unload() { + engine.unload() + } +} diff --git a/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiHelper.kt b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiHelper.kt new file mode 100644 index 0000000..f74aacc --- /dev/null +++ b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiHelper.kt @@ -0,0 +1,120 @@ +package dev.hyodot.flutter_ondevice_ai + +import com.locanara.RewriteOutputType +import com.locanara.composable.Memory +import com.locanara.composable.MemoryEntry +import com.locanara.core.ChainInput +import com.locanara.core.ChainOutput + +/** Decodes Flutter options maps into chain constructor parameters */ +object FlutterOndeviceAiHelper { + // region Summarize + + fun bulletCount(options: Map?): Int { + val outputType = options?.get("outputType") as? String + return when (outputType) { + "TWO_BULLETS" -> 2 + "THREE_BULLETS" -> 3 + else -> 1 + } + } + + fun inputType(options: Map?): String = + when (options?.get("inputType") as? String) { + "CONVERSATION" -> "conversation" + else -> "text" + } + + // endregion + + // region Classify + + fun classifyOptions(options: Map?): Pair, Int> { + @Suppress("UNCHECKED_CAST") + val categories = + (options?.get("categories") as? List) + ?: listOf("positive", "negative", "neutral") + val maxResults = (options?.get("maxResults") as? 
Number)?.toInt() ?: 3 + return Pair(categories, maxResults) + } + + // endregion + + // region Extract + + fun entityTypes(options: Map?): List { + @Suppress("UNCHECKED_CAST") + return (options?.get("entityTypes") as? List) + ?: listOf("person", "location", "date", "organization") + } + + // endregion + + // region Chat + + @Suppress("UNCHECKED_CAST") + fun chatOptions(options: Map?): Pair { + val systemPrompt = + (options?.get("systemPrompt") as? String) + ?: "You are a friendly, helpful assistant." + + val historyArray = options?.get("history") as? List> + val memory: Memory? = + if (!historyArray.isNullOrEmpty()) { + PrefilledMemory(historyArray) + } else { + null + } + + return Pair(systemPrompt, memory) + } + + // endregion + + // region Translate + + fun translateOptions(options: Map?): Pair { + val source = (options?.get("sourceLanguage") as? String) ?: "en" + val target = (options?.get("targetLanguage") as? String) ?: "en" + return Pair(source, target) + } + + // endregion + + // region Rewrite + + fun rewriteStyle(options: Map?): RewriteOutputType { + val outputType = options?.get("outputType") as? String + return outputType?.let { + runCatching { RewriteOutputType.valueOf(it) }.getOrNull() + } ?: RewriteOutputType.REPHRASE + } + + // endregion +} + +/** + * Memory adapter that provides pre-filled chat history from Flutter. 
+ */
+private class PrefilledMemory(
+    history: List<Map<String, String>>,
+) : Memory {
+    private val entries: List<MemoryEntry> =
+        history.mapNotNull { msg ->
+            val role = msg["role"] ?: return@mapNotNull null
+            val content = msg["content"] ?: return@mapNotNull null
+            MemoryEntry(role = role, content = content)
+        }
+
+    override suspend fun load(input: ChainInput): List<MemoryEntry> = entries
+
+    override suspend fun save(
+        input: ChainInput,
+        output: ChainOutput,
+    ) { }
+
+    override suspend fun clear() { }
+
+    override val estimatedTokenCount: Int
+        get() = entries.sumOf { it.content.length / 4 }
+}
diff --git a/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
new file mode 100644
index 0000000..38c9615
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
@@ -0,0 +1,711 @@
+package dev.hyodot.flutter_ondevice_ai
+
+import android.app.ActivityManager
+import android.content.Context
+import com.locanara.Locanara
+import com.locanara.Platform
+import com.locanara.builtin.ChatChain
+import com.locanara.builtin.ClassifyChain
+import com.locanara.builtin.ExtractChain
+import com.locanara.builtin.ProofreadChain
+import com.locanara.builtin.RewriteChain
+import com.locanara.builtin.SummarizeChain
+import com.locanara.builtin.TranslateChain
+import com.locanara.core.LocanaraDefaults
+import com.locanara.engine.ExecuTorchEngine
+import com.locanara.engine.ModelRegistry
+import com.locanara.mlkit.PromptApiStatus
+import com.locanara.platform.PromptApiModel
+import io.flutter.embedding.engine.plugins.FlutterPlugin
+import io.flutter.plugin.common.EventChannel
+import io.flutter.plugin.common.MethodCall
+import io.flutter.plugin.common.MethodChannel
+import io.flutter.plugin.common.MethodChannel.MethodCallHandler
+import io.flutter.plugin.common.MethodChannel.Result
+import kotlinx.coroutines.CoroutineScope +import kotlinx.coroutines.Dispatchers +import kotlinx.coroutines.SupervisorJob +import kotlinx.coroutines.cancel +import kotlinx.coroutines.flow.collect +import kotlinx.coroutines.launch +import kotlinx.coroutines.withContext +import kotlinx.coroutines.withTimeout +import java.io.File +import java.net.HttpURLConnection +import java.net.URL + +private const val TAG = "[FlutterOndeviceAi]" + +class FlutterOndeviceAiPlugin : FlutterPlugin, MethodCallHandler { + private var channel: MethodChannel? = null + private var chatStreamChannel: EventChannel? = null + private var downloadProgressChannel: EventChannel? = null + private var chatStreamSink: EventChannel.EventSink? = null + private var downloadProgressSink: EventChannel.EventSink? = null + private var context: Context? = null + private val job = SupervisorJob() + private val scope = CoroutineScope(job + Dispatchers.Main) + + private val locanara: Locanara by lazy { + Locanara.getInstance(context!!) + } + + // Model management state + private var loadedModelId: String? = null + private var activeModelWrapper: ExecuTorchModelWrapper? = null + private var promptApiModel: PromptApiModel? = null // saved reference for switchToDeviceAI + + /** + * Override download URLs for models whose SDK-bundled URLs require authentication. + * The SDK's ModelRegistry is compiled into the Maven artifact so we patch URLs here. 
+ */ + private data class ModelURLOverride( + val downloadURL: String, + val tokenizerURL: String, + val sizeMB: Int + ) + + private val modelURLOverrides = mapOf( + "llama-3.2-3b-instruct" to ModelURLOverride( + downloadURL = "https://huggingface.co/software-mansion/react-native-executorch-llama-3.2/resolve/main/llama-3.2-3B/spinquant/llama3_2_3B_spinquant.pte", + tokenizerURL = "https://huggingface.co/executorch-community/Llama-3.2-1B-ET/resolve/main/tokenizer.model", + sizeMB = 2550 + ) + ) + + /** Directory where downloaded models are stored */ + private fun modelsDir(): File = File(context!!.filesDir, "locanara/models") + + /** Directory for a specific model */ + private fun modelDir(modelId: String): File = File(modelsDir(), modelId) + + /** Check which models have been downloaded to disk */ + private fun getDownloadedModelIdsFromDisk(): List { + val dir = modelsDir() + if (!dir.exists()) return emptyList() + return dir.listFiles() + ?.filter { it.isDirectory && File(it, "model.pte").exists() } + ?.map { it.name } + ?: emptyList() + } + + private fun getDeviceMemoryMB(): Int { + val am = context!!.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager + val memInfo = ActivityManager.MemoryInfo() + am.getMemoryInfo(memInfo) + return (memInfo.totalMem / (1024 * 1024)).toInt() + } + + override fun onAttachedToEngine(binding: FlutterPlugin.FlutterPluginBinding) { + context = binding.applicationContext + + channel = MethodChannel(binding.binaryMessenger, "flutter_ondevice_ai") + channel?.setMethodCallHandler(this) + + chatStreamChannel = EventChannel(binding.binaryMessenger, "flutter_ondevice_ai/chat_stream") + chatStreamChannel?.setStreamHandler(object : EventChannel.StreamHandler { + override fun onListen(arguments: Any?, events: EventChannel.EventSink?) { + chatStreamSink = events + } + override fun onCancel(arguments: Any?) 
{ + chatStreamSink = null + } + }) + + downloadProgressChannel = EventChannel(binding.binaryMessenger, "flutter_ondevice_ai/model_download_progress") + downloadProgressChannel?.setStreamHandler(object : EventChannel.StreamHandler { + override fun onListen(arguments: Any?, events: EventChannel.EventSink?) { + downloadProgressSink = events + } + override fun onCancel(arguments: Any?) { + downloadProgressSink = null + } + }) + } + + override fun onDetachedFromEngine(binding: FlutterPlugin.FlutterPluginBinding) { + activeModelWrapper?.unload() + activeModelWrapper = null + loadedModelId = null + channel?.setMethodCallHandler(null) + channel = null + chatStreamChannel?.setStreamHandler(null) + chatStreamChannel = null + downloadProgressChannel?.setStreamHandler(null) + downloadProgressChannel = null + job.cancel("Plugin detached") + } + + override fun onMethodCall(call: MethodCall, result: Result) { + scope.launch { + handleMethodCall(call, result) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleMethodCall(call: MethodCall, result: Result) { + when (call.method) { + "initialize" -> handleInitialize(result) + "getDeviceCapability" -> handleGetDeviceCapability(result) + "summarize" -> handleSummarize(call, result) + "classify" -> handleClassify(call, result) + "extract" -> handleExtract(call, result) + "chat" -> handleChat(call, result) + "chatStream" -> handleChatStream(call, result) + "translate" -> handleTranslate(call, result) + "rewrite" -> handleRewrite(call, result) + "proofread" -> handleProofread(call, result) + "getAvailableModels" -> handleGetAvailableModels(result) + "getDownloadedModels" -> handleGetDownloadedModels(result) + "getLoadedModel" -> handleGetLoadedModel(result) + "getCurrentEngine" -> handleGetCurrentEngine(result) + "downloadModel" -> handleDownloadModel(call, result) + "loadModel" -> handleLoadModel(call, result) + "deleteModel" -> handleDeleteModel(call, result) + "getPromptApiStatus" -> handleGetPromptApiStatus(result) + 
"downloadPromptApiModel" -> handleDownloadPromptApiModel(result) + "switchToDeviceAI" -> handleSwitchToDeviceAI(result) + else -> result.notImplemented() + } + } + + // region Core + + private suspend fun handleInitialize(result: Result) { + try { + withTimeout(30_000L) { + locanara.initializeSDK(Platform.ANDROID) + } + val appContext = context ?: throw IllegalStateException("Context is not available") + val model = PromptApiModel(appContext) + promptApiModel = model + LocanaraDefaults.model = model + result.success(mapOf("success" to true)) + } catch (e: kotlinx.coroutines.TimeoutCancellationException) { + android.util.Log.e(TAG, "Initialize timed out after 30s", e) + result.error("ERR_INITIALIZE", "Initialization timed out. The device may not support on-device AI.", null) + } catch (e: Exception) { + android.util.Log.e(TAG, "Initialize failed", e) + result.error("ERR_INITIALIZE", e.message, null) + } + } + + private suspend fun handleGetDeviceCapability(result: Result) { + try { + val capability = locanara.getDeviceCapability() + result.success(FlutterOndeviceAiSerialization.deviceCapability(capability)) + } catch (e: Exception) { + result.error("ERR_DEVICE_CAPABILITY", e.message, null) + } + } + + // endregion + + // region AI Features + + @Suppress("UNCHECKED_CAST") + private suspend fun handleSummarize(call: MethodCall, result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val options = call.argument>("options") + val bulletCount = FlutterOndeviceAiHelper.bulletCount(options) + val inputType = FlutterOndeviceAiHelper.inputType(options) + val r = SummarizeChain(bulletCount = bulletCount, inputType = inputType).run(text) + result.success(FlutterOndeviceAiSerialization.summarize(r)) + } catch (e: Exception) { + android.util.Log.e(TAG, "summarize failed", e) + result.error("ERR_SUMMARIZE", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleClassify(call: MethodCall, 
result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val options = call.argument>("options") + val (categories, maxResults) = FlutterOndeviceAiHelper.classifyOptions(options) + val r = ClassifyChain(categories = categories, maxResults = maxResults).run(text) + result.success(FlutterOndeviceAiSerialization.classify(r)) + } catch (e: Exception) { + result.error("ERR_CLASSIFY", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleExtract(call: MethodCall, result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val options = call.argument>("options") + val entityTypes = FlutterOndeviceAiHelper.entityTypes(options) + val r = ExtractChain(entityTypes = entityTypes).run(text) + result.success(FlutterOndeviceAiSerialization.extract(r)) + } catch (e: Exception) { + result.error("ERR_EXTRACT", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleChat(call: MethodCall, result: Result) { + try { + val message = call.argument("message") ?: throw IllegalArgumentException("message is required") + val options = call.argument>("options") + val (systemPrompt, memory) = FlutterOndeviceAiHelper.chatOptions(options) + val r = ChatChain(memory = memory, systemPrompt = systemPrompt).run(message) + result.success(FlutterOndeviceAiSerialization.chat(r)) + } catch (e: Exception) { + result.error("ERR_CHAT", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleChatStream(call: MethodCall, result: Result) { + try { + val message = call.argument("message") ?: throw IllegalArgumentException("message is required") + val options = call.argument>("options") + val (systemPrompt, memory) = FlutterOndeviceAiHelper.chatOptions(options) + val chain = ChatChain(memory = memory, systemPrompt = systemPrompt) + var accumulated = "" + + chain.streamRun(message).collect { chunk -> + 
accumulated += chunk + chatStreamSink?.success( + mapOf( + "delta" to chunk, + "accumulated" to accumulated, + "isFinal" to false, + "conversationId" to null, + ) + ) + } + + chatStreamSink?.success( + mapOf( + "delta" to "", + "accumulated" to accumulated, + "isFinal" to true, + "conversationId" to null, + ) + ) + + result.success( + mapOf( + "message" to accumulated, + "conversationId" to null, + "canContinue" to true, + ) + ) + } catch (e: Exception) { + result.error("ERR_CHAT_STREAM", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleTranslate(call: MethodCall, result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val options = call.argument>("options") + val (source, target) = FlutterOndeviceAiHelper.translateOptions(options) + val r = TranslateChain(sourceLanguage = source, targetLanguage = target).run(text) + result.success(FlutterOndeviceAiSerialization.translate(r)) + } catch (e: Exception) { + result.error("ERR_TRANSLATE", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleRewrite(call: MethodCall, result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val options = call.argument>("options") + val style = FlutterOndeviceAiHelper.rewriteStyle(options) + val r = RewriteChain(style = style).run(text) + result.success(FlutterOndeviceAiSerialization.rewrite(r)) + } catch (e: Exception) { + result.error("ERR_REWRITE", e.message, null) + } + } + + @Suppress("UNCHECKED_CAST") + private suspend fun handleProofread(call: MethodCall, result: Result) { + try { + val text = call.argument("text") ?: throw IllegalArgumentException("text is required") + val r = ProofreadChain().run(text) + result.success(FlutterOndeviceAiSerialization.proofread(r)) + } catch (e: Exception) { + result.error("ERR_PROOFREAD", e.message, null) + } + } + + // endregion + + // region Model Management + 
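The `handleChatStream` handler above emits `{delta, accumulated, isFinal, conversationId}` events over the `flutter_ondevice_ai/chat_stream` EventChannel. A small TypeScript sketch of how a consumer could fold those events back into the final message while cross-checking the native side's running total (the event shape is taken from the Kotlin above; `foldChatStream` is an illustrative helper, not part of any published API):

```typescript
// Event shape mirroring the maps sent through chatStreamSink above.
interface ChatStreamEvent {
  delta: string;
  accumulated: string;
  isFinal: boolean;
  conversationId?: string | null;
}

// Fold the deltas into the final text; each non-final event also carries
// the native side's accumulated string, so the two must stay in sync.
function foldChatStream(events: ChatStreamEvent[]): string {
  let text = '';
  for (const e of events) {
    text += e.delta;
    if (!e.isFinal && e.accumulated !== text) {
      throw new Error('accumulated mismatch');
    }
  }
  return text;
}
```

The final event carries an empty `delta` with `isFinal: true`, so folding the deltas yields the same string as the last `accumulated` value — which is also what the method call itself resolves with as `message`.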
+ private fun handleGetAvailableModels(result: Result) { + val memoryMB = getDeviceMemoryMB() + val models = ModelRegistry.getCompatibleModels(memoryMB) + android.util.Log.d(TAG, "getAvailableModels: memoryMB=$memoryMB, count=${models.size}") + models.forEach { m -> + val override = modelURLOverrides[m.modelId] + val sizeMB = override?.sizeMB ?: m.sizeMB + android.util.Log.d(TAG, " model: ${m.modelId} (${m.name}, ${sizeMB}MB, ${m.quantization.name})") + } + result.success(models.map { m -> + val info = FlutterOndeviceAiSerialization.modelInfo(m) + val override = modelURLOverrides[m.modelId] + if (override != null) { + info.toMutableMap().apply { put("sizeMB", override.sizeMB) } + } else { + info + } + }) + } + + private fun handleGetDownloadedModels(result: Result) { + val ids = getDownloadedModelIdsFromDisk() + android.util.Log.d(TAG, "getDownloadedModels: $ids") + result.success(ids) + } + + private fun handleGetLoadedModel(result: Result) { + android.util.Log.d(TAG, "getLoadedModel: $loadedModelId") + result.success(loadedModelId) + } + + private fun handleGetCurrentEngine(result: Result) { + // If a custom model is loaded via ExecuTorch, report llama_cpp engine + if (activeModelWrapper != null && loadedModelId != null) { + android.util.Log.d(TAG, "getCurrentEngine: llama_cpp (ExecuTorch, model=$loadedModelId)") + result.success("llama_cpp") + return + } + val status = locanara.getPromptApiStatus() + val engine = when (status) { + is PromptApiStatus.Available, + is PromptApiStatus.Downloadable, + is PromptApiStatus.Downloading -> "prompt_api" + else -> "none" + } + android.util.Log.d(TAG, "getCurrentEngine: status=$status, engine=$engine") + result.success(engine) + } + + /** + * Download a model's .pte and tokenizer.bin files from HuggingFace. + * Reports progress via the downloadProgressSink EventChannel. 
+ */ + private suspend fun handleDownloadModel(call: MethodCall, result: Result) { + val modelId = call.argument("modelId") + ?: return result.error("ERR_INVALID_ARGS", "modelId is required", null) + val modelInfo = ModelRegistry.getModel(modelId) + if (modelInfo == null) { + android.util.Log.e(TAG, "downloadModel: model not found: $modelId") + return result.error("ERR_NOT_FOUND", "Model not found: $modelId", null) + } + + // Already downloaded? + val dir = modelDir(modelId) + val modelFile = File(dir, "model.pte") + val tokenizerFile = File(dir, "tokenizer.model") + if (modelFile.exists() && tokenizerFile.exists()) { + android.util.Log.d(TAG, "downloadModel: $modelId already downloaded") + result.success(true) + return + } + + // Use override URLs (public repos) if available, otherwise fall back to SDK URLs + val override = modelURLOverrides[modelId] + val actualDownloadURL = override?.downloadURL ?: modelInfo.downloadURL + val actualTokenizerURL = override?.tokenizerURL ?: modelInfo.tokenizerURL + val actualSizeMB = override?.sizeMB ?: modelInfo.sizeMB + + android.util.Log.d(TAG, "downloadModel: $modelId (${modelInfo.name}, ${actualSizeMB}MB) — starting real download") + android.util.Log.d(TAG, "downloadModel: URL=$actualDownloadURL") + + try { + dir.mkdirs() + + // Download .pte model file + val totalBytes = actualSizeMB.toLong() * 1024 * 1024 + downloadFile( + url = actualDownloadURL, + destFile = modelFile, + modelId = modelId, + totalBytesEstimate = totalBytes + ) + + // Download tokenizer + if (actualTokenizerURL != null) { + android.util.Log.d(TAG, "downloadModel: downloading tokenizer from $actualTokenizerURL") + downloadFile( + url = actualTokenizerURL, + destFile = tokenizerFile, + modelId = modelId, + totalBytesEstimate = totalBytes, // keep progress at 100% during tokenizer + silent = true + ) + } + + // Report completed + scope.launch { + downloadProgressSink?.success( + mapOf( + "modelId" to modelId, + "bytesDownloaded" to modelFile.length(), + 
"totalBytes" to modelFile.length(), + "progress" to 1.0, + "state" to "completed", + ) + ) + } + + android.util.Log.d(TAG, "downloadModel: $modelId complete — ${modelFile.length() / (1024 * 1024)}MB") + result.success(true) + } catch (e: Exception) { + android.util.Log.e(TAG, "downloadModel: $modelId failed", e) + // Clean up partial download + dir.deleteRecursively() + result.error("ERR_DOWNLOAD_MODEL", "Download failed: ${e.message}", null) + } + } + + /** + * Download a single file with progress reporting. + */ + private suspend fun downloadFile( + url: String, + destFile: File, + modelId: String, + totalBytesEstimate: Long, + silent: Boolean = false + ) = withContext(Dispatchers.IO) { + val tempFile = File(destFile.parent, "${destFile.name}.tmp") + + // Follow redirects manually (HuggingFace redirects to CDN) + var currentUrl = url + var connection: HttpURLConnection + var redirectCount = 0 + while (true) { + connection = URL(currentUrl).openConnection() as HttpURLConnection + connection.connectTimeout = 30_000 + connection.readTimeout = 900_000 // 15 min timeout for large model downloads + connection.instanceFollowRedirects = false // handle manually + connection.setRequestProperty("User-Agent", "Locanara-Flutter/1.0") + connection.connect() + + val code = connection.responseCode + if (code in 301..302 || code == 307 || code == 308) { + val location = connection.getHeaderField("Location") + connection.disconnect() + if (location == null || ++redirectCount > 10) { + throw Exception("Too many redirects or missing Location header") + } + android.util.Log.d(TAG, "downloadFile: redirect $code → $location") + currentUrl = location + continue + } + + if (code !in 200..299) { + connection.disconnect() + throw Exception("HTTP $code from $currentUrl") + } + break + } + + try { + val contentLength = connection.contentLengthLong.let { if (it > 0) it else totalBytesEstimate } + + connection.inputStream.use { input -> + tempFile.outputStream().use { output -> + val buffer = 
ByteArray(256 * 1024) // 256KB buffer + var bytesRead: Long = 0 + var lastProgressReport = 0L + + while (true) { + val count = input.read(buffer) + if (count == -1) break + output.write(buffer, 0, count) + bytesRead += count + + // Report progress every 500KB (avoid flooding UI thread) + if (!silent && bytesRead - lastProgressReport > 512 * 1024) { + lastProgressReport = bytesRead + val progress = if (contentLength > 0) { + (bytesRead.toDouble() / contentLength).coerceAtMost(0.99) + } else 0.0 + + scope.launch { + downloadProgressSink?.success( + mapOf( + "modelId" to modelId, + "bytesDownloaded" to bytesRead, + "totalBytes" to contentLength, + "progress" to progress, + "state" to "downloading", + ) + ) + } + } + } + } + } + + // Atomic move: rename temp → final + if (!tempFile.renameTo(destFile)) { + tempFile.copyTo(destFile, overwrite = true) + tempFile.delete() + } + + android.util.Log.d(TAG, "downloadFile: ${destFile.name} done (${destFile.length() / (1024 * 1024)}MB)") + } finally { + connection.disconnect() + if (tempFile.exists()) tempFile.delete() + } + } + + /** + * Load a downloaded model into memory via ExecuTorchEngine. + * Switches LocanaraDefaults.model so all chains use the loaded model. 
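+     *
+     * The swap amounts to the following sketch (using names from this file):
+     *
+     * ```
+     * activeModelWrapper?.unload()   // drop any previous engine
+     * val engine = ExecuTorchEngine.create(appContext, modelFile, tokenizerFile)
+     * LocanaraDefaults.model = ExecuTorchModelWrapper(engine)
+     * ```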
+     */
+    private suspend fun handleLoadModel(call: MethodCall, result: Result) {
+        val modelId = call.argument<String>("modelId")
+            ?: return result.error("ERR_INVALID_ARGS", "modelId is required", null)
+
+        val dir = modelDir(modelId)
+        val modelFile = File(dir, "model.pte")
+        val tokenizerFile = File(dir, "tokenizer.model")
+
+        if (!modelFile.exists()) {
+            android.util.Log.e(TAG, "loadModel: model file not found: ${modelFile.absolutePath}")
+            return result.error("ERR_NOT_DOWNLOADED", "Model not downloaded: $modelId", null)
+        }
+        if (!tokenizerFile.exists()) {
+            android.util.Log.e(TAG, "loadModel: tokenizer not found: ${tokenizerFile.absolutePath}")
+            return result.error("ERR_NOT_DOWNLOADED", "Tokenizer not found: $modelId", null)
+        }
+
+        android.util.Log.d(TAG, "loadModel: $modelId — loading via ExecuTorchEngine...")
+
+        try {
+            // Unload any previous model
+            activeModelWrapper?.unload()
+            activeModelWrapper = null
+
+            val appContext = context ?: throw IllegalStateException("Context not available")
+            val engine = ExecuTorchEngine.create(appContext, modelFile, tokenizerFile)
+            val wrapper = ExecuTorchModelWrapper(engine)
+
+            activeModelWrapper = wrapper
+            loadedModelId = modelId
+            LocanaraDefaults.model = wrapper
+
+            android.util.Log.d(TAG, "loadModel: $modelId loaded successfully — LocanaraDefaults.model switched to ExecuTorch")
+            result.success(null)
+        } catch (e: Exception) {
+            android.util.Log.e(TAG, "loadModel: $modelId failed", e)
+            result.error("ERR_LOAD_MODEL", "Failed to load model: ${e.message}", null)
+        }
+    }
+
+    /**
+     * Delete a downloaded model from disk.
+     * If the model is currently loaded, unloads it first and restores PromptApiModel.
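+     *
+     * Effective sequence when the deleted model is the loaded one (a sketch of
+     * the code in this handler):
+     *
+     * ```
+     * activeModelWrapper?.unload()
+     * restorePromptApiModel()      // LocanaraDefaults.model = PromptApiModel
+     * modelDir(modelId).deleteRecursively()
+     * ```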
+     */
+    private fun handleDeleteModel(call: MethodCall, result: Result) {
+        val modelId = call.argument<String>("modelId")
+            ?: return result.error("ERR_INVALID_ARGS", "modelId is required", null)
+
+        android.util.Log.d(TAG, "deleteModel: $modelId")
+
+        // If this model is loaded, unload first
+        if (loadedModelId == modelId) {
+            activeModelWrapper?.unload()
+            activeModelWrapper = null
+            loadedModelId = null
+            restorePromptApiModel()
+        }
+
+        // Delete from disk
+        val dir = modelDir(modelId)
+        if (dir.exists()) {
+            dir.deleteRecursively()
+            android.util.Log.d(TAG, "deleteModel: $modelId deleted from disk")
+        }
+
+        result.success(null)
+    }
+
+    /**
+     * Switch back to device-native AI (Gemini Nano / Prompt API).
+     * Unloads the ExecuTorch model and restores PromptApiModel.
+     */
+    private fun handleSwitchToDeviceAI(result: Result) {
+        android.util.Log.d(TAG, "switchToDeviceAI: unloading ExecuTorch, restoring PromptApiModel")
+        activeModelWrapper?.unload()
+        activeModelWrapper = null
+        loadedModelId = null
+        restorePromptApiModel()
+        result.success(null)
+    }
+
+    private fun restorePromptApiModel() {
+        val model = promptApiModel
+        if (model != null) {
+            LocanaraDefaults.model = model
+            android.util.Log.d(TAG, "restorePromptApiModel: LocanaraDefaults.model = PromptApiModel")
+        }
+    }
+
+    private fun handleGetPromptApiStatus(result: Result) {
+        val status = locanara.getPromptApiStatus()
+        val statusString = when (status) {
+            is PromptApiStatus.Available -> "available"
+            is PromptApiStatus.Downloadable -> "downloadable"
+            is PromptApiStatus.Downloading -> "downloading"
+            is PromptApiStatus.NotAvailable -> "not_available"
+        }
+        android.util.Log.d(TAG, "getPromptApiStatus: $statusString")
+        result.success(statusString)
+    }
+
+    private suspend fun handleDownloadPromptApiModel(result: Result) {
+        try {
+            android.util.Log.d(TAG, "downloadPromptApiModel: starting...")
+            locanara.downloadPromptApiModel { progress ->
+                val pct = if (progress.bytesToDownload > 0) {
+                    progress.bytesDownloaded.toDouble() / progress.bytesToDownload.toDouble()
+                } else {
+                    0.0
+                }
+                scope.launch {
+                    downloadProgressSink?.success(
+                        mapOf(
+                            "modelId" to "gemini-nano",
+                            "bytesDownloaded" to progress.bytesDownloaded,
+                            "totalBytes" to progress.bytesToDownload,
+                            "progress" to pct,
+                            "state" to "downloading",
+                        )
+                    )
+                }
+            }
+            scope.launch {
+                downloadProgressSink?.success(
+                    mapOf(
+                        "modelId" to "gemini-nano",
+                        "bytesDownloaded" to 0L,
+                        "totalBytes" to 0L,
+                        "progress" to 1.0,
+                        "state" to "completed",
+                    )
+                )
+            }
+            android.util.Log.d(TAG, "downloadPromptApiModel: done")
+            result.success(true)
+        } catch (e: Exception) {
+            android.util.Log.e(TAG, "downloadPromptApiModel: failed", e)
+            result.error("ERR_DOWNLOAD_MODEL", e.message, null)
+        }
+    }
+
+    // endregion
+}
diff --git a/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt
new file mode 100644
index 0000000..7daf793
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt
@@ -0,0 +1,161 @@
+package dev.hyodot.flutter_ondevice_ai
+
+import com.locanara.*
+
+/** Serializes Locanara SDK result types into Flutter-compatible maps */
+object FlutterOndeviceAiSerialization {
+    // region Device Capability
+
+    fun deviceCapability(capability: DeviceCapability): Map<String, Any> {
+        val availableSet = capability.availableFeatures.toSet()
+        val features = mutableMapOf<String, Boolean>()
+        for (feature in FeatureType.entries) {
+            features[featureKey(feature)] = availableSet.contains(feature)
+        }
+
+        return mapOf(
+            "isSupported" to capability.supportsOnDeviceAI,
+            "isModelReady" to (capability.modelInfo?.isLoaded == true),
+            "platform" to "ANDROID",
+            "features" to features,
+            "availableMemoryMB" to (capability.availableMemoryMB ?: 0),
+            "isLowPowerMode" to
+                capability.isLowPowerMode,
+        )
+    }
+
+    // endregion
+
+    // region Model Info
+
+    fun modelInfo(m: DownloadableModelInfo): Map<String, Any> =
+        mapOf(
+            "modelId" to m.modelId,
+            "name" to m.name,
+            "version" to m.version,
+            "sizeMB" to m.sizeMB,
+            "quantization" to m.quantization.name,
+            "contextLength" to m.contextLength,
+            "minMemoryMB" to m.minMemoryMB,
+            "isMultimodal" to false,
+        )
+
+    // endregion
+
+    // region Result Serializers
+
+    fun summarize(r: SummarizeResult): Map<String, Any> =
+        mapOf(
+            "summary" to r.summary,
+            "originalLength" to r.originalLength,
+            "summaryLength" to r.summaryLength,
+            "confidence" to (r.confidence ?: 0.0),
+        )
+
+    fun classify(r: ClassifyResult): Map<String, Any> {
+        val classifications =
+            r.classifications.map { c ->
+                mapOf(
+                    "label" to c.label,
+                    "score" to c.score,
+                    "metadata" to (c.metadata ?: ""),
+                )
+            }
+        return mapOf(
+            "classifications" to classifications,
+            "topClassification" to
+                mapOf(
+                    "label" to r.topClassification.label,
+                    "score" to r.topClassification.score,
+                ),
+        )
+    }
+
+    fun extract(r: ExtractResult): Map<String, Any> {
+        val entities =
+            r.entities.map { e ->
+                mapOf(
+                    "type" to e.type,
+                    "value" to e.value,
+                    "confidence" to e.confidence,
+                    "startPos" to (e.startPos ?: 0),
+                    "endPos" to (e.endPos ?: 0),
+                )
+            }
+
+        val response = mutableMapOf<String, Any>("entities" to entities)
+
+        r.keyValuePairs?.let { pairs ->
+            response["keyValuePairs"] =
+                pairs.map { p ->
+                    mapOf(
+                        "key" to p.key,
+                        "value" to p.value,
+                        "confidence" to (p.confidence ?: 0.0),
+                    )
+                }
+        }
+
+        return response
+    }
+
+    fun chat(r: ChatResult): Map<String, Any> {
+        val response =
+            mutableMapOf<String, Any>(
+                "message" to r.message,
+                "canContinue" to r.canContinue,
+            )
+        r.conversationId?.let { response["conversationId"] = it }
+        r.suggestedPrompts?.let { response["suggestedPrompts"] = it }
+        return response
+    }
+
+    fun translate(r: TranslateResult): Map<String, Any> =
+        mapOf(
+            "translatedText" to r.translatedText,
+            "sourceLanguage" to r.sourceLanguage,
+            "targetLanguage" to r.targetLanguage,
+            "confidence" to (r.confidence
+                ?: 0.0),
+        )
+
+    fun rewrite(r: RewriteResult): Map<String, Any> {
+        val response =
+            mutableMapOf<String, Any>(
+                "rewrittenText" to r.rewrittenText,
+                "confidence" to (r.confidence ?: 0.0),
+            )
+        r.style?.let { response["style"] = it.name }
+        r.alternatives?.let { response["alternatives"] = it }
+        return response
+    }
+
+    fun proofread(r: ProofreadResult): Map<String, Any> {
+        val corrections =
+            r.corrections.map { c ->
+                mapOf(
+                    "original" to c.original,
+                    "corrected" to c.corrected,
+                    "type" to (c.type ?: ""),
+                    "confidence" to (c.confidence ?: 0.0),
+                    "startPos" to (c.startPos ?: 0),
+                    "endPos" to (c.endPos ?: 0),
+                )
+            }
+        return mapOf(
+            "correctedText" to r.correctedText,
+            "corrections" to corrections,
+            "hasCorrections" to r.hasCorrections,
+        )
+    }
+
+    // endregion
+
+    // region Helpers
+
+    /** Convert a FeatureType enum name (e.g. SUMMARIZE_TEXT) to a camelCase key (summarizeText) for Flutter */
+    private fun featureKey(feature: FeatureType): String =
+        feature.name.lowercase().replace(Regex("_([a-z])")) { match ->
+            match.groupValues[1].uppercase()
+        }
+
+    // endregion
+}
diff --git a/libraries/flutter_ondevice_ai/example/.gitignore b/libraries/flutter_ondevice_ai/example/.gitignore
new file mode 100644
index 0000000..3820a95
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/example/.gitignore
@@ -0,0 +1,45 @@
+# Miscellaneous
+*.class
+*.log
+*.pyc
+*.swp
+.DS_Store
+.atom/
+.build/
+.buildlog/
+.history
+.svn/
+.swiftpm/
+migrate_working_dir/
+
+# IntelliJ related
+*.iml
+*.ipr
+*.iws
+.idea/
+
+# The .vscode folder contains launch configuration and tasks you configure in
+# VS Code which you may wish to be included in version control, so this line
+# is commented out by default.
+#.vscode/ + +# Flutter/Dart/Pub related +**/doc/api/ +**/ios/Flutter/.last_build_id +.dart_tool/ +.flutter-plugins-dependencies +.pub-cache/ +.pub/ +/build/ +/coverage/ + +# Symbolication related +app.*.symbols + +# Obfuscation related +app.*.map.json + +# Android Studio will place build artifacts here +/android/app/debug +/android/app/profile +/android/app/release diff --git a/libraries/flutter_ondevice_ai/example/.metadata b/libraries/flutter_ondevice_ai/example/.metadata new file mode 100644 index 0000000..08cb0a9 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/.metadata @@ -0,0 +1,45 @@ +# This file tracks properties of this Flutter project. +# Used by Flutter tool to assess capabilities and perform upgrades etc. +# +# This file should be version controlled and should not be manually edited. + +version: + revision: "19074d12f7eaf6a8180cd4036a430c1d76de904e" + channel: "stable" + +project_type: app + +# Tracks metadata for the flutter migrate command +migration: + platforms: + - platform: root + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: android + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: ios + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: linux + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: macos + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: web + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + - platform: windows + create_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + base_revision: 19074d12f7eaf6a8180cd4036a430c1d76de904e + + # User provided section + 
+ # List of Local paths (relative to this file) that should be + # ignored by the migrate tool. + # + # Files that are not part of the templates will be ignored by default. + unmanaged_files: + - 'lib/main.dart' + - 'ios/Runner.xcodeproj/project.pbxproj' diff --git a/libraries/flutter_ondevice_ai/example/android/.gitignore b/libraries/flutter_ondevice_ai/example/android/.gitignore new file mode 100644 index 0000000..be3943c --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/.gitignore @@ -0,0 +1,14 @@ +gradle-wrapper.jar +/.gradle +/captures/ +/gradlew +/gradlew.bat +/local.properties +GeneratedPluginRegistrant.java +.cxx/ + +# Remember to never publicly share your keystore. +# See https://flutter.dev/to/reference-keystore +key.properties +**/*.keystore +**/*.jks diff --git a/libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts b/libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts new file mode 100644 index 0000000..92031b3 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts @@ -0,0 +1,44 @@ +plugins { + id("com.android.application") + id("kotlin-android") + // The Flutter Gradle Plugin must be applied after the Android and Kotlin Gradle plugins. + id("dev.flutter.flutter-gradle-plugin") +} + +android { + namespace = "com.locanara.flutter.example" + compileSdk = flutter.compileSdkVersion + ndkVersion = flutter.ndkVersion + + compileOptions { + sourceCompatibility = JavaVersion.VERSION_17 + targetCompatibility = JavaVersion.VERSION_17 + } + + kotlinOptions { + jvmTarget = JavaVersion.VERSION_17.toString() + } + + defaultConfig { + // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html). + applicationId = "com.locanara.flutter.example" + // You can update the following values to match your application needs. + // For more information, see: https://flutter.dev/to/review-gradle-config. 
+ minSdk = 34 + targetSdk = flutter.targetSdkVersion + versionCode = flutter.versionCode + versionName = flutter.versionName + } + + buildTypes { + release { + // TODO: Add your own signing config for the release build. + // Signing with the debug keys for now, so `flutter run --release` works. + signingConfig = signingConfigs.getByName("debug") + } + } +} + +flutter { + source = "../.." +} diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/debug/AndroidManifest.xml b/libraries/flutter_ondevice_ai/example/android/app/src/debug/AndroidManifest.xml new file mode 100644 index 0000000..399f698 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/debug/AndroidManifest.xml @@ -0,0 +1,7 @@ + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/AndroidManifest.xml b/libraries/flutter_ondevice_ai/example/android/app/src/main/AndroidManifest.xml new file mode 100644 index 0000000..7d4554d --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/main/AndroidManifest.xml @@ -0,0 +1,46 @@ + + + + + + + + + + + + + + + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/kotlin/com/locanara/flutter/example/MainActivity.kt b/libraries/flutter_ondevice_ai/example/android/app/src/main/kotlin/com/locanara/flutter/example/MainActivity.kt new file mode 100644 index 0000000..50476ab --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/main/kotlin/com/locanara/flutter/example/MainActivity.kt @@ -0,0 +1,5 @@ +package com.locanara.flutter.example + +import io.flutter.embedding.android.FlutterActivity + +class MainActivity : FlutterActivity() diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable-v21/launch_background.xml b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable-v21/launch_background.xml new file mode 100644 index 0000000..f74085f --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable-v21/launch_background.xml @@ -0,0 +1,12 @@ + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable/launch_background.xml b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable/launch_background.xml new file mode 100644 index 0000000..304732f --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable/launch_background.xml @@ -0,0 +1,12 @@ + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-hdpi/ic_launcher.png b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-hdpi/ic_launcher.png new file mode 100644 index 0000000..db77bb4 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-hdpi/ic_launcher.png differ diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-mdpi/ic_launcher.png b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-mdpi/ic_launcher.png new file mode 100644 index 0000000..17987b7 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-mdpi/ic_launcher.png differ diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png new file mode 100644 index 0000000..09d4391 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png differ diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png new file mode 100644 index 0000000..d5f1c8d Binary files /dev/null and 
b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png differ diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png new file mode 100644 index 0000000..4d6372e Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png differ diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values-night/styles.xml b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values-night/styles.xml new file mode 100644 index 0000000..06952be --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values-night/styles.xml @@ -0,0 +1,18 @@ + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values/styles.xml b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values/styles.xml new file mode 100644 index 0000000..cb1ef88 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/main/res/values/styles.xml @@ -0,0 +1,18 @@ + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/app/src/profile/AndroidManifest.xml b/libraries/flutter_ondevice_ai/example/android/app/src/profile/AndroidManifest.xml new file mode 100644 index 0000000..399f698 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/app/src/profile/AndroidManifest.xml @@ -0,0 +1,7 @@ + + + + diff --git a/libraries/flutter_ondevice_ai/example/android/build.gradle.kts b/libraries/flutter_ondevice_ai/example/android/build.gradle.kts new file mode 100644 index 0000000..17557fc --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/build.gradle.kts @@ -0,0 +1,25 @@ +allprojects { + repositories { + mavenLocal() + google() + mavenCentral() + } +} + +val newBuildDir: Directory = + rootProject.layout.buildDirectory + 
.dir("../../build") + .get() +rootProject.layout.buildDirectory.value(newBuildDir) + +subprojects { + val newSubprojectBuildDir: Directory = newBuildDir.dir(project.name) + project.layout.buildDirectory.value(newSubprojectBuildDir) +} +subprojects { + project.evaluationDependsOn(":app") +} + +tasks.register("clean") { + delete(rootProject.layout.buildDirectory) +} diff --git a/libraries/flutter_ondevice_ai/example/android/gradle.properties b/libraries/flutter_ondevice_ai/example/android/gradle.properties new file mode 100644 index 0000000..fbee1d8 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/gradle.properties @@ -0,0 +1,2 @@ +org.gradle.jvmargs=-Xmx8G -XX:MaxMetaspaceSize=4G -XX:ReservedCodeCacheSize=512m -XX:+HeapDumpOnOutOfMemoryError +android.useAndroidX=true diff --git a/libraries/flutter_ondevice_ai/example/android/gradle/wrapper/gradle-wrapper.properties b/libraries/flutter_ondevice_ai/example/android/gradle/wrapper/gradle-wrapper.properties new file mode 100644 index 0000000..e4ef43f --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/gradle/wrapper/gradle-wrapper.properties @@ -0,0 +1,5 @@ +distributionBase=GRADLE_USER_HOME +distributionPath=wrapper/dists +zipStoreBase=GRADLE_USER_HOME +zipStorePath=wrapper/dists +distributionUrl=https\://services.gradle.org/distributions/gradle-8.14-all.zip diff --git a/libraries/flutter_ondevice_ai/example/android/settings.gradle.kts b/libraries/flutter_ondevice_ai/example/android/settings.gradle.kts new file mode 100644 index 0000000..ca7fe06 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/android/settings.gradle.kts @@ -0,0 +1,26 @@ +pluginManagement { + val flutterSdkPath = + run { + val properties = java.util.Properties() + file("local.properties").inputStream().use { properties.load(it) } + val flutterSdkPath = properties.getProperty("flutter.sdk") + require(flutterSdkPath != null) { "flutter.sdk not set in local.properties" } + flutterSdkPath + } + + 
includeBuild("$flutterSdkPath/packages/flutter_tools/gradle") + + repositories { + google() + mavenCentral() + gradlePluginPortal() + } +} + +plugins { + id("dev.flutter.flutter-plugin-loader") version "1.0.0" + id("com.android.application") version "8.11.1" apply false + id("org.jetbrains.kotlin.android") version "2.2.20" apply false +} + +include(":app") diff --git a/libraries/flutter_ondevice_ai/example/ios/.gitignore b/libraries/flutter_ondevice_ai/example/ios/.gitignore new file mode 100644 index 0000000..7a7f987 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/.gitignore @@ -0,0 +1,34 @@ +**/dgph +*.mode1v3 +*.mode2v3 +*.moved-aside +*.pbxuser +*.perspectivev3 +**/*sync/ +.sconsign.dblite +.tags* +**/.vagrant/ +**/DerivedData/ +Icon? +**/Pods/ +**/.symlinks/ +profile +xcuserdata +**/.generated/ +Flutter/App.framework +Flutter/Flutter.framework +Flutter/Flutter.podspec +Flutter/Generated.xcconfig +Flutter/ephemeral/ +Flutter/app.flx +Flutter/app.zip +Flutter/flutter_assets/ +Flutter/flutter_export_environment.sh +ServiceDefinitions.json +Runner/GeneratedPluginRegistrant.* + +# Exceptions to above rules. +!default.mode1v3 +!default.mode2v3 +!default.pbxuser +!default.perspectivev3 diff --git a/libraries/flutter_ondevice_ai/example/ios/Flutter/AppFrameworkInfo.plist b/libraries/flutter_ondevice_ai/example/ios/Flutter/AppFrameworkInfo.plist new file mode 100644 index 0000000..1dc6cf7 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Flutter/AppFrameworkInfo.plist @@ -0,0 +1,26 @@ + + + + + CFBundleDevelopmentRegion + en + CFBundleExecutable + App + CFBundleIdentifier + io.flutter.flutter.app + CFBundleInfoDictionaryVersion + 6.0 + CFBundleName + App + CFBundlePackageType + FMWK + CFBundleShortVersionString + 1.0 + CFBundleSignature + ???? 
+ CFBundleVersion + 1.0 + MinimumOSVersion + 13.0 + + diff --git a/libraries/flutter_ondevice_ai/example/ios/Flutter/Debug.xcconfig b/libraries/flutter_ondevice_ai/example/ios/Flutter/Debug.xcconfig new file mode 100644 index 0000000..ec97fc6 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Flutter/Debug.xcconfig @@ -0,0 +1,2 @@ +#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.debug.xcconfig" +#include "Generated.xcconfig" diff --git a/libraries/flutter_ondevice_ai/example/ios/Flutter/Release.xcconfig b/libraries/flutter_ondevice_ai/example/ios/Flutter/Release.xcconfig new file mode 100644 index 0000000..c4855bf --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Flutter/Release.xcconfig @@ -0,0 +1,2 @@ +#include? "Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig" +#include "Generated.xcconfig" diff --git a/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec b/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec new file mode 100644 index 0000000..328eb6e --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec @@ -0,0 +1,22 @@ +Pod::Spec.new do |s| + s.name = "LocanaraLlamaBridge" + s.version = "1.0.0" + s.summary = "llama.cpp bridge with isolated C++ interop" + s.homepage = "https://github.com/hyodotdev/locanara" + s.license = "MIT" + s.author = "Locanara" + s.platform = :ios, "17.0" + s.source = { :path => "." 
} + s.source_files = "Sources/**/*.swift" + s.dependency "Locanara" + s.swift_version = "5.0" + s.static_framework = true + s.pod_target_xcconfig = { + 'SWIFT_INCLUDE_PATHS' => '$(inherited) "$(PODS_CONFIGURATION_BUILD_DIR)"', + 'FRAMEWORK_SEARCH_PATHS' => '$(inherited) "$(PODS_CONFIGURATION_BUILD_DIR)"', + } + s.user_target_xcconfig = { + 'OTHER_LDFLAGS' => '$(inherited) -framework "llama"', + 'FRAMEWORK_SEARCH_PATHS' => '$(inherited) "$(PODS_CONFIGURATION_BUILD_DIR)"', + } +end diff --git a/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift b/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift new file mode 100644 index 0000000..3b92567 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift @@ -0,0 +1,264 @@ +// Auto-generated by expo-ondevice-ai config plugin +// This file is compiled with C++ interop enabled, isolated from React Native headers. + +import Foundation +import Locanara +import LocalLLMClient +import LocalLLMClientLlama +import os.log +#if os(iOS) +import UIKit +#endif + +private let logger = Logger(subsystem: "com.locanara.bridge", category: "LlamaCppBridge") + +// MARK: - Bridge Engine (InferenceEngine conformance) + +@available(iOS 17.0, *) +final class BridgedLlamaCppEngine: @unchecked Sendable, InferenceEngine, LlamaCppEngineProtocol { + + static var engineType: InferenceEngineType { .llamaCpp } + var engineName: String { "On-Device LLM (llama.cpp via bridge)" } + private(set) var isLoaded: Bool = false + var isMultimodal: Bool { mmprojPath != nil } + + private var llmSession: LLMSession? + private let modelPath: URL + private let mmprojPath: URL? + private var isCancelled = false + private var isInferencing = false + private let lock = NSLock() + + init(modelPath: URL, mmprojPath: URL?) 
+    {
+        self.modelPath = modelPath
+        self.mmprojPath = mmprojPath
+    }
+
+    private func beginInference() async throws {
+        while lock.withLock({ isInferencing }) {
+            try await Task.sleep(nanoseconds: 100_000_000)
+        }
+        lock.withLock {
+            isInferencing = true
+            isCancelled = false
+        }
+    }
+
+    private func endInference() {
+        lock.withLock { isInferencing = false }
+    }
+
+    func loadModel() async throws {
+        guard !isLoaded else { return }
+
+        guard FileManager.default.fileExists(atPath: modelPath.path) else {
+            throw LocanaraError.modelNotDownloaded(modelPath.lastPathComponent)
+        }
+
+        let fileSize = (try? FileManager.default.attributesOfItem(atPath: modelPath.path)[.size] as? Int64) ?? 0
+        guard fileSize >= 10_000_000 else {
+            throw LocanaraError.modelLoadFailed("Invalid model file: too small")
+        }
+
+        let numThreads = max(4, ProcessInfo.processInfo.activeProcessorCount - 2)
+        let llamaParam = LlamaClient.Parameter(
+            context: 8192,
+            seed: nil,
+            numberOfThreads: numThreads,
+            batch: 512,
+            temperature: 0.5,
+            topK: 40,
+            topP: 0.9,
+            typicalP: 1.0,
+            penaltyLastN: 64,
+            penaltyRepeat: 1.2,
+            options: LlamaClient.Options(
+                // Assumed EOS markers: the original angle-bracketed literals did not
+                // survive markup stripping; substitute the ones your model family uses.
+                extraEOSTokens: ["<|im_end|>", "<|eot_id|>"],
+                verbose: false
+            )
+        )
+
+        let localModel = LLMSession.LocalModel.llama(
+            url: modelPath,
+            mmprojURL: mmprojPath,
+            parameter: llamaParam
+        )
+        llmSession = LLMSession(model: localModel)
+        try await llmSession?.prewarm()
+        isLoaded = true
+        logger.info("Bridge engine loaded model: \(self.modelPath.lastPathComponent)")
+    }
+
+    func generate(prompt: String, config: InferenceConfig) async throws -> String {
+        try await beginInference()
+        defer { endInference() }
+
+        guard isLoaded, let session = llmSession else {
+            throw LocanaraError.custom(.modelNotLoaded, "Model not loaded")
+        }
+
+        do {
+            var result = try await session.respond(to: prompt)
+
+            if let stops = config.stopSequences {
+                for stop in stops {
+                    if let range = result.range(of: stop) {
+                        result = String(result[..<range.lowerBound])
+                    }
+                }
+            }
+
+            // Clamp overly long completions. The original derivation of maxChars was
+            // lost to markup stripping; this reconstruction assumes a rough
+            // 4-characters-per-token budget taken from the inference config.
+            let maxChars = (config.maxTokens ?? 2048) * 4
+            if result.count > maxChars {
+                let truncated = String(result.prefix(maxChars))
+                if let period = truncated.lastIndex(of: ".") {
+                    result = String(truncated[...period])
+                } else {
+                    result = truncated
+                }
+            }
+
+            return result.trimmingCharacters(in: .whitespacesAndNewlines)
+        } catch let error as NSError {
+            if error.domain == "LLMSession" || error.code == -1 {
+                lock.withLock { isLoaded = false; llmSession = nil }
+            }
+            throw LocanaraError.executionFailed(error.localizedDescription)
+        } catch {
+            throw LocanaraError.executionFailed(error.localizedDescription)
+        }
+    }
+
+    func generateStreaming(prompt: String, config: InferenceConfig) -> AsyncThrowingStream<String, Error> {
+        AsyncThrowingStream { continuation in
+            Task { [weak self] in
+                guard let self else {
+                    continuation.finish(throwing: LocanaraError.custom(.modelNotLoaded, "Model not loaded"))
+                    return
+                }
+                do {
+                    try await self.beginInference()
+                } catch {
+                    continuation.finish(throwing: error)
+                    return
+                }
+                guard self.isLoaded, let session = self.llmSession else {
+                    self.endInference()
+                    continuation.finish(throwing: LocanaraError.custom(.modelNotLoaded, "Model not loaded"))
+                    return
+                }
+                do {
+                    for try await text in session.streamResponse(to: prompt) {
+                        if self.lock.withLock({ self.isCancelled }) { break }
+                        continuation.yield(text)
+                    }
+                    self.endInference()
+                    continuation.finish()
+                } catch {
+                    self.endInference()
+                    continuation.finish(throwing: LocanaraError.executionFailed(error.localizedDescription))
+                }
+            }
+        }
+    }
+
+    func generateWithImage(prompt: String, imageData: Data, config: InferenceConfig) async throws -> String {
+        guard isMultimodal else {
+            throw LocanaraError.custom(.featureNotSupported, "mmproj file required for image input")
+        }
+        try await beginInference()
+        defer { endInference() }
+
+        guard isLoaded, let session = llmSession else {
+            throw LocanaraError.custom(.modelNotLoaded, "Model not loaded")
+        }
+
+        #if os(iOS)
+        guard let image = UIImage(data: imageData) else {
+            throw LocanaraError.custom(.invalidInput,
"Failed to create image from data") + } + let attachment = LLMAttachment.image(image) + let response = try await session.respond(to: prompt, attachments: [attachment]) + return response.trimmingCharacters(in: .whitespacesAndNewlines) + #else + throw LocanaraError.custom(.featureNotSupported, "Image input not supported on this platform") + #endif + } + + func cancel() -> Bool { + lock.lock() + defer { lock.unlock() } + if !isCancelled { isCancelled = true; return true } + return false + } + + func unload() { + lock.lock() + llmSession = nil + isLoaded = false + isInferencing = false + lock.unlock() + logger.info("Bridge engine unloaded") + } +} + +// MARK: - Bridge Provider (@objc discoverable by Locanara SDK) + +@objc +@available(iOS 17.0, *) +public class LlamaCppBridgeEngine: NSObject, LlamaCppBridgeProvider { + + private var engine: BridgedLlamaCppEngine? + private var isLoading = false + private let loadLock = NSLock() + + public var isModelLoaded: Bool { + engine?.isLoaded ?? false + } + + public func loadAndRegisterModel(_ modelPath: String, mmprojPath: String?, completion: @escaping (NSError?) 
-> Void) { + loadLock.lock() + guard !isLoading else { + loadLock.unlock() + completion(NSError(domain: "LlamaCppBridge", code: -1, userInfo: [NSLocalizedDescriptionKey: "Model load already in progress"])) + return + } + isLoading = true + loadLock.unlock() + + Task { + do { + // Unload previous engine if any + if let oldEngine = self.engine { + oldEngine.unload() + InferenceRouter.shared.unregisterEngine() + } + + let modelURL = URL(fileURLWithPath: modelPath) + let mmprojURL = mmprojPath.map { URL(fileURLWithPath: $0) } + + let newEngine = BridgedLlamaCppEngine(modelPath: modelURL, mmprojPath: mmprojURL) + try await newEngine.loadModel() + + self.engine = newEngine + InferenceRouter.shared.registerEngine(newEngine as any InferenceEngine) + + logger.info("Bridge: model loaded and engine registered") + self.loadLock.withLock { self.isLoading = false } + completion(nil) + } catch { + logger.error("Bridge: loadModel failed: \(error.localizedDescription)") + self.loadLock.withLock { self.isLoading = false } + completion(error as NSError) + } + } + } + + public func unloadModel() { + engine?.unload() + InferenceRouter.shared.unregisterEngine() + engine = nil + logger.info("Bridge: model unloaded and engine unregistered") + } +} diff --git a/libraries/flutter_ondevice_ai/example/ios/Podfile b/libraries/flutter_ondevice_ai/example/ios/Podfile new file mode 100644 index 0000000..e2817d3 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Podfile @@ -0,0 +1,144 @@ +platform :ios, '17.0' + +# CocoaPods analytics sends network stats synchronously affecting flutter build latency. +ENV['COCOAPODS_DISABLE_STATS'] = 'true' + +project 'Runner', { + 'Debug' => :debug, + 'Profile' => :release, + 'Release' => :release, +} + +def flutter_root + generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__) + unless File.exist?(generated_xcode_build_settings_path) + raise "#{generated_xcode_build_settings_path} must exist. 
If you're running pod install manually, make sure flutter pub get is executed first" + end + + File.foreach(generated_xcode_build_settings_path) do |line| + matches = line.match(/FLUTTER_ROOT\=(.*)/) + return matches[1].strip if matches + end + raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get" +end + +require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root) + +flutter_ios_podfile_setup + +# LocanaraLlamaBridge: Isolated C++ interop for llama.cpp +def configure_llama_bridge(installer) + begin + pods_project = installer.pods_project + + # Add SPM package reference for LocalLLMClient + pkg_ref = pods_project.new(Xcodeproj::Project::Object::XCRemoteSwiftPackageReference) + pkg_ref.repositoryURL = 'https://github.com/tattn/LocalLLMClient.git' + pkg_ref.requirement = { 'kind' => 'branch', 'branch' => 'main' } + pods_project.root_object.package_references << pkg_ref + + # Find the bridge target (ONLY this target gets C++ interop) + bridge_target = pods_project.targets.find { |t| t.name == 'LocanaraLlamaBridge' } + unless bridge_target + puts "\u26a0\ufe0f [flutter_ondevice_ai] LocanaraLlamaBridge target not found" + return + end + + # Add SPM product dependencies to the bridge target + ['LocalLLMClient', 'LocalLLMClientLlama'].each do |product_name| + dep = pods_project.new(Xcodeproj::Project::Object::XCSwiftPackageProductDependency) + dep.product_name = product_name + dep.package = pkg_ref + bridge_target.package_product_dependencies << dep + end + + # Enable C++ interop and add SPM module search paths ONLY on the bridge target + bridge_target.build_configurations.each do |bc| + swift_flags = bc.build_settings['OTHER_SWIFT_FLAGS'] || '$(inherited)' + unless swift_flags.include?('-cxx-interoperability-mode') + bc.build_settings['OTHER_SWIFT_FLAGS'] = "#{swift_flags} -cxx-interoperability-mode=default -Xcc -std=c++20" + end + 
bc.build_settings['CLANG_CXX_LANGUAGE_STANDARD'] = 'c++20' + bc.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '17.0' + end + + puts "\u2705 [flutter_ondevice_ai] LocanaraLlamaBridge configured with C++ interop" + rescue => e + puts "\u26a0\ufe0f [flutter_ondevice_ai] Bridge configuration failed: #{e.message}" + puts e.backtrace&.first(3)&.join("\n") + end +end + +# Embed SPM-produced dynamic frameworks (llama.framework) into the app bundle. +# Must run BEFORE Flutter's "Thin Binary" phase which finalizes the app. +def embed_spm_frameworks + begin + runner_project_path = File.join(__dir__, 'Runner.xcodeproj') + runner_project = Xcodeproj::Project.open(runner_project_path) + runner_target = runner_project.targets.find { |t| t.name == 'Runner' } + return unless runner_target + + phase_name = 'Embed SPM Frameworks' + + # Remove existing phase to avoid duplicates on re-install + runner_target.shell_script_build_phases + .select { |p| p.name == phase_name } + .each { |p| p.remove_from_project } + + phase = runner_target.new_shell_script_build_phase(phase_name) + phase.shell_script = <<~'SCRIPT' + # Copy SPM-produced dynamic frameworks (e.g., llama.framework) into the app bundle + DEST="${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}" + mkdir -p "$DEST" + for FW_NAME in llama; do + FW_PATH="${BUILT_PRODUCTS_DIR}/${FW_NAME}.framework" + if [ -d "$FW_PATH" ]; then + echo "Embedding ${FW_NAME}.framework" + cp -R "$FW_PATH" "$DEST/" + if [ -n "${EXPANDED_CODE_SIGN_IDENTITY}" ] && [ "${CODE_SIGNING_ALLOWED}" = "YES" ]; then + codesign --force --sign "${EXPANDED_CODE_SIGN_IDENTITY}" --preserve-metadata=identifier,entitlements "$DEST/${FW_NAME}.framework" + fi + fi + done + SCRIPT + + # Move the phase BEFORE "Thin Binary" (which is the last shell script phase added by Flutter) + # Find the "Thin Binary" phase index + phases = runner_target.build_phases + thin_binary_index = phases.index { |p| + p.is_a?(Xcodeproj::Project::Object::PBXShellScriptBuildPhase) && + 
p.shell_script&.include?('embed_and_thin') + } + + if thin_binary_index + # Move our phase (currently last) to before Thin Binary + phases.move(phase, thin_binary_index) + end + + runner_project.save + puts "\u2705 [flutter_ondevice_ai] Added SPM framework embedding to Runner (before Thin Binary)" + rescue => e + puts "\u26a0\ufe0f [flutter_ondevice_ai] SPM embedding setup failed: #{e.message}" + puts e.backtrace&.first(3)&.join("\n") + end +end + +target 'Runner' do + use_frameworks! :linkage => :static + + pod 'Locanara', :path => '../../../../packages/apple' + pod 'LocanaraLlamaBridge', :path => 'LocanaraLlamaBridge' + + flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__)) + target 'RunnerTests' do + inherit! :search_paths + end +end + +post_install do |installer| + configure_llama_bridge(installer) + embed_spm_frameworks + installer.pods_project.targets.each do |target| + flutter_additional_ios_build_settings(target) + end +end diff --git a/libraries/flutter_ondevice_ai/example/ios/Podfile.lock b/libraries/flutter_ondevice_ai/example/ios/Podfile.lock new file mode 100644 index 0000000..c06ca44 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Podfile.lock @@ -0,0 +1,34 @@ +PODS: + - Flutter (1.0.0) + - flutter_ondevice_ai (0.1.0): + - Flutter + - Locanara + - Locanara (1.1.0) + - LocanaraLlamaBridge (1.0.0): + - Locanara + +DEPENDENCIES: + - Flutter (from `Flutter`) + - flutter_ondevice_ai (from `.symlinks/plugins/flutter_ondevice_ai/ios`) + - Locanara (from `../../../../packages/apple`) + - LocanaraLlamaBridge (from `LocanaraLlamaBridge`) + +EXTERNAL SOURCES: + Flutter: + :path: Flutter + flutter_ondevice_ai: + :path: ".symlinks/plugins/flutter_ondevice_ai/ios" + Locanara: + :path: "../../../../packages/apple" + LocanaraLlamaBridge: + :path: LocanaraLlamaBridge + +SPEC CHECKSUMS: + Flutter: cabc95a1d2626b1b06e7179b784ebcf0c0cde467 + flutter_ondevice_ai: a9e0852796bb404f8c9faa3b271c7487d4a36ce5 + Locanara: 
c54177afb7c8881fd2ab571cd1f717f28a4d596b + LocanaraLlamaBridge: 40c628f7170bff79d4cc8a178d0e08a0a397844e + +PODFILE CHECKSUM: a7605560bbf51597b527becebb9c92377fed5f4a + +COCOAPODS: 1.16.2 diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj b/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj new file mode 100644 index 0000000..9cf20ed --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj @@ -0,0 +1,756 @@ +// !$*UTF8*$! +{ + archiveVersion = 1; + classes = { + }; + objectVersion = 54; + objects = { + +/* Begin PBXBuildFile section */ + 1498D2341E8E89220040F4C2 /* GeneratedPluginRegistrant.m in Sources */ = {isa = PBXBuildFile; fileRef = 1498D2331E8E89220040F4C2 /* GeneratedPluginRegistrant.m */; }; + 331C808B294A63AB00263BE5 /* RunnerTests.swift in Sources */ = {isa = PBXBuildFile; fileRef = 331C807B294A618700263BE5 /* RunnerTests.swift */; }; + 3B3967161E833CAA004F5970 /* AppFrameworkInfo.plist in Resources */ = {isa = PBXBuildFile; fileRef = 3B3967151E833CAA004F5970 /* AppFrameworkInfo.plist */; }; + 67D1844F7149C97E4D91AED1 /* Pods_RunnerTests.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 5A186B4E8D8112172596D199 /* Pods_RunnerTests.framework */; }; + 74858FAF1ED2DC5600515810 /* AppDelegate.swift in Sources */ = {isa = PBXBuildFile; fileRef = 74858FAE1ED2DC5600515810 /* AppDelegate.swift */; }; + 78A318202AECB46A00862997 /* FlutterGeneratedPluginSwiftPackage in Frameworks */ = {isa = PBXBuildFile; productRef = 78A3181F2AECB46A00862997 /* FlutterGeneratedPluginSwiftPackage */; }; + 97C146FC1CF9000F007C117D /* Main.storyboard in Resources */ = {isa = PBXBuildFile; fileRef = 97C146FA1CF9000F007C117D /* Main.storyboard */; }; + 97C146FE1CF9000F007C117D /* Assets.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = 97C146FD1CF9000F007C117D /* Assets.xcassets */; }; + 97C147011CF9000F007C117D /* LaunchScreen.storyboard in Resources */ = {isa = 
PBXBuildFile; fileRef = 97C146FF1CF9000F007C117D /* LaunchScreen.storyboard */; }; + D9CFDC0E08293758D81A4569 /* Pods_Runner.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 65BA4D4BDFA6975D443D3926 /* Pods_Runner.framework */; }; +/* End PBXBuildFile section */ + +/* Begin PBXContainerItemProxy section */ + 331C8085294A63A400263BE5 /* PBXContainerItemProxy */ = { + isa = PBXContainerItemProxy; + containerPortal = 97C146E61CF9000F007C117D /* Project object */; + proxyType = 1; + remoteGlobalIDString = 97C146ED1CF9000F007C117D; + remoteInfo = Runner; + }; +/* End PBXContainerItemProxy section */ + +/* Begin PBXCopyFilesBuildPhase section */ + 9705A1C41CF9048500538489 /* Embed Frameworks */ = { + isa = PBXCopyFilesBuildPhase; + buildActionMask = 2147483647; + dstPath = ""; + dstSubfolderSpec = 10; + files = ( + ); + name = "Embed Frameworks"; + runOnlyForDeploymentPostprocessing = 0; + }; +/* End PBXCopyFilesBuildPhase section */ + +/* Begin PBXFileReference section */ + 016E85409B94BB9D9174B017 /* Pods-RunnerTests.debug.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType = text.xcconfig; name = "Pods-RunnerTests.debug.xcconfig"; path = "Target Support Files/Pods-RunnerTests/Pods-RunnerTests.debug.xcconfig"; sourceTree = ""; }; + 1498D2321E8E86230040F4C2 /* GeneratedPluginRegistrant.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = GeneratedPluginRegistrant.h; sourceTree = ""; }; + 1498D2331E8E89220040F4C2 /* GeneratedPluginRegistrant.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = GeneratedPluginRegistrant.m; sourceTree = ""; }; + 331C807B294A618700263BE5 /* RunnerTests.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = RunnerTests.swift; sourceTree = ""; }; + 331C8081294A63A400263BE5 /* RunnerTests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = RunnerTests.xctest; sourceTree 
= BUILT_PRODUCTS_DIR; }; + 3B3967151E833CAA004F5970 /* AppFrameworkInfo.plist */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist.xml; name = AppFrameworkInfo.plist; path = Flutter/AppFrameworkInfo.plist; sourceTree = ""; }; + 47DFF8C48C1F439908505463 /* Pods-Runner.release.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType = text.xcconfig; name = "Pods-Runner.release.xcconfig"; path = "Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig"; sourceTree = ""; }; + 4E411206C1FE7DC7D574C83E /* Pods-Runner.profile.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType = text.xcconfig; name = "Pods-Runner.profile.xcconfig"; path = "Target Support Files/Pods-Runner/Pods-Runner.profile.xcconfig"; sourceTree = ""; }; + 595F1BFAD4A48288E93E5BF9 /* Pods-Runner.debug.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType = text.xcconfig; name = "Pods-Runner.debug.xcconfig"; path = "Target Support Files/Pods-Runner/Pods-Runner.debug.xcconfig"; sourceTree = ""; }; + 5A186B4E8D8112172596D199 /* Pods_RunnerTests.framework */ = {isa = PBXFileReference; explicitFileType = wrapper.framework; includeInIndex = 0; path = Pods_RunnerTests.framework; sourceTree = BUILT_PRODUCTS_DIR; }; + 65BA4D4BDFA6975D443D3926 /* Pods_Runner.framework */ = {isa = PBXFileReference; explicitFileType = wrapper.framework; includeInIndex = 0; path = Pods_Runner.framework; sourceTree = BUILT_PRODUCTS_DIR; }; + 74858FAD1ED2DC5600515810 /* Runner-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = "Runner-Bridging-Header.h"; sourceTree = ""; }; + 74858FAE1ED2DC5600515810 /* AppDelegate.swift */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.swift; path = AppDelegate.swift; sourceTree = ""; }; + 78E0A7A72DC9AD7400C4905E /* FlutterGeneratedPluginSwiftPackage */ = {isa = PBXFileReference; lastKnownFileType = wrapper; name = 
FlutterGeneratedPluginSwiftPackage; path = Flutter/ephemeral/Packages/FlutterGeneratedPluginSwiftPackage; sourceTree = ""; }; + 7AFA3C8E1D35360C0083082E /* Release.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; name = Release.xcconfig; path = Flutter/Release.xcconfig; sourceTree = ""; }; + 9740EEB21CF90195004384FC /* Debug.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; name = Debug.xcconfig; path = Flutter/Debug.xcconfig; sourceTree = ""; }; + 9740EEB31CF90195004384FC /* Generated.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; name = Generated.xcconfig; path = Flutter/Generated.xcconfig; sourceTree = ""; }; + 97C146EE1CF9000F007C117D /* Runner.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = Runner.app; sourceTree = BUILT_PRODUCTS_DIR; }; + 97C146FB1CF9000F007C117D /* Base */ = {isa = PBXFileReference; lastKnownFileType = file.storyboard; name = Base; path = Base.lproj/Main.storyboard; sourceTree = ""; }; + 97C146FD1CF9000F007C117D /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = ""; }; + 97C147001CF9000F007C117D /* Base */ = {isa = PBXFileReference; lastKnownFileType = file.storyboard; name = Base; path = Base.lproj/LaunchScreen.storyboard; sourceTree = ""; }; + 97C147021CF9000F007C117D /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = ""; }; + D964C01A3C65248BF34CB761 /* Pods-RunnerTests.profile.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType = text.xcconfig; name = "Pods-RunnerTests.profile.xcconfig"; path = "Target Support Files/Pods-RunnerTests/Pods-RunnerTests.profile.xcconfig"; sourceTree = ""; }; + DE64E6E29E5C49C37771CFC7 /* Pods-RunnerTests.release.xcconfig */ = {isa = PBXFileReference; includeInIndex = 1; lastKnownFileType 
= text.xcconfig; name = "Pods-RunnerTests.release.xcconfig"; path = "Target Support Files/Pods-RunnerTests/Pods-RunnerTests.release.xcconfig"; sourceTree = ""; }; +/* End PBXFileReference section */ + +/* Begin PBXFrameworksBuildPhase section */ + 0F3A11EF9F48A60D85CD4C02 /* Frameworks */ = { + isa = PBXFrameworksBuildPhase; + buildActionMask = 2147483647; + files = ( + 67D1844F7149C97E4D91AED1 /* Pods_RunnerTests.framework in Frameworks */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; + 97C146EB1CF9000F007C117D /* Frameworks */ = { + isa = PBXFrameworksBuildPhase; + buildActionMask = 2147483647; + files = ( + 78A318202AECB46A00862997 /* FlutterGeneratedPluginSwiftPackage in Frameworks */, + D9CFDC0E08293758D81A4569 /* Pods_Runner.framework in Frameworks */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; +/* End PBXFrameworksBuildPhase section */ + +/* Begin PBXGroup section */ + 1EF994DC19D1BA65FD907BCD /* Frameworks */ = { + isa = PBXGroup; + children = ( + 65BA4D4BDFA6975D443D3926 /* Pods_Runner.framework */, + 5A186B4E8D8112172596D199 /* Pods_RunnerTests.framework */, + ); + name = Frameworks; + sourceTree = ""; + }; + 331C8082294A63A400263BE5 /* RunnerTests */ = { + isa = PBXGroup; + children = ( + 331C807B294A618700263BE5 /* RunnerTests.swift */, + ); + path = RunnerTests; + sourceTree = ""; + }; + 58FE9793BD6F1EE8EE12EA7E /* Pods */ = { + isa = PBXGroup; + children = ( + 595F1BFAD4A48288E93E5BF9 /* Pods-Runner.debug.xcconfig */, + 47DFF8C48C1F439908505463 /* Pods-Runner.release.xcconfig */, + 4E411206C1FE7DC7D574C83E /* Pods-Runner.profile.xcconfig */, + 016E85409B94BB9D9174B017 /* Pods-RunnerTests.debug.xcconfig */, + DE64E6E29E5C49C37771CFC7 /* Pods-RunnerTests.release.xcconfig */, + D964C01A3C65248BF34CB761 /* Pods-RunnerTests.profile.xcconfig */, + ); + name = Pods; + path = Pods; + sourceTree = ""; + }; + 9740EEB11CF90186004384FC /* Flutter */ = { + isa = PBXGroup; + children = ( + 78E0A7A72DC9AD7400C4905E /* 
FlutterGeneratedPluginSwiftPackage */, + 3B3967151E833CAA004F5970 /* AppFrameworkInfo.plist */, + 9740EEB21CF90195004384FC /* Debug.xcconfig */, + 7AFA3C8E1D35360C0083082E /* Release.xcconfig */, + 9740EEB31CF90195004384FC /* Generated.xcconfig */, + ); + name = Flutter; + sourceTree = ""; + }; + 97C146E51CF9000F007C117D = { + isa = PBXGroup; + children = ( + 9740EEB11CF90186004384FC /* Flutter */, + 97C146F01CF9000F007C117D /* Runner */, + 97C146EF1CF9000F007C117D /* Products */, + 331C8082294A63A400263BE5 /* RunnerTests */, + 58FE9793BD6F1EE8EE12EA7E /* Pods */, + 1EF994DC19D1BA65FD907BCD /* Frameworks */, + ); + sourceTree = ""; + }; + 97C146EF1CF9000F007C117D /* Products */ = { + isa = PBXGroup; + children = ( + 97C146EE1CF9000F007C117D /* Runner.app */, + 331C8081294A63A400263BE5 /* RunnerTests.xctest */, + ); + name = Products; + sourceTree = ""; + }; + 97C146F01CF9000F007C117D /* Runner */ = { + isa = PBXGroup; + children = ( + 97C146FA1CF9000F007C117D /* Main.storyboard */, + 97C146FD1CF9000F007C117D /* Assets.xcassets */, + 97C146FF1CF9000F007C117D /* LaunchScreen.storyboard */, + 97C147021CF9000F007C117D /* Info.plist */, + 1498D2321E8E86230040F4C2 /* GeneratedPluginRegistrant.h */, + 1498D2331E8E89220040F4C2 /* GeneratedPluginRegistrant.m */, + 74858FAE1ED2DC5600515810 /* AppDelegate.swift */, + 74858FAD1ED2DC5600515810 /* Runner-Bridging-Header.h */, + ); + path = Runner; + sourceTree = ""; + }; +/* End PBXGroup section */ + +/* Begin PBXNativeTarget section */ + 331C8080294A63A400263BE5 /* RunnerTests */ = { + isa = PBXNativeTarget; + buildConfigurationList = 331C8087294A63A400263BE5 /* Build configuration list for PBXNativeTarget "RunnerTests" */; + buildPhases = ( + CDCCCC10684A519CB103BFDE /* [CP] Check Pods Manifest.lock */, + 331C807D294A63A400263BE5 /* Sources */, + 331C807F294A63A400263BE5 /* Resources */, + 0F3A11EF9F48A60D85CD4C02 /* Frameworks */, + ); + buildRules = ( + ); + dependencies = ( + 331C8086294A63A400263BE5 /* PBXTargetDependency 
*/, + ); + name = RunnerTests; + productName = RunnerTests; + productReference = 331C8081294A63A400263BE5 /* RunnerTests.xctest */; + productType = "com.apple.product-type.bundle.unit-test"; + }; + 97C146ED1CF9000F007C117D /* Runner */ = { + isa = PBXNativeTarget; + buildConfigurationList = 97C147051CF9000F007C117D /* Build configuration list for PBXNativeTarget "Runner" */; + buildPhases = ( + 5B29FA105729A2B2AAE6A274 /* [CP] Check Pods Manifest.lock */, + 9740EEB61CF901F6004384FC /* Run Script */, + 97C146EA1CF9000F007C117D /* Sources */, + 97C146EB1CF9000F007C117D /* Frameworks */, + 97C146EC1CF9000F007C117D /* Resources */, + 9705A1C41CF9048500538489 /* Embed Frameworks */, + 8D7C205DB9970B6C536D80DE /* Embed SPM Frameworks */, + 3B06AD1E1E4923F5004D2608 /* Thin Binary */, + ); + buildRules = ( + ); + dependencies = ( + ); + name = Runner; + packageProductDependencies = ( + 78A3181F2AECB46A00862997 /* FlutterGeneratedPluginSwiftPackage */, + ); + productName = Runner; + productReference = 97C146EE1CF9000F007C117D /* Runner.app */; + productType = "com.apple.product-type.application"; + }; +/* End PBXNativeTarget section */ + +/* Begin PBXProject section */ + 97C146E61CF9000F007C117D /* Project object */ = { + isa = PBXProject; + attributes = { + BuildIndependentTargetsInParallel = YES; + LastUpgradeCheck = 1510; + ORGANIZATIONNAME = ""; + TargetAttributes = { + 331C8080294A63A400263BE5 = { + CreatedOnToolsVersion = 14.0; + TestTargetID = 97C146ED1CF9000F007C117D; + }; + 97C146ED1CF9000F007C117D = { + CreatedOnToolsVersion = 7.3.1; + LastSwiftMigration = 1100; + }; + }; + }; + buildConfigurationList = 97C146E91CF9000F007C117D /* Build configuration list for PBXProject "Runner" */; + compatibilityVersion = "Xcode 9.3"; + developmentRegion = en; + hasScannedForEncodings = 0; + knownRegions = ( + en, + Base, + ); + mainGroup = 97C146E51CF9000F007C117D; + packageReferences = ( + 781AD8BC2B33823900A9FFBB /* XCLocalSwiftPackageReference 
"FlutterGeneratedPluginSwiftPackage" */, + ); + productRefGroup = 97C146EF1CF9000F007C117D /* Products */; + projectDirPath = ""; + projectRoot = ""; + targets = ( + 97C146ED1CF9000F007C117D /* Runner */, + 331C8080294A63A400263BE5 /* RunnerTests */, + ); + }; +/* End PBXProject section */ + +/* Begin PBXResourcesBuildPhase section */ + 331C807F294A63A400263BE5 /* Resources */ = { + isa = PBXResourcesBuildPhase; + buildActionMask = 2147483647; + files = ( + ); + runOnlyForDeploymentPostprocessing = 0; + }; + 97C146EC1CF9000F007C117D /* Resources */ = { + isa = PBXResourcesBuildPhase; + buildActionMask = 2147483647; + files = ( + 97C147011CF9000F007C117D /* LaunchScreen.storyboard in Resources */, + 3B3967161E833CAA004F5970 /* AppFrameworkInfo.plist in Resources */, + 97C146FE1CF9000F007C117D /* Assets.xcassets in Resources */, + 97C146FC1CF9000F007C117D /* Main.storyboard in Resources */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; +/* End PBXResourcesBuildPhase section */ + +/* Begin PBXShellScriptBuildPhase section */ + 3B06AD1E1E4923F5004D2608 /* Thin Binary */ = { + isa = PBXShellScriptBuildPhase; + alwaysOutOfDate = 1; + buildActionMask = 2147483647; + files = ( + ); + inputPaths = ( + "${TARGET_BUILD_DIR}/${INFOPLIST_PATH}", + ); + name = "Thin Binary"; + outputPaths = ( + ); + runOnlyForDeploymentPostprocessing = 0; + shellPath = /bin/sh; + shellScript = "/bin/sh \"$FLUTTER_ROOT/packages/flutter_tools/bin/xcode_backend.sh\" embed_and_thin"; + }; + 5B29FA105729A2B2AAE6A274 /* [CP] Check Pods Manifest.lock */ = { + isa = PBXShellScriptBuildPhase; + buildActionMask = 2147483647; + files = ( + ); + inputFileListPaths = ( + ); + inputPaths = ( + "${PODS_PODFILE_DIR_PATH}/Podfile.lock", + "${PODS_ROOT}/Manifest.lock", + ); + name = "[CP] Check Pods Manifest.lock"; + outputFileListPaths = ( + ); + outputPaths = ( + "$(DERIVED_FILE_DIR)/Pods-Runner-checkManifestLockResult.txt", + ); + runOnlyForDeploymentPostprocessing = 0; + shellPath = /bin/sh; + 
shellScript = "diff \"${PODS_PODFILE_DIR_PATH}/Podfile.lock\" \"${PODS_ROOT}/Manifest.lock\" > /dev/null\nif [ $? != 0 ] ; then\n # print error to STDERR\n echo \"error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.\" >&2\n exit 1\nfi\n# This output is used by Xcode 'outputs' to avoid re-running this script phase.\necho \"SUCCESS\" > \"${SCRIPT_OUTPUT_FILE_0}\"\n"; + showEnvVarsInLog = 0; + }; + 8D7C205DB9970B6C536D80DE /* Embed SPM Frameworks */ = { + isa = PBXShellScriptBuildPhase; + buildActionMask = 2147483647; + files = ( + ); + inputFileListPaths = ( + ); + inputPaths = ( + ); + name = "Embed SPM Frameworks"; + outputFileListPaths = ( + ); + outputPaths = ( + ); + runOnlyForDeploymentPostprocessing = 0; + shellPath = /bin/sh; + shellScript = "# Copy SPM-produced dynamic frameworks (e.g., llama.framework) into the app bundle\nDEST=\"${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}\"\nmkdir -p \"$DEST\"\nfor FW_NAME in llama; do\n FW_PATH=\"${BUILT_PRODUCTS_DIR}/${FW_NAME}.framework\"\n if [ -d \"$FW_PATH\" ]; then\n echo \"Embedding ${FW_NAME}.framework\"\n cp -R \"$FW_PATH\" \"$DEST/\"\n if [ -n \"${EXPANDED_CODE_SIGN_IDENTITY}\" ] && [ \"${CODE_SIGNING_ALLOWED}\" = \"YES\" ]; then\n codesign --force --sign \"${EXPANDED_CODE_SIGN_IDENTITY}\" --preserve-metadata=identifier,entitlements \"$DEST/${FW_NAME}.framework\"\n fi\n fi\ndone\n"; + }; + 9740EEB61CF901F6004384FC /* Run Script */ = { + isa = PBXShellScriptBuildPhase; + alwaysOutOfDate = 1; + buildActionMask = 2147483647; + files = ( + ); + inputPaths = ( + ); + name = "Run Script"; + outputPaths = ( + ); + runOnlyForDeploymentPostprocessing = 0; + shellPath = /bin/sh; + shellScript = "/bin/sh \"$FLUTTER_ROOT/packages/flutter_tools/bin/xcode_backend.sh\" build"; + }; + CDCCCC10684A519CB103BFDE /* [CP] Check Pods Manifest.lock */ = { + isa = PBXShellScriptBuildPhase; + buildActionMask = 2147483647; + files = ( + ); + inputFileListPaths = ( + ); + 
inputPaths = ( + "${PODS_PODFILE_DIR_PATH}/Podfile.lock", + "${PODS_ROOT}/Manifest.lock", + ); + name = "[CP] Check Pods Manifest.lock"; + outputFileListPaths = ( + ); + outputPaths = ( + "$(DERIVED_FILE_DIR)/Pods-RunnerTests-checkManifestLockResult.txt", + ); + runOnlyForDeploymentPostprocessing = 0; + shellPath = /bin/sh; + shellScript = "diff \"${PODS_PODFILE_DIR_PATH}/Podfile.lock\" \"${PODS_ROOT}/Manifest.lock\" > /dev/null\nif [ $? != 0 ] ; then\n # print error to STDERR\n echo \"error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.\" >&2\n exit 1\nfi\n# This output is used by Xcode 'outputs' to avoid re-running this script phase.\necho \"SUCCESS\" > \"${SCRIPT_OUTPUT_FILE_0}\"\n"; + showEnvVarsInLog = 0; + }; +/* End PBXShellScriptBuildPhase section */ + +/* Begin PBXSourcesBuildPhase section */ + 331C807D294A63A400263BE5 /* Sources */ = { + isa = PBXSourcesBuildPhase; + buildActionMask = 2147483647; + files = ( + 331C808B294A63AB00263BE5 /* RunnerTests.swift in Sources */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; + 97C146EA1CF9000F007C117D /* Sources */ = { + isa = PBXSourcesBuildPhase; + buildActionMask = 2147483647; + files = ( + 74858FAF1ED2DC5600515810 /* AppDelegate.swift in Sources */, + 1498D2341E8E89220040F4C2 /* GeneratedPluginRegistrant.m in Sources */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; +/* End PBXSourcesBuildPhase section */ + +/* Begin PBXTargetDependency section */ + 331C8086294A63A400263BE5 /* PBXTargetDependency */ = { + isa = PBXTargetDependency; + target = 97C146ED1CF9000F007C117D /* Runner */; + targetProxy = 331C8085294A63A400263BE5 /* PBXContainerItemProxy */; + }; +/* End PBXTargetDependency section */ + +/* Begin PBXVariantGroup section */ + 97C146FA1CF9000F007C117D /* Main.storyboard */ = { + isa = PBXVariantGroup; + children = ( + 97C146FB1CF9000F007C117D /* Base */, + ); + name = Main.storyboard; + sourceTree = ""; + }; + 
97C146FF1CF9000F007C117D /* LaunchScreen.storyboard */ = { + isa = PBXVariantGroup; + children = ( + 97C147001CF9000F007C117D /* Base */, + ); + name = LaunchScreen.storyboard; + sourceTree = ""; + }; +/* End PBXVariantGroup section */ + +/* Begin XCBuildConfiguration section */ + 249021D3217E4FDB00AE95B9 /* Profile */ = { + isa = XCBuildConfiguration; + buildSettings = { + ALWAYS_SEARCH_USER_PATHS = NO; + ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES; + CLANG_ANALYZER_NONNULL = YES; + CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x"; + CLANG_CXX_LIBRARY = "libc++"; + CLANG_ENABLE_MODULES = YES; + CLANG_ENABLE_OBJC_ARC = YES; + CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; + CLANG_WARN_BOOL_CONVERSION = YES; + CLANG_WARN_COMMA = YES; + CLANG_WARN_CONSTANT_CONVERSION = YES; + CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; + CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; + CLANG_WARN_EMPTY_BODY = YES; + CLANG_WARN_ENUM_CONVERSION = YES; + CLANG_WARN_INFINITE_RECURSION = YES; + CLANG_WARN_INT_CONVERSION = YES; + CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; + CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; + CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; + CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; + CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; + CLANG_WARN_STRICT_PROTOTYPES = YES; + CLANG_WARN_SUSPICIOUS_MOVE = YES; + CLANG_WARN_UNREACHABLE_CODE = YES; + CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; + "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer"; + COPY_PHASE_STRIP = NO; + DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; + ENABLE_NS_ASSERTIONS = NO; + ENABLE_STRICT_OBJC_MSGSEND = YES; + ENABLE_USER_SCRIPT_SANDBOXING = NO; + GCC_C_LANGUAGE_STANDARD = gnu99; + GCC_NO_COMMON_BLOCKS = YES; + GCC_WARN_64_TO_32_BIT_CONVERSION = YES; + GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; + GCC_WARN_UNDECLARED_SELECTOR = YES; + GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; + GCC_WARN_UNUSED_FUNCTION = YES; + GCC_WARN_UNUSED_VARIABLE = YES; + IPHONEOS_DEPLOYMENT_TARGET = 13.0; + 
MTL_ENABLE_DEBUG_INFO = NO; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = iphoneos; + TARGETED_DEVICE_FAMILY = "1,2"; + VALIDATE_PRODUCT = YES; + }; + name = Profile; + }; + 249021D4217E4FDB00AE95B9 /* Profile */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = 7AFA3C8E1D35360C0083082E /* Release.xcconfig */; + buildSettings = { + ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; + CLANG_ENABLE_MODULES = YES; + CURRENT_PROJECT_VERSION = "$(FLUTTER_BUILD_NUMBER)"; + DEVELOPMENT_TEAM = PRDQGB267K; + ENABLE_BITCODE = NO; + INFOPLIST_FILE = Runner/Info.plist; + LD_RUNPATH_SEARCH_PATHS = ( + "$(inherited)", + "@executable_path/Frameworks", + ); + PRODUCT_BUNDLE_IDENTIFIER = dev.hyodot.flutterOndeviceAiExample; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_OBJC_BRIDGING_HEADER = "Runner/Runner-Bridging-Header.h"; + SWIFT_VERSION = 5.0; + VERSIONING_SYSTEM = "apple-generic"; + }; + name = Profile; + }; + 331C8088294A63A400263BE5 /* Debug */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = 016E85409B94BB9D9174B017 /* Pods-RunnerTests.debug.xcconfig */; + buildSettings = { + BUNDLE_LOADER = "$(TEST_HOST)"; + CODE_SIGN_STYLE = Automatic; + CURRENT_PROJECT_VERSION = 1; + GENERATE_INFOPLIST_FILE = YES; + MARKETING_VERSION = 1.0; + PRODUCT_BUNDLE_IDENTIFIER = dev.hyodot.flutterOndeviceAiExample.RunnerTests; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_ACTIVE_COMPILATION_CONDITIONS = DEBUG; + SWIFT_OPTIMIZATION_LEVEL = "-Onone"; + SWIFT_VERSION = 5.0; + TEST_HOST = "$(BUILT_PRODUCTS_DIR)/Runner.app/$(BUNDLE_EXECUTABLE_FOLDER_PATH)/Runner"; + }; + name = Debug; + }; + 331C8089294A63A400263BE5 /* Release */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = DE64E6E29E5C49C37771CFC7 /* Pods-RunnerTests.release.xcconfig */; + buildSettings = { + BUNDLE_LOADER = "$(TEST_HOST)"; + CODE_SIGN_STYLE = Automatic; + CURRENT_PROJECT_VERSION = 1; + GENERATE_INFOPLIST_FILE = YES; + MARKETING_VERSION = 1.0; + PRODUCT_BUNDLE_IDENTIFIER = 
dev.hyodot.flutterOndeviceAiExample.RunnerTests; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_VERSION = 5.0; + TEST_HOST = "$(BUILT_PRODUCTS_DIR)/Runner.app/$(BUNDLE_EXECUTABLE_FOLDER_PATH)/Runner"; + }; + name = Release; + }; + 331C808A294A63A400263BE5 /* Profile */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = D964C01A3C65248BF34CB761 /* Pods-RunnerTests.profile.xcconfig */; + buildSettings = { + BUNDLE_LOADER = "$(TEST_HOST)"; + CODE_SIGN_STYLE = Automatic; + CURRENT_PROJECT_VERSION = 1; + GENERATE_INFOPLIST_FILE = YES; + MARKETING_VERSION = 1.0; + PRODUCT_BUNDLE_IDENTIFIER = dev.hyodot.flutterOndeviceAiExample.RunnerTests; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_VERSION = 5.0; + TEST_HOST = "$(BUILT_PRODUCTS_DIR)/Runner.app/$(BUNDLE_EXECUTABLE_FOLDER_PATH)/Runner"; + }; + name = Profile; + }; + 97C147031CF9000F007C117D /* Debug */ = { + isa = XCBuildConfiguration; + buildSettings = { + ALWAYS_SEARCH_USER_PATHS = NO; + ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES; + CLANG_ANALYZER_NONNULL = YES; + CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x"; + CLANG_CXX_LIBRARY = "libc++"; + CLANG_ENABLE_MODULES = YES; + CLANG_ENABLE_OBJC_ARC = YES; + CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; + CLANG_WARN_BOOL_CONVERSION = YES; + CLANG_WARN_COMMA = YES; + CLANG_WARN_CONSTANT_CONVERSION = YES; + CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; + CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; + CLANG_WARN_EMPTY_BODY = YES; + CLANG_WARN_ENUM_CONVERSION = YES; + CLANG_WARN_INFINITE_RECURSION = YES; + CLANG_WARN_INT_CONVERSION = YES; + CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; + CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; + CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; + CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; + CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; + CLANG_WARN_STRICT_PROTOTYPES = YES; + CLANG_WARN_SUSPICIOUS_MOVE = YES; + CLANG_WARN_UNREACHABLE_CODE = YES; + CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; + "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = 
"iPhone Developer"; + COPY_PHASE_STRIP = NO; + DEBUG_INFORMATION_FORMAT = dwarf; + ENABLE_STRICT_OBJC_MSGSEND = YES; + ENABLE_TESTABILITY = YES; + ENABLE_USER_SCRIPT_SANDBOXING = NO; + GCC_C_LANGUAGE_STANDARD = gnu99; + GCC_DYNAMIC_NO_PIC = NO; + GCC_NO_COMMON_BLOCKS = YES; + GCC_OPTIMIZATION_LEVEL = 0; + GCC_PREPROCESSOR_DEFINITIONS = ( + "DEBUG=1", + "$(inherited)", + ); + GCC_WARN_64_TO_32_BIT_CONVERSION = YES; + GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; + GCC_WARN_UNDECLARED_SELECTOR = YES; + GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; + GCC_WARN_UNUSED_FUNCTION = YES; + GCC_WARN_UNUSED_VARIABLE = YES; + IPHONEOS_DEPLOYMENT_TARGET = 13.0; + MTL_ENABLE_DEBUG_INFO = YES; + ONLY_ACTIVE_ARCH = YES; + SDKROOT = iphoneos; + TARGETED_DEVICE_FAMILY = "1,2"; + }; + name = Debug; + }; + 97C147041CF9000F007C117D /* Release */ = { + isa = XCBuildConfiguration; + buildSettings = { + ALWAYS_SEARCH_USER_PATHS = NO; + ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES; + CLANG_ANALYZER_NONNULL = YES; + CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x"; + CLANG_CXX_LIBRARY = "libc++"; + CLANG_ENABLE_MODULES = YES; + CLANG_ENABLE_OBJC_ARC = YES; + CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; + CLANG_WARN_BOOL_CONVERSION = YES; + CLANG_WARN_COMMA = YES; + CLANG_WARN_CONSTANT_CONVERSION = YES; + CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; + CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; + CLANG_WARN_EMPTY_BODY = YES; + CLANG_WARN_ENUM_CONVERSION = YES; + CLANG_WARN_INFINITE_RECURSION = YES; + CLANG_WARN_INT_CONVERSION = YES; + CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; + CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; + CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; + CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; + CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; + CLANG_WARN_STRICT_PROTOTYPES = YES; + CLANG_WARN_SUSPICIOUS_MOVE = YES; + CLANG_WARN_UNREACHABLE_CODE = YES; + CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; + "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer"; + 
COPY_PHASE_STRIP = NO; + DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; + ENABLE_NS_ASSERTIONS = NO; + ENABLE_STRICT_OBJC_MSGSEND = YES; + ENABLE_USER_SCRIPT_SANDBOXING = NO; + GCC_C_LANGUAGE_STANDARD = gnu99; + GCC_NO_COMMON_BLOCKS = YES; + GCC_WARN_64_TO_32_BIT_CONVERSION = YES; + GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; + GCC_WARN_UNDECLARED_SELECTOR = YES; + GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; + GCC_WARN_UNUSED_FUNCTION = YES; + GCC_WARN_UNUSED_VARIABLE = YES; + IPHONEOS_DEPLOYMENT_TARGET = 13.0; + MTL_ENABLE_DEBUG_INFO = NO; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = iphoneos; + SWIFT_COMPILATION_MODE = wholemodule; + SWIFT_OPTIMIZATION_LEVEL = "-O"; + TARGETED_DEVICE_FAMILY = "1,2"; + VALIDATE_PRODUCT = YES; + }; + name = Release; + }; + 97C147061CF9000F007C117D /* Debug */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = 9740EEB21CF90195004384FC /* Debug.xcconfig */; + buildSettings = { + ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; + CLANG_ENABLE_MODULES = YES; + CURRENT_PROJECT_VERSION = "$(FLUTTER_BUILD_NUMBER)"; + DEVELOPMENT_TEAM = PRDQGB267K; + ENABLE_BITCODE = NO; + INFOPLIST_FILE = Runner/Info.plist; + LD_RUNPATH_SEARCH_PATHS = ( + "$(inherited)", + "@executable_path/Frameworks", + ); + PRODUCT_BUNDLE_IDENTIFIER = dev.hyodot.flutterOndeviceAiExample; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_OBJC_BRIDGING_HEADER = "Runner/Runner-Bridging-Header.h"; + SWIFT_OPTIMIZATION_LEVEL = "-Onone"; + SWIFT_VERSION = 5.0; + VERSIONING_SYSTEM = "apple-generic"; + }; + name = Debug; + }; + 97C147071CF9000F007C117D /* Release */ = { + isa = XCBuildConfiguration; + baseConfigurationReference = 7AFA3C8E1D35360C0083082E /* Release.xcconfig */; + buildSettings = { + ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; + CLANG_ENABLE_MODULES = YES; + CURRENT_PROJECT_VERSION = "$(FLUTTER_BUILD_NUMBER)"; + DEVELOPMENT_TEAM = PRDQGB267K; + ENABLE_BITCODE = NO; + INFOPLIST_FILE = Runner/Info.plist; + LD_RUNPATH_SEARCH_PATHS = ( + "$(inherited)", + 
"@executable_path/Frameworks", + ); + PRODUCT_BUNDLE_IDENTIFIER = dev.hyodot.flutterOndeviceAiExample; + PRODUCT_NAME = "$(TARGET_NAME)"; + SWIFT_OBJC_BRIDGING_HEADER = "Runner/Runner-Bridging-Header.h"; + SWIFT_VERSION = 5.0; + VERSIONING_SYSTEM = "apple-generic"; + }; + name = Release; + }; +/* End XCBuildConfiguration section */ + +/* Begin XCConfigurationList section */ + 331C8087294A63A400263BE5 /* Build configuration list for PBXNativeTarget "RunnerTests" */ = { + isa = XCConfigurationList; + buildConfigurations = ( + 331C8088294A63A400263BE5 /* Debug */, + 331C8089294A63A400263BE5 /* Release */, + 331C808A294A63A400263BE5 /* Profile */, + ); + defaultConfigurationIsVisible = 0; + defaultConfigurationName = Release; + }; + 97C146E91CF9000F007C117D /* Build configuration list for PBXProject "Runner" */ = { + isa = XCConfigurationList; + buildConfigurations = ( + 97C147031CF9000F007C117D /* Debug */, + 97C147041CF9000F007C117D /* Release */, + 249021D3217E4FDB00AE95B9 /* Profile */, + ); + defaultConfigurationIsVisible = 0; + defaultConfigurationName = Release; + }; + 97C147051CF9000F007C117D /* Build configuration list for PBXNativeTarget "Runner" */ = { + isa = XCConfigurationList; + buildConfigurations = ( + 97C147061CF9000F007C117D /* Debug */, + 97C147071CF9000F007C117D /* Release */, + 249021D4217E4FDB00AE95B9 /* Profile */, + ); + defaultConfigurationIsVisible = 0; + defaultConfigurationName = Release; + }; +/* End XCConfigurationList section */ + +/* Begin XCLocalSwiftPackageReference section */ + 781AD8BC2B33823900A9FFBB /* XCLocalSwiftPackageReference "FlutterGeneratedPluginSwiftPackage" */ = { + isa = XCLocalSwiftPackageReference; + relativePath = Flutter/ephemeral/Packages/FlutterGeneratedPluginSwiftPackage; + }; +/* End XCLocalSwiftPackageReference section */ + +/* Begin XCSwiftPackageProductDependency section */ + 78A3181F2AECB46A00862997 /* FlutterGeneratedPluginSwiftPackage */ = { + isa = XCSwiftPackageProductDependency; + productName = 
FlutterGeneratedPluginSwiftPackage; + }; +/* End XCSwiftPackageProductDependency section */ + }; + rootObject = 97C146E61CF9000F007C117D /* Project object */; +} diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme b/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme new file mode 100644 index 0000000..c3fedb2 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme @@ -0,0 +1,119 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/AppDelegate.swift b/libraries/flutter_ondevice_ai/example/ios/Runner/AppDelegate.swift new file mode 100644 index 0000000..6266644 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/AppDelegate.swift @@ -0,0 +1,13 @@ +import Flutter +import UIKit + +@main +@objc class AppDelegate: FlutterAppDelegate { + override func application( + _ application: UIApplication, + didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? 
+ ) -> Bool { + GeneratedPluginRegistrant.register(with: self) + return super.application(application, didFinishLaunchingWithOptions: launchOptions) + } +} diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json new file mode 100644 index 0000000..e882ab9 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json @@ -0,0 +1,122 @@ +{ + "images": [ + { + "size": "20x20", + "idiom": "iphone", + "filename": "Icon-App-20x20@2x.png", + "scale": "2x" + }, + { + "size": "20x20", + "idiom": "iphone", + "filename": "Icon-App-20x20@3x.png", + "scale": "3x" + }, + { + "size": "29x29", + "idiom": "iphone", + "filename": "Icon-App-29x29@1x.png", + "scale": "1x" + }, + { + "size": "29x29", + "idiom": "iphone", + "filename": "Icon-App-29x29@2x.png", + "scale": "2x" + }, + { + "size": "29x29", + "idiom": "iphone", + "filename": "Icon-App-29x29@3x.png", + "scale": "3x" + }, + { + "size": "40x40", + "idiom": "iphone", + "filename": "Icon-App-40x40@2x.png", + "scale": "2x" + }, + { + "size": "40x40", + "idiom": "iphone", + "filename": "Icon-App-40x40@3x.png", + "scale": "3x" + }, + { + "size": "60x60", + "idiom": "iphone", + "filename": "Icon-App-60x60@2x.png", + "scale": "2x" + }, + { + "size": "60x60", + "idiom": "iphone", + "filename": "Icon-App-60x60@3x.png", + "scale": "3x" + }, + { + "size": "20x20", + "idiom": "ipad", + "filename": "Icon-App-20x20@1x.png", + "scale": "1x" + }, + { + "size": "20x20", + "idiom": "ipad", + "filename": "Icon-App-20x20@2x.png", + "scale": "2x" + }, + { + "size": "29x29", + "idiom": "ipad", + "filename": "Icon-App-29x29@1x.png", + "scale": "1x" + }, + { + "size": "29x29", + "idiom": "ipad", + "filename": "Icon-App-29x29@2x.png", + "scale": "2x" + }, + { + "size": "40x40", + "idiom": "ipad", + "filename": "Icon-App-40x40@1x.png", + "scale": 
"1x" + }, + { + "size": "40x40", + "idiom": "ipad", + "filename": "Icon-App-40x40@2x.png", + "scale": "2x" + }, + { + "size": "76x76", + "idiom": "ipad", + "filename": "Icon-App-76x76@1x.png", + "scale": "1x" + }, + { + "size": "76x76", + "idiom": "ipad", + "filename": "Icon-App-76x76@2x.png", + "scale": "2x" + }, + { + "size": "83.5x83.5", + "idiom": "ipad", + "filename": "Icon-App-83.5x83.5@2x.png", + "scale": "2x" + }, + { + "size": "1024x1024", + "idiom": "ios-marketing", + "filename": "Icon-App-1024x1024@1x.png", + "scale": "1x" + } + ], + "info": { + "version": 1, + "author": "xcode" + } +} diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-1024x1024@1x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-1024x1024@1x.png new file mode 100644 index 0000000..dc9ada4 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-1024x1024@1x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@1x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@1x.png new file mode 100644 index 0000000..7353c41 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@1x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@2x.png new file mode 100644 index 0000000..797d452 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@3x.png 
b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@3x.png new file mode 100644 index 0000000..6ed2d93 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-20x20@3x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@1x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@1x.png new file mode 100644 index 0000000..4cd7b00 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@1x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@2x.png new file mode 100644 index 0000000..fe73094 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@3x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@3x.png new file mode 100644 index 0000000..321773c Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-29x29@3x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@1x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@1x.png new file mode 100644 index 0000000..797d452 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@1x.png differ diff --git 
a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@2x.png new file mode 100644 index 0000000..502f463 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@3x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@3x.png new file mode 100644 index 0000000..0ec3034 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-40x40@3x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@2x.png new file mode 100644 index 0000000..0ec3034 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@3x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@3x.png new file mode 100644 index 0000000..e9f5fea Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-60x60@3x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@1x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@1x.png new file mode 100644 index 0000000..84ac32a Binary files /dev/null and 
b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@1x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@2x.png new file mode 100644 index 0000000..8953cba Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-76x76@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-83.5x83.5@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-83.5x83.5@2x.png new file mode 100644 index 0000000..0467bf1 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Icon-App-83.5x83.5@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json new file mode 100644 index 0000000..781d7cd --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json @@ -0,0 +1,23 @@ +{ + "images": [ + { + "idiom": "universal", + "filename": "LaunchImage.png", + "scale": "1x" + }, + { + "idiom": "universal", + "filename": "LaunchImage@2x.png", + "scale": "2x" + }, + { + "idiom": "universal", + "filename": "LaunchImage@3x.png", + "scale": "3x" + } + ], + "info": { + "version": 1, + "author": "xcode" + } +} diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage.png new file mode 100644 index 0000000..9da19ea Binary files /dev/null and 
b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@2x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@2x.png new file mode 100644 index 0000000..9da19ea Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@2x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@3x.png b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@3x.png new file mode 100644 index 0000000..9da19ea Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/LaunchImage@3x.png differ diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md new file mode 100644 index 0000000..b5b843a --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md @@ -0,0 +1,5 @@ +# Launch Screen Assets + +You can customize the launch screen with your own desired assets by replacing the image files in this directory. + +You can also do it by opening your Flutter project's Xcode project with `open ios/Runner.xcworkspace`, selecting `Runner/Assets.xcassets` in the Project Navigator and dropping in the desired images. 
diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/LaunchScreen.storyboard b/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/LaunchScreen.storyboard new file mode 100644 index 0000000..f2e259c --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/LaunchScreen.storyboard @@ -0,0 +1,37 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/Main.storyboard b/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/Main.storyboard new file mode 100644 index 0000000..f3c2851 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/Main.storyboard @@ -0,0 +1,26 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Info.plist b/libraries/flutter_ondevice_ai/example/ios/Runner/Info.plist new file mode 100644 index 0000000..e9471fe --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Info.plist @@ -0,0 +1,49 @@ + + + + + CFBundleDevelopmentRegion + $(DEVELOPMENT_LANGUAGE) + CFBundleDisplayName + Flutter Ondevice Ai Example + CFBundleExecutable + $(EXECUTABLE_NAME) + CFBundleIdentifier + $(PRODUCT_BUNDLE_IDENTIFIER) + CFBundleInfoDictionaryVersion + 6.0 + CFBundleName + flutter_ondevice_ai_example + CFBundlePackageType + APPL + CFBundleShortVersionString + $(FLUTTER_BUILD_NAME) + CFBundleSignature + ???? 
+ CFBundleVersion + $(FLUTTER_BUILD_NUMBER) + LSRequiresIPhoneOS + + UILaunchStoryboardName + LaunchScreen + UIMainStoryboardFile + Main + UISupportedInterfaceOrientations + + UIInterfaceOrientationPortrait + UIInterfaceOrientationLandscapeLeft + UIInterfaceOrientationLandscapeRight + + UISupportedInterfaceOrientations~ipad + + UIInterfaceOrientationPortrait + UIInterfaceOrientationPortraitUpsideDown + UIInterfaceOrientationLandscapeLeft + UIInterfaceOrientationLandscapeRight + + CADisableMinimumFrameDurationOnPhone + + UIApplicationSupportsIndirectInputEvents + + + diff --git a/libraries/flutter_ondevice_ai/example/ios/Runner/Runner-Bridging-Header.h b/libraries/flutter_ondevice_ai/example/ios/Runner/Runner-Bridging-Header.h new file mode 100644 index 0000000..308a2a5 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/Runner/Runner-Bridging-Header.h @@ -0,0 +1 @@ +#import "GeneratedPluginRegistrant.h" diff --git a/libraries/flutter_ondevice_ai/example/ios/RunnerTests/RunnerTests.swift b/libraries/flutter_ondevice_ai/example/ios/RunnerTests/RunnerTests.swift new file mode 100644 index 0000000..86a7c3b --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/ios/RunnerTests/RunnerTests.swift @@ -0,0 +1,12 @@ +import Flutter +import UIKit +import XCTest + +class RunnerTests: XCTestCase { + + func testExample() { + // If you add code to the Runner application, consider adding tests here. + // See https://developer.apple.com/documentation/xctest for more information about using XCTest. 
+ } + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/app_state.dart b/libraries/flutter_ondevice_ai/example/lib/app_state.dart new file mode 100644 index 0000000..7fea022 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/app_state.dart @@ -0,0 +1,284 @@ +import 'dart:async'; + +import 'package:flutter/foundation.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +class FeatureInfo { + final String id; + final String name; + final String description; + final IconName icon; + final bool isAvailable; + final bool isComingSoon; + + const FeatureInfo({ + required this.id, + required this.name, + required this.description, + required this.icon, + this.isAvailable = false, + this.isComingSoon = false, + }); +} + +enum IconName { + description, + label, + documentScanner, + chatBubble, + language, + edit, + checkCircle, + image, + autoFixHigh, +} + +class DeviceInfoDisplay { + final String platform; + final String osVersion; + final bool supportsOnDeviceAI; + final String provider; + + const DeviceInfoDisplay({ + required this.platform, + required this.osVersion, + required this.supportsOnDeviceAI, + required this.provider, + }); +} + +class ModelState { + final InferenceEngine currentEngine; + final List availableModels; + final List downloadedModelIds; + final String? loadedModelId; + final ModelDownloadProgress? downloadProgress; + final bool isDownloading; + + const ModelState({ + this.currentEngine = InferenceEngine.none, + this.availableModels = const [], + this.downloadedModelIds = const [], + this.loadedModelId, + this.downloadProgress, + this.isDownloading = false, + }); + + ModelState copyWith({ + InferenceEngine? currentEngine, + List? availableModels, + List? downloadedModelIds, + String? loadedModelId, + ModelDownloadProgress? downloadProgress, + bool? isDownloading, + bool clearLoadedModelId = false, + bool clearDownloadProgress = false, + }) { + return ModelState( + currentEngine: currentEngine ?? 
this.currentEngine, + availableModels: availableModels ?? this.availableModels, + downloadedModelIds: downloadedModelIds ?? this.downloadedModelIds, + loadedModelId: clearLoadedModelId ? null : (loadedModelId ?? this.loadedModelId), + downloadProgress: clearDownloadProgress ? null : (downloadProgress ?? this.downloadProgress), + isDownloading: isDownloading ?? this.isDownloading, + ); + } +} + +enum SDKState { notInitialized, initializing, initialized, error } + +const _featureDefinitions = [ + (id: 'summarize', name: 'Summarize', description: 'Condense long text into concise summaries', icon: IconName.description), + (id: 'classify', name: 'Classify', description: 'Categorize content into predefined labels', icon: IconName.label), + (id: 'extract', name: 'Extract', description: 'Extract entities and key information from text', icon: IconName.documentScanner), + (id: 'chat', name: 'Chat', description: 'Have conversational interactions with AI', icon: IconName.chatBubble), + (id: 'translate', name: 'Translate', description: 'Translate text between languages', icon: IconName.language), + (id: 'rewrite', name: 'Rewrite', description: 'Rewrite text in different styles or tones', icon: IconName.edit), + (id: 'proofread', name: 'Proofread', description: 'Check and correct grammar and spelling', icon: IconName.checkCircle), + (id: 'describeImage', name: 'Describe Image', description: 'Generate descriptions for images', icon: IconName.image), + (id: 'generateImage', name: 'Generate Image', description: 'Generate images from text prompts', icon: IconName.autoFixHigh), +]; + +const _comingSoonFeatures = {'describeImage', 'generateImage'}; + +class AppState extends ChangeNotifier { + final _ai = FlutterOndeviceAi.instance; + + AppState() { + initializeSDK(); + } + + SDKState _sdkState = SDKState.notInitialized; + String? _errorMessage; + DeviceInfoDisplay? _deviceInfo; + DeviceCapability? 
_capability; + List<FeatureInfo> _availableFeatures = []; + bool _isModelReady = false; + ModelState _modelState = const ModelState(); + + SDKState get sdkState => _sdkState; + String? get errorMessage => _errorMessage; + DeviceInfoDisplay? get deviceInfo => _deviceInfo; + DeviceCapability? get capability => _capability; + List<FeatureInfo> get availableFeatures => _availableFeatures; + bool get isModelReady => _isModelReady; + ModelState get modelState => _modelState; + + Future<void> initializeSDK() async { + if (_sdkState == SDKState.initializing || _sdkState == SDKState.initialized) { + return; + } + + _sdkState = SDKState.initializing; + _errorMessage = null; + notifyListeners(); + + try { + await _ai.initialize().timeout( + const Duration(seconds: 35), + onTimeout: () => throw TimeoutException('SDK initialization timed out'), + ); + final cap = await _ai.getDeviceCapability().timeout( + const Duration(seconds: 15), + onTimeout: () => throw TimeoutException('Device capability check timed out'), + ); + _capability = cap; + _isModelReady = cap.isModelReady; + + final isIOS = !kIsWeb && defaultTargetPlatform == TargetPlatform.iOS; + _deviceInfo = DeviceInfoDisplay( + platform: kIsWeb ? 'Web' : (isIOS ? 'iOS' : 'Android'), + osVersion: kIsWeb ? 'Chrome' : 'Unknown', + supportsOnDeviceAI: cap.isSupported, + provider: cap.platform == OndeviceAiPlatform.ios + ? 'Apple Intelligence' + : cap.platform == OndeviceAiPlatform.android + ? 'Gemini Nano' + : 'Chrome Built-in AI', + ); + + final modelReady = cap.isModelReady; + _availableFeatures = _featureDefinitions.map((def) { + final isComingSoon = _comingSoonFeatures.contains(def.id); + final featureMap = cap.features; + final isFeatureAvailable = featureMap[def.id] ?? false; + return FeatureInfo( + id: def.id, + name: def.name, + description: def.description, + icon: def.icon, + isAvailable: isComingSoon ?
false : modelReady && isFeatureAvailable, + isComingSoon: isComingSoon, + ); + }).toList(); + + _sdkState = SDKState.initialized; + notifyListeners(); + + // Load model info after initialization + try { + final results = await Future.wait([ + _ai.getAvailableModels(), + _ai.getDownloadedModels(), + _ai.getLoadedModel(), + _ai.getCurrentEngine(), + ]); + _modelState = _modelState.copyWith( + availableModels: results[0] as List, + downloadedModelIds: results[1] as List, + loadedModelId: results[2] as String?, + currentEngine: results[3] as InferenceEngine, + clearLoadedModelId: results[2] == null, + ); + notifyListeners(); + } catch (_) { + // Model management may not be available on all devices + } + } catch (e) { + debugPrint('[AppState] ERROR: $e'); + _sdkState = SDKState.error; + _errorMessage = e.toString(); + notifyListeners(); + } + } + + Future<void> refreshModels() async { + debugPrint('[AppState] refreshModels()'); + try { + final results = await Future.wait([ + _ai.getAvailableModels(), + _ai.getDownloadedModels(), + _ai.getLoadedModel(), + _ai.getCurrentEngine(), + ]); + _modelState = _modelState.copyWith( + availableModels: results[0] as List, + downloadedModelIds: results[1] as List, + loadedModelId: results[2] as String?, + currentEngine: results[3] as InferenceEngine, + clearLoadedModelId: results[2] == null, + ); + debugPrint('[AppState] refreshModels() done — engine=${_modelState.currentEngine}, loaded=${_modelState.loadedModelId}, downloaded=${_modelState.downloadedModelIds}'); + notifyListeners(); + } catch (e, st) { + debugPrint('[AppState] refreshModels() ERROR: $e\n$st'); + } + } + + Future<void> downloadModelById(String modelId) async { + debugPrint('[AppState] downloadModel($modelId) starting...'); + _modelState = _modelState.copyWith(isDownloading: true, clearDownloadProgress: true); + notifyListeners(); + try { + await _ai.downloadModel(modelId, onProgress: (progress) { + debugPrint('[AppState] downloadModel($modelId) progress: ${(progress.progress *
100).round()}%'); + _modelState = _modelState.copyWith(downloadProgress: progress); + notifyListeners(); + }); + debugPrint('[AppState] downloadModel($modelId) completed'); + await refreshModels(); + } catch (e, st) { + debugPrint('[AppState] downloadModel($modelId) ERROR: $e\n$st'); + rethrow; + } finally { + _modelState = _modelState.copyWith(isDownloading: false, clearDownloadProgress: true); + notifyListeners(); + } + } + + Future<void> loadModelById(String modelId) async { + debugPrint('[AppState] loadModel($modelId) starting...'); + try { + await _ai.loadModel(modelId); + debugPrint('[AppState] loadModel($modelId) success'); + await refreshModels(); + } catch (e, st) { + debugPrint('[AppState] loadModel($modelId) ERROR: $e\n$st'); + rethrow; + } + } + + Future<void> deleteModelById(String modelId) async { + debugPrint('[AppState] deleteModel($modelId) starting...'); + try { + await _ai.deleteModel(modelId); + debugPrint('[AppState] deleteModel($modelId) success'); + await refreshModels(); + } catch (e, st) { + debugPrint('[AppState] deleteModel($modelId) ERROR: $e\n$st'); + rethrow; + } + } + + Future<void> switchToDeviceAI() async { + debugPrint('[AppState] switchToDeviceAI() starting...'); + try { + await _ai.switchToDeviceAI(); + debugPrint('[AppState] switchToDeviceAI() success'); + await refreshModels(); + } catch (e, st) { + debugPrint('[AppState] switchToDeviceAI() ERROR: $e\n$st'); + rethrow; + } + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/main.dart b/libraries/flutter_ondevice_ai/example/lib/main.dart new file mode 100644 index 0000000..edbe329 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/main.dart @@ -0,0 +1,122 @@ +import 'package:flutter/material.dart'; +import 'package:flutter/services.dart'; +import 'package:provider/provider.dart'; + +import 'app_state.dart'; +import 'screens/features_screen.dart'; +import 'screens/framework_screen.dart'; +import 'screens/device_screen.dart'; +import 'screens/settings_screen.dart'; +import
'screens/feature_detail_screen.dart'; +import 'screens/framework_detail_screen.dart'; + +void main() { + runApp(const MyApp()); +} + +class MyApp extends StatelessWidget { + const MyApp({super.key}); + + @override + Widget build(BuildContext context) { + return ChangeNotifierProvider( + create: (_) => AppState(), + child: MaterialApp( + title: 'OnDevice AI', + debugShowCheckedModeBanner: false, + theme: ThemeData( + useMaterial3: true, + scaffoldBackgroundColor: const Color(0xFFF2F2F7), + appBarTheme: const AppBarTheme( + backgroundColor: Colors.white, + foregroundColor: Color(0xFF333333), + elevation: 0, + scrolledUnderElevation: 0.5, + systemOverlayStyle: SystemUiOverlayStyle.dark, + ), + navigationBarTheme: NavigationBarThemeData( + backgroundColor: Colors.white, + indicatorColor: const Color(0xFF007AFF).withValues(alpha: 0.12), + labelTextStyle: WidgetStateProperty.resolveWith((states) { + if (states.contains(WidgetState.selected)) { + return const TextStyle(fontSize: 11, fontWeight: FontWeight.w600, color: Color(0xFF007AFF)); + } + return const TextStyle(fontSize: 11, fontWeight: FontWeight.w500, color: Color(0xFF8E8E93)); + }), + ), + ), + home: const MainScreen(), + onGenerateRoute: (settings) { + if (settings.name == '/feature-detail') { + final args = settings.arguments as Map; + return MaterialPageRoute( + builder: (_) => FeatureDetailScreen(id: args['id']!, name: args['name']!), + ); + } + if (settings.name == '/framework-detail') { + final args = settings.arguments as Map; + return MaterialPageRoute( + builder: (_) => FrameworkDetailScreen(id: args['id']!, name: args['name']!), + ); + } + return null; + }, + ), + ); + } +} + +class MainScreen extends StatefulWidget { + const MainScreen({super.key}); + + @override + State createState() => _MainScreenState(); +} + +class _MainScreenState extends State { + int _currentIndex = 0; + + static const _screens = [ + FeaturesScreen(), + FrameworkScreen(), + DeviceScreen(), + SettingsScreen(), + ]; + + @override + 
Widget build(BuildContext context) { + return Scaffold( + body: IndexedStack( + index: _currentIndex, + children: _screens, + ), + bottomNavigationBar: NavigationBar( + selectedIndex: _currentIndex, + onDestinationSelected: (index) => setState(() => _currentIndex = index), + height: 60, + destinations: const [ + NavigationDestination( + icon: Icon(Icons.auto_awesome_outlined, color: Color(0xFF8E8E93)), + selectedIcon: Icon(Icons.auto_awesome, color: Color(0xFF007AFF)), + label: 'Features', + ), + NavigationDestination( + icon: Icon(Icons.layers_outlined, color: Color(0xFF8E8E93)), + selectedIcon: Icon(Icons.layers, color: Color(0xFF007AFF)), + label: 'Framework', + ), + NavigationDestination( + icon: Icon(Icons.phone_android_outlined, color: Color(0xFF8E8E93)), + selectedIcon: Icon(Icons.phone_android, color: Color(0xFF007AFF)), + label: 'Device', + ), + NavigationDestination( + icon: Icon(Icons.settings_outlined, color: Color(0xFF8E8E93)), + selectedIcon: Icon(Icons.settings, color: Color(0xFF007AFF)), + label: 'Settings', + ), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/device_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/device_screen.dart new file mode 100644 index 0000000..b562914 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/screens/device_screen.dart @@ -0,0 +1,103 @@ +import 'package:flutter/material.dart'; +import 'package:provider/provider.dart'; + +import '../app_state.dart'; +import '../widgets/shared/info_row.dart'; + +class DeviceScreen extends StatelessWidget { + const DeviceScreen({super.key}); + + @override + Widget build(BuildContext context) { + final state = context.watch<AppState>(); + final cap = state.capability; + + return ListView( + children: [ + _section('DEVICE', [ + InfoRow(label: 'Platform', value: state.deviceInfo?.platform ?? 'Unknown'), + const _Separator(), + InfoRow(label: 'OS Version', value: state.deviceInfo?.osVersion ?? 'Unknown'), + ]), + _section('AI CAPABILITIES', [ + InfoRow( + label: 'On-Device AI', + value: state.deviceInfo?.supportsOnDeviceAI == true ? 'Supported' : 'Not Supported', + valueColor: state.deviceInfo?.supportsOnDeviceAI == true ? const Color(0xFF34C759) : const Color(0xFFFF3B30), + ), + const _Separator(), + InfoRow(label: 'Provider', value: state.deviceInfo?.provider ?? 'None'), + ]), + if (cap != null) + _section('AVAILABLE FEATURES', [ + ..._featureRows(cap.features), + const _Separator(), + const InfoRow(label: 'Describe Image', value: 'Coming Soon', valueColor: Color(0xFFFF9500)), + const _Separator(), + const InfoRow(label: 'Generate Image', value: 'Coming Soon', valueColor: Color(0xFFFF9500)), + ]), + _section('SDK', [ + const InfoRow(label: 'Module', value: 'flutter_ondevice_ai'), + const _Separator(), + const InfoRow(label: 'Version', value: '0.1.0'), + const _Separator(), + const InfoRow(label: 'Tier', value: 'Community'), + const _Separator(), + InfoRow(label: 'SDK State', value: state.sdkState.name), + ]), + const Padding( + padding: EdgeInsets.symmetric(vertical: 32), + child: Text( + 'All AI processing happens on-device.\nYour data never leaves this device.', + textAlign: TextAlign.center, + style: TextStyle(fontSize: 13, color: Color(0xFF666666), height: 1.54), + ), + ), + ], + ); + } + + List<Widget> _featureRows(Map features) { + const names = ['summarize', 'classify', 'extract', 'chat', 'translate', 'rewrite', 'proofread']; + const labels = ['Summarize', 'Classify', 'Extract', 'Chat', 'Translate', 'Rewrite', 'Proofread']; + final widgets = <Widget>[]; + for (var i = 0; i < names.length; i++) { + if (i > 0) widgets.add(const _Separator()); + final available = features[names[i]] ?? false; + widgets.add(InfoRow( + label: labels[i], + value: available ? 'Yes' : 'No', + valueColor: available ? 
const Color(0xFF34C759) : const Color(0xFFFF3B30), + )); + } + return widgets; + } + + Widget _section(String title, List<Widget> children) { + return Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Padding( + padding: const EdgeInsets.fromLTRB(16, 20, 16, 8), + child: Text( + title, + style: const TextStyle(fontSize: 13, color: Color(0xFF666666), letterSpacing: 0.5), + ), + ), + Container(color: Colors.white, child: Column(children: children)), + ], + ); + } +} + +class _Separator extends StatelessWidget { + const _Separator(); + + @override + Widget build(BuildContext context) { + return const Padding( + padding: EdgeInsets.only(left: 16), + child: Divider(height: 0.5, thickness: 0.5, color: Color(0xFFC6C6C8)), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/feature_detail_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/feature_detail_screen.dart new file mode 100644 index 0000000..3dd8eec --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/screens/feature_detail_screen.dart @@ -0,0 +1,51 @@ +import 'package:flutter/material.dart'; + +import '../widgets/feature_detail/summarize_demo.dart'; +import '../widgets/feature_detail/classify_demo.dart'; +import '../widgets/feature_detail/extract_demo.dart'; +import '../widgets/feature_detail/chat_demo.dart'; +import '../widgets/feature_detail/translate_demo.dart'; +import '../widgets/feature_detail/rewrite_demo.dart'; +import '../widgets/feature_detail/proofread_demo.dart'; +import '../widgets/feature_detail/coming_soon_demo.dart'; + +class FeatureDetailScreen extends StatelessWidget { + final String id; + final String name; + + const FeatureDetailScreen({super.key, required this.id, required this.name}); + + @override + Widget build(BuildContext context) { + return Scaffold( + appBar: AppBar(title: Text(name)), + backgroundColor: const Color(0xFFF2F2F7), + body: _buildDemo(), + ); + } + + Widget _buildDemo() { + return switch (id) { + 'summarize' => 
const SummarizeDemo(), + 'classify' => const ClassifyDemo(), + 'extract' => const ExtractDemo(), + 'chat' => const ChatDemo(), + 'translate' => const TranslateDemo(), + 'rewrite' => const RewriteDemo(), + 'proofread' => const ProofreadDemo(), + 'describeImage' => const ComingSoonDemo( + icon: Icons.image, + title: 'Describe Image', + subtitle: 'Generate descriptions for images using on-device AI', + description: 'This feature will allow you to select an image and generate descriptive text using Apple Intelligence or Gemini Nano, depending on your device.', + ), + 'generateImage' => const ComingSoonDemo( + icon: Icons.auto_fix_high, + title: 'Generate Image', + subtitle: 'Generate images from text prompts using on-device AI', + description: 'This feature will allow you to generate images from text descriptions using Apple Intelligence or Gemini Nano, depending on your device.', + ), + _ => const Center(child: Text('Unknown Feature', style: TextStyle(fontSize: 17, color: Color(0xFF666666)))), + }; + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/features_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/features_screen.dart new file mode 100644 index 0000000..a30414f --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/screens/features_screen.dart @@ -0,0 +1,81 @@ +import 'package:flutter/material.dart'; +import 'package:provider/provider.dart'; + +import '../app_state.dart'; +import '../widgets/shared/ai_status_banner.dart'; +import '../widgets/shared/feature_row.dart'; +import 'feature_detail_screen.dart'; + +class FeaturesScreen extends StatelessWidget { + const FeaturesScreen({super.key}); + + @override + Widget build(BuildContext context) { + final state = context.watch<AppState>(); + + if (state.sdkState == SDKState.notInitialized || state.sdkState == SDKState.initializing) { + return const Center( + child: Column( + mainAxisSize: MainAxisSize.min, + children: [ + CircularProgressIndicator(), + SizedBox(height: 16), + 
Text('Initializing Locanara SDK...', style: TextStyle(fontSize: 17, color: Color(0xFF666666))), + ], + ), + ); + } + + if (state.sdkState == SDKState.error) { + return Center( + child: Padding( + padding: const EdgeInsets.all(20), + child: Column( + mainAxisSize: MainAxisSize.min, + children: [ + const Text('Initialization Error', style: TextStyle(fontSize: 20, fontWeight: FontWeight.w600, color: Color(0xFFFF3B30))), + const SizedBox(height: 8), + Text(state.errorMessage ?? '', textAlign: TextAlign.center, style: const TextStyle(fontSize: 15, color: Color(0xFF666666))), + const SizedBox(height: 20), + ElevatedButton( + onPressed: state.initializeSDK, + style: ElevatedButton.styleFrom( + backgroundColor: const Color(0xFF007AFF), + foregroundColor: Colors.white, + padding: const EdgeInsets.symmetric(horizontal: 24, vertical: 12), + shape: RoundedRectangleBorder(borderRadius: BorderRadius.circular(8)), + ), + child: const Text('Retry', style: TextStyle(fontSize: 17, fontWeight: FontWeight.w600)), + ), + ], + ), + ), + ); + } + + return ListView.separated( + itemCount: state.availableFeatures.length + 1, + separatorBuilder: (_, i) => i == 0 + ? const SizedBox(height: 16) + : const Padding( + padding: EdgeInsets.only(left: 68), + child: Divider(height: 0.5, thickness: 0.5, color: Color(0xFFC6C6C8)), + ), + itemBuilder: (context, index) { + if (index == 0) return const AIStatusBanner(); + final feature = state.availableFeatures[index - 1]; + return GestureDetector( + onTap: feature.isAvailable + ? 
() => Navigator.push( + context, + MaterialPageRoute( + builder: (_) => FeatureDetailScreen(id: feature.id, name: feature.name), + ), + ) + : null, + child: FeatureRow(feature: feature), + ); + }, + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/framework_detail_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/framework_detail_screen.dart new file mode 100644 index 0000000..7f5bb84 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/screens/framework_detail_screen.dart @@ -0,0 +1,38 @@ +import 'package:flutter/material.dart'; + +import '../widgets/framework_detail/model_demo.dart'; +import '../widgets/framework_detail/chain_demo.dart'; +import '../widgets/framework_detail/pipeline_demo.dart'; +import '../widgets/framework_detail/memory_demo.dart'; +import '../widgets/framework_detail/guardrail_demo.dart'; +import '../widgets/framework_detail/session_demo.dart'; +import '../widgets/framework_detail/agent_demo.dart'; + +class FrameworkDetailScreen extends StatelessWidget { + final String id; + final String name; + + const FrameworkDetailScreen({super.key, required this.id, required this.name}); + + @override + Widget build(BuildContext context) { + return Scaffold( + appBar: AppBar(title: Text(name)), + backgroundColor: const Color(0xFFF2F2F7), + body: _buildDemo(), + ); + } + + Widget _buildDemo() { + return switch (id) { + 'model' => const ModelDemo(), + 'chain' => const ChainDemo(), + 'pipeline' => const PipelineDemo(), + 'memory' => const MemoryDemo(), + 'guardrail' => const GuardrailDemo(), + 'session' => const SessionDemo(), + 'agent' => const AgentDemo(), + _ => const Center(child: Text('Unknown Demo', style: TextStyle(fontSize: 17, color: Color(0xFF666666)))), + }; + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/framework_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/framework_screen.dart new file mode 100644 index 0000000..220bbb7 --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/lib/screens/framework_screen.dart @@ -0,0 +1,96 @@ +import 'package:flutter/material.dart'; + +import '../widgets/shared/ai_status_banner.dart'; +import 'framework_detail_screen.dart'; + +class _FrameworkDemo { + final String id; + final String name; + final IconData icon; + final String description; + + const _FrameworkDemo({required this.id, required this.name, required this.icon, required this.description}); +} + +const _frameworkDemos = [ + _FrameworkDemo(id: 'model', name: 'Model', icon: Icons.memory, description: 'Direct model usage with GenerationConfig presets and streaming'), + _FrameworkDemo(id: 'chain', name: 'Chain', icon: Icons.link, description: 'ModelChain, SequentialChain, ParallelChain, ConditionalChain, and custom chains'), + _FrameworkDemo(id: 'pipeline', name: 'Pipeline DSL', icon: Icons.swap_horiz, description: 'Compose multiple AI steps into a single pipeline with compile-time type safety'), + _FrameworkDemo(id: 'memory', name: 'Memory', icon: Icons.lightbulb, description: 'BufferMemory and SummaryMemory \u2014 conversation history management'), + _FrameworkDemo(id: 'guardrail', name: 'Guardrail', icon: Icons.verified_user, description: 'Wrap chains with input length and content safety guardrails'), + _FrameworkDemo(id: 'session', name: 'Session', icon: Icons.chat_bubble_outline, description: 'Stateful chat with BufferMemory \u2014 see memory entries in real-time'), + _FrameworkDemo(id: 'agent', name: 'Agent + Tools', icon: Icons.account_circle, description: 'ReAct-lite agent with tools and step-by-step reasoning trace'), +]; + +class FrameworkScreen extends StatelessWidget { + const FrameworkScreen({super.key}); + + @override + Widget build(BuildContext context) { + return ListView.separated( + itemCount: _frameworkDemos.length + 1, + separatorBuilder: (_, i) => i == 0 + ? 
const SizedBox(height: 16) + : const Padding( + padding: EdgeInsets.only(left: 68), + child: Divider(height: 0.5, thickness: 0.5, color: Color(0xFFC6C6C8)), + ), + itemBuilder: (context, index) { + if (index == 0) { + return Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + const AIStatusBanner(), + const Padding( + padding: EdgeInsets.fromLTRB(16, 8, 16, 0), + child: Text( + "Explore Locanara's composable framework primitives \u2014 the building blocks for custom AI features.", + style: TextStyle(fontSize: 14, color: Color(0xFF666666), height: 1.43), + ), + ), + ], + ); + } + final demo = _frameworkDemos[index - 1]; + return GestureDetector( + onTap: () => Navigator.push( + context, + MaterialPageRoute( + builder: (_) => FrameworkDetailScreen(id: demo.id, name: demo.name), + ), + ), + child: Container( + color: Colors.white, + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + children: [ + Container( + width: 40, + height: 40, + decoration: BoxDecoration( + color: const Color(0xFFF2F2F7), + borderRadius: BorderRadius.circular(8), + ), + child: Icon(demo.icon, size: 24, color: const Color(0xFF007AFF)), + ), + const SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(demo.name, style: const TextStyle(fontSize: 17, fontWeight: FontWeight.w600)), + const SizedBox(height: 2), + Text(demo.description, maxLines: 2, overflow: TextOverflow.ellipsis, style: const TextStyle(fontSize: 13, color: Color(0xFF666666), height: 1.38)), + ], + ), + ), + const SizedBox(width: 8), + const Icon(Icons.chevron_right, size: 20, color: Color(0xFFC7C7CC)), + ], + ), + ), + ); + }, + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart b/libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart new file mode 100644 index 0000000..88198af --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart @@ -0,0 +1,269 @@ +import 'package:flutter/foundation.dart'; +import 'package:flutter/material.dart'; +import 'package:flutter/services.dart'; +import 'package:provider/provider.dart'; +import 'package:url_launcher/url_launcher.dart'; + +import '../app_state.dart'; + +class SettingsScreen extends StatelessWidget { + const SettingsScreen({super.key}); + + @override + Widget build(BuildContext context) { + final state = context.watch<AppState>(); + final isIOS = !kIsWeb && defaultTargetPlatform == TargetPlatform.iOS; + final showIOSSetupGuide = isIOS && + state.capability?.isSupported == true && + !state.isModelReady; + final showWebSetupGuide = kIsWeb && !state.isModelReady; + + return ListView( + children: [ + // Setup Guide (iOS only, when model not ready) + if (showIOSSetupGuide) + _section('SETUP GUIDE', [ + const _SetupStep(number: '1', title: 'Open Settings', description: 'Go to Settings app on your device'), + const _Separator(), + const _SetupStep(number: '2', title: 'Apple Intelligence & Siri', description: 'Navigate to Apple Intelligence & Siri settings'), + const _Separator(), + const _SetupStep(number: '3', title: 'Enable Apple Intelligence', description: 'Turn on Apple Intelligence toggle'), + const _Separator(), + const _SetupStep(number: '4', title: 'Wait for Setup', description: 'Models will be downloaded in the background'), + ]), + // Setup Guide (Web, when Chrome Built-in AI not ready) + if (showWebSetupGuide) + _section('CHROME SETUP GUIDE', [ + const _SetupStep(number: '1', title: 'Use Chrome 138+', description: 'Download the latest version from chrome.com'), + const _Separator(), + const _SetupStep(number: '2', title: 'Enable Feature Flags', description: 'Open each URL below in Chrome and set to "Enabled"'), + const _ChromeFlagRow(flag: 'chrome://flags/#optimization-guide-on-device-model'), + const _ChromeFlagRow(flag: 'chrome://flags/#prompt-api-for-gemini-nano'), + const 
_ChromeFlagRow(flag: 'chrome://flags/#enable-experimental-web-platform-features'), + const _Separator(), + const _SetupStep(number: '3', title: 'Restart Chrome', description: 'Click "Relaunch" or close and reopen Chrome completely'), + const _Separator(), + const _SetupStep(number: '4', title: 'Verify Model Status', description: 'Go to chrome://on-device-internals → Model Status tab'), + const _Separator(), + const _SetupStep(number: '5', title: 'System Requirements', description: '22GB+ free disk space. GPU with 4GB+ VRAM or CPU with 16GB+ RAM.'), + ]), + if (isIOS) ...[ + _section('APPLE INTELLIGENCE', [ + _ActionRow( + icon: Icons.settings, + label: 'Open System Settings', + trailing: Icons.open_in_new, + onTap: () => _openSettings(), + ), + ]), + ], + _section('ACTIONS', [ + _ActionRow( + icon: Icons.refresh, + label: 'Refresh SDK State', + trailing: Icons.chevron_right, + onTap: () async { + await state.initializeSDK(); + if (context.mounted) { + ScaffoldMessenger.of(context).showSnackBar( + const SnackBar(content: Text('SDK state has been refreshed.')), + ); + } + }, + ), + ]), + _section('LINKS', [ + _ActionRow( + icon: Icons.menu_book, + label: 'Documentation', + trailing: Icons.open_in_new, + onTap: () => _launchUrl('https://locanara.com/docs'), + ), + const _Separator(), + _ActionRow( + icon: Icons.code, + label: 'GitHub Repository', + trailing: Icons.open_in_new, + onTap: () => _launchUrl('https://github.com/hyodotdev/locanara'), + ), + ]), + _section('ABOUT', [ + const _AboutRow(label: 'flutter_ondevice_ai', value: 'v0.1.0'), + const _Separator(), + const _AboutRow(label: 'Locanara SDK', value: 'Open Source'), + ]), + const Padding( + padding: EdgeInsets.symmetric(vertical: 32), + child: Text( + 'All AI processing happens on-device.\nYour data never leaves this device.', + textAlign: TextAlign.center, + style: TextStyle(fontSize: 13, color: Color(0xFF666666), height: 1.54), + ), + ), + ], + ); + } + + Widget _section(String title, List<Widget> children) { + 
return Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Padding( + padding: const EdgeInsets.fromLTRB(16, 20, 16, 8), + child: Text( + title, + style: const TextStyle(fontSize: 13, color: Color(0xFF666666), letterSpacing: 0.5), + ), + ), + Container(color: Colors.white, child: Column(children: children)), + ], + ); + } + + Future _openSettings() async { + if (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS) { + final uri = Uri.parse('app-settings:'); + if (await canLaunchUrl(uri)) { + await launchUrl(uri); + } + } + } + + Future _launchUrl(String url) async { + await launchUrl(Uri.parse(url), mode: LaunchMode.externalApplication); + } +} + +class _ActionRow extends StatelessWidget { + final IconData icon; + final String label; + final IconData trailing; + final VoidCallback onTap; + + const _ActionRow({required this.icon, required this.label, required this.trailing, required this.onTap}); + + @override + Widget build(BuildContext context) { + return InkWell( + onTap: onTap, + child: Padding( + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + children: [ + Icon(icon, size: 22, color: const Color(0xFF007AFF)), + const SizedBox(width: 12), + Expanded(child: Text(label, style: const TextStyle(fontSize: 17))), + Icon(trailing, size: 20, color: const Color(0xFFC7C7CC)), + ], + ), + ), + ); + } +} + +class _AboutRow extends StatelessWidget { + final String label; + final String value; + + const _AboutRow({required this.label, required this.value}); + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + mainAxisAlignment: MainAxisAlignment.spaceBetween, + children: [ + Text(label, style: const TextStyle(fontSize: 17)), + Text(value, style: const TextStyle(fontSize: 17, color: Color(0xFF666666))), + ], + ), + ); + } +} + +class _Separator extends StatelessWidget { + const _Separator(); + + @override + Widget 
build(BuildContext context) { + return const Padding( + padding: EdgeInsets.only(left: 50), + child: Divider(height: 0.5, thickness: 0.5, color: Color(0xFFC6C6C8)), + ); + } +} + +class _SetupStep extends StatelessWidget { + final String number; + final String title; + final String description; + + const _SetupStep({required this.number, required this.title, required this.description}); + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Container( + width: 28, + height: 28, + decoration: const BoxDecoration(color: Color(0xFF007AFF), shape: BoxShape.circle), + alignment: Alignment.center, + child: Text(number, style: const TextStyle(color: Colors.white, fontSize: 14, fontWeight: FontWeight.w600)), + ), + const SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(title, style: const TextStyle(fontSize: 15, fontWeight: FontWeight.w500)), + const SizedBox(height: 2), + Text(description, style: const TextStyle(fontSize: 13, color: Color(0xFF666666))), + ], + ), + ), + ], + ), + ); + } +} + +class _ChromeFlagRow extends StatelessWidget { + final String flag; + + const _ChromeFlagRow({required this.flag}); + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.fromLTRB(52, 4, 16, 4), + child: GestureDetector( + onTap: () { + Clipboard.setData(ClipboardData(text: flag)); + ScaffoldMessenger.of(context).showSnackBar( + SnackBar(content: Text('Copied: $flag'), duration: const Duration(seconds: 2)), + ); + }, + child: Container( + padding: const EdgeInsets.symmetric(horizontal: 10, vertical: 8), + decoration: BoxDecoration( + color: const Color(0xFFF2F2F7), + borderRadius: BorderRadius.circular(8), + ), + child: Row( + children: [ + Expanded( + child: Text(flag, style: const TextStyle(fontSize: 12, 
fontFamily: 'monospace', color: Color(0xFF007AFF))), + ), + const Icon(Icons.copy, size: 14, color: Color(0xFF999999)), + ], + ), + ), + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart new file mode 100644 index 0000000..9ef5728 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart @@ -0,0 +1,249 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +class ChatDemo extends StatefulWidget { + const ChatDemo({super.key}); + + @override + State createState() => _ChatDemoState(); +} + +class _ChatDemoState extends State { + final _controller = TextEditingController(); + final _scrollController = ScrollController(); + final _ai = FlutterOndeviceAi.instance; + final _messages = <_Message>[]; + bool _isStreaming = true; + bool _loading = false; + + Future _send() async { + final text = _controller.text.trim(); + if (text.isEmpty || _loading) return; + _controller.clear(); + setState(() { + _messages.add(_Message(role: 'user', content: text)); + _loading = true; + }); + _scrollToBottom(); + + final history = _messages.where((m) => m.role != 'typing').map((m) => ChatMessage(role: m.role == 'user' ? ChatRole.user : ChatRole.assistant, content: m.content)).toList(); + + try { + if (_isStreaming) { + setState(() => _messages.add(_Message(role: 'assistant', content: ''))); + await _ai.chatStream(text, options: ChatStreamOptions( + systemPrompt: 'You are a helpful AI assistant. 
Keep answers brief.', + history: history.sublist(0, history.length - 1), + onChunk: (chunk) { + if (!mounted || _messages.isEmpty) return; + setState(() { _messages.last = _Message(role: 'assistant', content: chunk.accumulated); }); + _scrollToBottom(); + }, + )); + } else { + setState(() => _messages.add(_Message(role: 'typing', content: ''))); + final result = await _ai.chat(text, options: ChatOptions( + systemPrompt: 'You are a helpful AI assistant. Keep answers brief.', + history: history.sublist(0, history.length - 1), + )); + if (!mounted) return; + setState(() { _messages.last = _Message(role: 'assistant', content: result.message); }); + } + } catch (e) { + if (!mounted) return; + setState(() { + if (_messages.isNotEmpty && (_messages.last.role == 'typing' || _messages.last.content.isEmpty)) { + _messages.last = _Message(role: 'assistant', content: 'Error: $e'); + } else { + _messages.add(_Message(role: 'assistant', content: 'Error: $e')); + } + }); + } finally { + if (mounted) setState(() => _loading = false); + _scrollToBottom(); + } + } + + void _scrollToBottom() { + WidgetsBinding.instance.addPostFrameCallback((_) { + if (_scrollController.hasClients) { + _scrollController.animateTo(_scrollController.position.maxScrollExtent, duration: const Duration(milliseconds: 200), curve: Curves.easeOut); + } + }); + } + + @override + void dispose() { _controller.dispose(); _scrollController.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return Column( + children: [ + Padding( + padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8), + child: Row( + children: [ + _modeButton('Standard', Icons.chat, !_isStreaming), + const SizedBox(width: 8), + _modeButton('Stream', Icons.bolt, _isStreaming), + ], + ), + ), + Expanded( + child: ListView.builder( + controller: _scrollController, + padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8), + itemCount: _messages.length, + itemBuilder: (context, index) { + final msg 
= _messages[index]; + if (msg.role == 'typing') { + return const Align( + alignment: Alignment.centerLeft, + child: Padding(padding: EdgeInsets.only(bottom: 8), child: _TypingIndicator()), + ); + } + final isUser = msg.role == 'user'; + return Align( + alignment: isUser ? Alignment.centerRight : Alignment.centerLeft, + child: Container( + margin: const EdgeInsets.only(bottom: 8), + padding: const EdgeInsets.symmetric(horizontal: 14, vertical: 10), + constraints: BoxConstraints(maxWidth: MediaQuery.of(context).size.width * 0.8), + decoration: BoxDecoration( + color: isUser ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.only( + topLeft: const Radius.circular(18), + topRight: const Radius.circular(18), + bottomLeft: Radius.circular(isUser ? 18 : 4), + bottomRight: Radius.circular(isUser ? 4 : 18), + ), + ), + child: Text(msg.content, style: TextStyle(fontSize: 15, color: isUser ? Colors.white : const Color(0xFF333333), height: 1.47)), + ), + ); + }, + ), + ), + Container( + padding: const EdgeInsets.all(12), + decoration: const BoxDecoration( + color: Colors.white, + border: Border(top: BorderSide(color: Color(0xFFE5E5EA), width: 0.5)), + ), + child: Row( + children: [ + if (_messages.isNotEmpty) + IconButton(icon: const Icon(Icons.delete, color: Color(0xFFFF3B30)), onPressed: () => setState(() => _messages.clear())), + Expanded( + child: TextField( + controller: _controller, + decoration: InputDecoration( + hintText: 'Type a message...', + filled: true, + fillColor: const Color(0xFFF2F2F7), + contentPadding: const EdgeInsets.symmetric(horizontal: 16, vertical: 10), + border: OutlineInputBorder(borderRadius: BorderRadius.circular(20), borderSide: BorderSide.none), + ), + onSubmitted: (_) => _send(), + ), + ), + const SizedBox(width: 8), + GestureDetector( + onTap: _send, + child: Container( + width: 40, + height: 40, + decoration: BoxDecoration( + color: const Color(0xFF007AFF).withValues(alpha: (_controller.text.trim().isEmpty || _loading) ? 
0.4 : 1.0), + shape: BoxShape.circle, + ), + child: const Icon(Icons.send, size: 20, color: Colors.white), + ), + ), + ], + ), + ), + ], + ); + } + + Widget _modeButton(String label, IconData icon, bool isSelected) { + return Expanded( + child: GestureDetector( + onTap: () => setState(() => _isStreaming = label == 'Stream'), + child: Container( + padding: const EdgeInsets.symmetric(vertical: 10), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + child: Row( + mainAxisAlignment: MainAxisAlignment.center, + children: [ + Icon(icon, size: 16, color: isSelected ? Colors.white : const Color(0xFF333333)), + const SizedBox(width: 4), + Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? Colors.white : const Color(0xFF333333))), + ], + ), + ), + ), + ); + } +} + +class _Message { + final String role; + final String content; + + _Message({required this.role, required this.content}); +} + +class _TypingIndicator extends StatefulWidget { + const _TypingIndicator(); + + @override + State<_TypingIndicator> createState() => _TypingIndicatorState(); +} + +class _TypingIndicatorState extends State<_TypingIndicator> with TickerProviderStateMixin { + late final List _controllers; + + @override + void initState() { + super.initState(); + _controllers = List.generate(3, (i) { + final ctrl = AnimationController(duration: const Duration(milliseconds: 600), vsync: this); + Future.delayed(Duration(milliseconds: i * 200), () { if (mounted) ctrl.repeat(reverse: true); }); + return ctrl; + }); + } + + @override + void dispose() { for (final c in _controllers) c.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return Container( + padding: const EdgeInsets.symmetric(horizontal: 14, vertical: 10), + decoration: BoxDecoration(color: 
Colors.white, borderRadius: BorderRadius.circular(18)), + child: Row( + mainAxisSize: MainAxisSize.min, + children: _controllers.map((ctrl) { + return AnimatedBuilder( + animation: ctrl, + builder: (_, __) => Container( + margin: const EdgeInsets.symmetric(horizontal: 2), + width: 8, + height: 8, + decoration: BoxDecoration( + color: Color.lerp(const Color(0xFFC7C7CC), const Color(0xFF8E8E93), ctrl.value), + shape: BoxShape.circle, + ), + ), + ); + }).toList(), + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart new file mode 100644 index 0000000..a440ed8 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart @@ -0,0 +1,176 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; + +const _defaultCategories = [ + 'Technology', + 'Sports', + 'Entertainment', + 'Business', + 'Health', +]; + +class ClassifyDemo extends StatefulWidget { + const ClassifyDemo({super.key}); + + @override + State createState() => _ClassifyDemoState(); +} + +class _ClassifyDemoState extends State { + final _textController = TextEditingController(text: 'The new iPhone features a faster chip and improved camera system.'); + final _customController = TextEditingController(); + final _ai = FlutterOndeviceAi.instance; + final _selectedCategories = List.from(_defaultCategories); + bool _loading = false; + ClassifyResult? _result; + DebugLog? 
_debugLog; + + Future _run() async { + if (_loading || _textController.text.isEmpty || _selectedCategories.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + try { + final result = await _ai.classify(_textController.text, options: ClassifyOptions(categories: _selectedCategories)); + sw.stop(); + setState(() { + _result = result; + _debugLog = DebugLog(api: 'classify', request: {'text': _textController.text, 'categories': _selectedCategories}, response: {'classifications': result.classifications.map((c) => {'label': c.label, 'score': c.score}).toList()}, timing: sw.elapsedMilliseconds); + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _loading = false; }); + if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + void _toggleCategory(String category) { + setState(() { + if (_selectedCategories.contains(category)) { + _selectedCategories.remove(category); + } else { + _selectedCategories.add(category); + } + }); + } + + void _addCustomCategory() { + final custom = _customController.text.trim(); + if (custom.isEmpty || _selectedCategories.contains(custom)) return; + setState(() { + _selectedCategories.add(custom); + _customController.clear(); + }); + } + + @override + void dispose() { _textController.dispose(); _customController.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const Text('CATEGORIES', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + Wrap( + spacing: 8, + runSpacing: 8, + children: _defaultCategories.map((cat) { + final selected = _selectedCategories.contains(cat); + return FilterChip( + label: Text(cat), + selected: selected, + onSelected: (_) => _toggleCategory(cat), + selectedColor: const Color(0xFF007AFF).withValues(alpha: 0.15), + 
checkmarkColor: const Color(0xFF007AFF), + labelStyle: TextStyle( + color: selected ? const Color(0xFF007AFF) : const Color(0xFF333333), + fontWeight: selected ? FontWeight.w600 : FontWeight.normal, + ), + ); + }).toList(), + ), + const SizedBox(height: 12), + Row( + children: [ + Expanded( + child: TextField( + controller: _customController, + decoration: InputDecoration( + hintText: 'Add custom category...', + filled: true, + fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + contentPadding: const EdgeInsets.symmetric(horizontal: 12, vertical: 10), + ), + onSubmitted: (_) => _addCustomCategory(), + ), + ), + const SizedBox(width: 8), + TextButton( + onPressed: _customController.text.trim().isEmpty ? null : _addCustomCategory, + child: const Text('Add', style: TextStyle(fontWeight: FontWeight.w600)), + ), + ], + ), + if (_selectedCategories.isNotEmpty) ...[ + const SizedBox(height: 8), + Text( + 'Selected: ${_selectedCategories.join(', ')}', + style: const TextStyle(fontSize: 13, color: Color(0xFF007AFF)), + ), + ], + const SizedBox(height: 16), + TextField(controller: _textController, maxLines: 4, decoration: _inputDecoration('Enter text to classify...')), + const SizedBox(height: 12), + RunButton(label: 'Classify', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + children: _result!.classifications.asMap().entries.map((entry) { + final c = entry.value; + final isTop = entry.key == 0; + return Padding( + padding: EdgeInsets.only(top: entry.key > 0 ? 8 : 0), + child: Row( + children: [ + SizedBox(width: 80, child: Text(c.label, style: TextStyle(fontSize: 14, fontWeight: isTop ? 
FontWeight.w600 : FontWeight.normal))), + const SizedBox(width: 8), + Expanded( + child: ClipRRect( + borderRadius: BorderRadius.circular(4), + child: LinearProgressIndicator( + value: c.score, + backgroundColor: const Color(0xFFF2F2F7), + color: isTop ? const Color(0xFF007AFF) : const Color(0xFFC7C7CC), + minHeight: 8, + ), + ), + ), + const SizedBox(width: 8), + Text('${(c.score * 100).toStringAsFixed(0)}%', style: const TextStyle(fontSize: 13, fontWeight: FontWeight.w500)), + ], + ), + ); + }).toList(), + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/coming_soon_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/coming_soon_demo.dart new file mode 100644 index 0000000..18c1f9f --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/coming_soon_demo.dart @@ -0,0 +1,54 @@ +import 'package:flutter/material.dart'; + +class ComingSoonDemo extends StatelessWidget { + final IconData icon; + final String title; + final String subtitle; + final String description; + + const ComingSoonDemo({ + super.key, + required this.icon, + required this.title, + required this.subtitle, + required this.description, + }); + + @override + Widget build(BuildContext context) { + return Center( + child: Padding( + padding: const EdgeInsets.all(32), + child: Column( + mainAxisSize: MainAxisSize.min, + children: [ + Container( + width: 120, + height: 120, + decoration: BoxDecoration( + color: Colors.black.withValues(alpha: 0.05), + borderRadius: BorderRadius.circular(24), + ), + child: Icon(icon, size: 80, color: const Color(0xFFC7C7CC)), + ), + const SizedBox(height: 24), + Text(title, style: const 
TextStyle(fontSize: 24, fontWeight: FontWeight.w600)), + const SizedBox(height: 8), + Text(subtitle, textAlign: TextAlign.center, style: const TextStyle(fontSize: 15, color: Color(0xFF666666))), + const SizedBox(height: 24), + Container( + padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8), + decoration: BoxDecoration( + color: const Color(0xFFFF9500), + borderRadius: BorderRadius.circular(16), + ), + child: const Text('Coming Soon', style: TextStyle(color: Colors.white, fontSize: 14, fontWeight: FontWeight.w600)), + ), + const SizedBox(height: 24), + Text(description, textAlign: TextAlign.center, style: const TextStyle(fontSize: 14, color: Color(0xFF666666), height: 1.43)), + ], + ), + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/extract_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/extract_demo.dart new file mode 100644 index 0000000..115d593 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/extract_demo.dart @@ -0,0 +1,96 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; + +const _entityColors = { + 'person': Color(0xFF007AFF), + 'email': Color(0xFFFF9500), + 'phone': Color(0xFF34C759), + 'date': Color(0xFFAF52DE), + 'location': Color(0xFFFF3B30), +}; + +class ExtractDemo extends StatefulWidget { + const ExtractDemo({super.key}); + + @override + State createState() => _ExtractDemoState(); +} + +class _ExtractDemoState extends State { + final _controller = TextEditingController(text: 'John Smith works at Apple Inc. Contact him at john@apple.com or call (555) 123-4567. Meeting on March 15th in Cupertino.'); + final _ai = FlutterOndeviceAi.instance; + bool _loading = false; + ExtractResult? _result; + DebugLog? 
_debugLog; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + try { + final result = await _ai.extract(_controller.text, options: const ExtractOptions(entityTypes: ['person', 'email', 'phone', 'date', 'location'])); + sw.stop(); + setState(() { + _result = result; + _debugLog = DebugLog(api: 'extract', request: {'text': _controller.text}, response: {'entities': result.entities.map((e) => {'value': e.value, 'type': e.type, 'confidence': e.confidence}).toList()}, timing: sw.elapsedMilliseconds); + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _loading = false; }); + if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + TextField(controller: _controller, maxLines: 4, decoration: _inputDecoration('Enter text to extract entities...')), + const SizedBox(height: 12), + RunButton(label: 'Extract', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + children: _result!.entities.map((entity) { + final color = _entityColors[entity.type] ?? 
const Color(0xFF8E8E93); + return Padding( + padding: const EdgeInsets.only(bottom: 8), + child: Row( + children: [ + Container( + padding: const EdgeInsets.symmetric(horizontal: 8, vertical: 3), + decoration: BoxDecoration(color: color, borderRadius: BorderRadius.circular(10)), + child: Text(entity.type, style: const TextStyle(fontSize: 11, fontWeight: FontWeight.w600, color: Colors.white)), + ), + const SizedBox(width: 8), + Expanded(child: Text(entity.value, style: const TextStyle(fontSize: 15))), + Text('${(entity.confidence * 100).toStringAsFixed(0)}%', style: const TextStyle(fontSize: 13, color: Color(0xFF666666))), + ], + ), + ); + }).toList(), + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart new file mode 100644 index 0000000..a62d66c --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart @@ -0,0 +1,119 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; +import '../shared/stat_badge.dart'; + +const _correctionColors = { + 'grammar': Color(0xFF007AFF), + 'spelling': Color(0xFFFF9500), + 'punctuation': Color(0xFFAF52DE), + 'style': Color(0xFF34C759), +}; + +class ProofreadDemo extends StatefulWidget { + const ProofreadDemo({super.key}); + + @override + State createState() => _ProofreadDemoState(); +} + +class _ProofreadDemoState extends State { + final _controller = TextEditingController(text: 'Their going to the store tommorow and they will buys 
some grocerys for the party.'); + final _ai = FlutterOndeviceAi.instance; + bool _loading = false; + ProofreadResult? _result; + DebugLog? _debugLog; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + try { + final result = await _ai.proofread(_controller.text); + sw.stop(); + setState(() { + _result = result; + _debugLog = DebugLog(api: 'proofread', request: {'text': _controller.text}, response: {'correctedText': result.correctedText, 'corrections': result.corrections.length}, timing: sw.elapsedMilliseconds); + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _loading = false; }); + if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + TextField(controller: _controller, maxLines: 4, decoration: _inputDecoration('Enter text with errors...')), + const SizedBox(height: 12), + RunButton(label: 'Proofread', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Row(children: [ + StatBadge(label: 'Corrections', value: '${_result!.corrections.length}'), + ]), + const SizedBox(height: 12), + const Text('CORRECTED TEXT', style: TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Color(0xFF636366), letterSpacing: 0.5)), + const SizedBox(height: 4), + SelectableText(_result!.correctedText, style: const TextStyle(fontSize: 15, color: Color(0xFF34C759), fontWeight: FontWeight.w500, height: 1.47)), + if 
(_result!.corrections.isNotEmpty) ...[ + const SizedBox(height: 16), + const Text('DETAILS', style: TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Color(0xFF636366), letterSpacing: 0.5)), + const SizedBox(height: 8), + ..._result!.corrections.map((c) => Padding( + padding: const EdgeInsets.only(bottom: 8), + child: Row( + children: [ + Expanded( + child: RichText( + text: TextSpan( + children: [ + TextSpan(text: c.original, style: const TextStyle(fontSize: 14, color: Color(0xFFFF3B30), decoration: TextDecoration.lineThrough)), + const TextSpan(text: ' \u2192 ', style: TextStyle(fontSize: 14, color: Color(0xFF666666))), + TextSpan(text: c.corrected, style: const TextStyle(fontSize: 14, color: Color(0xFF34C759), fontWeight: FontWeight.w500)), + ], + ), + ), + ), + Container( + padding: const EdgeInsets.symmetric(horizontal: 8, vertical: 3), + decoration: BoxDecoration( + color: _correctionColors[c.type] ?? const Color(0xFF8E8E93), + borderRadius: BorderRadius.circular(10), + ), + child: Text(c.type ?? 
'other', style: const TextStyle(fontSize: 11, fontWeight: FontWeight.w600, color: Colors.white)), + ), + ], + ), + )), + ], + ], + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/rewrite_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/rewrite_demo.dart new file mode 100644 index 0000000..ebe0fcb --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/rewrite_demo.dart @@ -0,0 +1,139 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; + +const _styles = [ + ('elaborate', 'Elaborate'), + ('emojify', 'Emojify'), + ('shorten', 'Shorten'), + ('friendly', 'Friendly'), + ('professional', 'Professional'), + ('rephrase', 'Rephrase'), +]; + +class RewriteDemo extends StatefulWidget { + const RewriteDemo({super.key}); + + @override + State createState() => _RewriteDemoState(); +} + +class _RewriteDemoState extends State { + final _controller = TextEditingController(text: 'Hey, just wanted to let you know that the project is going well and we should be done soon.'); + final _ai = FlutterOndeviceAi.instance; + String _style = 'professional'; + bool _loading = false; + RewriteResult? _result; + DebugLog? 
_debugLog; + + RewriteOutputType get _outputType => switch (_style) { + 'elaborate' => RewriteOutputType.elaborate, + 'emojify' => RewriteOutputType.emojify, + 'shorten' => RewriteOutputType.shorten, + 'friendly' => RewriteOutputType.friendly, + 'professional' => RewriteOutputType.professional, + _ => RewriteOutputType.rephrase, + }; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + try { + final result = await _ai.rewrite(_controller.text, options: RewriteOptions(outputType: _outputType)); + sw.stop(); + setState(() { + _result = result; + _debugLog = DebugLog(api: 'rewrite', request: {'text': _controller.text, 'style': _style}, response: {'rewrittenText': result.rewrittenText, 'alternatives': result.alternatives}, timing: sw.elapsedMilliseconds); + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _loading = false; }); + if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const Text('STYLE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _styleGrid(), + const SizedBox(height: 16), + TextField(controller: _controller, maxLines: 4, decoration: _inputDecoration('Enter text to rewrite...')), + const SizedBox(height: 12), + RunButton(label: 'Rewrite', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + 
SelectableText(_result!.rewrittenText, style: const TextStyle(fontSize: 15, height: 1.47)), + if (_result!.alternatives != null && _result!.alternatives!.isNotEmpty) ...[ + const SizedBox(height: 12), + const Divider(height: 1, color: Color(0xFFE5E5EA)), + const SizedBox(height: 12), + const Text('ALTERNATIVES', style: TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Color(0xFF636366), letterSpacing: 0.5)), + const SizedBox(height: 8), + ...(_result!.alternatives!.map((alt) => Padding( + padding: const EdgeInsets.only(bottom: 8), + child: Text(alt, style: const TextStyle(fontSize: 14, color: Color(0xFF666666), height: 1.43)), + ))), + ], + ], + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + Widget _styleGrid() { + return Column( + children: [ + Row(children: _styles.sublist(0, 3).map((s) => _styleChip(s)).toList()), + const SizedBox(height: 8), + Row(children: _styles.sublist(3).map((s) => _styleChip(s)).toList()), + ], + ); + } + + Widget _styleChip((String, String) style) { + final (value, label) = style; + final isSelected = value == _style; + return Expanded( + child: GestureDetector( + onTap: () => setState(() => _style = value), + child: Container( + margin: const EdgeInsets.only(right: 8), + padding: const EdgeInsets.symmetric(vertical: 10), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? 
Colors.white : const Color(0xFF333333))), + ), + ), + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart new file mode 100644 index 0000000..e7a6a98 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart @@ -0,0 +1,143 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; +import '../shared/stat_badge.dart'; + +const _defaultText = + 'Apple Intelligence is the personal intelligence system that puts powerful generative ' + 'models right at the core of your iPhone, iPad, and Mac. It powers incredible new features ' + 'that understand and create language and images, take action across apps, and draw from ' + 'personal context to simplify and accelerate everyday tasks.'; + +class SummarizeDemo extends StatefulWidget { + const SummarizeDemo({super.key}); + + @override + State createState() => _SummarizeDemoState(); +} + +class _SummarizeDemoState extends State { + final _controller = TextEditingController(text: _defaultText); + final _ai = FlutterOndeviceAi.instance; + String _inputType = 'article'; + String _outputType = 'oneBullet'; + bool _loading = false; + SummarizeResult? _result; + DebugLog? _debugLog; + + SummarizeOutputType get _outputEnum => switch (_outputType) { + 'twoBullets' => SummarizeOutputType.twoBullets, + 'threeBullets' => SummarizeOutputType.threeBullets, + _ => SummarizeOutputType.oneBullet, + }; + + SummarizeInputType get _inputEnum => _inputType == 'conversation' + ? 
SummarizeInputType.conversation + : SummarizeInputType.article; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + final options = SummarizeOptions(outputType: _outputEnum, inputType: _inputEnum); + try { + final inputText = _controller.text; + final result = await _ai.summarize(inputText, options: options); + sw.stop(); + if (!mounted) return; + setState(() { + _result = result; + _debugLog = DebugLog( + api: 'summarize', + request: {'text': inputText.length > 100 ? inputText.substring(0, 100) : inputText, 'options': {'outputType': _outputType, 'inputType': _inputType}}, + response: {'summary': result.summary, 'originalLength': result.originalLength, 'summaryLength': result.summaryLength}, + timing: sw.elapsedMilliseconds, + ); + _loading = false; + }); + } catch (e) { + sw.stop(); + if (!mounted) return; + setState(() { _loading = false; }); + ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const Text('INPUT TYPE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _segmented(['article', 'conversation'], ['Article', 'Conversation'], _inputType, (v) => setState(() => _inputType = v)), + const SizedBox(height: 16), + const Text('OUTPUT TYPE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _segmented(['oneBullet', 'twoBullets', 'threeBullets'], ['1 Bullet', '2 Bullets', '3 Bullets'], _outputType, (v) => setState(() => _outputType = v)), + const SizedBox(height: 16), + TextField( + controller: _controller, + maxLines: 5, + decoration: InputDecoration( + 
hintText: 'Enter text to summarize...', + filled: true, + fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ), + ), + const SizedBox(height: 12), + RunButton(label: 'Summarize', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Row(children: [ + StatBadge(label: 'Original', value: '${_result!.originalLength} chars'), + const SizedBox(width: 8), + StatBadge(label: 'Summary', value: '${_result!.summaryLength} chars'), + ]), + const SizedBox(height: 12), + SelectableText(_result!.summary, style: const TextStyle(fontSize: 15, height: 1.47)), + ], + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + Widget _segmented(List values, List labels, String selected, ValueChanged onChanged) { + return Row( + children: List.generate(values.length, (i) { + final isSelected = values[i] == selected; + return Expanded( + child: GestureDetector( + onTap: () => onChanged(values[i]), + child: Container( + padding: const EdgeInsets.symmetric(vertical: 10), + margin: EdgeInsets.only(right: i < values.length - 1 ? 8 : 0), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text(labels[i], style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? 
Colors.white : const Color(0xFF333333))), + ), + ), + ); + }), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart new file mode 100644 index 0000000..81aa587 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart @@ -0,0 +1,120 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/debug_log_panel.dart'; +import '../shared/run_button.dart'; +import '../shared/stat_badge.dart'; + +const _languages = [ + ('en', 'English'), + ('ko', 'Korean'), + ('ja', 'Japanese'), + ('zh', 'Chinese'), + ('es', 'Spanish'), + ('fr', 'French'), + ('de', 'German'), +]; + +class TranslateDemo extends StatefulWidget { + const TranslateDemo({super.key}); + + @override + State createState() => _TranslateDemoState(); +} + +class _TranslateDemoState extends State { + final _controller = TextEditingController(text: 'Hello, how are you? I hope you are having a great day.'); + final _ai = FlutterOndeviceAi.instance; + String _targetLang = 'ko'; + bool _loading = false; + TranslateResult? _result; + DebugLog? 
_debugLog; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _result = null; _debugLog = null; }); + final sw = Stopwatch()..start(); + try { + final result = await _ai.translate(_controller.text, options: TranslateOptions(sourceLanguage: 'en', targetLanguage: _targetLang)); + sw.stop(); + setState(() { + _result = result; + _debugLog = DebugLog(api: 'translate', request: {'text': _controller.text, 'source': 'en', 'target': _targetLang}, response: {'translatedText': result.translatedText, 'sourceLanguage': result.sourceLanguage, 'targetLanguage': result.targetLanguage}, timing: sw.elapsedMilliseconds); + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _loading = false; }); + if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e'))); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const Text('TARGET LANGUAGE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + SizedBox( + height: 40, + child: ListView.separated( + scrollDirection: Axis.horizontal, + itemCount: _languages.length, + separatorBuilder: (_, __) => const SizedBox(width: 8), + itemBuilder: (context, i) { + final (code, label) = _languages[i]; + final isSelected = code == _targetLang; + return GestureDetector( + onTap: () => setState(() => _targetLang = code), + child: Container( + padding: const EdgeInsets.symmetric(horizontal: 16), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(20), + border: Border.all(color: isSelected ? 
const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? Colors.white : const Color(0xFF333333))), + ), + ); + }, + ), + ), + const SizedBox(height: 16), + TextField(controller: _controller, maxLines: 4, decoration: _inputDecoration('Enter text to translate...')), + const SizedBox(height: 12), + RunButton(label: 'Translate', loading: _loading, onPressed: _run), + if (_result != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Row(children: [ + StatBadge(label: 'From', value: _result!.sourceLanguage), + const SizedBox(width: 8), + StatBadge(label: 'To', value: _result!.targetLanguage), + ]), + const SizedBox(height: 12), + SelectableText(_result!.translatedText, style: const TextStyle(fontSize: 15, height: 1.47)), + ], + ), + ), + ], + DebugLogPanel(log: _debugLog), + ], + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart new file mode 100644 index 0000000..78ee9c7 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart @@ -0,0 +1,165 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/run_button.dart'; +import '../shared/stat_badge.dart'; +import 'code_pattern_card.dart'; + +const _documents = [ + 'Locanara is an on-device AI framework 
for iOS, Android, and Web. It provides composable chains, memory management, guardrails, and a pipeline DSL.',
+  'Apple Intelligence uses Foundation Models to power features like summarization, rewriting, and proofreading directly on iPhone, iPad, and Mac.',
+  'Gemini Nano is Google\'s smallest AI model, designed to run directly on mobile devices. It powers features in Android 14+ through the ML Kit API.',
+  'On-device AI processes data locally without sending it to the cloud, ensuring privacy and low latency. It works offline and reduces server costs.',
+];
+
+class AgentDemo extends StatefulWidget {
+  const AgentDemo({super.key});
+
+  @override
+  State<AgentDemo> createState() => _AgentDemoState();
+}
+
+class _AgentDemoState extends State<AgentDemo> {
+  final _controller = TextEditingController();
+  final _ai = FlutterOndeviceAi.instance;
+  bool _loading = false;
+  final _trace = <_Step>[];
+  String? _finalAnswer;
+  int? _timing;
+
+  final _suggestions = ['What is Locanara?', 'How does on-device AI work?', 'Tell me about Gemini Nano'];
+
+  Future<void> _run(String query) async {
+    if (_loading || query.isEmpty) return;
+    setState(() { _loading = true; _trace.clear(); _finalAnswer = null; _timing = null; });
+    final sw = Stopwatch()..start();
+
+    try {
+      setState(() => _trace.add(_Step('Thought', 'I need to search for relevant documents about "$query"')));
+
+      final keywords = query.toLowerCase().split(' ').where((w) => w.length > 3).toList();
+      final matches = _documents.where((doc) {
+        final lower = doc.toLowerCase();
+        return keywords.any((k) => lower.contains(k));
+      }).toList();
+
+      setState(() => _trace.add(_Step('Action', 'SearchDocuments("${keywords.join(", ")}")')));
+      setState(() => _trace.add(_Step('Observation', matches.isEmpty ? 'No relevant documents found.' : 'Found ${matches.length} document(s):\n${matches.map((d) => '- ${d.substring(0, 80)}...').join('\n')}')));
+
+      if (matches.isNotEmpty) {
+        setState(() => _trace.add(_Step('Thought', 'I found relevant information.
Let me process it with AI.'))); + final context = matches.join('\n\n'); + final result = await _ai.chat( + 'Based on the following context, answer the question: "$query"\n\nContext:\n$context', + options: const ChatOptions(systemPrompt: 'You are a helpful assistant. Answer based only on the provided context. Be concise.'), + ); + setState(() => _finalAnswer = result.message); + } else { + final result = await _ai.chat(query, options: const ChatOptions(systemPrompt: 'You are a helpful assistant. Keep your answer concise.')); + setState(() => _finalAnswer = result.message); + } + sw.stop(); + setState(() { _timing = sw.elapsedMilliseconds; _loading = false; }); + } catch (e) { + sw.stop(); + setState(() { _finalAnswer = 'Error: $e'; _loading = false; }); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift - ReAct Agent +let agent = Agent( + model: model, + tools: [SearchTool(), SummarizeTool()], + maxSteps: 5 +) +let result = try await agent.run("query") +// Traces: Thought -> Action -> Observation''', + ), + const Text('SUGGESTIONS', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + Wrap( + spacing: 8, + runSpacing: 8, + children: _suggestions.map((s) => ActionChip( + label: Text(s, style: const TextStyle(fontSize: 13)), + onPressed: _loading ? 
null : () { _controller.text = s; _run(s); }, + backgroundColor: Colors.white, + side: const BorderSide(color: Color(0xFFE5E5EA)), + )).toList(), + ), + const SizedBox(height: 16), + TextField(controller: _controller, decoration: _inputDecoration('Ask a question...')), + const SizedBox(height: 12), + RunButton(label: 'Run Agent', loading: _loading, onPressed: () => _run(_controller.text)), + if (_trace.isNotEmpty) ...[ + const SizedBox(height: 16), + const Text('REASONING TRACE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + ..._trace.map((step) => Container( + margin: const EdgeInsets.only(bottom: 8), + padding: const EdgeInsets.all(12), + decoration: BoxDecoration( + color: Colors.white, + borderRadius: BorderRadius.circular(10), + border: Border.all(color: switch (step.type) { + 'Thought' => const Color(0xFF007AFF), + 'Action' => const Color(0xFFFF9500), + _ => const Color(0xFF34C759), + }.withValues(alpha: 0.3)), + ), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(step.type, style: TextStyle(fontSize: 12, fontWeight: FontWeight.w700, color: switch (step.type) { + 'Thought' => const Color(0xFF007AFF), + 'Action' => const Color(0xFFFF9500), + _ => const Color(0xFF34C759), + })), + const SizedBox(height: 4), + Text(step.content, style: const TextStyle(fontSize: 14, color: Color(0xFF333333), height: 1.43)), + ], + ), + )), + ], + if (_finalAnswer != null) ...[ + const SizedBox(height: 12), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (_timing != null) Padding(padding: const EdgeInsets.only(bottom: 12), child: StatBadge(label: 'Time', value: '${_timing}ms')), + const Text('FINAL ANSWER', style: TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Color(0xFF636366), 
letterSpacing: 0.5)),
+                const SizedBox(height: 8),
+                SelectableText(_finalAnswer!, style: const TextStyle(fontSize: 15, height: 1.47)),
+              ],
+            ),
+          ),
+        ],
+      ],
+    );
+  }
+
+  InputDecoration _inputDecoration(String hint) => InputDecoration(
+        hintText: hint, filled: true, fillColor: Colors.white,
+        border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none),
+      );
+
+}
+
+class _Step {
+  final String type;
+  final String content;
+  _Step(this.type, this.content);
+}
diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart
new file mode 100644
index 0000000..5b6c5ef
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart
@@ -0,0 +1,152 @@
+import 'package:flutter/material.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+import '../shared/run_button.dart';
+import '../shared/stat_badge.dart';
+import 'code_pattern_card.dart';
+
+class ChainDemo extends StatefulWidget {
+  const ChainDemo({super.key});
+
+  @override
+  State<ChainDemo> createState() => _ChainDemoState();
+}
+
+class _ChainDemoState extends State<ChainDemo> {
+  final _controller = TextEditingController(text: 'Apple announced the new M4 chip today, featuring a 10-core CPU and 16-core GPU that delivers unprecedented performance for professional workflows.');
+  final _ai = FlutterOndeviceAi.instance;
+  String _chainType = 'sequential';
+  bool _loading = false;
+  List<(String, String)> _steps = [];
+  int?
_timing; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _steps = []; _timing = null; }); + final sw = Stopwatch()..start(); + + try { + if (_chainType == 'sequential') { + final summary = await _ai.summarize(_controller.text, options: const SummarizeOptions(outputType: SummarizeOutputType.oneBullet)); + final classify = await _ai.classify(summary.summary, options: const ClassifyOptions(categories: ['Technology', 'Business', 'Science', 'Entertainment'])); + sw.stop(); + setState(() { + _steps = [('Summarize', summary.summary), ('Classify', classify.classifications.map((c) => '${c.label}: ${(c.score * 100).toStringAsFixed(0)}%').join(', '))]; + }); + } else if (_chainType == 'parallel') { + final results = await Future.wait([ + _ai.summarize(_controller.text), + _ai.classify(_controller.text, options: const ClassifyOptions(categories: ['Technology', 'Business', 'Science'])), + ]); + sw.stop(); + final summary = results[0] as SummarizeResult; + final classify = results[1] as ClassifyResult; + setState(() { + _steps = [('Summarize (parallel)', summary.summary), ('Classify (parallel)', classify.classifications.map((c) => '${c.label}: ${(c.score * 100).toStringAsFixed(0)}%').join(', '))]; + }); + } else { + final classify = await _ai.classify(_controller.text, options: const ClassifyOptions(categories: ['Technology', 'Business', 'Science', 'Entertainment'])); + final topCategory = classify.classifications.isNotEmpty ? classify.classifications.first.label : 'Unknown'; + final rewrite = await _ai.rewrite(_controller.text, options: RewriteOptions(outputType: topCategory == 'Technology' ? RewriteOutputType.professional : RewriteOutputType.friendly)); + sw.stop(); + setState(() { + _steps = [('Classify (condition)', topCategory), ('Rewrite (${topCategory == 'Technology' ? 
'professional' : 'friendly'})', rewrite.rewrittenText)]; + }); + } + setState(() { _timing = sw.elapsedMilliseconds; _loading = false; }); + } catch (e) { + sw.stop(); + setState(() { _steps = [('Error', e.toString())]; _loading = false; }); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift - SequentialChain +let pipeline = SequentialChain(chains: [ + SummarizeChain(model: model), + ClassifyChain(model: model), +]) +let result = try await pipeline.run("text")''', + ), + const Text('CHAIN TYPE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _typeRow(), + const SizedBox(height: 16), + TextField(controller: _controller, maxLines: 4, decoration: _inputDecoration('Enter text...')), + const SizedBox(height: 12), + RunButton(label: 'Run Chain', loading: _loading, onPressed: _run), + if (_steps.isNotEmpty) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (_timing != null) + Padding( + padding: const EdgeInsets.only(bottom: 12), + child: Row(children: [ + StatBadge(label: 'Time', value: '${_timing}ms'), + const SizedBox(width: 8), + StatBadge(label: 'Chain', value: _chainType), + ]), + ), + ..._steps.asMap().entries.map((entry) => Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (entry.key > 0) const Padding(padding: EdgeInsets.symmetric(vertical: 8), child: Divider(height: 1, color: Color(0xFFE5E5EA))), + Text(entry.value.$1, style: const TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: Color(0xFF007AFF))), + const 
SizedBox(height: 4),
+                  Text(entry.value.$2, style: const TextStyle(fontSize: 15, color: Color(0xFF333333), height: 1.47)),
+                ],
+              )),
+            ],
+          ),
+        ),
+      ],
+      ],
+    );
+  }
+
+  Widget _typeRow() {
+    const types = [('sequential', 'Sequential'), ('parallel', 'Parallel'), ('conditional', 'Conditional')];
+    return Row(
+      children: types.map((t) {
+        final (value, label) = t;
+        final isSelected = value == _chainType;
+        return Expanded(
+          child: GestureDetector(
+            onTap: () => setState(() => _chainType = value),
+            child: Container(
+              margin: const EdgeInsets.only(right: 8),
+              padding: const EdgeInsets.symmetric(vertical: 10),
+              decoration: BoxDecoration(
+                color: isSelected ? const Color(0xFF007AFF) : Colors.white,
+                borderRadius: BorderRadius.circular(8),
+                border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)),
+              ),
+              alignment: Alignment.center,
+              child: Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? Colors.white : const Color(0xFF333333))),
+            ),
+          ),
+        );
+      }).toList(),
+    );
+  }
+
+  InputDecoration _inputDecoration(String hint) => InputDecoration(
+        hintText: hint, filled: true, fillColor: Colors.white,
+        border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none),
+      );
+
+}
diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/code_pattern_card.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/code_pattern_card.dart
new file mode 100644
index 0000000..80507ef
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/code_pattern_card.dart
@@ -0,0 +1,73 @@
+import 'package:flutter/material.dart';
+
+class CodePatternCard extends StatefulWidget {
+  final String title;
+  final String code;
+
+  const CodePatternCard({super.key, required this.title, required this.code});
+
+  @override
+  State<CodePatternCard> createState() => _CodePatternCardState();
+}
+
+class _CodePatternCardState extends State<CodePatternCard> {
bool _expanded = false;
+
+  @override
+  Widget build(BuildContext context) {
+    return Container(
+      margin: const EdgeInsets.only(bottom: 12),
+      decoration: BoxDecoration(
+        color: const Color(0xFF1C1C1E),
+        borderRadius: BorderRadius.circular(10),
+      ),
+      clipBehavior: Clip.antiAlias,
+      child: Column(
+        children: [
+          InkWell(
+            onTap: () => setState(() => _expanded = !_expanded),
+            child: Padding(
+              padding: const EdgeInsets.all(12),
+              child: Row(
+                children: [
+                  const Icon(Icons.code, size: 16, color: Color(0xFF8E8E93)),
+                  const SizedBox(width: 8),
+                  Expanded(
+                    child: Text(
+                      widget.title,
+                      style: const TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF8E8E93)),
+                    ),
+                  ),
+                  Icon(
+                    _expanded ? Icons.expand_less : Icons.expand_more,
+                    size: 16,
+                    color: const Color(0xFF8E8E93),
+                  ),
+                ],
+              ),
+            ),
+          ),
+          if (_expanded)
+            Padding(
+              padding: const EdgeInsets.fromLTRB(12, 0, 12, 12),
+              child: SizedBox(
+                width: double.infinity,
+                child: SingleChildScrollView(
+                  scrollDirection: Axis.horizontal,
+                  child: SelectableText(
+                    widget.code,
+                    style: const TextStyle(
+                      fontSize: 12,
+                      color: Color(0xFFE5E5EA),
+                      fontFamily: 'monospace',
+                      height: 1.5,
+                    ),
+                  ),
+                ),
+              ),
+            ),
+        ],
+      ),
+    );
+  }
+}
diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart
new file mode 100644
index 0000000..d1b77d3
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart
@@ -0,0 +1,153 @@
+import 'package:flutter/material.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+import '../shared/run_button.dart';
+import '../shared/stat_badge.dart';
+import 'code_pattern_card.dart';
+
+class GuardrailDemo extends StatefulWidget {
+  const GuardrailDemo({super.key});
+
+  @override
+  State<GuardrailDemo> createState() => _GuardrailDemoState();
+}
+
+class _GuardrailDemoState extends State<GuardrailDemo> {
final _controller = TextEditingController(text: 'Summarize this article for me.'); + final _ai = FlutterOndeviceAi.instance; + double _maxLength = 500; + final _blockedPatterns = ['password', 'ssn', 'credit card']; + bool _loading = false; + String? _resultText; + String? _errorText; + int? _timing; + + Future _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _resultText = null; _errorText = null; _timing = null; }); + + final text = _controller.text; + if (text.length > _maxLength.toInt()) { + setState(() { _errorText = 'Input too long: ${text.length} chars exceeds limit of ${_maxLength.toInt()}'; _loading = false; }); + return; + } + + final lower = text.toLowerCase(); + for (final pattern in _blockedPatterns) { + if (lower.contains(pattern)) { + setState(() { _errorText = 'Content blocked: contains "$pattern"'; _loading = false; }); + return; + } + } + + final sw = Stopwatch()..start(); + try { + final result = await _ai.chat(text, options: const ChatOptions(systemPrompt: 'You are a helpful assistant.')); + sw.stop(); + setState(() { _resultText = result.message; _timing = sw.elapsedMilliseconds; _loading = false; }); + } catch (e) { + sw.stop(); + setState(() { _errorText = 'Error: $e'; _loading = false; }); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift - Guardrail +let guardrail = InputLengthGuardrail(maxLength: 500) +let chain = GuardedChain( + chain: SummarizeChain(model: model), + guardrails: [guardrail] +) +let result = try await chain.run("text")''', + ), + const Text('MAX LENGTH', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 4), + Row( + children: [ + Expanded( + child: Slider( + value: _maxLength, + min: 
50, + max: 2000, + divisions: 39, + activeColor: const Color(0xFF007AFF), + onChanged: (v) => setState(() => _maxLength = v), + ), + ), + SizedBox(width: 60, child: Text('${_maxLength.toInt()}', textAlign: TextAlign.right, style: const TextStyle(fontSize: 15, fontWeight: FontWeight.w600))), + ], + ), + const SizedBox(height: 8), + const Text('BLOCKED PATTERNS', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 4), + Wrap( + spacing: 8, + children: _blockedPatterns.map((p) => Chip( + label: Text(p, style: const TextStyle(fontSize: 12)), + backgroundColor: const Color(0xFFFFEBEE), + side: BorderSide.none, + )).toList(), + ), + const SizedBox(height: 12), + TextField( + controller: _controller, + maxLines: 3, + decoration: _inputDecoration('Enter text...'), + onChanged: (_) => setState(() {}), + ), + Padding( + padding: const EdgeInsets.only(top: 4), + child: Text( + '${_controller.text.length}/${_maxLength.toInt()} chars', + style: TextStyle(fontSize: 12, color: _controller.text.length > _maxLength ? 
const Color(0xFFFF3B30) : const Color(0xFF666666)), + ), + ), + const SizedBox(height: 12), + RunButton(label: 'Run with Guardrails', loading: _loading, onPressed: _run), + if (_errorText != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: const Color(0xFFFFEBEE), borderRadius: BorderRadius.circular(10)), + child: Row( + children: [ + const Icon(Icons.block, color: Color(0xFFFF3B30)), + const SizedBox(width: 12), + Expanded(child: Text(_errorText!, style: const TextStyle(fontSize: 14, color: Color(0xFFFF3B30)))), + ], + ), + ), + ], + if (_resultText != null) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (_timing != null) Padding(padding: const EdgeInsets.only(bottom: 12), child: StatBadge(label: 'Time', value: '${_timing}ms')), + SelectableText(_resultText!, style: const TextStyle(fontSize: 15, height: 1.47)), + ], + ), + ), + ], + ], + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/memory_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/memory_demo.dart new file mode 100644 index 0000000..eecf028 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/memory_demo.dart @@ -0,0 +1,207 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/stat_badge.dart'; +import 'code_pattern_card.dart'; + +class MemoryDemo extends StatefulWidget { + const MemoryDemo({super.key}); + + 
@override
+  State<MemoryDemo> createState() => _MemoryDemoState();
+}
+
+class _MemoryDemoState extends State<MemoryDemo> {
+  final _controller = TextEditingController();
+  final _scrollController = ScrollController();
+  final _ai = FlutterOndeviceAi.instance;
+  final _messages = <_Msg>[];
+  String _memoryType = 'buffer';
+  bool _loading = false;
+  final int _maxEntries = 4;
+
+  List<ChatMessage> get _contextHistory {
+    final history = _messages.map((m) => ChatMessage(role: m.role == 'user' ? ChatRole.user : ChatRole.assistant, content: m.content)).toList();
+    return history.length > _maxEntries ? history.sublist(history.length - _maxEntries) : history;
+  }
+
+  int get _tokenEstimate => _contextHistory.fold(0, (sum, m) => sum + (m.content.length / 4).ceil());
+
+  Future<void> _send() async {
+    final text = _controller.text.trim();
+    if (text.isEmpty || _loading) return;
+    _controller.clear();
+    setState(() {
+      _messages.add(_Msg('user', text));
+      _loading = true;
+    });
+    _scroll();
+
+    final history = _contextHistory;
+    try {
+      final result = await _ai.chat(text, options: ChatOptions(
+        systemPrompt: 'You are a helpful assistant. Keep responses concise. Always reference what the user previously said when possible.',
+        history: history.sublist(0, history.length > 1 ?
history.length - 1 : 0), + )); + setState(() { _messages.add(_Msg('assistant', result.message)); _loading = false; }); + } catch (e) { + setState(() { _messages.add(_Msg('assistant', 'Error: $e')); _loading = false; }); + } + _scroll(); + } + + void _scroll() { + WidgetsBinding.instance.addPostFrameCallback((_) { + if (_scrollController.hasClients) _scrollController.animateTo(_scrollController.position.maxScrollExtent, duration: const Duration(milliseconds: 200), curve: Curves.easeOut); + }); + } + + @override + void dispose() { _controller.dispose(); _scrollController.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return Column( + children: [ + Expanded( + child: ListView( + controller: _scrollController, + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift +let memory = BufferMemory(maxEntries: 4) +let chain = ChatChain( + model: model, memory: memory +) +// Each call auto-stores context +let r1 = try await chain.run("Hello") +let r2 = try await chain.run("Follow up") +memory.entries // last 4 entries''', + ), + Row( + children: [('buffer', 'Buffer'), ('summary', 'Summary')].map((t) { + final (value, label) = t; + final isSelected = value == _memoryType; + return Expanded( + child: GestureDetector( + onTap: () => setState(() => _memoryType = value), + child: Container( + margin: const EdgeInsets.only(right: 8), + padding: const EdgeInsets.symmetric(vertical: 10), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text('${label}Memory', style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? 
Colors.white : const Color(0xFF333333))), + ), + ), + ); + }).toList(), + ), + const SizedBox(height: 8), + Container( + padding: const EdgeInsets.all(12), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Row( + children: [ + const Icon(Icons.lightbulb, size: 18, color: Color(0xFF007AFF)), + const SizedBox(width: 8), + const Expanded(child: Text('Memory Inspector', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600))), + StatBadge(label: 'Entries', value: '${_contextHistory.length}'), + const SizedBox(width: 4), + StatBadge(label: 'Tokens', value: '~$_tokenEstimate'), + ], + ), + if (_contextHistory.isNotEmpty) ...[ + const SizedBox(height: 8), + ..._contextHistory.map((entry) => Padding( + padding: const EdgeInsets.symmetric(vertical: 2), + child: Row( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Container( + width: 22, height: 22, + decoration: BoxDecoration( + color: entry.role == ChatRole.user ? const Color(0xFF007AFF) : const Color(0xFF34C759), + shape: BoxShape.circle, + ), + child: Center(child: Text(entry.role == ChatRole.user ? 'U' : 'A', style: const TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Colors.white))), + ), + const SizedBox(width: 8), + Expanded(child: Text(entry.content, maxLines: 2, overflow: TextOverflow.ellipsis, style: const TextStyle(fontSize: 13, color: Color(0xFF666666), height: 1.38))), + ], + ), + )), + ], + ], + ), + ), + const SizedBox(height: 8), + ..._messages.map((msg) => Align( + alignment: msg.role == 'user' ? Alignment.centerRight : Alignment.centerLeft, + child: Container( + margin: const EdgeInsets.only(bottom: 8), + padding: const EdgeInsets.symmetric(horizontal: 14, vertical: 10), + constraints: BoxConstraints(maxWidth: MediaQuery.of(context).size.width * 0.8), + decoration: BoxDecoration( + color: msg.role == 'user' ? 
const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(18), + ), + child: Text(msg.content, style: TextStyle(fontSize: 15, color: msg.role == 'user' ? Colors.white : const Color(0xFF333333), height: 1.47)), + ), + )), + if (_loading) Row(children: [ + const SizedBox(width: 20, height: 20, child: CircularProgressIndicator(strokeWidth: 2, color: Color(0xFF007AFF))), + const SizedBox(width: 8), + const Text('Thinking...', style: TextStyle(fontSize: 13, color: Color(0xFF999999))), + ]), + ], + ), + ), + Container( + padding: const EdgeInsets.all(12), + decoration: const BoxDecoration(color: Colors.white, border: Border(top: BorderSide(color: Color(0xFFE5E5EA), width: 0.5))), + child: Row( + children: [ + if (_messages.isNotEmpty) + IconButton(icon: const Icon(Icons.delete, color: Color(0xFFFF3B30)), onPressed: () => setState(() => _messages.clear())), + Expanded( + child: TextField( + controller: _controller, + decoration: InputDecoration( + hintText: 'Type a message...', filled: true, fillColor: const Color(0xFFF2F2F7), + contentPadding: const EdgeInsets.symmetric(horizontal: 16, vertical: 10), + border: OutlineInputBorder(borderRadius: BorderRadius.circular(20), borderSide: BorderSide.none), + ), + onSubmitted: (_) => _send(), + ), + ), + const SizedBox(width: 8), + GestureDetector( + onTap: _send, + child: Container( + width: 40, height: 40, + decoration: const BoxDecoration(color: Color(0xFF007AFF), shape: BoxShape.circle), + child: const Icon(Icons.send, size: 20, color: Colors.white), + ), + ), + ], + ), + ), + ], + ); + } +} + +class _Msg { + final String role; + final String content; + _Msg(this.role, this.content); +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/model_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/model_demo.dart new file mode 100644 index 0000000..8a9c6f5 --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/model_demo.dart
@@ -0,0 +1,155 @@
+import 'package:flutter/material.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+import '../shared/run_button.dart';
+import '../shared/stat_badge.dart';
+import 'code_pattern_card.dart';
+
+class ModelDemo extends StatefulWidget {
+  const ModelDemo({super.key});
+
+  @override
+  State<ModelDemo> createState() => _ModelDemoState();
+}
+
+class _ModelDemoState extends State<ModelDemo> {
+  final _controller = TextEditingController(text: 'What is on-device AI and why is it important?');
+  final _ai = FlutterOndeviceAi.instance;
+  String _preset = 'structured';
+  bool _streaming = false;
+  bool _loading = false;
+  String _output = '';
+  int? _timing;
+
+  Future<void> _run() async {
+    if (_loading || _controller.text.isEmpty) return;
+    setState(() { _loading = true; _output = ''; _timing = null; });
+    final sw = Stopwatch()..start();
+
+    final systemPrompt = switch (_preset) {
+      'creative' => 'You are a creative writer. Be expressive and use vivid language.',
+      'conversational' => 'You are a friendly conversational partner. Be casual and warm.',
+      _ => 'You are a helpful assistant.
Be precise and structured.', + }; + + try { + if (_streaming) { + await _ai.chatStream(_controller.text, options: ChatStreamOptions( + systemPrompt: systemPrompt, + onChunk: (chunk) { + setState(() { _output = chunk.accumulated; }); + }, + )); + } else { + final result = await _ai.chat(_controller.text, options: ChatOptions(systemPrompt: systemPrompt)); + setState(() { _output = result.message; }); + } + sw.stop(); + setState(() { _timing = sw.elapsedMilliseconds; _loading = false; }); + } catch (e) { + sw.stop(); + setState(() { _output = 'Error: $e'; _loading = false; }); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift +let model = FoundationLanguageModel() +let response = try await model.generate( + "What is AI?", + config: .structured +) + +// Kotlin +val model = PromptApiModel(context) +val response = model.generate( + "What is AI?", + GenerationConfig.STRUCTURED +)''', + ), + const Text('PRESET', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _presetRow(), + const SizedBox(height: 16), + Row( + children: [ + const Text('Streaming', style: TextStyle(fontSize: 15)), + const Spacer(), + Switch(value: _streaming, onChanged: (v) => setState(() => _streaming = v), activeColor: const Color(0xFF007AFF)), + ], + ), + const SizedBox(height: 8), + TextField(controller: _controller, maxLines: 3, decoration: _inputDecoration('Enter a prompt...')), + const SizedBox(height: 12), + RunButton(label: 'Generate', loading: _loading, onPressed: _run), + if (_output.isNotEmpty) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + 
crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (_timing != null) + Padding( + padding: const EdgeInsets.only(bottom: 12), + child: Row( + children: [ + StatBadge(label: 'Time', value: '${_timing}ms'), + const SizedBox(width: 8), + StatBadge(label: 'Config', value: _preset), + const SizedBox(width: 8), + StatBadge(label: 'Mode', value: _streaming ? 'Stream' : 'Batch'), + ], + ), + ), + SelectableText(_output, style: const TextStyle(fontSize: 15, height: 1.47)), + ], + ), + ), + ], + ], + ); + } + + Widget _presetRow() { + const presets = [('structured', 'Structured'), ('creative', 'Creative'), ('conversational', 'Conversational')]; + return Row( + children: presets.map((p) { + final (value, label) = p; + final isSelected = value == _preset; + return Expanded( + child: GestureDetector( + onTap: () => setState(() => _preset = value), + child: Container( + margin: const EdgeInsets.only(right: 8), + padding: const EdgeInsets.symmetric(vertical: 10), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? 
Colors.white : const Color(0xFF333333))),
+            ),
+          ),
+        );
+      }).toList(),
+    );
+  }
+
+  InputDecoration _inputDecoration(String hint) => InputDecoration(
+        hintText: hint, filled: true, fillColor: Colors.white,
+        border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none),
+      );
+
+}
diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/pipeline_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/pipeline_demo.dart
new file mode 100644
index 0000000..20615ec
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/pipeline_demo.dart
@@ -0,0 +1,131 @@
+import 'package:flutter/material.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+import '../shared/run_button.dart';
+import '../shared/stat_badge.dart';
+import 'code_pattern_card.dart';
+
+class PipelineDemo extends StatefulWidget {
+  const PipelineDemo({super.key});
+
+  @override
+  State<PipelineDemo> createState() => _PipelineDemoState();
+}
+
+class _PipelineDemoState extends State<PipelineDemo> {
+  final _controller = TextEditingController(text: 'Their going to the store tommorow and they will buys some grocerys for the party.');
+  final _ai = FlutterOndeviceAi.instance;
+  String _targetLang = 'ko';
+  bool _loading = false;
+  List<(String, String)> _steps = [];
+  int?
_timing; + + Future<void> _run() async { + if (_loading || _controller.text.isEmpty) return; + setState(() { _loading = true; _steps = []; _timing = null; }); + final sw = Stopwatch()..start(); + + try { + final proofread = await _ai.proofread(_controller.text); + final translate = await _ai.translate(proofread.correctedText, options: TranslateOptions(sourceLanguage: 'en', targetLanguage: _targetLang)); + sw.stop(); + setState(() { + _steps = [('Proofread', proofread.correctedText), ('Translate (\u2192 $_targetLang)', translate.translatedText)]; + _timing = sw.elapsedMilliseconds; + _loading = false; + }); + } catch (e) { + sw.stop(); + setState(() { _steps = [('Error', e.toString())]; _loading = false; }); + } + } + + @override + void dispose() { _controller.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return ListView( + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift - Pipeline DSL +let result = try await model.pipeline { + Proofread() + Translate(to: "ko") +}.run("text with typos")''', + ), + const Text('TARGET LANGUAGE', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF666666))), + const SizedBox(height: 8), + _langRow(), + const SizedBox(height: 16), + TextField(controller: _controller, maxLines: 3, decoration: _inputDecoration('Enter text with typos...')), + const SizedBox(height: 12), + RunButton(label: 'Run Pipeline', loading: _loading, onPressed: _run), + if (_steps.isNotEmpty) ...[ + const SizedBox(height: 16), + Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (_timing != null) + Padding( + padding: const EdgeInsets.only(bottom: 12), + child: Row(children: [ + StatBadge(label: 'Time', value: '${_timing}ms'), + const SizedBox(width: 8), +
StatBadge(label: 'Steps', value: '${_steps.length}'), + ]), + ), + ..._steps.asMap().entries.map((entry) => Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + if (entry.key > 0) const Padding(padding: EdgeInsets.symmetric(vertical: 8), child: Divider(height: 1, color: Color(0xFFE5E5EA))), + Text(entry.value.$1, style: const TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: Color(0xFF007AFF))), + const SizedBox(height: 4), + Text(entry.value.$2, style: const TextStyle(fontSize: 15, color: Color(0xFF333333), height: 1.47)), + ], + )), + ], + ), + ), + ], + ], + ); + } + + Widget _langRow() { + const langs = [('ko', 'Korean'), ('ja', 'Japanese'), ('es', 'Spanish'), ('fr', 'French')]; + return Row( + children: langs.map((l) { + final (code, label) = l; + final isSelected = code == _targetLang; + return Expanded( + child: GestureDetector( + onTap: () => setState(() => _targetLang = code), + child: Container( + margin: const EdgeInsets.only(right: 8), + padding: const EdgeInsets.symmetric(vertical: 10), + decoration: BoxDecoration( + color: isSelected ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(8), + border: Border.all(color: isSelected ? const Color(0xFF007AFF) : const Color(0xFFE5E5EA)), + ), + alignment: Alignment.center, + child: Text(label, style: TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: isSelected ? 
Colors.white : const Color(0xFF333333)))), + ), + ), + ); + }).toList(), + ); + } + + InputDecoration _inputDecoration(String hint) => InputDecoration( + hintText: hint, filled: true, fillColor: Colors.white, + border: OutlineInputBorder(borderRadius: BorderRadius.circular(10), borderSide: BorderSide.none), + ); + +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/session_demo.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/session_demo.dart new file mode 100644 index 0000000..69a7c0d --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/session_demo.dart @@ -0,0 +1,197 @@ +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +import '../shared/stat_badge.dart'; +import 'code_pattern_card.dart'; + +class SessionDemo extends StatefulWidget { + const SessionDemo({super.key}); + + @override + State<SessionDemo> createState() => _SessionDemoState(); +} + +class _SessionDemoState extends State<SessionDemo> { + final _controller = TextEditingController(); + final _scrollController = ScrollController(); + final _ai = FlutterOndeviceAi.instance; + final _messages = <_Msg>[]; + bool _loading = false; + bool _showMemory = false; + final int _maxMemoryEntries = 6; + + List<ChatMessage> get _contextHistory { + final history = _messages.map((m) => ChatMessage(role: m.role == 'user' ? ChatRole.user : ChatRole.assistant, content: m.content)).toList(); + return history.length > _maxMemoryEntries ? history.sublist(history.length - _maxMemoryEntries) : history; + } + + int get _tokenEstimate => _contextHistory.fold(0, (sum, m) => sum + (m.content.length / 4).ceil()); + + Future<void> _send() async { + final text = _controller.text.trim(); + if (text.isEmpty || _loading) return; + _controller.clear(); + setState(() { _messages.add(_Msg('user', text)); _loading = true; }); + _scroll(); + + final history = [..._messages.map((m) => ChatMessage(role: m.role == 'user' ?
ChatRole.user : ChatRole.assistant, content: m.content))]; + final contextHistory = history.length > _maxMemoryEntries ? history.sublist(history.length - _maxMemoryEntries) : history; + + try { + setState(() => _messages.add(_Msg('assistant', '...'))); + var accumulated = ''; + await _ai.chatStream(text, options: ChatStreamOptions( + systemPrompt: 'You are a helpful assistant. Keep responses concise.', + history: contextHistory.sublist(0, contextHistory.length > 1 ? contextHistory.length - 1 : 0), + onChunk: (chunk) { + accumulated = chunk.accumulated; + setState(() { _messages.last = _Msg('assistant', accumulated); }); + _scroll(); + }, + )); + } catch (e) { + setState(() { _messages.last = _Msg('assistant', 'Error: $e'); }); + } finally { + setState(() => _loading = false); + _scroll(); + } + } + + void _scroll() { + WidgetsBinding.instance.addPostFrameCallback((_) { + if (_scrollController.hasClients) _scrollController.animateTo(_scrollController.position.maxScrollExtent, duration: const Duration(milliseconds: 200), curve: Curves.easeOut); + }); + } + + @override + void dispose() { _controller.dispose(); _scrollController.dispose(); super.dispose(); } + + @override + Widget build(BuildContext context) { + return Column( + children: [ + Expanded( + child: ListView( + controller: _scrollController, + padding: const EdgeInsets.all(16), + children: [ + const CodePatternCard( + title: 'Native Code Pattern', + code: '''// Swift - Stateful Session +let session = Session( + model: model, + memory: BufferMemory(maxEntries: 6), + systemPrompt: "You are helpful." 
+) + +// Each call auto-manages memory +let r1 = try await session.send("Hello!") +let r2 = try await session.send("Follow up") + +// Inspect memory state +session.memory.entries''', + ), + GestureDetector( + onTap: () => setState(() => _showMemory = !_showMemory), + child: Container( + padding: const EdgeInsets.all(12), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Row( + children: [ + const Icon(Icons.lightbulb, size: 18, color: Color(0xFF007AFF)), + const SizedBox(width: 8), + const Expanded(child: Text('Memory Inspector', style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600))), + StatBadge(label: 'Entries', value: '${_contextHistory.length}'), + const SizedBox(width: 4), + StatBadge(label: 'Tokens', value: '~$_tokenEstimate'), + Icon(_showMemory ? Icons.expand_less : Icons.expand_more, size: 16, color: const Color(0xFF8E8E93)), + ], + ), + ), + ), + if (_showMemory && _contextHistory.isNotEmpty) + Container( + margin: const EdgeInsets.only(top: 4), + padding: const EdgeInsets.all(12), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(10)), + child: Column( + children: _contextHistory.map((entry) => Padding( + padding: const EdgeInsets.symmetric(vertical: 2), + child: Row( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Container( + width: 22, height: 22, + decoration: BoxDecoration(color: entry.role == ChatRole.user ? const Color(0xFF007AFF) : const Color(0xFF34C759), shape: BoxShape.circle), + child: Center(child: Text(entry.role == ChatRole.user ? 
'U' : 'A', style: const TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Colors.white))), + ), + const SizedBox(width: 8), + Expanded(child: Text(entry.content, maxLines: 2, overflow: TextOverflow.ellipsis, style: const TextStyle(fontSize: 13, color: Color(0xFF666666)))), + ], + ), + )).toList(), + ), + ), + const SizedBox(height: 8), + ..._messages.where((m) => m.content != '...').map((msg) => Align( + alignment: msg.role == 'user' ? Alignment.centerRight : Alignment.centerLeft, + child: Container( + margin: const EdgeInsets.only(bottom: 8), + padding: const EdgeInsets.symmetric(horizontal: 14, vertical: 10), + constraints: BoxConstraints(maxWidth: MediaQuery.of(context).size.width * 0.8), + decoration: BoxDecoration( + color: msg.role == 'user' ? const Color(0xFF007AFF) : Colors.white, + borderRadius: BorderRadius.circular(18), + ), + child: Text(msg.content, style: TextStyle(fontSize: 15, color: msg.role == 'user' ? Colors.white : const Color(0xFF333333))), + ), + )), + if (_loading && _messages.isNotEmpty && _messages.last.content == '...') + Row(children: [ + const SizedBox(width: 20, height: 20, child: CircularProgressIndicator(strokeWidth: 2, color: Color(0xFF007AFF))), + const SizedBox(width: 8), + const Text('Thinking...', style: TextStyle(fontSize: 13, color: Color(0xFF999999))), + ]), + ], + ), + ), + Container( + padding: const EdgeInsets.all(12), + decoration: const BoxDecoration(color: Colors.white, border: Border(top: BorderSide(color: Color(0xFFE5E5EA), width: 0.5))), + child: Row( + children: [ + if (_messages.isNotEmpty) + IconButton(icon: const Icon(Icons.delete, color: Color(0xFFFF3B30)), onPressed: () => setState(() => _messages.clear())), + Expanded( + child: TextField( + controller: _controller, + decoration: InputDecoration( + hintText: 'Type a message...', filled: true, fillColor: const Color(0xFFF2F2F7), + contentPadding: const EdgeInsets.symmetric(horizontal: 16, vertical: 10), + border: OutlineInputBorder(borderRadius: 
BorderRadius.circular(20), borderSide: BorderSide.none), + ), + onSubmitted: (_) => _send(), + ), + ), + const SizedBox(width: 8), + GestureDetector( + onTap: _send, + child: Container( + width: 40, height: 40, + decoration: const BoxDecoration(color: Color(0xFF007AFF), shape: BoxShape.circle), + child: const Icon(Icons.send, size: 20, color: Colors.white), + ), + ), + ], + ), + ), + ], + ); + } +} + +class _Msg { + final String role; + String content; + _Msg(this.role, this.content); +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_model_required_banner.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_model_required_banner.dart new file mode 100644 index 0000000..241cb23 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_model_required_banner.dart @@ -0,0 +1,32 @@ +import 'package:flutter/material.dart'; + +class AIModelRequiredBanner extends StatelessWidget { + const AIModelRequiredBanner({super.key}); + + @override + Widget build(BuildContext context) { + return Container( + padding: const EdgeInsets.all(12), + decoration: BoxDecoration( + color: const Color(0xFFFFF3E0), + borderRadius: BorderRadius.circular(10), + ), + child: const Row( + children: [ + Icon(Icons.warning, size: 20, color: Color(0xFFFF9500)), + SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text('AI Model Not Available', style: TextStyle(fontSize: 15, fontWeight: FontWeight.w600)), + SizedBox(height: 2), + Text('This feature requires on-device AI support', style: TextStyle(fontSize: 13, color: Color(0xFF666666))), + ], + ), + ), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart new file mode 100644 index 0000000..9e1ba46 --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart @@ -0,0 +1,181 @@ +import 'package:flutter/foundation.dart'; +import 'package:flutter/material.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; +import 'package:provider/provider.dart'; +import 'package:url_launcher/url_launcher.dart'; + +import '../../app_state.dart'; +import 'model_selection_sheet.dart'; + +const _engineLabels = { + InferenceEngine.foundationModels: 'Apple Intelligence', + InferenceEngine.llamaCpp: 'llama.cpp Model', + InferenceEngine.promptApi: 'Gemini Nano', +}; + +class AIStatusBanner extends StatelessWidget { + const AIStatusBanner({super.key}); + + @override + Widget build(BuildContext context) { + final state = context.watch<AppState>(); + + // Loading state + if (state.sdkState == SDKState.initializing || state.sdkState == SDKState.notInitialized) { + final isIOS = !kIsWeb && defaultTargetPlatform == TargetPlatform.iOS; + final checkingText = kIsWeb + ? 'Checking Chrome Built-in AI...' + : isIOS + ? 'Checking Apple Intelligence...' + : 'Checking Gemini Nano...'; + + return _BannerContainer( + color: const Color(0xFFE3F2FD), + icon: const Icon(Icons.hourglass_empty, size: 20, color: Color(0xFF007AFF)), + title: checkingText, + subtitle: 'Please wait while checking device capabilities', + ); + } + + // Model ready - tappable to manage models + if (state.isModelReady) { + final engineLabel = _engineLabels[state.modelState.currentEngine] ?? + (kIsWeb + ? 'Chrome Built-in AI' + : (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS) + ?
'Apple Intelligence' + : 'Gemini Nano'); + + return GestureDetector( + onTap: () => _showModelSheet(context), + child: _BannerContainer( + color: const Color(0xFFE3F2FD), + icon: const Icon(Icons.auto_awesome, size: 20, color: Color(0xFF007AFF)), + title: '$engineLabel Active', + subtitle: 'Tap to manage models', + trailing: const Icon(Icons.chevron_right, size: 20, color: Color(0xFFC7C7CC)), + ), + ); + } + + // Web: Chrome Built-in AI not available + if (kIsWeb) { + return _BannerContainer( + color: const Color(0xFFFFEBEE), + icon: const Icon(Icons.warning, size: 20, color: Color(0xFFFF3B30)), + title: 'Chrome Built-in AI Not Available', + subtitle: 'Requires Chrome 138+ with Gemini Nano enabled', + ); + } + + // iOS: Apple Intelligence required but not ready + if (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS && state.capability?.isSupported == true) { + return _BannerContainer( + color: const Color(0xFFFFF3E0), + icon: const Icon(Icons.auto_awesome_outlined, size: 20, color: Color(0xFFFF9500)), + title: 'Apple Intelligence Required', + subtitle: 'Enable Apple Intelligence in Settings', + trailing: GestureDetector( + onTap: () => _openSettings(), + child: Container( + padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8), + decoration: BoxDecoration(color: const Color(0xFFFF9500), borderRadius: BorderRadius.circular(8)), + child: const Text('Enable', style: TextStyle(color: Colors.white, fontSize: 14, fontWeight: FontWeight.w600)), + ), + ), + ); + } + + // Android/other: AI supported but model not ready + if (state.capability?.isSupported == true) { + return _BannerContainer( + color: const Color(0xFFFFF3E0), + icon: const Icon(Icons.auto_awesome_outlined, size: 20, color: Color(0xFFFF9500)), + title: 'AI Model Not Ready', + subtitle: 'Enable on-device AI in system settings', + ); + } + + // Device not supported + final isIOS = !kIsWeb && defaultTargetPlatform == TargetPlatform.iOS; + return _BannerContainer( + color: const 
Color(0xFFFFEBEE), + icon: const Icon(Icons.warning, size: 20, color: Color(0xFFFF3B30)), + title: 'Device Not Supported', + subtitle: isIOS + ? 'Requires iPhone 15 Pro or newer with iOS 18.1+' + : 'This device does not support on-device AI', + ); + } + + Future<void> _openSettings() async { + if (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS) { + final uri = Uri.parse('app-settings:'); + if (await canLaunchUrl(uri)) { + await launchUrl(uri); + } + } + } + + void _showModelSheet(BuildContext context) { + showModalBottomSheet( + context: context, + isScrollControlled: true, + useSafeArea: true, + backgroundColor: const Color(0xFFF2F2F7), + shape: const RoundedRectangleBorder(borderRadius: BorderRadius.vertical(top: Radius.circular(16))), + builder: (_) => ChangeNotifierProvider.value( + value: context.read<AppState>(), + child: const SizedBox( + height: double.infinity, + child: ModelSelectionSheet(), + ), + ), + ); + } +} + +class _BannerContainer extends StatelessWidget { + final Color color; + final Widget icon; + final String title; + final String subtitle; + final Widget?
trailing; + + const _BannerContainer({ + required this.color, + required this.icon, + required this.title, + required this.subtitle, + this.trailing, + }); + + @override + Widget build(BuildContext context) { + return Container( + margin: const EdgeInsets.symmetric(horizontal: 16, vertical: 8), + padding: const EdgeInsets.all(12), + decoration: BoxDecoration( + color: color, + borderRadius: BorderRadius.circular(10), + ), + child: Row( + children: [ + icon, + const SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(title, style: const TextStyle(fontSize: 15, fontWeight: FontWeight.w600)), + const SizedBox(height: 2), + Text(subtitle, style: const TextStyle(fontSize: 13, color: Color(0xFF666666))), + ], + ), + ), + if (trailing != null) trailing!, + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart new file mode 100644 index 0000000..d52e972 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart @@ -0,0 +1,116 @@ +import 'dart:convert'; + +import 'package:flutter/material.dart'; + +class DebugLog { + final String api; + final dynamic request; + final dynamic response; + final int timing; + + const DebugLog({ + required this.api, + required this.request, + required this.response, + required this.timing, + }); +} + +class DebugLogPanel extends StatefulWidget { + final DebugLog? 
log; + + const DebugLogPanel({super.key, this.log}); + + @override + State<DebugLogPanel> createState() => _DebugLogPanelState(); +} + +class _DebugLogPanelState extends State<DebugLogPanel> { + bool _expanded = false; + + @override + Widget build(BuildContext context) { + if (widget.log == null) return const SizedBox.shrink(); + final log = widget.log!; + + return Container( + margin: const EdgeInsets.only(top: 12), + decoration: BoxDecoration( + color: const Color(0xFF1C1C1E), + borderRadius: BorderRadius.circular(10), + ), + clipBehavior: Clip.antiAlias, + child: Column( + children: [ + InkWell( + onTap: () => setState(() => _expanded = !_expanded), + child: Padding( + padding: const EdgeInsets.all(12), + child: Row( + children: [ + const Icon(Icons.code, size: 16, color: Color(0xFF8E8E93)), + const SizedBox(width: 8), + const Expanded( + child: Text( + 'Debug Log', + style: TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF8E8E93)), + ), + ), + Text( + '${log.timing}ms', + style: const TextStyle(fontSize: 13, fontWeight: FontWeight.w500, color: Color(0xFF30D158)), + ), + const SizedBox(width: 4), + Icon( + _expanded ?
Icons.expand_less : Icons.expand_more, + size: 16, + color: const Color(0xFF8E8E93), + ), + ], + ), + ), + ), + if (_expanded) + Padding( + padding: const EdgeInsets.fromLTRB(12, 0, 12, 12), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + _label('API'), + Text(log.api, style: const TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Color(0xFF0A84FF), fontFamily: 'monospace')), + _label('Request'), + _codeBlock(log.request), + _label('Response'), + _codeBlock(log.response), + ], + ), + ), + ], + ), + ); + } + + Widget _label(String text) { + return Padding( + padding: const EdgeInsets.only(top: 8, bottom: 4), + child: Text( + text.toUpperCase(), + style: const TextStyle(fontSize: 11, fontWeight: FontWeight.w700, color: Color(0xFF636366), letterSpacing: 0.5), + ), + ); + } + + Widget _codeBlock(dynamic data) { + final text = const JsonEncoder.withIndent(' ').convert(data); + return SizedBox( + height: 200, + child: SingleChildScrollView( + scrollDirection: Axis.horizontal, + child: SelectableText( + text, + style: const TextStyle(fontSize: 12, color: Color(0xFFE5E5EA), fontFamily: 'monospace', height: 1.5), + ), + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/feature_row.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/feature_row.dart new file mode 100644 index 0000000..79bb02c --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/feature_row.dart @@ -0,0 +1,88 @@ +import 'package:flutter/material.dart'; + +import '../../app_state.dart'; + +IconData _iconDataFor(IconName name) { + return switch (name) { + IconName.description => Icons.description, + IconName.label => Icons.label, + IconName.documentScanner => Icons.document_scanner, + IconName.chatBubble => Icons.chat_bubble, + IconName.language => Icons.language, + IconName.edit => Icons.edit, + IconName.checkCircle => Icons.check_circle, + IconName.image => Icons.image, + IconName.autoFixHigh => 
Icons.auto_fix_high, + }; +} + +class FeatureRow extends StatelessWidget { + final FeatureInfo feature; + + const FeatureRow({super.key, required this.feature}); + + @override + Widget build(BuildContext context) { + return Container( + color: Colors.white, + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + children: [ + Container( + width: 40, + height: 40, + decoration: BoxDecoration( + color: feature.isAvailable ? const Color(0xFFF2F2F7) : const Color(0xFFE5E5EA), + borderRadius: BorderRadius.circular(8), + ), + child: Icon( + _iconDataFor(feature.icon), + size: 24, + color: feature.isAvailable ? const Color(0xFF007AFF) : const Color(0xFF999999), + ), + ), + const SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text( + feature.name, + style: TextStyle( + fontSize: 17, + fontWeight: FontWeight.w600, + color: feature.isAvailable ? Colors.black : const Color(0xFF999999), + ), + ), + const SizedBox(height: 2), + Text( + feature.description, + maxLines: 2, + overflow: TextOverflow.ellipsis, + style: const TextStyle(fontSize: 13, color: Color(0xFF666666), height: 1.38), + ), + ], + ), + ), + const SizedBox(width: 8), + if (feature.isComingSoon) + Container( + padding: const EdgeInsets.symmetric(horizontal: 8, vertical: 3), + decoration: BoxDecoration( + color: const Color(0xFF8E8E93), + borderRadius: BorderRadius.circular(10), + ), + child: const Text( + 'Coming Soon', + style: TextStyle(fontSize: 11, fontWeight: FontWeight.w600, color: Colors.white), + ), + ) + else if (!feature.isAvailable) + const Icon(Icons.lock, size: 16, color: Color(0xFF999999)) + else + const Icon(Icons.chevron_right, size: 20, color: Color(0xFFC7C7CC)), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart new file mode 100644 index 0000000..0ece86e --- /dev/null +++ 
b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart @@ -0,0 +1,23 @@ +import 'package:flutter/material.dart'; + +class InfoRow extends StatelessWidget { + final String label; + final String value; + final Color? valueColor; + + const InfoRow({super.key, required this.label, required this.value, this.valueColor}); + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.symmetric(vertical: 12, horizontal: 16), + child: Row( + mainAxisAlignment: MainAxisAlignment.spaceBetween, + children: [ + Text(label, style: const TextStyle(fontSize: 17, color: Colors.black)), + Text(value, style: TextStyle(fontSize: 17, color: valueColor ?? const Color(0xFF666666))), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart new file mode 100644 index 0000000..5d896b9 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart @@ -0,0 +1,566 @@ +import 'package:flutter/foundation.dart'; +import 'package:flutter/material.dart'; +import 'package:flutter/services.dart'; +import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; +import 'package:provider/provider.dart'; + +import '../../app_state.dart'; + +const _engineDisplayNames = { + InferenceEngine.foundationModels: 'Apple Intelligence', + InferenceEngine.llamaCpp: 'llama.cpp', + InferenceEngine.mlx: 'MLX', + InferenceEngine.coreMl: 'CoreML', + InferenceEngine.promptApi: 'Gemini Nano', + InferenceEngine.none: 'Not Available', +}; + +class ModelSelectionSheet extends StatefulWidget { + const ModelSelectionSheet({super.key}); + + @override + State<ModelSelectionSheet> createState() => _ModelSelectionSheetState(); +} + +class _ModelSelectionSheetState extends State<ModelSelectionSheet> { + String?
_actionLoading; + + bool get _isBusy => _actionLoading != null; + + Future<void> _handleDownload(BuildContext context, DownloadableModelInfo model) async { + if (_isBusy) return; + debugPrint('[ModelSheet] download ${model.modelId} (${model.name}, ${model.sizeMB}MB)'); + setState(() => _actionLoading = model.modelId); + try { + await context.read<AppState>().downloadModelById(model.modelId); + debugPrint('[ModelSheet] download ${model.modelId} done'); + } catch (e, st) { + debugPrint('[ModelSheet] download ${model.modelId} FAILED: $e\n$st'); + if (context.mounted) { + ScaffoldMessenger.of(context).showSnackBar(const SnackBar(content: Text('Download failed. Please try again.'))); + } + } finally { + if (mounted) setState(() => _actionLoading = null); + } + } + + Future<void> _handleLoad(BuildContext context, String modelId) async { + if (_isBusy) return; + debugPrint('[ModelSheet] load $modelId'); + setState(() => _actionLoading = modelId); + try { + await context.read<AppState>().loadModelById(modelId); + debugPrint('[ModelSheet] load $modelId done'); + } catch (e, st) { + debugPrint('[ModelSheet] load $modelId FAILED: $e\n$st'); + if (context.mounted) { + ScaffoldMessenger.of(context).showSnackBar(const SnackBar(content: Text('Failed to load model. Please try again.'))); + } + } finally { + if (mounted) setState(() => _actionLoading = null); + } + } + + Future<void> _handleDelete(BuildContext context, String modelId) async { + if (_isBusy) return; + final confirmed = await showDialog<bool>( + context: context, + builder: (ctx) => AlertDialog( + title: const Text('Delete Model'), + content: const Text('Are you sure you want to delete this model?
You can re-download it later.'), + actions: [ + TextButton(onPressed: () => Navigator.pop(ctx, false), child: const Text('Cancel')), + TextButton( + onPressed: () => Navigator.pop(ctx, true), + style: TextButton.styleFrom(foregroundColor: const Color(0xFFFF3B30)), + child: const Text('Delete'), + ), + ], + ), + ); + if (confirmed != true || !context.mounted) return; + + debugPrint('[ModelSheet] delete $modelId'); + setState(() => _actionLoading = modelId); + try { + await context.read<AppState>().deleteModelById(modelId); + debugPrint('[ModelSheet] delete $modelId done'); + } catch (e, st) { + debugPrint('[ModelSheet] delete $modelId FAILED: $e\n$st'); + if (context.mounted) { + ScaffoldMessenger.of(context).showSnackBar(const SnackBar(content: Text('Failed to delete model. Please try again.'))); + } + } finally { + if (mounted) setState(() => _actionLoading = null); + } + } + + Future<void> _handleSwitchToDeviceAI(BuildContext context) async { + if (_isBusy) return; + debugPrint('[ModelSheet] switchToDeviceAI'); + setState(() => _actionLoading = '__switch__'); + try { + await context.read<AppState>().switchToDeviceAI(); + debugPrint('[ModelSheet] switchToDeviceAI done'); + } catch (e, st) { + debugPrint('[ModelSheet] switchToDeviceAI FAILED: $e\n$st'); + if (context.mounted) { + ScaffoldMessenger.of(context).showSnackBar(const SnackBar(content: Text('Failed to switch engine. Please try again.'))); + } + } finally { + if (mounted) setState(() => _actionLoading = null); + } + } + + @override + Widget build(BuildContext context) { + final state = context.watch<AppState>(); + final ms = state.modelState; + final isIOS = !kIsWeb && defaultTargetPlatform == TargetPlatform.iOS; + + // On web, Chrome manages models — show "Chrome Built-in AI" when ready + final bool isWebReady = kIsWeb && state.isModelReady; + final bool engineAvailable = ms.currentEngine != InferenceEngine.none || isWebReady; + final engineName = isWebReady + ? 'Chrome Built-in AI' + : (_engineDisplayNames[ms.currentEngine] ??
ms.currentEngine.name); + + return Container( + color: const Color(0xFFF2F2F7), + child: Column( + children: [ + // Header + Padding( + padding: const EdgeInsets.fromLTRB(16, 16, 8, 8), + child: Row( + mainAxisAlignment: MainAxisAlignment.spaceBetween, + children: [ + const Text('On-Device AI Models', style: TextStyle(fontSize: 20, fontWeight: FontWeight.w700)), + IconButton( + onPressed: () => Navigator.pop(context), + icon: const Icon(Icons.cancel_outlined, size: 28, color: Color(0xFF666666)), + ), + ], + ), + ), + + // Download Progress + if (ms.isDownloading && ms.downloadProgress != null) + Padding( + padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 4), + child: Column( + children: [ + ClipRRect( + borderRadius: BorderRadius.circular(2), + child: LinearProgressIndicator( + value: ms.downloadProgress!.progress, + backgroundColor: const Color(0xFFE5E5EA), + color: const Color(0xFF007AFF), + minHeight: 4, + ), + ), + const SizedBox(height: 4), + Align( + alignment: Alignment.centerLeft, + child: Text( + 'Downloading... ${(ms.downloadProgress!.progress * 100).round()}%', + style: const TextStyle(fontSize: 12, color: Color(0xFF666666)), + ), + ), + ], + ), + ), + + Expanded( + child: ListView( + children: [ + // Active Engine section + _SectionHeader(title: 'Active Engine'), + Padding( + padding: const EdgeInsets.symmetric(horizontal: 16), + child: Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(12)), + child: Row( + children: [ + Icon( + engineAvailable ? Icons.auto_awesome : Icons.cancel, + size: 24, + color: engineAvailable ? const Color(0xFF007AFF) : const Color(0xFFFF3B30), + ), + const SizedBox(width: 12), + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(engineName, style: const TextStyle(fontSize: 17, fontWeight: FontWeight.w600)), + const SizedBox(height: 2), + Text( + !engineAvailable + ? 
'No AI engine available' + : kIsWeb + ? 'Gemini Nano via Chrome' + : isIOS + ? 'Apple on-device AI' + : 'Google on-device AI', + style: const TextStyle(fontSize: 13, color: Color(0xFF666666)), + ), + ], + ), + ), + if (engineAvailable) + Container( + padding: const EdgeInsets.symmetric(horizontal: 10, vertical: 4), + decoration: BoxDecoration(color: const Color(0xFFE3F9E5), borderRadius: BorderRadius.circular(12)), + child: const Text('Active', style: TextStyle(fontSize: 12, fontWeight: FontWeight.w600, color: Color(0xFF34C759))), + ), + ], + ), + ), + ), + + // Switch back to Device AI when using llama.cpp + if (ms.currentEngine == InferenceEngine.llamaCpp) + Padding( + padding: const EdgeInsets.fromLTRB(16, 8, 16, 0), + child: _ActionButton( + label: isIOS ? 'Switch to Apple Intelligence' : 'Switch to Device AI', + icon: Icons.auto_awesome, + loading: _actionLoading == '__switch__', + onTap: _isBusy ? null : () => _handleSwitchToDeviceAI(context), + expand: true, + outlined: true, + ), + ), + + // Chrome Setup Guide (web only, when not available) + if (kIsWeb && !state.isModelReady) ...[ + _SectionHeader(title: 'Setup Guide'), + const _ChromeSetupGuide(), + ], + + // Available Models + if (ms.availableModels.isNotEmpty) ...[ + _SectionHeader(title: 'Available Models'), + ...ms.availableModels.map((model) { + final downloaded = ms.downloadedModelIds.contains(model.modelId); + final isActiveModel = ms.loadedModelId == model.modelId && ms.currentEngine == InferenceEngine.llamaCpp; + final loading = _actionLoading == model.modelId; + return Padding( + padding: const EdgeInsets.fromLTRB(16, 0, 16, 8), + child: _ModelRow( + model: model, + downloaded: downloaded, + loaded: isActiveModel, + loading: loading, + isDownloading: ms.isDownloading, + isBusy: _isBusy, + onDownload: () => _handleDownload(context, model), + onLoad: () => _handleLoad(context, model.modelId), + onDelete: () => _handleDelete(context, model.modelId), + ), + ); + }), + ], + + // About section + 
_SectionHeader(title: 'About'), + Padding( + padding: const EdgeInsets.fromLTRB(16, 0, 16, 32), + child: Container( + padding: const EdgeInsets.all(12), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(12)), + child: const Column( + children: [ + _AboutRow(icon: Icons.lock, title: 'Private', subtitle: 'All data stays on your device'), + _AboutRow(icon: Icons.cloud_off, title: 'Offline', subtitle: 'Works without internet connection'), + _AboutRow(icon: Icons.flash_on, title: 'Fast', subtitle: 'Low latency, hardware-accelerated'), + ], + ), + ), + ), + ], + ), + ), + ], + ), + ); + } +} + +class _ChromeSetupGuide extends StatelessWidget { + const _ChromeSetupGuide(); + + static const _steps = [ + ( + title: '1. Use Chrome 138+', + desc: 'Download the latest version from chrome.com', + ), + ( + title: '2. Enable Feature Flags', + desc: 'Open each URL in Chrome and set to "Enabled":', + ), + ( + title: '3. Restart Chrome', + desc: 'Click "Relaunch" or close and reopen Chrome completely.', + ), + ( + title: '4. Verify Model Status', + desc: 'Check chrome://on-device-internals → Model Status tab', + ), + ( + title: '5. Requirements', + desc: '22GB+ free disk space. 
GPU with 4GB+ VRAM or CPU with 16GB+ RAM.', + ), + ]; + + static const _flags = [ + 'chrome://flags/#optimization-guide-on-device-model', + 'chrome://flags/#prompt-api-for-gemini-nano', + 'chrome://flags/#enable-experimental-web-platform-features', + ]; + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.symmetric(horizontal: 16), + child: Container( + padding: const EdgeInsets.all(16), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(12)), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + for (var i = 0; i < _steps.length; i++) ...[ + if (i > 0) const SizedBox(height: 12), + Text(_steps[i].title, style: const TextStyle(fontSize: 14, fontWeight: FontWeight.w600)), + const SizedBox(height: 2), + Text(_steps[i].desc, style: const TextStyle(fontSize: 13, color: Color(0xFF666666))), + // Show flag URLs for step 2 + if (i == 1) + Padding( + padding: const EdgeInsets.only(top: 8), + child: Column( + children: _flags.map((flag) => Padding( + padding: const EdgeInsets.only(bottom: 6), + child: GestureDetector( + onTap: () { + Clipboard.setData(ClipboardData(text: flag)); + ScaffoldMessenger.of(context).showSnackBar( + SnackBar(content: Text('Copied: $flag'), duration: const Duration(seconds: 2)), + ); + }, + child: Container( + width: double.infinity, + padding: const EdgeInsets.symmetric(horizontal: 10, vertical: 8), + decoration: BoxDecoration( + color: const Color(0xFFF2F2F7), + borderRadius: BorderRadius.circular(8), + ), + child: Row( + children: [ + Expanded( + child: Text(flag, style: const TextStyle(fontSize: 12, fontFamily: 'monospace', color: Color(0xFF007AFF))), + ), + const Icon(Icons.copy, size: 14, color: Color(0xFF999999)), + ], + ), + ), + ), + )).toList(), + ), + ), + ], + ], + ), + ), + ); + } +} + +class _SectionHeader extends StatelessWidget { + final String title; + const _SectionHeader({required this.title}); + + @override + Widget 
build(BuildContext context) { + return Padding( + padding: const EdgeInsets.fromLTRB(20, 16, 16, 8), + child: Text( + title.toUpperCase(), + style: const TextStyle(fontSize: 13, fontWeight: FontWeight.w600, color: Color(0xFF666666), letterSpacing: 0.5), + ), + ); + } +} + +class _ModelRow extends StatelessWidget { + final DownloadableModelInfo model; + final bool downloaded; + final bool loaded; + final bool loading; + final bool isDownloading; + final bool isBusy; + final VoidCallback onDownload; + final VoidCallback onLoad; + final VoidCallback onDelete; + + const _ModelRow({ + required this.model, + required this.downloaded, + required this.loaded, + required this.loading, + required this.isDownloading, + required this.isBusy, + required this.onDownload, + required this.onLoad, + required this.onDelete, + }); + + @override + Widget build(BuildContext context) { + return Container( + padding: const EdgeInsets.all(14), + decoration: BoxDecoration(color: Colors.white, borderRadius: BorderRadius.circular(12)), + child: Row( + children: [ + Expanded( + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(model.name, style: const TextStyle(fontSize: 15, fontWeight: FontWeight.w600)), + const SizedBox(height: 2), + Text( + '${model.sizeMB} MB \u00B7 ${model.quantization} \u00B7 ${model.contextLength} ctx${model.isMultimodal ? ' \u00B7 Vision' : ''}', + style: const TextStyle(fontSize: 12, color: Color(0xFF888888)), + ), + ], + ), + ), + if (loaded) + _ActionButton(label: 'Loaded', loading: loading, color: const Color(0xFF007AFF), badge: true) + else if (downloaded) + Row( + mainAxisSize: MainAxisSize.min, + children: [ + _ActionButton(label: 'Load', loading: loading, onTap: isBusy ? null : onLoad, disabled: isBusy && !loading), + const SizedBox(width: 8), + _ActionButton(icon: Icons.delete_outline, loading: false, onTap: isBusy ? 
null : onDelete, color: const Color(0xFFFF3B30), disabled: isBusy),
+            ],
+          )
+        else
+          _ActionButton(
+            label: '${model.sizeMB} MB',
+            icon: Icons.cloud_download_outlined,
+            loading: loading,
+            onTap: (isDownloading || isBusy) ? null : onDownload,
+            disabled: isDownloading || isBusy,
+          ),
+        ],
+      ),
+    );
+  }
+}
+
+class _ActionButton extends StatelessWidget {
+  final String? label;
+  final IconData? icon;
+  final bool loading;
+  final bool badge;
+  final bool expand;
+  final bool outlined;
+  final bool disabled;
+  final Color color;
+  final VoidCallback? onTap;
+
+  const _ActionButton({
+    this.label,
+    this.icon,
+    this.loading = false,
+    this.badge = false,
+    this.expand = false,
+    this.outlined = false,
+    this.disabled = false,
+    this.color = const Color(0xFF007AFF),
+    this.onTap,
+  });
+
+  static const _height = 32.0;
+  static const _fontSize = 13.0;
+
+  @override
+  Widget build(BuildContext context) {
+    final effectiveColor = disabled ? const Color(0xFF999999) : color;
+    final bgColor = outlined
+        ? Colors.transparent
+        : disabled
+            ? const Color(0xFFF2F2F7)
+            : const Color(0xFFE3F2FD);
+
+    final content = loading
+        ? SizedBox(width: 18, height: 18, child: CircularProgressIndicator(strokeWidth: 2, color: effectiveColor))
+        : Row(
+            mainAxisSize: MainAxisSize.min,
+            children: [
+              if (icon != null) ...[
+                Icon(icon, size: 15, color: effectiveColor),
+                if (label != null) const SizedBox(width: 4),
+              ],
+              if (label != null)
+                Text(label!, style: TextStyle(fontSize: _fontSize, fontWeight: FontWeight.w600, color: effectiveColor)),
+            ],
+          );
+
+    final decoration = outlined
+        ? BoxDecoration(
+            border: Border.all(color: effectiveColor),
+            borderRadius: BorderRadius.circular(10),
+          )
+        : BoxDecoration(
+            color: bgColor,
+            borderRadius: BorderRadius.circular(badge ? 12 : 8),
+          );
+
+    final widget = GestureDetector(
+      onTap: (loading || disabled) ?
null : onTap, + child: Container( + height: _height, + padding: EdgeInsets.symmetric(horizontal: badge ? 10 : 12), + decoration: decoration, + alignment: Alignment.center, + child: content, + ), + ); + + return expand ? SizedBox(width: double.infinity, child: widget) : widget; + } +} + +class _AboutRow extends StatelessWidget { + final IconData icon; + final String title; + final String subtitle; + + const _AboutRow({required this.icon, required this.title, required this.subtitle}); + + @override + Widget build(BuildContext context) { + return Padding( + padding: const EdgeInsets.symmetric(vertical: 8), + child: Row( + children: [ + Icon(icon, size: 20, color: const Color(0xFF007AFF)), + const SizedBox(width: 12), + Column( + crossAxisAlignment: CrossAxisAlignment.start, + children: [ + Text(title, style: const TextStyle(fontSize: 15, fontWeight: FontWeight.w600)), + Text(subtitle, style: const TextStyle(fontSize: 13, color: Color(0xFF666666))), + ], + ), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart new file mode 100644 index 0000000..dba7ea6 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart @@ -0,0 +1,40 @@ +import 'package:flutter/material.dart'; + +class RunButton extends StatelessWidget { + final String label; + final bool loading; + final VoidCallback? onPressed; + + const RunButton({ + super.key, + required this.label, + required this.loading, + required this.onPressed, + }); + + static const _height = 50.0; + + @override + Widget build(BuildContext context) { + return SizedBox( + width: double.infinity, + height: _height, + child: ElevatedButton( + onPressed: loading ? 
null : onPressed, + style: ElevatedButton.styleFrom( + backgroundColor: const Color(0xFF007AFF), + foregroundColor: Colors.white, + shape: RoundedRectangleBorder(borderRadius: BorderRadius.circular(10)), + disabledBackgroundColor: const Color(0xFF007AFF).withValues(alpha: 0.6), + ), + child: loading + ? const SizedBox( + height: 20, + width: 20, + child: CircularProgressIndicator(strokeWidth: 2, color: Colors.white), + ) + : Text(label, style: const TextStyle(fontSize: 17, fontWeight: FontWeight.w600)), + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart new file mode 100644 index 0000000..a2aec29 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart @@ -0,0 +1,34 @@ +import 'package:flutter/material.dart'; + +class StatBadge extends StatelessWidget { + final String label; + final String value; + + const StatBadge({super.key, required this.label, required this.value}); + + @override + Widget build(BuildContext context) { + return Container( + padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6), + decoration: BoxDecoration( + color: const Color(0xFFF2F2F7), + borderRadius: BorderRadius.circular(8), + ), + child: Column( + crossAxisAlignment: CrossAxisAlignment.start, + mainAxisSize: MainAxisSize.min, + children: [ + Text( + label.toUpperCase(), + style: const TextStyle(fontSize: 11, color: Color(0xFF666666), letterSpacing: 0.5), + ), + const SizedBox(height: 2), + Text( + value, + style: const TextStyle(fontSize: 14, fontWeight: FontWeight.w600, color: Colors.black), + ), + ], + ), + ); + } +} diff --git a/libraries/flutter_ondevice_ai/example/pubspec.lock b/libraries/flutter_ondevice_ai/example/pubspec.lock new file mode 100644 index 0000000..4e83fba --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/pubspec.lock @@ -0,0 +1,313 @@ +# Generated by pub +# See 
https://dart.dev/tools/pub/glossary#lockfile +packages: + async: + dependency: transitive + description: + name: async + sha256: "758e6d74e971c3e5aceb4110bfd6698efc7f501675bcfe0c775459a8140750eb" + url: "https://pub.dev" + source: hosted + version: "2.13.0" + boolean_selector: + dependency: transitive + description: + name: boolean_selector + sha256: "8aab1771e1243a5063b8b0ff68042d67334e3feab9e95b9490f9a6ebf73b42ea" + url: "https://pub.dev" + source: hosted + version: "2.1.2" + characters: + dependency: transitive + description: + name: characters + sha256: f71061c654a3380576a52b451dd5532377954cf9dbd272a78fc8479606670803 + url: "https://pub.dev" + source: hosted + version: "1.4.0" + clock: + dependency: transitive + description: + name: clock + sha256: fddb70d9b5277016c77a80201021d40a2247104d9f4aa7bab7157b7e3f05b84b + url: "https://pub.dev" + source: hosted + version: "1.1.2" + collection: + dependency: transitive + description: + name: collection + sha256: "2f5709ae4d3d59dd8f7cd309b4e023046b57d8a6c82130785d2b0e5868084e76" + url: "https://pub.dev" + source: hosted + version: "1.19.1" + fake_async: + dependency: transitive + description: + name: fake_async + sha256: "5368f224a74523e8d2e7399ea1638b37aecfca824a3cc4dfdf77bf1fa905ac44" + url: "https://pub.dev" + source: hosted + version: "1.3.3" + flutter: + dependency: "direct main" + description: flutter + source: sdk + version: "0.0.0" + flutter_lints: + dependency: "direct dev" + description: + name: flutter_lints + sha256: "5398f14efa795ffb7a33e9b6a08798b26a180edac4ad7db3f231e40f82ce11e1" + url: "https://pub.dev" + source: hosted + version: "5.0.0" + flutter_ondevice_ai: + dependency: "direct main" + description: + path: ".." 
+ relative: true + source: path + version: "0.1.0" + flutter_test: + dependency: "direct dev" + description: flutter + source: sdk + version: "0.0.0" + flutter_web_plugins: + dependency: transitive + description: flutter + source: sdk + version: "0.0.0" + leak_tracker: + dependency: transitive + description: + name: leak_tracker + sha256: "33e2e26bdd85a0112ec15400c8cbffea70d0f9c3407491f672a2fad47915e2de" + url: "https://pub.dev" + source: hosted + version: "11.0.2" + leak_tracker_flutter_testing: + dependency: transitive + description: + name: leak_tracker_flutter_testing + sha256: "1dbc140bb5a23c75ea9c4811222756104fbcd1a27173f0c34ca01e16bea473c1" + url: "https://pub.dev" + source: hosted + version: "3.0.10" + leak_tracker_testing: + dependency: transitive + description: + name: leak_tracker_testing + sha256: "8d5a2d49f4a66b49744b23b018848400d23e54caf9463f4eb20df3eb8acb2eb1" + url: "https://pub.dev" + source: hosted + version: "3.0.2" + lints: + dependency: transitive + description: + name: lints + sha256: c35bb79562d980e9a453fc715854e1ed39e24e7d0297a880ef54e17f9874a9d7 + url: "https://pub.dev" + source: hosted + version: "5.1.1" + matcher: + dependency: transitive + description: + name: matcher + sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2 + url: "https://pub.dev" + source: hosted + version: "0.12.17" + material_color_utilities: + dependency: transitive + description: + name: material_color_utilities + sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec + url: "https://pub.dev" + source: hosted + version: "0.11.1" + meta: + dependency: transitive + description: + name: meta + sha256: "23f08335362185a5ea2ad3a4e597f1375e78bce8a040df5c600c8d3552ef2394" + url: "https://pub.dev" + source: hosted + version: "1.17.0" + nested: + dependency: transitive + description: + name: nested + sha256: "03bac4c528c64c95c722ec99280375a6f2fc708eec17c7b3f07253b626cd2a20" + url: "https://pub.dev" + source: hosted + version: "1.0.0" + 
path: + dependency: transitive + description: + name: path + sha256: "75cca69d1490965be98c73ceaea117e8a04dd21217b37b292c9ddbec0d955bc5" + url: "https://pub.dev" + source: hosted + version: "1.9.1" + plugin_platform_interface: + dependency: transitive + description: + name: plugin_platform_interface + sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02" + url: "https://pub.dev" + source: hosted + version: "2.1.8" + provider: + dependency: "direct main" + description: + name: provider + sha256: "4e82183fa20e5ca25703ead7e05de9e4cceed1fbd1eadc1ac3cb6f565a09f272" + url: "https://pub.dev" + source: hosted + version: "6.1.5+1" + sky_engine: + dependency: transitive + description: flutter + source: sdk + version: "0.0.0" + source_span: + dependency: transitive + description: + name: source_span + sha256: "56a02f1f4cd1a2d96303c0144c93bd6d909eea6bee6bf5a0e0b685edbd4c47ab" + url: "https://pub.dev" + source: hosted + version: "1.10.2" + stack_trace: + dependency: transitive + description: + name: stack_trace + sha256: "8b27215b45d22309b5cddda1aa2b19bdfec9df0e765f2de506401c071d38d1b1" + url: "https://pub.dev" + source: hosted + version: "1.12.1" + stream_channel: + dependency: transitive + description: + name: stream_channel + sha256: "969e04c80b8bcdf826f8f16579c7b14d780458bd97f56d107d3950fdbeef059d" + url: "https://pub.dev" + source: hosted + version: "2.1.4" + string_scanner: + dependency: transitive + description: + name: string_scanner + sha256: "921cd31725b72fe181906c6a94d987c78e3b98c2e205b397ea399d4054872b43" + url: "https://pub.dev" + source: hosted + version: "1.4.1" + term_glyph: + dependency: transitive + description: + name: term_glyph + sha256: "7f554798625ea768a7518313e58f83891c7f5024f88e46e7182a4558850a4b8e" + url: "https://pub.dev" + source: hosted + version: "1.2.2" + test_api: + dependency: transitive + description: + name: test_api + sha256: ab2726c1a94d3176a45960b6234466ec367179b87dd74f1611adb1f3b5fb9d55 + url: "https://pub.dev" + 
source: hosted + version: "0.7.7" + url_launcher: + dependency: "direct main" + description: + name: url_launcher + sha256: f6a7e5c4835bb4e3026a04793a4199ca2d14c739ec378fdfe23fc8075d0439f8 + url: "https://pub.dev" + source: hosted + version: "6.3.2" + url_launcher_android: + dependency: transitive + description: + name: url_launcher_android + sha256: "767344bf3063897b5cf0db830e94f904528e6dd50a6dfaf839f0abf509009611" + url: "https://pub.dev" + source: hosted + version: "6.3.28" + url_launcher_ios: + dependency: transitive + description: + name: url_launcher_ios + sha256: "580fe5dfb51671ae38191d316e027f6b76272b026370708c2d898799750a02b0" + url: "https://pub.dev" + source: hosted + version: "6.4.1" + url_launcher_linux: + dependency: transitive + description: + name: url_launcher_linux + sha256: d5e14138b3bc193a0f63c10a53c94b91d399df0512b1f29b94a043db7482384a + url: "https://pub.dev" + source: hosted + version: "3.2.2" + url_launcher_macos: + dependency: transitive + description: + name: url_launcher_macos + sha256: "368adf46f71ad3c21b8f06614adb38346f193f3a59ba8fe9a2fd74133070ba18" + url: "https://pub.dev" + source: hosted + version: "3.2.5" + url_launcher_platform_interface: + dependency: transitive + description: + name: url_launcher_platform_interface + sha256: "552f8a1e663569be95a8190206a38187b531910283c3e982193e4f2733f01029" + url: "https://pub.dev" + source: hosted + version: "2.3.2" + url_launcher_web: + dependency: transitive + description: + name: url_launcher_web + sha256: d0412fcf4c6b31ecfdb7762359b7206ffba3bbffd396c6d9f9c4616ece476c1f + url: "https://pub.dev" + source: hosted + version: "2.4.2" + url_launcher_windows: + dependency: transitive + description: + name: url_launcher_windows + sha256: "712c70ab1b99744ff066053cbe3e80c73332b38d46e5e945c98689b2e66fc15f" + url: "https://pub.dev" + source: hosted + version: "3.1.5" + vector_math: + dependency: transitive + description: + name: vector_math + sha256: 
d530bd74fea330e6e364cda7a85019c434070188383e1cd8d9777ee586914c5b + url: "https://pub.dev" + source: hosted + version: "2.2.0" + vm_service: + dependency: transitive + description: + name: vm_service + sha256: "45caa6c5917fa127b5dbcfbd1fa60b14e583afdc08bfc96dda38886ca252eb60" + url: "https://pub.dev" + source: hosted + version: "15.0.2" + web: + dependency: transitive + description: + name: web + sha256: "868d88a33d8a87b18ffc05f9f030ba328ffefba92d6c127917a2ba740f9cfe4a" + url: "https://pub.dev" + source: hosted + version: "1.1.1" +sdks: + dart: ">=3.10.0 <4.0.0" + flutter: ">=3.38.0" diff --git a/libraries/flutter_ondevice_ai/example/pubspec.yaml b/libraries/flutter_ondevice_ai/example/pubspec.yaml new file mode 100644 index 0000000..aeabbf0 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/pubspec.yaml @@ -0,0 +1,23 @@ +name: flutter_ondevice_ai_example +description: Example app for flutter_ondevice_ai plugin. + +publish_to: 'none' + +environment: + sdk: ">=3.3.0 <4.0.0" + +dependencies: + flutter: + sdk: flutter + flutter_ondevice_ai: + path: ../ + provider: ^6.1.0 + url_launcher: ^6.2.0 + +dev_dependencies: + flutter_test: + sdk: flutter + flutter_lints: ^5.0.0 + +flutter: + uses-material-design: true diff --git a/libraries/flutter_ondevice_ai/example/web/favicon.png b/libraries/flutter_ondevice_ai/example/web/favicon.png new file mode 100644 index 0000000..8aaa46a Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/web/favicon.png differ diff --git a/libraries/flutter_ondevice_ai/example/web/icons/Icon-192.png b/libraries/flutter_ondevice_ai/example/web/icons/Icon-192.png new file mode 100644 index 0000000..b749bfe Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/web/icons/Icon-192.png differ diff --git a/libraries/flutter_ondevice_ai/example/web/icons/Icon-512.png b/libraries/flutter_ondevice_ai/example/web/icons/Icon-512.png new file mode 100644 index 0000000..88cfd48 Binary files /dev/null and 
b/libraries/flutter_ondevice_ai/example/web/icons/Icon-512.png differ diff --git a/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-192.png b/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-192.png new file mode 100644 index 0000000..eb9b4d7 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-192.png differ diff --git a/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-512.png b/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-512.png new file mode 100644 index 0000000..d69c566 Binary files /dev/null and b/libraries/flutter_ondevice_ai/example/web/icons/Icon-maskable-512.png differ diff --git a/libraries/flutter_ondevice_ai/example/web/index.html b/libraries/flutter_ondevice_ai/example/web/index.html new file mode 100644 index 0000000..9936fe2 --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/web/index.html @@ -0,0 +1,38 @@ + + + + + + + + + + + + + + + + + + + + flutter_ondevice_ai_example + + + + + + diff --git a/libraries/flutter_ondevice_ai/example/web/manifest.json b/libraries/flutter_ondevice_ai/example/web/manifest.json new file mode 100644 index 0000000..b2b5ceb --- /dev/null +++ b/libraries/flutter_ondevice_ai/example/web/manifest.json @@ -0,0 +1,35 @@ +{ + "name": "flutter_ondevice_ai_example", + "short_name": "flutter_ondevice_ai_example", + "start_url": ".", + "display": "standalone", + "background_color": "#0175C2", + "theme_color": "#0175C2", + "description": "A new Flutter project.", + "orientation": "portrait-primary", + "prefer_related_applications": false, + "icons": [ + { + "src": "icons/Icon-192.png", + "sizes": "192x192", + "type": "image/png" + }, + { + "src": "icons/Icon-512.png", + "sizes": "512x512", + "type": "image/png" + }, + { + "src": "icons/Icon-maskable-192.png", + "sizes": "192x192", + "type": "image/png", + "purpose": "maskable" + }, + { + "src": "icons/Icon-maskable-512.png", + "sizes": "512x512", + "type": "image/png", + "purpose": 
"maskable" + } + ] +} diff --git a/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiHelper.swift b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiHelper.swift new file mode 100644 index 0000000..761f837 --- /dev/null +++ b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiHelper.swift @@ -0,0 +1,101 @@ +import Locanara + +/// Decodes Flutter options dictionaries into chain constructor parameters +@available(iOS 15.0, macOS 14.0, *) +enum FlutterOndeviceAiHelper { + + // MARK: - Summarize + + static func bulletCount(from options: [String: Any]?) -> Int { + guard let opts = options else { return 1 } + if let outputType = opts["outputType"] as? String { + switch outputType { + case "TWO_BULLETS": return 2 + case "THREE_BULLETS": return 3 + default: return 1 + } + } + return 1 + } + + static func inputType(from options: [String: Any]?) -> String { + guard let opts = options, + let inputType = opts["inputType"] as? String, + inputType == "CONVERSATION" else { + return "text" + } + return "conversation" + } + + // MARK: - Classify + + static func classifyOptions(from options: [String: Any]?) -> (categories: [String], maxResults: Int) { + guard let opts = options else { + return (["positive", "negative", "neutral"], 3) + } + let categories = (opts["categories"] as? [String]) ?? ["positive", "negative", "neutral"] + let maxResults = (opts["maxResults"] as? Int) ?? 3 + return (categories, maxResults) + } + + // MARK: - Extract + + static func entityTypes(from options: [String: Any]?) -> [String] { + (options?["entityTypes"] as? [String]) ?? ["person", "location", "date", "organization"] + } + + // MARK: - Chat + + static func chatOptions(from options: [String: Any]?) -> (systemPrompt: String, memory: (any Memory)?) { + let systemPrompt = (options?["systemPrompt"] as? String) ?? "You are a friendly, helpful assistant." + + var memory: (any Memory)? = nil + if let historyArray = options?["history"] as? 
[[String: String]], !historyArray.isEmpty { + memory = PrefilledMemory(history: historyArray) + } + + return (systemPrompt, memory) + } + + // MARK: - Translate + + static func translateOptions(from options: [String: Any]?) -> (sourceLanguage: String, targetLanguage: String) { + let source = (options?["sourceLanguage"] as? String) ?? "en" + let target = (options?["targetLanguage"] as? String) ?? "en" + return (source, target) + } + + // MARK: - Rewrite + + static func rewriteStyle(from options: [String: Any]?) -> RewriteOutputType { + guard let opts = options, + let outputTypeStr = opts["outputType"] as? String, + let outputType = RewriteOutputType(rawValue: outputTypeStr) else { + return .rephrase + } + return outputType + } +} + +// MARK: - Prefilled Memory + +/// Memory adapter that provides pre-filled chat history from Flutter. +@available(iOS 15.0, macOS 14.0, *) +final class PrefilledMemory: Memory, @unchecked Sendable { + private let entries: [MemoryEntry] + + init(history: [[String: String]]) { + self.entries = history.compactMap { msg in + guard let role = msg["role"], let content = msg["content"] else { return nil } + return MemoryEntry(role: role, content: content) + } + } + + func load(for input: ChainInput) async -> [MemoryEntry] { entries } + func save(input: ChainInput, output: ChainOutput) async { } + func clear() async { } + + var estimatedTokenCount: Int { + entries.reduce(0) { $0 + ($1.content.count / 4) } + } +} diff --git a/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift new file mode 100644 index 0000000..de5af8f --- /dev/null +++ b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift @@ -0,0 +1,419 @@ +import Flutter +import Locanara + +private let TAG = "[FlutterOndeviceAi]" + +public class FlutterOndeviceAiPlugin: NSObject, FlutterPlugin, FlutterStreamHandler { + private let client = LocanaraClient.shared + private var 
channel: FlutterMethodChannel? + private var chatStreamSink: FlutterEventSink? + private var downloadProgressSink: FlutterEventSink? + + public static func register(with registrar: FlutterPluginRegistrar) { + let channel = FlutterMethodChannel( + name: "flutter_ondevice_ai", + binaryMessenger: registrar.messenger() + ) + let instance = FlutterOndeviceAiPlugin() + registrar.addMethodCallDelegate(instance, channel: channel) + instance.channel = channel + + let chatStreamChannel = FlutterEventChannel( + name: "flutter_ondevice_ai/chat_stream", + binaryMessenger: registrar.messenger() + ) + chatStreamChannel.setStreamHandler(instance) + + let downloadProgressChannel = FlutterEventChannel( + name: "flutter_ondevice_ai/model_download_progress", + binaryMessenger: registrar.messenger() + ) + let downloadHandler = DownloadProgressHandler() + downloadProgressChannel.setStreamHandler(downloadHandler) + instance.downloadProgressHandler = downloadHandler + } + + private var downloadProgressHandler: DownloadProgressHandler? + + // MARK: - FlutterStreamHandler (chat stream) + + public func onListen(withArguments arguments: Any?, eventSink events: @escaping FlutterEventSink) -> FlutterError? { + chatStreamSink = events + return nil + } + + public func onCancel(withArguments arguments: Any?) -> FlutterError? { + chatStreamSink = nil + return nil + } + + // MARK: - Method Call Handler + + public func handle(_ call: FlutterMethodCall, result: @escaping FlutterResult) { + Task { @MainActor in + await handleAsync(call, result: result) + } + } + + @MainActor + private func handleAsync(_ call: FlutterMethodCall, result: @escaping FlutterResult) async { + let args = call.arguments as? 
[String: Any] + + switch call.method { + case "initialize": + await handleInitialize(result: result) + case "getDeviceCapability": + await handleGetDeviceCapability(result: result) + case "summarize": + await handleSummarize(args, result: result) + case "classify": + await handleClassify(args, result: result) + case "extract": + await handleExtract(args, result: result) + case "chat": + await handleChat(args, result: result) + case "chatStream": + await handleChatStream(args, result: result) + case "translate": + await handleTranslate(args, result: result) + case "rewrite": + await handleRewrite(args, result: result) + case "proofread": + await handleProofread(args, result: result) + case "getAvailableModels": + handleGetAvailableModels(result: result) + case "getDownloadedModels": + handleGetDownloadedModels(result: result) + case "getLoadedModel": + handleGetLoadedModel(result: result) + case "getCurrentEngine": + handleGetCurrentEngine(result: result) + case "downloadModel": + await handleDownloadModel(args, result: result) + case "loadModel": + await handleLoadModel(args, result: result) + case "deleteModel": + handleDeleteModel(args, result: result) + case "switchToDeviceAI": + await handleSwitchToDeviceAI(result: result) + case "getPromptApiStatus": + result("not_available") + case "downloadPromptApiModel": + result(false) + default: + result(FlutterMethodNotImplemented) + } + } + + // MARK: - Core + + private func handleInitialize(result: @escaping FlutterResult) async { + NSLog("\(TAG) initialize() called") + do { + try await client.initialize() + let engine = self.client.getCurrentEngine() + NSLog("\(TAG) initialize() success — engine: \(engine.rawValue)") + result(["success": true]) + } catch { + NSLog("\(TAG) initialize() FAILED: \(error)") + result(FlutterError(code: "ERR_INITIALIZE", message: error.localizedDescription, details: nil)) + } + } + + private func handleGetDeviceCapability(result: @escaping FlutterResult) async { + do { + let capability = 
try client.getDeviceCapability() + let deviceInfo = try? client.getDeviceInfoIOS() + result(FlutterOndeviceAiSerialization.deviceCapability( + capability, + isModelReady: client.isModelReady(), + supportsAppleIntelligence: deviceInfo?.supportsAppleIntelligence ?? false + )) + } catch { + result(FlutterError(code: "ERR_DEVICE_CAPABILITY", message: error.localizedDescription, details: nil)) + } + } + + // MARK: - AI Features + + private func handleSummarize(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + let options = args?["options"] as? [String: Any] + NSLog("\(TAG) summarize() input length: \(text.count)") + + do { + let bulletCount = FlutterOndeviceAiHelper.bulletCount(from: options) + let inputType = FlutterOndeviceAiHelper.inputType(from: options) + let r = try await SummarizeChain(bulletCount: bulletCount, inputType: inputType).run(text) + NSLog("\(TAG) summarize() done — summary length: \(r.summaryLength)") + result(FlutterOndeviceAiSerialization.summarize(r)) + } catch { + result(FlutterError(code: "ERR_SUMMARIZE", message: error.localizedDescription, details: nil)) + } + } + + private func handleClassify(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + let options = args?["options"] as? 
[String: Any] + NSLog("\(TAG) classify() input length: \(text.count)") + + do { + let (categories, maxResults) = FlutterOndeviceAiHelper.classifyOptions(from: options) + let r = try await ClassifyChain(categories: categories, maxResults: maxResults).run(text) + NSLog("\(TAG) classify() done — top: \(r.topClassification.label)") + result(FlutterOndeviceAiSerialization.classify(r)) + } catch { + result(FlutterError(code: "ERR_CLASSIFY", message: error.localizedDescription, details: nil)) + } + } + + private func handleExtract(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + let options = args?["options"] as? [String: Any] + NSLog("\(TAG) extract() input length: \(text.count)") + + do { + let entityTypes = FlutterOndeviceAiHelper.entityTypes(from: options) + let r = try await ExtractChain(entityTypes: entityTypes).run(text) + NSLog("\(TAG) extract() done — \(r.entities.count) entities") + result(FlutterOndeviceAiSerialization.extract(r)) + } catch { + result(FlutterError(code: "ERR_EXTRACT", message: error.localizedDescription, details: nil)) + } + } + + private func handleChat(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let message = args?["message"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "message is required", details: nil)) + return + } + let options = args?["options"] as? 
[String: Any] + NSLog("\(TAG) chat() message: \(message.prefix(100))") + + do { + let (systemPrompt, memory) = FlutterOndeviceAiHelper.chatOptions(from: options) + let r = try await ChatChain(memory: memory, systemPrompt: systemPrompt).run(message) + NSLog("\(TAG) chat() done — response length: \(r.message.count)") + result(FlutterOndeviceAiSerialization.chat(r)) + } catch { + result(FlutterError(code: "ERR_CHAT", message: error.localizedDescription, details: nil)) + } + } + + private func handleChatStream(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let message = args?["message"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "message is required", details: nil)) + return + } + let options = args?["options"] as? [String: Any] + NSLog("\(TAG) chatStream() message: \(message.prefix(100))") + + do { + let (systemPrompt, memory) = FlutterOndeviceAiHelper.chatOptions(from: options) + let chain = ChatChain(memory: memory, systemPrompt: systemPrompt) + var accumulated = "" + + for try await chunk in chain.streamRun(message) { + accumulated += chunk + chatStreamSink?([ + "delta": chunk, + "accumulated": accumulated, + "isFinal": false, + "conversationId": NSNull() + ]) + } + + chatStreamSink?([ + "delta": "", + "accumulated": accumulated, + "isFinal": true, + "conversationId": NSNull() + ]) + + NSLog("\(TAG) chatStream() done — total length: \(accumulated.count)") + result([ + "message": accumulated, + "conversationId": NSNull(), + "canContinue": true + ]) + } catch { + result(FlutterError(code: "ERR_CHAT_STREAM", message: error.localizedDescription, details: nil)) + } + } + + private func handleTranslate(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + let options = args?["options"] as? 
[String: Any] + NSLog("\(TAG) translate() input length: \(text.count)") + + do { + let (source, target) = FlutterOndeviceAiHelper.translateOptions(from: options) + let r = try await TranslateChain(sourceLanguage: source, targetLanguage: target).run(text) + NSLog("\(TAG) translate() done — \(r.sourceLanguage) → \(r.targetLanguage)") + result(FlutterOndeviceAiSerialization.translate(r)) + } catch { + result(FlutterError(code: "ERR_TRANSLATE", message: error.localizedDescription, details: nil)) + } + } + + private func handleRewrite(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + let options = args?["options"] as? [String: Any] + NSLog("\(TAG) rewrite() input length: \(text.count)") + + do { + let style = FlutterOndeviceAiHelper.rewriteStyle(from: options) + let r = try await RewriteChain(style: style).run(text) + NSLog("\(TAG) rewrite() done") + result(FlutterOndeviceAiSerialization.rewrite(r)) + } catch { + result(FlutterError(code: "ERR_REWRITE", message: error.localizedDescription, details: nil)) + } + } + + private func handleProofread(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let text = args?["text"] as? 
String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "text is required", details: nil)) + return + } + NSLog("\(TAG) proofread() input length: \(text.count)") + + do { + let r = try await ProofreadChain().run(text) + NSLog("\(TAG) proofread() done — \(r.corrections.count) corrections") + result(FlutterOndeviceAiSerialization.proofread(r)) + } catch { + result(FlutterError(code: "ERR_PROOFREAD", message: error.localizedDescription, details: nil)) + } + } + + // MARK: - Model Management + + private func handleGetAvailableModels(result: @escaping FlutterResult) { + let models = client.getAvailableModels() + NSLog("\(TAG) getAvailableModels() → \(models.count) models") + result(models.map { FlutterOndeviceAiSerialization.modelInfo($0) }) + } + + private func handleGetDownloadedModels(result: @escaping FlutterResult) { + let ids = client.getDownloadedModels() + NSLog("\(TAG) getDownloadedModels() → \(ids)") + result(ids) + } + + private func handleGetLoadedModel(result: @escaping FlutterResult) { + let id = client.getLoadedModel() + NSLog("\(TAG) getLoadedModel() → \(id ?? "nil")") + result(id) + } + + private func handleGetCurrentEngine(result: @escaping FlutterResult) { + let engine = client.getCurrentEngine().rawValue + NSLog("\(TAG) getCurrentEngine() → \(engine)") + result(engine) + } + + private func handleDownloadModel(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let modelId = args?["modelId"] as? 
String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "modelId is required", details: nil)) + return + } + NSLog("\(TAG) downloadModel(\(modelId)) starting...") + + do { + let progressStream = try await client.downloadModelWithProgress(modelId) + for await progress in progressStream { + NSLog("\(TAG) downloadModel(\(modelId)) progress: \(Int(progress.progress * 100))%") + downloadProgressHandler?.sink?([ + "modelId": progress.modelId, + "bytesDownloaded": progress.bytesDownloaded, + "totalBytes": progress.totalBytes, + "progress": progress.progress, + "state": progress.state.rawValue + ]) + } + NSLog("\(TAG) downloadModel(\(modelId)) completed") + result(true) + } catch { + NSLog("\(TAG) downloadModel(\(modelId)) FAILED: \(error)") + result(FlutterError(code: "ERR_DOWNLOAD_MODEL", message: error.localizedDescription, details: nil)) + } + } + + private func handleLoadModel(_ args: [String: Any]?, result: @escaping FlutterResult) async { + guard let modelId = args?["modelId"] as? String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "modelId is required", details: nil)) + return + } + NSLog("\(TAG) loadModel(\(modelId)) starting...") + + do { + try await client.loadModel(modelId) + NSLog("\(TAG) loadModel(\(modelId)) success") + result(nil) + } catch { + NSLog("\(TAG) loadModel(\(modelId)) FAILED: \(error)") + result(FlutterError(code: "ERR_LOAD_MODEL", message: error.localizedDescription, details: nil)) + } + } + + private func handleDeleteModel(_ args: [String: Any]?, result: @escaping FlutterResult) { + guard let modelId = args?["modelId"] as? 
String else { + result(FlutterError(code: "ERR_INVALID_ARGS", message: "modelId is required", details: nil)) + return + } + NSLog("\(TAG) deleteModel(\(modelId)) called") + + do { + try client.deleteModel(modelId) + NSLog("\(TAG) deleteModel(\(modelId)) success") + result(nil) + } catch { + NSLog("\(TAG) deleteModel(\(modelId)) FAILED: \(error)") + result(FlutterError(code: "ERR_DELETE_MODEL", message: error.localizedDescription, details: nil)) + } + } + + private func handleSwitchToDeviceAI(result: @escaping FlutterResult) async { + NSLog("\(TAG) switchToDeviceAI() called") + do { + try await client.switchToDeviceAI() + NSLog("\(TAG) switchToDeviceAI() success") + result(nil) + } catch { + NSLog("\(TAG) switchToDeviceAI() FAILED: \(error)") + result(FlutterError(code: "ERR_SWITCH_ENGINE", message: error.localizedDescription, details: nil)) + } + } +} + +// MARK: - Download Progress Stream Handler + +private class DownloadProgressHandler: NSObject, FlutterStreamHandler { + var sink: FlutterEventSink? + + func onListen(withArguments arguments: Any?, eventSink events: @escaping FlutterEventSink) -> FlutterError? { + sink = events + return nil + } + + func onCancel(withArguments arguments: Any?) -> FlutterError? 
{ + sink = nil + return nil + } +} diff --git a/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiSerialization.swift b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiSerialization.swift new file mode 100644 index 0000000..d59383d --- /dev/null +++ b/libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiSerialization.swift @@ -0,0 +1,151 @@ +import Locanara + +/// Serializes Locanara SDK result types into Flutter-compatible dictionaries +enum FlutterOndeviceAiSerialization { + + // MARK: - Device Capability + + static func deviceCapability( + _ capability: DeviceCapability, + isModelReady: Bool, + supportsAppleIntelligence: Bool + ) -> [String: Any] { + let availableSet = Set(capability.availableFeatures) + var features: [String: Bool] = [:] + for feature in FeatureType.allCases { + features["\(feature)"] = availableSet.contains(feature) + } + + return [ + "isSupported": capability.supportsOnDeviceAI, + "isModelReady": isModelReady, + "supportsAppleIntelligence": supportsAppleIntelligence, + "platform": "IOS", + "features": features, + "availableMemoryMB": capability.availableMemoryMB ?? 0, + "isLowPowerMode": capability.isLowPowerMode + ] + } + + // MARK: - Result Serializers + + static func summarize(_ r: SummarizeResult) -> [String: Any] { + [ + "summary": r.summary, + "originalLength": r.originalLength, + "summaryLength": r.summaryLength, + "confidence": r.confidence ?? 0.0 + ] + } + + static func classify(_ r: ClassifyResult) -> [String: Any] { + let classifications = r.classifications.map { c in + [ + "label": c.label, + "score": c.score, + "metadata": c.metadata ?? 
"" + ] as [String: Any] + } + return [ + "classifications": classifications, + "topClassification": [ + "label": r.topClassification.label, + "score": r.topClassification.score + ] + ] + } + + static func extract(_ r: ExtractResult) -> [String: Any] { + let entities = r.entities.map { e in + [ + "type": e.type, + "value": e.value, + "confidence": e.confidence, + "startPos": e.startPos ?? 0, + "endPos": e.endPos ?? 0 + ] as [String: Any] + } + + var response: [String: Any] = ["entities": entities] + + if let keyValuePairs = r.keyValuePairs { + response["keyValuePairs"] = keyValuePairs.map { p in + [ + "key": p.key, + "value": p.value, + "confidence": p.confidence ?? 0.0 + ] as [String: Any] + } + } + + return response + } + + static func chat(_ r: ChatResult) -> [String: Any] { + var response: [String: Any] = [ + "message": r.message, + "canContinue": r.canContinue + ] + if let conversationId = r.conversationId { + response["conversationId"] = conversationId + } + if let suggestedPrompts = r.suggestedPrompts { + response["suggestedPrompts"] = suggestedPrompts + } + return response + } + + static func translate(_ r: TranslateResult) -> [String: Any] { + [ + "translatedText": r.translatedText, + "sourceLanguage": r.sourceLanguage, + "targetLanguage": r.targetLanguage, + "confidence": r.confidence ?? 0.0 + ] + } + + static func rewrite(_ r: RewriteResult) -> [String: Any] { + var response: [String: Any] = [ + "rewrittenText": r.rewrittenText, + "confidence": r.confidence ?? 0.0 + ] + if let style = r.style { + response["style"] = style.rawValue + } + if let alternatives = r.alternatives { + response["alternatives"] = alternatives + } + return response + } + + static func proofread(_ r: ProofreadResult) -> [String: Any] { + let corrections = r.corrections.map { c in + [ + "original": c.original, + "corrected": c.corrected, + "type": c.type ?? "", + "confidence": c.confidence ?? 0.0, + "startPos": c.startPos ?? 0, + "endPos": c.endPos ?? 
0 + ] as [String: Any] + } + return [ + "correctedText": r.correctedText, + "corrections": corrections, + "hasCorrections": r.hasCorrections + ] + } + + static func modelInfo(_ m: DownloadableModelInfo) -> [String: Any] { + [ + "modelId": m.modelId, + "name": m.name, + "version": m.version, + "sizeMB": m.sizeMB, + "quantization": m.quantization.rawValue, + "contextLength": m.contextLength, + "minMemoryMB": m.minMemoryMB, + "isMultimodal": m.isMultimodal + ] + } +} diff --git a/libraries/flutter_ondevice_ai/ios/flutter_ondevice_ai.podspec b/libraries/flutter_ondevice_ai/ios/flutter_ondevice_ai.podspec new file mode 100644 index 0000000..9deabb6 --- /dev/null +++ b/libraries/flutter_ondevice_ai/ios/flutter_ondevice_ai.podspec @@ -0,0 +1,20 @@ +Pod::Spec.new do |s| + s.name = 'flutter_ondevice_ai' + s.version = '0.1.0' + s.summary = 'Flutter plugin for on-device AI using Locanara SDK' + s.description = 'Flutter plugin for on-device AI supporting Apple Intelligence, Gemini Nano, and Chrome Built-in AI.' + s.homepage = 'https://github.com/hyodotdev/locanara' + s.license = { :file => '../LICENSE' } + s.author = { 'hyodotdev' => 'hyochan.dev@gmail.com' } + s.source = { :path => '.' 
} + s.source_files = 'Classes/**/*.swift' + s.dependency 'Flutter' + s.dependency 'Locanara' + s.ios.deployment_target = '17.0' + s.swift_version = '5.9' + s.static_framework = true + s.pod_target_xcconfig = { + 'DEFINES_MODULE' => 'YES', + 'SWIFT_COMPILATION_MODE' => 'wholemodule' + } +end diff --git a/libraries/flutter_ondevice_ai/lib/flutter_ondevice_ai.dart b/libraries/flutter_ondevice_ai/lib/flutter_ondevice_ai.dart new file mode 100644 index 0000000..2a6b144 --- /dev/null +++ b/libraries/flutter_ondevice_ai/lib/flutter_ondevice_ai.dart @@ -0,0 +1,3 @@ +export 'src/flutter_ondevice_ai_plugin.dart'; +export 'src/types.dart'; +export 'src/errors.dart'; diff --git a/libraries/flutter_ondevice_ai/lib/src/errors.dart b/libraries/flutter_ondevice_ai/lib/src/errors.dart new file mode 100644 index 0000000..6433f68 --- /dev/null +++ b/libraries/flutter_ondevice_ai/lib/src/errors.dart @@ -0,0 +1,24 @@ +import 'package:flutter/services.dart'; + +class OndeviceAiException implements Exception { + final String code; + final String message; + final dynamic details; + + const OndeviceAiException({ + required this.code, + required this.message, + this.details, + }); + + factory OndeviceAiException.fromPlatformException(PlatformException e) { + return OndeviceAiException( + code: e.code, + message: e.message ?? 'Unknown error', + details: e.details, + ); + } + + @override + String toString() => 'OndeviceAiException($code): $message'; +} diff --git a/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_plugin.dart b/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_plugin.dart new file mode 100644 index 0000000..2c5eeac --- /dev/null +++ b/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_plugin.dart @@ -0,0 +1,334 @@ +import 'dart:async'; + +import 'package:flutter/services.dart'; +import 'package:flutter/foundation.dart'; + +import 'types.dart'; +import 'errors.dart'; + +class FlutterOndeviceAi { + static FlutterOndeviceAi? 
_instance;
+
+  static FlutterOndeviceAi get instance {
+    _instance ??= FlutterOndeviceAi._();
+    return _instance!;
+  }
+
+  final MethodChannel _channel = const MethodChannel('flutter_ondevice_ai');
+  final EventChannel _chatStreamChannel = const EventChannel(
+    'flutter_ondevice_ai/chat_stream',
+  );
+  final EventChannel _downloadProgressChannel = const EventChannel(
+    'flutter_ondevice_ai/model_download_progress',
+  );
+
+  FlutterOndeviceAi._();
+
+  @visibleForTesting
+  FlutterOndeviceAi.forTesting();
+
+  // ============================================================================
+  // Core API
+  // ============================================================================
+
+  Future<InitializeResult> initialize() async {
+    try {
+      final result = await _channel.invokeMethod('initialize');
+      return InitializeResult.fromJson(Map<String, dynamic>.from(result ?? {}));
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<DeviceCapability> getDeviceCapability() async {
+    try {
+      final result = await _channel.invokeMethod('getDeviceCapability');
+      return DeviceCapability.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  // ============================================================================
+  // AI Features
+  // ============================================================================
+
+  Future<SummarizeResult> summarize(
+    String text, {
+    SummarizeOptions? options,
+  }) async {
+    try {
+      final args = <String, dynamic>{'text': text};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('summarize', args);
+      return SummarizeResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<ClassifyResult> classify(
+    String text, {
+    ClassifyOptions?
 options,
+  }) async {
+    try {
+      final args = <String, dynamic>{'text': text};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('classify', args);
+      return ClassifyResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<ExtractResult> extract(
+    String text, {
+    ExtractOptions? options,
+  }) async {
+    try {
+      final args = <String, dynamic>{'text': text};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('extract', args);
+      return ExtractResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<ChatResult> chat(
+    String message, {
+    ChatOptions? options,
+  }) async {
+    try {
+      final args = <String, dynamic>{'message': message};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('chat', args);
+      return ChatResult.fromJson(Map<String, dynamic>.from(result ?? {}));
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<ChatResult> chatStream(
+    String message, {
+    ChatStreamOptions? options,
+  }) async {
+    StreamSubscription? subscription;
+    try {
+      if (options?.onChunk != null) {
+        subscription = _chatStreamChannel
+            .receiveBroadcastStream()
+            .listen((event) {
+          final chunk = ChatStreamChunk.fromJson(
+            Map<String, dynamic>.from(event as Map),
+          );
+          options!.onChunk!(chunk);
+        });
+      }
+
+      final args = <String, dynamic>{'message': message};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('chatStream', args);
+      return ChatResult.fromJson(Map<String, dynamic>.from(result ??
{}));
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    } finally {
+      await subscription?.cancel();
+    }
+  }
+
+  Future<TranslateResult> translate(
+    String text, {
+    required TranslateOptions options,
+  }) async {
+    try {
+      final args = <String, dynamic>{
+        'text': text,
+        'options': options.toJson(),
+      };
+      final result = await _channel.invokeMethod('translate', args);
+      return TranslateResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<RewriteResult> rewrite(
+    String text, {
+    required RewriteOptions options,
+  }) async {
+    try {
+      final args = <String, dynamic>{
+        'text': text,
+        'options': options.toJson(),
+      };
+      final result = await _channel.invokeMethod('rewrite', args);
+      return RewriteResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<ProofreadResult> proofread(
+    String text, {
+    ProofreadOptions? options,
+  }) async {
+    try {
+      final args = <String, dynamic>{'text': text};
+      if (options != null) args['options'] = options.toJson();
+      final result = await _channel.invokeMethod('proofread', args);
+      return ProofreadResult.fromJson(
+        Map<String, dynamic>.from(result ?? {}),
+      );
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  // ============================================================================
+  // Model Management
+  // ============================================================================
+
+  Future<List<DownloadableModelInfo>> getAvailableModels() async {
+    try {
+      final result = await _channel.invokeMethod('getAvailableModels');
+      return (result ??
[])
+          .map(
+            (m) => DownloadableModelInfo.fromJson(
+              Map<String, dynamic>.from(m as Map),
+            ),
+          )
+          .toList();
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<List<String>> getDownloadedModels() async {
+    try {
+      final result = await _channel.invokeMethod('getDownloadedModels');
+      return (result ?? []).map((id) => id.toString()).toList();
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<String?> getLoadedModel() async {
+    try {
+      return await _channel.invokeMethod('getLoadedModel');
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<InferenceEngine> getCurrentEngine() async {
+    try {
+      final result = await _channel.invokeMethod('getCurrentEngine');
+      return InferenceEngine.fromJson(result ?? 'none');
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<bool> downloadModel(
+    String modelId, {
+    void Function(ModelDownloadProgress)? onProgress,
+  }) async {
+    StreamSubscription? subscription;
+    try {
+      if (onProgress != null) {
+        subscription = _downloadProgressChannel
+            .receiveBroadcastStream()
+            .listen((event) {
+          final progress = ModelDownloadProgress.fromJson(
+            Map<String, dynamic>.from(event as Map),
+          );
+          onProgress(progress);
+        });
+      }
+
+      final result = await _channel.invokeMethod(
+        'downloadModel',
+        {'modelId': modelId},
+      );
+      return result ??
false;
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    } finally {
+      await subscription?.cancel();
+    }
+  }
+
+  Future<void> loadModel(String modelId) async {
+    try {
+      await _channel.invokeMethod('loadModel', {'modelId': modelId});
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<void> deleteModel(String modelId) async {
+    try {
+      await _channel.invokeMethod('deleteModel', {'modelId': modelId});
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<void> switchToDeviceAI() async {
+    try {
+      await _channel.invokeMethod('switchToDeviceAI');
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<String> getPromptApiStatus() async {
+    try {
+      final result = await _channel.invokeMethod('getPromptApiStatus');
+      return result ?? 'not_available';
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    }
+  }
+
+  Future<bool> downloadPromptApiModel({
+    void Function(ModelDownloadProgress)? onProgress,
+  }) async {
+    StreamSubscription? subscription;
+    try {
+      if (onProgress != null) {
+        subscription = _downloadProgressChannel
+            .receiveBroadcastStream()
+            .listen((event) {
+          final progress = ModelDownloadProgress.fromJson(
+            Map<String, dynamic>.from(event as Map),
+          );
+          onProgress(progress);
+        });
+      }
+
+      final result = await _channel.invokeMethod(
+        'downloadPromptApiModel',
+      );
+      return result ??
false;
+    } on PlatformException catch (e) {
+      throw OndeviceAiException.fromPlatformException(e);
+    } finally {
+      await subscription?.cancel();
+    }
+  }
+}
diff --git a/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart b/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart
new file mode 100644
index 0000000..5a185d7
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart
@@ -0,0 +1,821 @@
+import 'dart:async';
+import 'dart:convert';
+import 'dart:js_interop';
+import 'dart:js_interop_unsafe';
+
+import 'package:flutter/services.dart';
+import 'package:flutter_web_plugins/flutter_web_plugins.dart';
+
+// ============================================================================
+// JS Interop Bindings for Chrome Built-in AI APIs
+// ============================================================================
+
+extension type _ChromeSummarizer(JSObject _) implements JSObject {
+  external JSPromise<JSString> summarize(JSString text);
+  external void destroy();
+}
+
+extension type _ChromeTranslator(JSObject _) implements JSObject {
+  external JSPromise<JSString> translate(JSString text);
+  external void destroy();
+}
+
+extension type _ChromeRewriter(JSObject _) implements JSObject {
+  external JSPromise<JSString> rewrite(JSString text);
+  external void destroy();
+}
+
+extension type _ChromeWriter(JSObject _) implements JSObject {
+  external JSPromise<JSString> write(JSString prompt);
+  external void destroy();
+}
+
+extension type _ChromeLanguageModelSession(JSObject _) implements JSObject {
+  external JSPromise<JSString> prompt(JSString message);
+  external JSObject promptStreaming(JSString message);
+  external void destroy();
+}
+
+// ============================================================================
+// Global API Accessors
+// ============================================================================
+
+bool _hasApi(String name) {
+  final obj = globalContext.getProperty(name.toJS);
+  return obj != null && !obj.isUndefinedOrNull;
+}
+
+JSObject? _getGlobalApi(String name) {
+  final obj = globalContext.getProperty(name.toJS);
+  if (obj == null || obj.isUndefinedOrNull) return null;
+  return obj as JSObject;
+}
+
+// ============================================================================
+// Availability Helpers (matching Expo's 3-second timeout + fallback)
+// ============================================================================
+
+Future<bool> _checkAvailabilityBool(String apiName) async {
+  try {
+    final obj = _getGlobalApi(apiName);
+    if (obj == null) return false;
+
+    final availabilityProp = obj.getProperty('availability'.toJS);
+    if (availabilityProp == null || availabilityProp.isUndefinedOrNull) {
+      return _hasApi(apiName);
+    }
+
+    final availabilityFn = availabilityProp as JSFunction;
+    final promise = availabilityFn.callAsFunction(obj) as JSPromise<JSString>;
+
+    // 3-second timeout matching Expo implementation
+    final status = await Future.any([
+      promise.toDart.then((v) => v.toDart),
+      Future.delayed(const Duration(seconds: 3), () => throw TimeoutException('timeout')),
+    ]);
+
+    return status == 'available' ||
+        status == 'readily' ||
+        status == 'downloadable' ||
+        status == 'after-download';
+  } catch (_) {
+    // On error/timeout, fall back to checking if the API object exists
+    return _hasApi(apiName);
+  }
+}
+
+Future<bool> _checkLanguageModelAvailability() async {
+  final lmApi = _getLanguageModelApi();
+  if (lmApi == null) return false;
+
+  // Start with true if API exists (matching Expo: let hasLanguageModel = !!lm)
+  bool hasLanguageModel = true;
+
+  final availabilityProp = lmApi.getProperty('availability'.toJS);
+  if (availabilityProp != null && !availabilityProp.isUndefinedOrNull) {
+    try {
+      final availabilityFn = availabilityProp as JSFunction;
+      final promise =
+          availabilityFn.callAsFunction(lmApi) as JSPromise<JSString>;
+
+      final status = await Future.any([
+        promise.toDart.then((v) => v.toDart),
+        Future.delayed(const Duration(seconds: 3), () => throw TimeoutException('timeout')),
+      ]);
+
+ hasLanguageModel = status == 'readily' || + status == 'available' || + status == 'downloadable' || + status == 'after-download'; + } catch (_) { + // On error/timeout, keep true (API object exists) + hasLanguageModel = true; + } + } + + return hasLanguageModel; +} + +// ============================================================================ +// Chrome AI Factory Functions +// ============================================================================ + +Future<_ChromeSummarizer> _createSummarizer({ + String type = 'key-points', + String length = 'long', + String format = 'markdown', +}) async { + final api = _getGlobalApi('Summarizer')!; + final createFn = api.getProperty('create'.toJS) as JSFunction; + final options = { + 'type': type, + 'length': length, + 'format': format, + }.jsify(); + final promise = createFn.callAsFunction(api, options) as JSPromise; + final result = await promise.toDart; + return result as _ChromeSummarizer; +} + +Future<_ChromeTranslator> _createTranslator({ + required String sourceLanguage, + required String targetLanguage, +}) async { + final api = _getGlobalApi('Translator')!; + final createFn = api.getProperty('create'.toJS) as JSFunction; + final options = { + 'sourceLanguage': sourceLanguage, + 'targetLanguage': targetLanguage, + }.jsify(); + final promise = createFn.callAsFunction(api, options) as JSPromise; + final result = await promise.toDart; + return result as _ChromeTranslator; +} + +Future<_ChromeRewriter> _createRewriter({ + String tone = 'as-is', + String length = 'as-is', +}) async { + final api = _getGlobalApi('Rewriter')!; + final createFn = api.getProperty('create'.toJS) as JSFunction; + final options = {'tone': tone, 'length': length}.jsify(); + final promise = createFn.callAsFunction(api, options) as JSPromise; + final result = await promise.toDart; + return result as _ChromeRewriter; +} + +Future<_ChromeWriter> _createWriter() async { + final api = _getGlobalApi('Writer')!; + final createFn = 
api.getProperty('create'.toJS) as JSFunction;
+  final options = {}.jsify();
+  final promise = createFn.callAsFunction(api, options) as JSPromise;
+  final result = await promise.toDart;
+  return result as _ChromeWriter;
+}
+
+JSObject? _getLanguageModelApi() {
+  // Try globalThis.LanguageModel first (newer API)
+  final lm = _getGlobalApi('LanguageModel');
+  if (lm != null) return lm;
+  // Try globalThis.ai.languageModel (older API)
+  final ai = _getGlobalApi('ai');
+  if (ai != null) {
+    final languageModel = ai.getProperty('languageModel'.toJS);
+    if (languageModel != null && !languageModel.isUndefinedOrNull) {
+      return languageModel as JSObject;
+    }
+  }
+  return null;
+}
+
+Future<_ChromeLanguageModelSession> _createLanguageModel({
+  List<Map<String, String>>? initialPrompts,
+}) async {
+  final api = _getLanguageModelApi();
+  if (api == null) throw Exception('LanguageModel API not available');
+
+  final createFn = api.getProperty('create'.toJS) as JSFunction;
+  final options = <String, dynamic>{};
+  if (initialPrompts != null && initialPrompts.isNotEmpty) {
+    options['initialPrompts'] = initialPrompts;
+  }
+  final promise = createFn.callAsFunction(api, options.jsify()) as JSPromise;
+  final result = await promise.toDart;
+  return result as _ChromeLanguageModelSession;
+}
+
+// ============================================================================
+// Entity extraction helpers (matching Expo's typeNormalize + confidenceMap)
+// ============================================================================
+
+const _typeNormalize = {
+  'person': 'person',
+  'persons': 'person',
+  'people': 'person',
+  'name': 'person',
+  'names': 'person',
+  'email': 'email',
+  'emails': 'email',
+  'phone': 'phone',
+  'phones': 'phone',
+  'phone_number': 'phone',
+  'phone_numbers': 'phone',
+  'date': 'date',
+  'dates': 'date',
+  'location': 'location',
+  'locations': 'location',
+  'place': 'location',
+  'places': 'location',
+  'organization': 'organization',
+  'organizations': 'organization',
+  'org':
'organization',
+  'orgs': 'organization',
+  'contact': 'email',
+};
+
+const _confidenceMap = {
+  'person': 0.95,
+  'email': 0.98,
+  'phone': 0.97,
+  'date': 0.96,
+  'location': 0.92,
+  'organization': 0.9,
+};
+
+List<Map<String, dynamic>> _walkEntities(dynamic obj, [String? parentKey]) {
+  final entities = <Map<String, dynamic>>[];
+
+  if (obj is List) {
+    for (final item in obj) {
+      if (item is String) {
+        final normalized =
+            _typeNormalize[(parentKey ?? '').toLowerCase()] ??
+            parentKey ??
+            'unknown';
+        entities.add({
+          'type': normalized,
+          'value': item,
+          'confidence': _confidenceMap[normalized] ?? 0.85,
+        });
+      } else {
+        entities.addAll(_walkEntities(item, parentKey));
+      }
+    }
+  } else if (obj is Map) {
+    for (final entry in obj.entries) {
+      entities.addAll(_walkEntities(entry.value, entry.key.toString()));
+    }
+  } else if (obj != null) {
+    final normalized =
+        _typeNormalize[(parentKey ?? '').toLowerCase()] ??
+        parentKey ??
+        'unknown';
+    entities.add({
+      'type': normalized,
+      'value': obj.toString(),
+      'confidence': _confidenceMap[normalized] ?? 0.85,
+    });
+  }
+
+  return entities;
+}
+
+// ============================================================================
+// Cached instances
+// ============================================================================
+
+const _maxCachedTranslators = 10;
+
+_ChromeSummarizer? _cachedSummarizer;
+String _cachedSummarizerKey = '';
+_ChromeLanguageModelSession? _cachedLanguageModel;
+String? _cachedSystemPrompt;
+final Map<String, _ChromeTranslator> _cachedTranslators = {};
+_ChromeRewriter? _cachedRewriter;
+_ChromeWriter?
_cachedWriter; + +// ============================================================================ +// Web Plugin +// ============================================================================ + +class FlutterOndeviceAiWebPlugin { + late final StreamController> _chatStreamController; + + FlutterOndeviceAiWebPlugin._() { + _chatStreamController = StreamController>.broadcast(); + } + + static void registerWith(Registrar registrar) { + final plugin = FlutterOndeviceAiWebPlugin._(); + + final channel = MethodChannel( + 'flutter_ondevice_ai', + const StandardMethodCodec(), + registrar, + ); + channel.setMethodCallHandler(plugin.handleMethodCall); + + final chatStreamEventChannel = PluginEventChannel( + 'flutter_ondevice_ai/chat_stream', + const StandardMethodCodec(), + registrar, + ); + chatStreamEventChannel.setController(plugin._chatStreamController); + } + + Future handleMethodCall(MethodCall call) async { + switch (call.method) { + case 'initialize': + return {'success': true}; + case 'getDeviceCapability': + return await _getDeviceCapability(); + case 'summarize': + return await _summarize(call.arguments as Map); + case 'classify': + return await _classify(call.arguments as Map); + case 'extract': + return await _extract(call.arguments as Map); + case 'chat': + return await _chat(call.arguments as Map); + case 'chatStream': + return await _chatStream(call.arguments as Map); + case 'translate': + return await _translate(call.arguments as Map); + case 'rewrite': + return await _rewrite(call.arguments as Map); + case 'proofread': + return await _proofread(call.arguments as Map); + case 'getAvailableModels': + return >[]; + case 'getDownloadedModels': + return []; + case 'getLoadedModel': + return null; + case 'getCurrentEngine': + return 'none'; + case 'downloadModel': + return false; + case 'loadModel': + return null; + case 'deleteModel': + return null; + case 'getPromptApiStatus': + return await _getPromptApiStatus(); + case 'downloadPromptApiModel': + return 
false; + default: + throw PlatformException( + code: 'Unimplemented', + message: '${call.method} is not implemented on web', + ); + } + } + + Future> _getDeviceCapability() async { + // Parallel availability checks with timeout (matching Expo) + final results = await Future.wait([ + _checkAvailabilityBool('Summarizer'), + _checkAvailabilityBool('Rewriter'), + _checkAvailabilityBool('Writer'), + _checkLanguageModelAvailability(), + ]); + + final hasSummarizer = results[0]; + final hasRewriter = results[1]; + final hasWriter = results[2]; + final hasLanguageModel = results[3]; + final hasTranslator = _hasApi('Translator'); + + return { + 'isSupported': hasSummarizer || hasLanguageModel || hasTranslator, + 'isModelReady': hasSummarizer || hasLanguageModel, + 'platform': 'WEB', + 'features': { + 'summarize': hasSummarizer, + 'classify': hasLanguageModel, + 'extract': hasLanguageModel, + 'chat': hasLanguageModel, + 'translate': hasTranslator, + 'rewrite': hasRewriter, + 'proofread': hasLanguageModel || hasWriter, + }, + }; + } + + Future> _summarize(Map args) async { + if (_getGlobalApi('Summarizer') == null) { + throw PlatformException( + code: 'ERR_NOT_AVAILABLE', + message: 'Summarizer API not available in this browser', + ); + } + + final text = args['text'] as String; + final options = args['options'] as Map?; + + const optionsKey = 'key-points:long'; + if (_cachedSummarizer == null || _cachedSummarizerKey != optionsKey) { + _cachedSummarizer?.destroy(); + _cachedSummarizer = await _createSummarizer(); + _cachedSummarizerKey = optionsKey; + } + + final raw = + (await _cachedSummarizer!.summarize(text.toJS).toDart).toDart; + + final outputType = options?['outputType'] as String?; + final bulletCount = + outputType == 'ONE_BULLET' + ? 1 + : outputType == 'TWO_BULLETS' + ? 2 + : 3; + + final bullets = + raw + .split('\n') + .map((l) => l.trim()) + .where((l) => l.startsWith('*') || l.startsWith('-')) + .toList(); + final summary = + bullets.isNotEmpty ? 
bullets.take(bulletCount).join('\n') : raw; + + return { + 'summary': summary, + 'originalLength': text.length, + 'summaryLength': summary.length, + }; + } + + Future> _classify(Map args) async { + final text = args['text'] as String; + final options = args['options'] as Map?; + + final categories = + (options?['categories'] as List?)?.cast() ?? + ['positive', 'negative', 'neutral']; + + final session = await _createLanguageModel(); + final prompt = + 'Classify the following text into one of these categories: ${categories.join(', ')}.\n\nText: $text\n\nRespond with ONLY the category name.'; + final response = (await session.prompt(prompt.toJS).toDart).toDart; + session.destroy(); + + final category = response.trim(); + final isValid = categories.any( + (c) => c.toLowerCase() == category.toLowerCase(), + ); + + return { + 'classifications': [ + { + 'label': isValid ? category : categories[0], + 'score': isValid ? 0.9 : 0.5, + }, + ], + 'topClassification': { + 'label': isValid ? category : categories[0], + 'score': isValid ? 0.9 : 0.5, + }, + }; + } + + Future> _extract(Map args) async { + final text = args['text'] as String; + + final session = await _createLanguageModel(); + final prompt = + 'Extract entities from this text. Return JSON with these exact keys: "person", "email", "phone", "date", "location", "organization". Each key maps to an array of strings. 
Only include keys that have values.\n\nText: $text\n\nRespond with valid JSON only, no markdown.'; + final response = (await session.prompt(prompt.toJS).toDart).toDart; + session.destroy(); + + try { + // Strip markdown code fence if present (matching Expo) + final jsonStr = response + .replaceFirst(RegExp(r'^```(?:json)?\s*\n?', multiLine: true), '') + .replaceFirst(RegExp(r'\n?```\s*$', multiLine: true), '') + .trim(); + final parsed = jsonDecode(jsonStr); + + final entities = _walkEntities(parsed); + return {'entities': entities}; + } catch (_) { + return { + 'entities': [ + {'type': 'raw', 'value': response, 'confidence': 0.5}, + ], + }; + } + } + + Future> _chat(Map args) async { + final message = args['message'] as String; + final options = args['options'] as Map?; + + final newSystemPrompt = options?['systemPrompt'] as String?; + if (_cachedLanguageModel == null || + newSystemPrompt != _cachedSystemPrompt) { + _cachedLanguageModel?.destroy(); + final initialPrompts = >[]; + if (newSystemPrompt != null) { + initialPrompts.add({'role': 'system', 'content': newSystemPrompt}); + } + _cachedLanguageModel = await _createLanguageModel( + initialPrompts: initialPrompts.isNotEmpty ? initialPrompts : null, + ); + _cachedSystemPrompt = newSystemPrompt; + } + + final response = + (await _cachedLanguageModel!.prompt(message.toJS).toDart).toDart; + return {'message': response, 'canContinue': true}; + } + + Future> _chatStream(Map args) async { + final message = args['message'] as String; + final options = args['options'] as Map?; + + final newSystemPrompt = options?['systemPrompt'] as String?; + if (_cachedLanguageModel == null || + newSystemPrompt != _cachedSystemPrompt) { + _cachedLanguageModel?.destroy(); + final initialPrompts = >[]; + if (newSystemPrompt != null) { + initialPrompts.add({'role': 'system', 'content': newSystemPrompt}); + } + _cachedLanguageModel = await _createLanguageModel( + initialPrompts: initialPrompts.isNotEmpty ? 
initialPrompts : null, + ); + _cachedSystemPrompt = newSystemPrompt; + } + + // Try real streaming with promptStreaming + try { + final streamObj = + _cachedLanguageModel!.promptStreaming(message.toJS); + + // Access Symbol.asyncIterator via JS interop + final symbolAsyncIterator = globalContext + .getProperty('Symbol'.toJS); + if (symbolAsyncIterator != null && !symbolAsyncIterator.isUndefinedOrNull) { + final asyncIteratorSymbol = + (symbolAsyncIterator as JSObject).getProperty('asyncIterator'.toJS); + if (asyncIteratorSymbol != null && + !asyncIteratorSymbol.isUndefinedOrNull) { + final iteratorFn = + streamObj.getProperty(asyncIteratorSymbol) as JSFunction?; + if (iteratorFn != null) { + final iterator = + iteratorFn.callAsFunction(streamObj) as JSObject; + final nextFn = + iterator.getProperty('next'.toJS) as JSFunction; + + var accumulated = ''; + + while (true) { + final resultPromise = + nextFn.callAsFunction(iterator) as JSPromise; + final result = (await resultPromise.toDart) as JSObject; + + final done = result.getProperty('done'.toJS); + if (done != null && + !done.isUndefinedOrNull && + (done as JSBoolean).toDart) { + break; + } + + final value = result.getProperty('value'.toJS); + if (value == null || value.isUndefinedOrNull) continue; + + final text = (value as JSString).toDart; + + // Chrome may return cumulative or delta text depending on version + String delta; + if (text.length >= accumulated.length && + text.startsWith(accumulated)) { + delta = text.substring(accumulated.length); + accumulated = text; + } else { + delta = text; + accumulated += text; + } + + _chatStreamController.add({ + 'delta': delta, + 'accumulated': accumulated, + 'isFinal': false, + }); + } + + _chatStreamController.add({ + 'delta': '', + 'accumulated': accumulated, + 'isFinal': true, + }); + + return {'message': accumulated, 'canContinue': true}; + } + } + } + } catch (_) { + // promptStreaming not available, fall through to non-streaming + } + + // Fallback to 
non-streaming + final response = + (await _cachedLanguageModel!.prompt(message.toJS).toDart).toDart; + + _chatStreamController.add({ + 'delta': response, + 'accumulated': response, + 'isFinal': true, + }); + + return {'message': response, 'canContinue': true}; + } + + Future> _translate(Map args) async { + if (_getGlobalApi('Translator') == null) { + throw PlatformException( + code: 'ERR_NOT_AVAILABLE', + message: 'Translator API not available in this browser', + ); + } + + final text = args['text'] as String; + final options = args['options'] as Map?; + final sourceLanguage = (options?['sourceLanguage'] as String?) ?? 'en'; + final targetLanguage = + (options?['targetLanguage'] as String?) ?? 'en'; + + final key = '$sourceLanguage-$targetLanguage'; + if (!_cachedTranslators.containsKey(key)) { + if (_cachedTranslators.length >= _maxCachedTranslators) { + final oldestKey = _cachedTranslators.keys.first; + _cachedTranslators[oldestKey]?.destroy(); + _cachedTranslators.remove(oldestKey); + } + _cachedTranslators[key] = await _createTranslator( + sourceLanguage: sourceLanguage, + targetLanguage: targetLanguage, + ); + } + + final translator = _cachedTranslators[key]!; + final translatedText = + (await translator.translate(text.toJS).toDart).toDart; + + return { + 'translatedText': translatedText, + 'sourceLanguage': sourceLanguage, + 'targetLanguage': targetLanguage, + }; + } + + Future> _rewrite(Map args) async { + if (_getGlobalApi('Rewriter') == null) { + throw PlatformException( + code: 'ERR_NOT_AVAILABLE', + message: 'Rewriter API not available in this browser', + ); + } + + final text = args['text'] as String; + final options = args['options'] as Map?; + final outputType = (options?['outputType'] as String?) ?? 
'REPHRASE'; + + const toneMap = { + 'FRIENDLY': 'more-casual', + 'PROFESSIONAL': 'more-formal', + 'ELABORATE': 'as-is', + 'SHORTEN': 'as-is', + 'EMOJIFY': 'more-casual', + 'REPHRASE': 'as-is', + }; + const lengthMap = {'ELABORATE': 'longer', 'SHORTEN': 'shorter'}; + + _cachedRewriter?.destroy(); + _cachedRewriter = await _createRewriter( + tone: toneMap[outputType] ?? 'as-is', + length: lengthMap[outputType] ?? 'as-is', + ); + + final rewrittenText = + (await _cachedRewriter!.rewrite(text.toJS).toDart).toDart; + return {'rewrittenText': rewrittenText, 'style': outputType}; + } + + Future> _proofread(Map args) async { + final text = args['text'] as String; + + // Prefer LanguageModel for structured proofreading (matching Expo) + final lmApi = _getLanguageModelApi(); + if (lmApi != null) { + try { + final session = await _createLanguageModel(); + final prompt = + 'You are a proofreader. Fix ONLY spelling, grammar, and punctuation errors. Do NOT change meaning, tense, or style. Return JSON with this exact format:\n{"correctedText":"the full corrected text","corrections":[{"original":"misspeled","corrected":"misspelled","type":"spelling"}]}\n\nType must be one of: "spelling", "grammar", "punctuation".\nIf there are no errors, return: {"correctedText":"","corrections":[]}\nRespond with valid JSON only, no markdown, no explanation.\n\nText to proofread:\n$text'; + final response = + (await session.prompt(prompt.toJS).toDart).toDart; + session.destroy(); + + try { + // Parse JSON response (matching Expo's JSON parsing) + final jsonStr = response + .replaceFirst( + RegExp(r'^```(?:json)?\s*\n?', multiLine: true), '') + .replaceFirst(RegExp(r'\n?```\s*$', multiLine: true), '') + .trim(); + final parsed = jsonDecode(jsonStr) as Map; + + final correctedText = + (parsed['correctedText'] as String?) ?? 
text; + final rawCorrections = parsed['corrections']; + final corrections = >[]; + + if (rawCorrections is List) { + for (final c in rawCorrections) { + if (c is Map) { + corrections.add({ + 'original': (c['original'] as String?) ?? '', + 'corrected': (c['corrected'] as String?) ?? '', + 'type': (c['type'] as String?) ?? 'grammar', + 'confidence': 0.9, + }); + } + } + } + + return { + 'correctedText': correctedText, + 'corrections': corrections, + 'hasCorrections': corrections.isNotEmpty, + }; + } catch (_) { + // JSON parse failed — fall through to Writer API + } + } catch (_) { + // LanguageModel failed — fall through to Writer API + } + } + + // Fallback to Writer API with word-diff (matching Expo) + if (_getGlobalApi('Writer') != null) { + _cachedWriter ??= await _createWriter(); + final correctedText = + (await _cachedWriter! + .write( + 'Proofread and correct this text. Fix ONLY spelling, grammar, and punctuation. Do NOT change meaning, tense, or word choice. Return only the corrected text:\n\n$text' + .toJS, + ) + .toDart) + .toDart; + + // Word-diff to find corrections (matching Expo) + final corrections = >[]; + final origWords = text.split(RegExp(r'\s+')); + final corrWords = correctedText.split(RegExp(r'\s+')); + if (origWords.length == corrWords.length) { + for (var i = 0; i < origWords.length; i++) { + if (origWords[i] != corrWords[i]) { + corrections.add({ + 'original': origWords[i], + 'corrected': corrWords[i], + 'type': 'spelling', + 'confidence': 0.85, + }); + } + } + } + + return { + 'correctedText': correctedText, + 'corrections': corrections, + 'hasCorrections': correctedText != text, + }; + } + + throw PlatformException( + code: 'ERR_NOT_AVAILABLE', + message: 'Writer or LanguageModel API not available in this browser', + ); + } + + Future _getPromptApiStatus() async { + final lmApi = _getLanguageModelApi(); + if (lmApi == null) return 'not_available'; + try { + final availabilityProp = lmApi.getProperty('availability'.toJS); + if 
(availabilityProp == null || availabilityProp.isUndefinedOrNull) { + return 'available'; + } + final availabilityFn = availabilityProp as JSFunction; + final result = + availabilityFn.callAsFunction(lmApi) as JSPromise; + final status = (await result.toDart).toDart; + return status; + } catch (_) { + return 'not_available'; + } + } +} diff --git a/libraries/flutter_ondevice_ai/lib/src/types.dart b/libraries/flutter_ondevice_ai/lib/src/types.dart new file mode 100644 index 0000000..61c3d04 --- /dev/null +++ b/libraries/flutter_ondevice_ai/lib/src/types.dart @@ -0,0 +1,828 @@ +// Locanara On-Device AI Types for Flutter +// Mirrors expo-ondevice-ai/src/types.ts + +// ============================================================================ +// Enums +// ============================================================================ + +enum SummarizeInputType { + article, + conversation; + + String toJson() { + switch (this) { + case SummarizeInputType.article: + return 'ARTICLE'; + case SummarizeInputType.conversation: + return 'CONVERSATION'; + } + } + + static SummarizeInputType fromJson(String value) { + switch (value) { + case 'ARTICLE': + return SummarizeInputType.article; + case 'CONVERSATION': + return SummarizeInputType.conversation; + default: + return SummarizeInputType.article; + } + } +} + +enum SummarizeOutputType { + oneBullet, + twoBullets, + threeBullets; + + String toJson() { + switch (this) { + case SummarizeOutputType.oneBullet: + return 'ONE_BULLET'; + case SummarizeOutputType.twoBullets: + return 'TWO_BULLETS'; + case SummarizeOutputType.threeBullets: + return 'THREE_BULLETS'; + } + } + + static SummarizeOutputType fromJson(String value) { + switch (value) { + case 'ONE_BULLET': + return SummarizeOutputType.oneBullet; + case 'TWO_BULLETS': + return SummarizeOutputType.twoBullets; + case 'THREE_BULLETS': + return SummarizeOutputType.threeBullets; + default: + return SummarizeOutputType.oneBullet; + } + } +} + +enum RewriteOutputType { + elaborate, 
+ emojify, + shorten, + friendly, + professional, + rephrase; + + String toJson() { + switch (this) { + case RewriteOutputType.elaborate: + return 'ELABORATE'; + case RewriteOutputType.emojify: + return 'EMOJIFY'; + case RewriteOutputType.shorten: + return 'SHORTEN'; + case RewriteOutputType.friendly: + return 'FRIENDLY'; + case RewriteOutputType.professional: + return 'PROFESSIONAL'; + case RewriteOutputType.rephrase: + return 'REPHRASE'; + } + } + + static RewriteOutputType fromJson(String value) { + switch (value) { + case 'ELABORATE': + return RewriteOutputType.elaborate; + case 'EMOJIFY': + return RewriteOutputType.emojify; + case 'SHORTEN': + return RewriteOutputType.shorten; + case 'FRIENDLY': + return RewriteOutputType.friendly; + case 'PROFESSIONAL': + return RewriteOutputType.professional; + case 'REPHRASE': + return RewriteOutputType.rephrase; + default: + return RewriteOutputType.rephrase; + } + } +} + +enum ProofreadInputType { + keyboard, + voice; + + String toJson() { + switch (this) { + case ProofreadInputType.keyboard: + return 'KEYBOARD'; + case ProofreadInputType.voice: + return 'VOICE'; + } + } + + static ProofreadInputType fromJson(String value) { + switch (value) { + case 'KEYBOARD': + return ProofreadInputType.keyboard; + case 'VOICE': + return ProofreadInputType.voice; + default: + return ProofreadInputType.keyboard; + } + } +} + +enum OndeviceAiPlatform { + ios, + android, + web; + + String toJson() { + switch (this) { + case OndeviceAiPlatform.ios: + return 'IOS'; + case OndeviceAiPlatform.android: + return 'ANDROID'; + case OndeviceAiPlatform.web: + return 'WEB'; + } + } + + static OndeviceAiPlatform fromJson(String value) { + switch (value) { + case 'IOS': + return OndeviceAiPlatform.ios; + case 'ANDROID': + return OndeviceAiPlatform.android; + case 'WEB': + return OndeviceAiPlatform.web; + default: + return OndeviceAiPlatform.android; + } + } +} + +enum InferenceEngine { + foundationModels, + llamaCpp, + mlx, + coreMl, + promptApi, + 
none; + + String toJson() { + switch (this) { + case InferenceEngine.foundationModels: + return 'foundation_models'; + case InferenceEngine.llamaCpp: + return 'llama_cpp'; + case InferenceEngine.mlx: + return 'mlx'; + case InferenceEngine.coreMl: + return 'core_ml'; + case InferenceEngine.promptApi: + return 'prompt_api'; + case InferenceEngine.none: + return 'none'; + } + } + + static InferenceEngine fromJson(String value) { + switch (value) { + case 'foundation_models': + return InferenceEngine.foundationModels; + case 'llama_cpp': + return InferenceEngine.llamaCpp; + case 'mlx': + return InferenceEngine.mlx; + case 'core_ml': + return InferenceEngine.coreMl; + case 'prompt_api': + return InferenceEngine.promptApi; + default: + return InferenceEngine.none; + } + } +} + +enum ModelDownloadState { + pending, + downloading, + verifying, + completed, + failed, + cancelled; + + String toJson() => name; + + static ModelDownloadState fromJson(String value) { + switch (value) { + case 'pending': + return ModelDownloadState.pending; + case 'downloading': + return ModelDownloadState.downloading; + case 'verifying': + return ModelDownloadState.verifying; + case 'completed': + return ModelDownloadState.completed; + case 'failed': + return ModelDownloadState.failed; + case 'cancelled': + return ModelDownloadState.cancelled; + default: + return ModelDownloadState.pending; + } + } +} + +enum ChatRole { + user, + assistant, + system; + + String toJson() => name; + + static ChatRole fromJson(String value) { + switch (value) { + case 'user': + return ChatRole.user; + case 'assistant': + return ChatRole.assistant; + case 'system': + return ChatRole.system; + default: + return ChatRole.user; + } + } +} + +// ============================================================================ +// Core Types +// ============================================================================ + +class InitializeResult { + final bool success; + + const InitializeResult({required this.success}); + + 
factory InitializeResult.fromJson(Map json) { + return InitializeResult(success: json['success'] as bool? ?? false); + } +} + +class DeviceCapability { + final bool isSupported; + final bool isModelReady; + final bool? supportsAppleIntelligence; + final OndeviceAiPlatform platform; + final Map features; + final int? availableMemoryMB; + final bool? isLowPowerMode; + + const DeviceCapability({ + required this.isSupported, + required this.isModelReady, + this.supportsAppleIntelligence, + required this.platform, + required this.features, + this.availableMemoryMB, + this.isLowPowerMode, + }); + + factory DeviceCapability.fromJson(Map json) { + final featuresRaw = json['features']; + final features = {}; + if (featuresRaw is Map) { + for (final entry in featuresRaw.entries) { + features[entry.key.toString()] = entry.value as bool? ?? false; + } + } + + return DeviceCapability( + isSupported: json['isSupported'] as bool? ?? false, + isModelReady: json['isModelReady'] as bool? ?? false, + supportsAppleIntelligence: json['supportsAppleIntelligence'] as bool?, + platform: OndeviceAiPlatform.fromJson( + json['platform'] as String? ?? 'ANDROID', + ), + features: features, + availableMemoryMB: json['availableMemoryMB'] as int?, + isLowPowerMode: json['isLowPowerMode'] as bool?, + ); + } +} + +// ============================================================================ +// Options Types +// ============================================================================ + +class SummarizeOptions { + final SummarizeInputType? inputType; + final SummarizeOutputType? outputType; + + const SummarizeOptions({this.inputType, this.outputType}); + + Map toJson() { + final map = {}; + if (inputType != null) map['inputType'] = inputType!.toJson(); + if (outputType != null) map['outputType'] = outputType!.toJson(); + return map; + } +} + +class ClassifyOptions { + final List? categories; + final int? 
maxResults; + + const ClassifyOptions({this.categories, this.maxResults}); + + Map toJson() { + final map = {}; + if (categories != null) map['categories'] = categories; + if (maxResults != null) map['maxResults'] = maxResults; + return map; + } +} + +class ExtractOptions { + final List? entityTypes; + final bool? extractKeyValues; + + const ExtractOptions({this.entityTypes, this.extractKeyValues}); + + Map toJson() { + final map = {}; + if (entityTypes != null) map['entityTypes'] = entityTypes; + if (extractKeyValues != null) map['extractKeyValues'] = extractKeyValues; + return map; + } +} + +class ChatMessage { + final ChatRole role; + final String content; + + const ChatMessage({required this.role, required this.content}); + + Map toJson() => { + 'role': role.toJson(), + 'content': content, + }; + + factory ChatMessage.fromJson(Map json) { + return ChatMessage( + role: ChatRole.fromJson(json['role'] as String? ?? 'user'), + content: json['content'] as String? ?? '', + ); + } +} + +class ChatOptions { + final String? conversationId; + final String? systemPrompt; + final List? history; + + const ChatOptions({this.conversationId, this.systemPrompt, this.history}); + + Map toJson() { + final map = {}; + if (conversationId != null) map['conversationId'] = conversationId; + if (systemPrompt != null) map['systemPrompt'] = systemPrompt; + if (history != null) { + map['history'] = history!.map((m) => m.toJson()).toList(); + } + return map; + } +} + +class ChatStreamOptions extends ChatOptions { + final void Function(ChatStreamChunk chunk)? onChunk; + + const ChatStreamOptions({ + super.conversationId, + super.systemPrompt, + super.history, + this.onChunk, + }); + + @override + Map toJson() { + // onChunk is stripped — not serializable + return super.toJson(); + } +} + +class TranslateOptions { + final String? 
sourceLanguage; + final String targetLanguage; + + const TranslateOptions({ + this.sourceLanguage, + required this.targetLanguage, + }); + + Map toJson() { + final map = {'targetLanguage': targetLanguage}; + if (sourceLanguage != null) map['sourceLanguage'] = sourceLanguage; + return map; + } +} + +class RewriteOptions { + final RewriteOutputType outputType; + + const RewriteOptions({required this.outputType}); + + Map toJson() => {'outputType': outputType.toJson()}; +} + +class ProofreadOptions { + final ProofreadInputType? inputType; + + const ProofreadOptions({this.inputType}); + + Map toJson() { + final map = {}; + if (inputType != null) map['inputType'] = inputType!.toJson(); + return map; + } +} + +// ============================================================================ +// Result Types +// ============================================================================ + +class SummarizeResult { + final String summary; + final int originalLength; + final int summaryLength; + final double? confidence; + + const SummarizeResult({ + required this.summary, + required this.originalLength, + required this.summaryLength, + this.confidence, + }); + + factory SummarizeResult.fromJson(Map json) { + return SummarizeResult( + summary: json['summary'] as String? ?? '', + originalLength: (json['originalLength'] as num?)?.toInt() ?? 0, + summaryLength: (json['summaryLength'] as num?)?.toInt() ?? 0, + confidence: (json['confidence'] as num?)?.toDouble(), + ); + } +} + +class Classification { + final String label; + final double score; + final String? metadata; + + const Classification({ + required this.label, + required this.score, + this.metadata, + }); + + factory Classification.fromJson(Map json) { + return Classification( + label: json['label'] as String? ?? '', + score: (json['score'] as num?)?.toDouble() ?? 
0.0, + metadata: json['metadata'] as String?, + ); + } +} + +class ClassifyResult { + final List classifications; + final Classification topClassification; + + const ClassifyResult({ + required this.classifications, + required this.topClassification, + }); + + factory ClassifyResult.fromJson(Map json) { + final classificationsRaw = json['classifications'] as List? ?? []; + final classifications = + classificationsRaw + .map( + (c) => + Classification.fromJson(Map.from(c as Map)), + ) + .toList(); + + final topRaw = json['topClassification'] as Map?; + final topClassification = + topRaw != null + ? Classification.fromJson(Map.from(topRaw)) + : (classifications.isNotEmpty + ? classifications.first + : const Classification(label: '', score: 0.0)); + + return ClassifyResult( + classifications: classifications, + topClassification: topClassification, + ); + } +} + +class Entity { + final String type; + final String value; + final double confidence; + final int? startPos; + final int? endPos; + + const Entity({ + required this.type, + required this.value, + required this.confidence, + this.startPos, + this.endPos, + }); + + factory Entity.fromJson(Map json) { + return Entity( + type: json['type'] as String? ?? '', + value: json['value'] as String? ?? '', + confidence: (json['confidence'] as num?)?.toDouble() ?? 0.0, + startPos: json['startPos'] as int?, + endPos: json['endPos'] as int?, + ); + } +} + +class KeyValuePair { + final String key; + final String value; + final double? confidence; + + const KeyValuePair({ + required this.key, + required this.value, + this.confidence, + }); + + factory KeyValuePair.fromJson(Map json) { + return KeyValuePair( + key: json['key'] as String? ?? '', + value: json['value'] as String? ?? '', + confidence: (json['confidence'] as num?)?.toDouble(), + ); + } +} + +class ExtractResult { + final List entities; + final List? 
keyValuePairs; + + const ExtractResult({required this.entities, this.keyValuePairs}); + + factory ExtractResult.fromJson(Map json) { + final entitiesRaw = json['entities'] as List? ?? []; + final entities = + entitiesRaw + .map( + (e) => Entity.fromJson(Map.from(e as Map)), + ) + .toList(); + + final kvRaw = json['keyValuePairs'] as List?; + final keyValuePairs = + kvRaw + ?.map( + (p) => KeyValuePair.fromJson(Map.from(p as Map)), + ) + .toList(); + + return ExtractResult(entities: entities, keyValuePairs: keyValuePairs); + } +} + +class ChatResult { + final String message; + final String? conversationId; + final bool canContinue; + final List? suggestedPrompts; + + const ChatResult({ + required this.message, + this.conversationId, + required this.canContinue, + this.suggestedPrompts, + }); + + factory ChatResult.fromJson(Map json) { + return ChatResult( + message: json['message'] as String? ?? '', + conversationId: json['conversationId'] as String?, + canContinue: json['canContinue'] as bool? ?? false, + suggestedPrompts: + (json['suggestedPrompts'] as List?) + ?.map((s) => s.toString()) + .toList(), + ); + } +} + +class ChatStreamChunk { + final String delta; + final String accumulated; + final bool isFinal; + final String? conversationId; + + const ChatStreamChunk({ + required this.delta, + required this.accumulated, + required this.isFinal, + this.conversationId, + }); + + factory ChatStreamChunk.fromJson(Map json) { + return ChatStreamChunk( + delta: json['delta'] as String? ?? '', + accumulated: json['accumulated'] as String? ?? '', + isFinal: json['isFinal'] as bool? ?? false, + conversationId: json['conversationId'] as String?, + ); + } +} + +class TranslateResult { + final String translatedText; + final String sourceLanguage; + final String targetLanguage; + final double? 
+  confidence;
+
+  const TranslateResult({
+    required this.translatedText,
+    required this.sourceLanguage,
+    required this.targetLanguage,
+    this.confidence,
+  });
+
+  factory TranslateResult.fromJson(Map<String, dynamic> json) {
+    return TranslateResult(
+      translatedText: json['translatedText'] as String? ?? '',
+      sourceLanguage: json['sourceLanguage'] as String? ?? '',
+      targetLanguage: json['targetLanguage'] as String? ?? '',
+      confidence: (json['confidence'] as num?)?.toDouble(),
+    );
+  }
+}
+
+class RewriteResult {
+  final String rewrittenText;
+  final RewriteOutputType? style;
+  final List<String>? alternatives;
+  final double? confidence;
+
+  const RewriteResult({
+    required this.rewrittenText,
+    this.style,
+    this.alternatives,
+    this.confidence,
+  });
+
+  factory RewriteResult.fromJson(Map<String, dynamic> json) {
+    return RewriteResult(
+      rewrittenText: json['rewrittenText'] as String? ?? '',
+      style:
+          json['style'] != null
+              ? RewriteOutputType.fromJson(json['style'] as String)
+              : null,
+      alternatives:
+          (json['alternatives'] as List?)
+              ?.map((s) => s.toString())
+              .toList(),
+      confidence: (json['confidence'] as num?)?.toDouble(),
+    );
+  }
+}
+
+class ProofreadCorrection {
+  final String original;
+  final String corrected;
+  final String? type;
+  final double? confidence;
+  final int? startPos;
+  final int? endPos;
+
+  const ProofreadCorrection({
+    required this.original,
+    required this.corrected,
+    this.type,
+    this.confidence,
+    this.startPos,
+    this.endPos,
+  });
+
+  factory ProofreadCorrection.fromJson(Map<String, dynamic> json) {
+    return ProofreadCorrection(
+      original: json['original'] as String? ?? '',
+      corrected: json['corrected'] as String? ?? '',
+      type: json['type'] as String?,
+      confidence: (json['confidence'] as num?)?.toDouble(),
+      startPos: json['startPos'] as int?,
+      endPos: json['endPos'] as int?,
+    );
+  }
+}
+
+class ProofreadResult {
+  final String correctedText;
+  final List<ProofreadCorrection> corrections;
+  final bool hasCorrections;
+
+  const ProofreadResult({
+    required this.correctedText,
+    required this.corrections,
+    required this.hasCorrections,
+  });
+
+  factory ProofreadResult.fromJson(Map<String, dynamic> json) {
+    final correctionsRaw = json['corrections'] as List? ?? [];
+    final corrections =
+        correctionsRaw
+            .map(
+              (c) => ProofreadCorrection.fromJson(
+                Map<String, dynamic>.from(c as Map),
+              ),
+            )
+            .toList();
+
+    return ProofreadResult(
+      correctedText: json['correctedText'] as String? ?? '',
+      corrections: corrections,
+      hasCorrections: json['hasCorrections'] as bool? ?? false,
+    );
+  }
+}
+
+// ============================================================================
+// Model Management Types
+// ============================================================================
+
+class DownloadableModelInfo {
+  final String modelId;
+  final String name;
+  final String version;
+  final double sizeMB;
+  final String quantization;
+  final int contextLength;
+  final int minMemoryMB;
+  final bool isMultimodal;
+
+  const DownloadableModelInfo({
+    required this.modelId,
+    required this.name,
+    required this.version,
+    required this.sizeMB,
+    required this.quantization,
+    required this.contextLength,
+    required this.minMemoryMB,
+    required this.isMultimodal,
+  });
+
+  factory DownloadableModelInfo.fromJson(Map<String, dynamic> json) {
+    return DownloadableModelInfo(
+      modelId: json['modelId'] as String? ?? '',
+      name: json['name'] as String? ?? '',
+      version: json['version'] as String? ?? '',
+      sizeMB: (json['sizeMB'] as num?)?.toDouble() ?? 0.0,
+      quantization: json['quantization'] as String? ?? '',
+      contextLength: (json['contextLength'] as num?)?.toInt() ?? 0,
+      minMemoryMB: (json['minMemoryMB'] as num?)?.toInt() ?? 0,
+      isMultimodal: json['isMultimodal'] as bool? ?? false,
+    );
+  }
+}
+
+class ModelDownloadProgress {
+  final String modelId;
+  final int bytesDownloaded;
+  final int totalBytes;
+  final double progress;
+  final ModelDownloadState state;
+
+  const ModelDownloadProgress({
+    required this.modelId,
+    required this.bytesDownloaded,
+    required this.totalBytes,
+    required this.progress,
+    required this.state,
+  });
+
+  factory ModelDownloadProgress.fromJson(Map<String, dynamic> json) {
+    return ModelDownloadProgress(
+      modelId: json['modelId'] as String? ?? '',
+      bytesDownloaded: (json['bytesDownloaded'] as num?)?.toInt() ?? 0,
+      totalBytes: (json['totalBytes'] as num?)?.toInt() ?? 0,
+      progress: (json['progress'] as num?)?.toDouble() ?? 0.0,
+      state: ModelDownloadState.fromJson(
+        json['state'] as String? ?? 'pending',
+      ),
+    );
+  }
+}
diff --git a/libraries/flutter_ondevice_ai/pubspec.yaml b/libraries/flutter_ondevice_ai/pubspec.yaml
new file mode 100644
index 0000000..c42d89e
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/pubspec.yaml
@@ -0,0 +1,34 @@
+name: flutter_ondevice_ai
+description: Flutter plugin for on-device AI using Locanara SDK. Supports iOS (Apple Intelligence), Android (Gemini Nano), and Web (Chrome Built-in AI).
+version: 0.1.0
+homepage: https://github.com/hyodotdev/locanara
+repository: https://github.com/hyodotdev/locanara
+issue_tracker: https://github.com/hyodotdev/locanara/issues
+
+environment:
+  sdk: ">=3.3.0 <4.0.0"
+  flutter: ">=3.22.0"
+
+dependencies:
+  flutter:
+    sdk: flutter
+  flutter_web_plugins:
+    sdk: flutter
+  web: ^1.1.0
+
+dev_dependencies:
+  flutter_test:
+    sdk: flutter
+  flutter_lints: ^5.0.0
+
+flutter:
+  plugin:
+    platforms:
+      android:
+        package: dev.hyodot.flutter_ondevice_ai
+        pluginClass: FlutterOndeviceAiPlugin
+      ios:
+        pluginClass: FlutterOndeviceAiPlugin
+      web:
+        pluginClass: FlutterOndeviceAiWebPlugin
+        fileName: src/flutter_ondevice_ai_web.dart
diff --git a/libraries/flutter_ondevice_ai/test/flutter_ondevice_ai_test.dart b/libraries/flutter_ondevice_ai/test/flutter_ondevice_ai_test.dart
new file mode 100644
index 0000000..b56b430
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/test/flutter_ondevice_ai_test.dart
@@ -0,0 +1,365 @@
+import 'package:flutter/services.dart';
+import 'package:flutter_test/flutter_test.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+void main() {
+  TestWidgetsFlutterBinding.ensureInitialized();
+
+  const channel = MethodChannel('flutter_ondevice_ai');
+  late FlutterOndeviceAi plugin;
+
+  setUp(() {
+    plugin = FlutterOndeviceAi.forTesting();
+  });
+
+  tearDown(() {
+    TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger
+        .setMockMethodCallHandler(channel, null);
+  });
+
+  void mockChannel(Future<Object?> Function(MethodCall call)? handler) {
+    TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger
+        .setMockMethodCallHandler(channel, handler);
+  }
+
+  group('initialize', () {
+    test('returns InitializeResult with success true', () async {
+      mockChannel((call) async {
+        expect(call.method, 'initialize');
+        return {'success': true};
+      });
+
+      final result = await plugin.initialize();
+      expect(result.success, isTrue);
+    });
+
+    test('returns InitializeResult with success false', () async {
+      mockChannel((call) async {
+        return {'success': false};
+      });
+
+      final result = await plugin.initialize();
+      expect(result.success, isFalse);
+    });
+  });
+
+  group('getDeviceCapability', () {
+    test('parses device capability correctly', () async {
+      mockChannel((call) async {
+        expect(call.method, 'getDeviceCapability');
+        return {
+          'isSupported': true,
+          'isModelReady': true,
+          'supportsAppleIntelligence': true,
+          'platform': 'IOS',
+          'features': {
+            'summarize': true,
+            'classify': true,
+            'extract': true,
+            'chat': true,
+            'translate': true,
+            'rewrite': true,
+            'proofread': true,
+          },
+          'availableMemoryMB': 8192,
+          'isLowPowerMode': false,
+        };
+      });
+
+      final result = await plugin.getDeviceCapability();
+      expect(result.isSupported, isTrue);
+      expect(result.isModelReady, isTrue);
+      expect(result.supportsAppleIntelligence, isTrue);
+      expect(result.platform, OndeviceAiPlatform.ios);
+      expect(result.features['summarize'], isTrue);
+      expect(result.features['chat'], isTrue);
+      expect(result.availableMemoryMB, 8192);
+      expect(result.isLowPowerMode, isFalse);
+    });
+  });
+
+  group('summarize', () {
+    test('sends correct arguments and parses result', () async {
+      mockChannel((call) async {
+        expect(call.method, 'summarize');
+        final args = call.arguments as Map;
+        expect(args['text'], 'Hello world');
+        expect(args['options']['outputType'], 'THREE_BULLETS');
+        expect(args['options']['inputType'], 'ARTICLE');
+        return {
+          'summary': 'Summary text',
+          'originalLength': 11,
+          'summaryLength': 12,
+          'confidence': 0.95,
+        };
+      });
+
+      final result = await plugin.summarize(
+        'Hello world',
+        options: const SummarizeOptions(
+          outputType: SummarizeOutputType.threeBullets,
+          inputType: SummarizeInputType.article,
+        ),
+      );
+      expect(result.summary, 'Summary text');
+      expect(result.originalLength, 11);
+      expect(result.summaryLength, 12);
+      expect(result.confidence, 0.95);
+    });
+
+    test('works without options', () async {
+      mockChannel((call) async {
+        final args = call.arguments as Map;
+        expect(args['text'], 'Test');
+        expect(args.containsKey('options'), isFalse);
+        return {
+          'summary': 'Test summary',
+          'originalLength': 4,
+          'summaryLength': 12,
+        };
+      });
+
+      final result = await plugin.summarize('Test');
+      expect(result.summary, 'Test summary');
+      expect(result.confidence, isNull);
+    });
+  });
+
+  group('classify', () {
+    test('sends correct arguments and parses result', () async {
+      mockChannel((call) async {
+        expect(call.method, 'classify');
+        final args = call.arguments as Map;
+        expect(args['text'], 'Great product!');
+        return {
+          'classifications': [
+            {'label': 'positive', 'score': 0.95, 'metadata': ''},
+            {'label': 'neutral', 'score': 0.04},
+          ],
+          'topClassification': {'label': 'positive', 'score': 0.95},
+        };
+      });
+
+      final result = await plugin.classify(
+        'Great product!',
+        options: const ClassifyOptions(
+          categories: ['positive', 'negative', 'neutral'],
+          maxResults: 3,
+        ),
+      );
+      expect(result.classifications.length, 2);
+      expect(result.topClassification.label, 'positive');
+      expect(result.topClassification.score, 0.95);
+    });
+  });
+
+  group('extract', () {
+    test('parses entities and key-value pairs', () async {
+      mockChannel((call) async {
+        return {
+          'entities': [
+            {
+              'type': 'person',
+              'value': 'John',
+              'confidence': 0.95,
+              'startPos': 0,
+              'endPos': 4,
+            },
+          ],
+          'keyValuePairs': [
+            {'key': 'name', 'value': 'John', 'confidence': 0.9},
+          ],
+        };
+      });
+
+      final result = await plugin.extract('John is here');
+      expect(result.entities.length, 1);
+      expect(result.entities[0].type, 'person');
+      expect(result.entities[0].value, 'John');
+      expect(result.keyValuePairs?.length, 1);
+      expect(result.keyValuePairs![0].key, 'name');
+    });
+  });
+
+  group('chat', () {
+    test('sends message with options', () async {
+      mockChannel((call) async {
+        expect(call.method, 'chat');
+        final args = call.arguments as Map;
+        expect(args['message'], 'Hello');
+        final opts = args['options'] as Map;
+        expect(opts['systemPrompt'], 'Be helpful');
+        return {
+          'message': 'Hi there!',
+          'canContinue': true,
+          'conversationId': 'conv-123',
+        };
+      });
+
+      final result = await plugin.chat(
+        'Hello',
+        options: const ChatOptions(systemPrompt: 'Be helpful'),
+      );
+      expect(result.message, 'Hi there!');
+      expect(result.canContinue, isTrue);
+      expect(result.conversationId, 'conv-123');
+    });
+  });
+
+  group('translate', () {
+    test('sends correct translate options', () async {
+      mockChannel((call) async {
+        expect(call.method, 'translate');
+        final args = call.arguments as Map;
+        expect(args['text'], 'Hello');
+        final opts = args['options'] as Map;
+        expect(opts['targetLanguage'], 'ko');
+        expect(opts['sourceLanguage'], 'en');
+        return {
+          'translatedText': '안녕하세요',
+          'sourceLanguage': 'en',
+          'targetLanguage': 'ko',
+          'confidence': 0.98,
+        };
+      });
+
+      final result = await plugin.translate(
+        'Hello',
+        options: const TranslateOptions(
+          sourceLanguage: 'en',
+          targetLanguage: 'ko',
+        ),
+      );
+      expect(result.translatedText, '안녕하세요');
+      expect(result.sourceLanguage, 'en');
+      expect(result.targetLanguage, 'ko');
+    });
+  });
+
+  group('rewrite', () {
+    test('sends correct rewrite options', () async {
+      mockChannel((call) async {
+        final args = call.arguments as Map;
+        final opts = args['options'] as Map;
+        expect(opts['outputType'], 'PROFESSIONAL');
+        return {
+          'rewrittenText': 'Please find attached.',
+          'style': 'PROFESSIONAL',
+          'confidence': 0.9,
+        };
+      });
+
+      final result = await plugin.rewrite(
+        'Here it is',
+        options: const RewriteOptions(outputType: RewriteOutputType.professional),
+      );
+      expect(result.rewrittenText, 'Please find attached.');
+      expect(result.style, RewriteOutputType.professional);
+    });
+  });
+
+  group('proofread', () {
+    test('parses corrections correctly', () async {
+      mockChannel((call) async {
+        return {
+          'correctedText': 'The quick brown fox',
+          'corrections': [
+            {
+              'original': 'teh',
+              'corrected': 'the',
+              'type': 'spelling',
+              'confidence': 0.99,
+              'startPos': 0,
+              'endPos': 3,
+            },
+          ],
+          'hasCorrections': true,
+        };
+      });
+
+      final result = await plugin.proofread('Teh quick brown fox');
+      expect(result.correctedText, 'The quick brown fox');
+      expect(result.hasCorrections, isTrue);
+      expect(result.corrections.length, 1);
+      expect(result.corrections[0].original, 'teh');
+      expect(result.corrections[0].corrected, 'the');
+      expect(result.corrections[0].type, 'spelling');
+    });
+  });
+
+  group('model management', () {
+    test('getAvailableModels parses model info', () async {
+      mockChannel((call) async {
+        return [
+          {
+            'modelId': 'llama-3.2-1b',
+            'name': 'Llama 3.2 1B',
+            'version': '1.0.0',
+            'sizeMB': 750.0,
+            'quantization': 'int4',
+            'contextLength': 4096,
+            'minMemoryMB': 2048,
+            'isMultimodal': false,
+          },
+        ];
+      });
+
+      final models = await plugin.getAvailableModels();
+      expect(models.length, 1);
+      expect(models[0].modelId, 'llama-3.2-1b');
+      expect(models[0].name, 'Llama 3.2 1B');
+      expect(models[0].sizeMB, 750.0);
+      expect(models[0].isMultimodal, isFalse);
+    });
+
+    test('getDownloadedModels returns list of strings', () async {
+      mockChannel((call) async {
+        return ['model-1', 'model-2'];
+      });
+
+      final ids = await plugin.getDownloadedModels();
+      expect(ids, ['model-1', 'model-2']);
+    });
+
+    test('getLoadedModel returns null when no model loaded', () async {
+      mockChannel((call) async => null);
+
+      final id = await plugin.getLoadedModel();
+      expect(id, isNull);
+    });
+
+    test('getCurrentEngine parses engine type', () async {
+      mockChannel((call) async => 'foundation_models');
+
+      final engine = await plugin.getCurrentEngine();
+      expect(engine, InferenceEngine.foundationModels);
+    });
+
+    test('getPromptApiStatus returns status string', () async {
+      mockChannel((call) async => 'available');
+
+      final status = await plugin.getPromptApiStatus();
+      expect(status, 'available');
+    });
+  });
+
+  group('error handling', () {
+    test('wraps PlatformException in OndeviceAiException', () async {
+      mockChannel((call) async {
+        throw PlatformException(
+          code: 'ERR_SUMMARIZE',
+          message: 'Model not ready',
+        );
+      });
+
+      expect(
+        () => plugin.summarize('test'),
+        throwsA(
+          isA<OndeviceAiException>()
+              .having((e) => e.code, 'code', 'ERR_SUMMARIZE')
+              .having((e) => e.message, 'message', 'Model not ready'),
+        ),
+      );
+    });
+  });
+}
diff --git a/libraries/flutter_ondevice_ai/test/types_test.dart b/libraries/flutter_ondevice_ai/test/types_test.dart
new file mode 100644
index 0000000..3630b44
--- /dev/null
+++ b/libraries/flutter_ondevice_ai/test/types_test.dart
@@ -0,0 +1,286 @@
+import 'package:flutter_test/flutter_test.dart';
+import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart';
+
+void main() {
+  group('SummarizeInputType', () {
+    test('toJson returns correct strings', () {
+      expect(SummarizeInputType.article.toJson(), 'ARTICLE');
+      expect(SummarizeInputType.conversation.toJson(), 'CONVERSATION');
+    });
+
+    test('fromJson parses correctly', () {
+      expect(SummarizeInputType.fromJson('ARTICLE'), SummarizeInputType.article);
+      expect(
+        SummarizeInputType.fromJson('CONVERSATION'),
+        SummarizeInputType.conversation,
+      );
+      expect(SummarizeInputType.fromJson('UNKNOWN'), SummarizeInputType.article);
+    });
+  });
+
+  group('SummarizeOutputType', () {
+    test('toJson returns correct strings', () {
+      expect(SummarizeOutputType.oneBullet.toJson(), 'ONE_BULLET');
+      expect(SummarizeOutputType.twoBullets.toJson(), 'TWO_BULLETS');
+      expect(SummarizeOutputType.threeBullets.toJson(), 'THREE_BULLETS');
+    });
+
+    test('fromJson parses correctly', () {
+      expect(
+        SummarizeOutputType.fromJson('TWO_BULLETS'),
+        SummarizeOutputType.twoBullets,
+      );
+    });
+  });
+
+  group('RewriteOutputType', () {
+    test('round-trip all values', () {
+      for (final value in RewriteOutputType.values) {
+        final json = value.toJson();
+        final parsed = RewriteOutputType.fromJson(json);
+        expect(parsed, value);
+      }
+    });
+  });
+
+  group('InferenceEngine', () {
+    test('toJson returns snake_case', () {
+      expect(InferenceEngine.foundationModels.toJson(), 'foundation_models');
+      expect(InferenceEngine.llamaCpp.toJson(), 'llama_cpp');
+      expect(InferenceEngine.promptApi.toJson(), 'prompt_api');
+      expect(InferenceEngine.none.toJson(), 'none');
+    });
+
+    test('fromJson parses correctly', () {
+      expect(
+        InferenceEngine.fromJson('foundation_models'),
+        InferenceEngine.foundationModels,
+      );
+      expect(InferenceEngine.fromJson('unknown'), InferenceEngine.none);
+    });
+  });
+
+  group('ModelDownloadState', () {
+    test('round-trip all values', () {
+      for (final value in ModelDownloadState.values) {
+        final json = value.toJson();
+        final parsed = ModelDownloadState.fromJson(json);
+        expect(parsed, value);
+      }
+    });
+  });
+
+  group('ChatRole', () {
+    test('round-trip all values', () {
+      for (final value in ChatRole.values) {
+        final json = value.toJson();
+        final parsed = ChatRole.fromJson(json);
+        expect(parsed, value);
+      }
+    });
+  });
+
+  group('SummarizeOptions', () {
+    test('toJson includes only non-null fields', () {
+      const opts = SummarizeOptions();
+      expect(opts.toJson(), isEmpty);
+
+      const opts2 = SummarizeOptions(
+        inputType: SummarizeInputType.conversation,
+        outputType: SummarizeOutputType.twoBullets,
+      );
+      expect(opts2.toJson(), {
+        'inputType': 'CONVERSATION',
+        'outputType': 'TWO_BULLETS',
+      });
+    });
+  });
+
+  group('ChatOptions', () {
+    test('toJson serializes history correctly', () {
+      const opts = ChatOptions(
+        systemPrompt: 'Be helpful',
+        history: [
+          ChatMessage(role: ChatRole.user, content: 'Hi'),
+          ChatMessage(role: ChatRole.assistant, content: 'Hello!'),
+        ],
+      );
+      final json = opts.toJson();
+      expect(json['systemPrompt'], 'Be helpful');
+      expect(json['history'], isList);
+      final history = json['history'] as List;
+      expect(history.length, 2);
+      expect(history[0]['role'], 'user');
+      expect(history[0]['content'], 'Hi');
+    });
+  });
+
+  group('TranslateOptions', () {
+    test('toJson always includes targetLanguage', () {
+      const opts = TranslateOptions(targetLanguage: 'ko');
+      expect(opts.toJson(), {'targetLanguage': 'ko'});
+
+      const opts2 = TranslateOptions(
+        sourceLanguage: 'en',
+        targetLanguage: 'ko',
+      );
+      expect(opts2.toJson(), {
+        'sourceLanguage': 'en',
+        'targetLanguage': 'ko',
+      });
+    });
+  });
+
+  group('SummarizeResult', () {
+    test('fromJson parses all fields', () {
+      final result = SummarizeResult.fromJson({
+        'summary': 'Test summary',
+        'originalLength': 100,
+        'summaryLength': 20,
+        'confidence': 0.95,
+      });
+      expect(result.summary, 'Test summary');
+      expect(result.originalLength, 100);
+      expect(result.summaryLength, 20);
+      expect(result.confidence, 0.95);
+    });
+
+    test('fromJson handles missing optional fields', () {
+      final result = SummarizeResult.fromJson({
+        'summary': 'Test',
+        'originalLength': 10,
+        'summaryLength': 4,
+      });
+      expect(result.confidence, isNull);
+    });
+  });
+
+  group('ClassifyResult', () {
+    test('fromJson parses classifications', () {
+      final result = ClassifyResult.fromJson({
+        'classifications': [
+          {'label': 'positive', 'score': 0.9, 'metadata': 'test'},
+          {'label': 'neutral', 'score': 0.1},
+        ],
+        'topClassification': {'label': 'positive', 'score': 0.9},
+      });
+      expect(result.classifications.length, 2);
+      expect(result.classifications[0].metadata, 'test');
+      expect(result.classifications[1].metadata, isNull);
+      expect(result.topClassification.label, 'positive');
+    });
+  });
+
+  group('ExtractResult', () {
+    test('fromJson parses entities', () {
+      final result = ExtractResult.fromJson({
+        'entities': [
+          {
+            'type': 'person',
+            'value': 'John',
+            'confidence': 0.95,
+            'startPos': 0,
+            'endPos': 4,
+          },
+        ],
+      });
+      expect(result.entities.length, 1);
+      expect(result.entities[0].type, 'person');
+      expect(result.keyValuePairs, isNull);
+    });
+  });
+
+  group('ChatResult', () {
+    test('fromJson parses all fields', () {
+      final result = ChatResult.fromJson({
+        'message': 'Hello!',
+        'conversationId': 'conv-1',
+        'canContinue': true,
+        'suggestedPrompts': ['Tell me more', 'Thanks'],
+      });
+      expect(result.message, 'Hello!');
+      expect(result.conversationId, 'conv-1');
+      expect(result.canContinue, isTrue);
+      expect(result.suggestedPrompts, ['Tell me more', 'Thanks']);
+    });
+  });
+
+  group('ChatStreamChunk', () {
+    test('fromJson parses correctly', () {
+      final chunk = ChatStreamChunk.fromJson({
+        'delta': 'Hello',
+        'accumulated': 'Hello',
+        'isFinal': false,
+      });
+      expect(chunk.delta, 'Hello');
+      expect(chunk.accumulated, 'Hello');
+      expect(chunk.isFinal, isFalse);
+      expect(chunk.conversationId, isNull);
+    });
+  });
+
+  group('ProofreadResult', () {
+    test('fromJson parses corrections', () {
+      final result = ProofreadResult.fromJson({
+        'correctedText': 'The fox',
+        'corrections': [
+          {
+            'original': 'teh',
+            'corrected': 'the',
+            'type': 'spelling',
+            'confidence': 0.99,
+            'startPos': 0,
+            'endPos': 3,
+          },
+        ],
+        'hasCorrections': true,
+      });
+      expect(result.hasCorrections, isTrue);
+      expect(result.corrections.length, 1);
+      expect(result.corrections[0].type, 'spelling');
+    });
+  });
+
+  group('DownloadableModelInfo', () {
+    test('fromJson parses all fields', () {
+      final info = DownloadableModelInfo.fromJson({
+        'modelId': 'test-model',
+        'name': 'Test Model',
+        'version': '1.0.0',
+        'sizeMB': 500.0,
+        'quantization': 'int4',
+        'contextLength': 4096,
+        'minMemoryMB': 2048,
+        'isMultimodal': true,
+      });
+      expect(info.modelId, 'test-model');
+      expect(info.sizeMB, 500.0);
+      expect(info.isMultimodal, isTrue);
+    });
+  });
+
+  group('ModelDownloadProgress', () {
+    test('fromJson parses all fields', () {
+      final progress = ModelDownloadProgress.fromJson({
+        'modelId': 'model-1',
+        'bytesDownloaded': 500000,
+        'totalBytes': 1000000,
+        'progress': 0.5,
+        'state': 'downloading',
+      });
+      expect(progress.modelId, 'model-1');
+      expect(progress.progress, 0.5);
+      expect(progress.state, ModelDownloadState.downloading);
+    });
+  });
+
+  group('ChatMessage', () {
+    test('round-trip serialization', () {
+      const msg = ChatMessage(role: ChatRole.assistant, content: 'Hello!');
+      final json = msg.toJson();
+      final parsed = ChatMessage.fromJson(json);
+      expect(parsed.role, ChatRole.assistant);
+      expect(parsed.content, 'Hello!');
+    });
+  });
+}
diff --git a/libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt b/libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt
index 4e3817b..28cf8ef 100644
--- a/libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt
+++ b/libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt
@@ -1,5 +1,7 @@
 package com.margelo.nitro.ondeviceai
 
+import android.app.ActivityManager
+import android.content.Context
 import com.facebook.react.bridge.ReactApplicationContext
 import com.locanara.DeviceCapability
 import com.locanara.FeatureType
@@ -13,6 +15,7 @@ import com.locanara.builtin.RewriteChain
 import com.locanara.builtin.SummarizeChain
 import com.locanara.builtin.TranslateChain
 import com.locanara.core.LocanaraDefaults
+import com.locanara.engine.ModelRegistry
 import com.locanara.mlkit.PromptApiStatus
 import com.locanara.platform.PromptApiModel
 import com.margelo.nitro.NitroModules
@@ -40,6 +43,17 @@ class HybridOndeviceAi : HybridOndeviceAiSpec() {
   private val chatStreamListeners =
     java.util.concurrent.CopyOnWriteArrayList<(NitroChatStreamChunk) -> Unit>()
   private val modelDownloadProgressListeners =
     java.util.concurrent.CopyOnWriteArrayList<(NitroModelDownloadProgress) -> Unit>()
 
+  // Simulated model state (matches native example behavior)
+  private val downloadedModelIds = mutableSetOf<String>()
+  private var loadedModelId: String? = null
+
+  private fun getDeviceMemoryMB(): Int {
+    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
+    val memInfo = ActivityManager.MemoryInfo()
+    am.getMemoryInfo(memInfo)
+    return (memInfo.totalMem / (1024 * 1024)).toInt()
+  }
+
   // ──────────────────────────────────────────────────────────────────
   // Initialization
   // ──────────────────────────────────────────────────────────────────
@@ -236,31 +250,50 @@ class HybridOndeviceAi : HybridOndeviceAiSpec() {
   // ──────────────────────────────────────────────────────────────────
 
   override fun getAvailableModels(): Promise<Array<NitroModelInfo>> {
-    return Promise.async { emptyArray() }
+    return Promise.async {
+      val memoryMB = getDeviceMemoryMB()
+      ModelRegistry.getCompatibleModels(memoryMB).map { m ->
+        NitroModelInfo(
+          modelId = m.modelId,
+          name = m.name,
+          version = m.version,
+          sizeMB = m.sizeMB.toDouble(),
+          quantization = m.quantization.name,
+          contextLength = m.contextLength.toDouble(),
+          minMemoryMB = m.minMemoryMB.toDouble(),
+          isMultimodal = false,
+        )
+      }.toTypedArray()
+    }
   }
 
   override fun getDownloadedModels(): Promise<Array<String>> {
-    return Promise.async { emptyArray() }
+    return Promise.async { downloadedModelIds.toTypedArray() }
   }
 
   override fun getLoadedModel(): Promise<String> {
-    return Promise.async { "" }
+    return Promise.async { loadedModelId ?: "" }
   }
 
   override fun getCurrentEngine(): Promise<NitroInferenceEngine> {
     return Promise.async {
       val status = locanara.getPromptApiStatus()
-      if (status is PromptApiStatus.Available) {
-        NitroInferenceEngine.PROMPT_API
-      } else {
-        NitroInferenceEngine.NONE
+      when (status) {
+        is PromptApiStatus.Available,
+        is PromptApiStatus.Downloadable,
+        is PromptApiStatus.Downloading -> NitroInferenceEngine.PROMPT_API
+        else -> NitroInferenceEngine.NONE
       }
     }
   }
 
   override fun downloadModel(modelId: String): Promise<Boolean> {
     return Promise.async {
-      throw Exception("Model downloads are not supported on Android. Use downloadPromptApiModel() instead.")
+      val model = ModelRegistry.getModel(modelId)
+        ?: throw Exception("Model not found: $modelId")
+      android.util.Log.d("OndeviceAi", "downloadModel: $modelId (${model.name}, ${model.sizeMB}MB) — simulated")
+      downloadedModelIds.add(modelId)
+      true
     }
   }
 
@@ -274,13 +307,19 @@ class HybridOndeviceAi : HybridOndeviceAiSpec() {
 
   override fun loadModel(modelId: String): Promise<Unit> {
     return Promise.async {
-      throw Exception("Model loading is not supported on Android.")
+      if (!downloadedModelIds.contains(modelId)) {
+        throw Exception("Model not downloaded: $modelId")
+      }
+      android.util.Log.d("OndeviceAi", "loadModel: $modelId — simulated")
+      loadedModelId = modelId
     }
   }
 
   override fun deleteModel(modelId: String): Promise<Unit> {
     return Promise.async {
-      throw Exception("Model deletion is not supported on Android.")
+      android.util.Log.d("OndeviceAi", "deleteModel: $modelId — simulated")
+      downloadedModelIds.remove(modelId)
+      if (loadedModelId == modelId) loadedModelId = null
     }
   }
diff --git a/libraries/react-native-ondevice-ai/example/src/components/pages/FeatureDetail/ClassifyDemo.tsx b/libraries/react-native-ondevice-ai/example/src/components/pages/FeatureDetail/ClassifyDemo.tsx
index 658b0c7..8febae0 100644
--- a/libraries/react-native-ondevice-ai/example/src/components/pages/FeatureDetail/ClassifyDemo.tsx
+++ b/libraries/react-native-ondevice-ai/example/src/components/pages/FeatureDetail/ClassifyDemo.tsx
@@ -18,18 +18,41 @@ import {AIModelRequiredBanner} from './AIModelRequiredBanner';
 
 const DEFAULT_INPUT =
   'The new iPhone features an incredible camera system with advanced computational photography.';
 
-const DEFAULT_CATEGORIES =
-  'Technology, Sports, Entertainment, Business, Health';
+const DEFAULT_CATEGORIES = [
+  'Technology',
+  'Sports',
+  'Entertainment',
+  'Business',
+  'Health',
+];
 
 export function
 ClassifyDemo() {
   const {isModelReady} = useAppState();
   const [inputText, setInputText] = useState(DEFAULT_INPUT);
-  const [categories, setCategories] = useState(DEFAULT_CATEGORIES);
+  const [selectedCategories, setSelectedCategories] = useState([
+    ...DEFAULT_CATEGORIES,
+  ]);
+  const [customCategory, setCustomCategory] = useState('');
   const [result, setResult] = useState(null);
   const [isLoading, setIsLoading] = useState(false);
   const [errorMessage, setErrorMessage] = useState(null);
   const [debugLog, setDebugLog] = useState(null);
 
+  const toggleCategory = (category: string) => {
+    setSelectedCategories((prev) =>
+      prev.includes(category)
+        ? prev.filter((c) => c !== category)
+        : [...prev, category],
+    );
+  };
+
+  const addCustomCategory = () => {
+    const trimmed = customCategory.trim();
+    if (!trimmed || selectedCategories.includes(trimmed)) return;
+    setSelectedCategories((prev) => [...prev, trimmed]);
+    setCustomCategory('');
+  };
+
   const executeClassify = async () => {
     setIsLoading(true);
     setErrorMessage(null);
@@ -37,12 +60,7 @@ export function ClassifyDemo() {
     const start = Date.now();
 
     try {
-      const categoryList = categories
-        .split(',')
-        .map((c) => c.trim())
-        .filter(Boolean);
-
-      const options = {categories: categoryList};
+      const options = {categories: selectedCategories};
       console.log('[DEBUG] classify request:', JSON.stringify(options));
       const classifyResult = await classify(inputText, options);
       console.log('[DEBUG] classify response:', JSON.stringify(classifyResult));
@@ -61,6 +79,57 @@ export function ClassifyDemo() {
       {!isModelReady && }
 
+
+        Categories
+
+          {DEFAULT_CATEGORIES.map((category) => {
+            const selected = selectedCategories.includes(category);
+            return (
+                toggleCategory(category)}
+              >
+                {selected && }
+
+                  {category}
+
+
+            );
+          })}
+
+
+
+
+        {customCategory.trim() ? (
+
+              Add
+
+        ) : null}
+
+        {selectedCategories.length > 0 && (
+
+            Selected: {selectedCategories.join(', ')}
+
+        )}
+
       Text to Classify
-
-      Categories (comma-separated)
-
-
-
       {isLoading ?  : null}
@@ -163,19 +230,69 @@ const styles = StyleSheet.create({
     color: '#000',
     marginBottom: 8,
   },
-  textInput: {
+  chipContainer: {
+    flexDirection: 'row',
+    flexWrap: 'wrap',
+    gap: 8,
+  },
+  chip: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    paddingHorizontal: 14,
+    paddingVertical: 8,
+    borderRadius: 20,
+    backgroundColor: 'rgba(0, 0, 0, 0.05)',
+  },
+  chipSelected: {
+    backgroundColor: 'rgba(0, 122, 255, 0.12)',
+  },
+  chipCheck: {
+    fontSize: 13,
+    color: '#007AFF',
+    fontWeight: '600',
+  },
+  chipText: {
+    fontSize: 14,
+    color: '#333',
+  },
+  chipTextSelected: {
+    color: '#007AFF',
+    fontWeight: '600',
+  },
+  customRow: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    marginTop: 12,
+    gap: 8,
+  },
+  customInput: {
+    flex: 1,
     backgroundColor: 'rgba(0, 0, 0, 0.05)',
     borderRadius: 8,
-    padding: 12,
+    padding: 10,
     fontSize: 15,
-    minHeight: 100,
     color: '#000',
   },
-  categoryInput: {
+  addButton: {
+    paddingHorizontal: 12,
+    paddingVertical: 10,
+  },
+  addButtonText: {
+    fontSize: 15,
+    fontWeight: '600',
+    color: '#007AFF',
+  },
+  selectedText: {
+    fontSize: 13,
+    color: '#007AFF',
+    marginTop: 8,
+  },
+  textInput: {
     backgroundColor: 'rgba(0, 0, 0, 0.05)',
     borderRadius: 8,
     padding: 12,
     fontSize: 15,
+    minHeight: 100,
     color: '#000',
   },
   button: {
diff --git a/libraries/react-native-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx b/libraries/react-native-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx
index e5030c5..218d2c0 100644
--- a/libraries/react-native-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx
+++ b/libraries/react-native-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx
@@ -178,8 +178,8 @@ export function ModelSelectionSheet({
 
-      {/* Downloadable Models (iOS only) */}
-      {Platform.OS === 'ios' && modelState.availableModels.length > 0 && (
+      {/* Downloadable Models */}
+      {modelState.availableModels.length > 0 && (
 
           Available Models
           {modelState.availableModels.map((model) => (
diff --git a/locanara-versions.json b/locanara-versions.json
index 8438e65..4390579 100644
--- a/locanara-versions.json
+++ b/locanara-versions.json
@@ -4,5 +4,6 @@
   "apple": "1.1.0",
   "android": "1.1.0",
   "expo": "0.1.0",
-  "react-native": "0.1.0"
+  "react-native": "0.1.0",
+  "flutter": "0.1.0"
 }
diff --git a/packages/android/locanara/src/main/kotlin/com/locanara/engine/ModelRegistry.kt b/packages/android/locanara/src/main/kotlin/com/locanara/engine/ModelRegistry.kt
index c46f630..6b3fb01 100644
--- a/packages/android/locanara/src/main/kotlin/com/locanara/engine/ModelRegistry.kt
+++ b/packages/android/locanara/src/main/kotlin/com/locanara/engine/ModelRegistry.kt
@@ -22,15 +22,15 @@ object ModelRegistry {
       modelId = "llama-3.2-3b-instruct",
       name = "Llama 3.2 3B",
       version = "3.2",
-      sizeMB = 2560,
-      quantization = QuantizationType.INT8,
+      sizeMB = 2550,
+      quantization = QuantizationType.INT4,
       contextLength = 8192,
-      downloadURL = "https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct-ExecuTorch/resolve/main/llama3_2-3b-it-pte-q8.pte",
+      downloadURL = "https://huggingface.co/software-mansion/react-native-executorch-llama-3.2/resolve/main/llama-3.2-3B/spinquant/llama3_2_3B_spinquant.pte",
       checksum = "sha256:auto",
       minMemoryMB = 6000,
       supportedFeatures = FeatureType.entries.toList(),
       promptFormat = PromptFormat.LLAMA,
-      tokenizerURL = "https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct-ExecuTorch/resolve/main/tokenizer.bin"
+      tokenizerURL = "https://huggingface.co/executorch-community/Llama-3.2-1B-ET/resolve/main/tokenizer.model"
     )
   )
diff --git a/packages/site/src/lib/navigation.ts b/packages/site/src/lib/navigation.ts
index 89b0d5f..47be391 100644
--- a/packages/site/src/lib/navigation.ts
+++ b/packages/site/src/lib/navigation.ts
@@ -113,7 +113,7 @@ export const
 LIBRARIES_NAV: readonly NavSection[] = [
     titleTo: "/docs/libraries",
     items: [
       { to: "/docs/libraries/expo", label: "Expo Module" },
-      // Add more library integrations here
+      { to: "/docs/libraries/flutter", label: "Flutter Plugin" },
     ],
   },
 ] as const;
diff --git a/packages/site/src/pages/docs/index.tsx b/packages/site/src/pages/docs/index.tsx
index 63094e2..0bf6700 100644
--- a/packages/site/src/pages/docs/index.tsx
+++ b/packages/site/src/pages/docs/index.tsx
@@ -52,6 +52,7 @@ import Introduction from "./introduction";
 import WhyLocanara from "./why-locanara";
 import LibrariesIndex from "./libraries/index";
 import ExpoLibrary from "./libraries/expo";
+import FlutterLibrary from "./libraries/flutter";
 import { NotFound } from "../404";
 
 function Docs() {
@@ -332,6 +333,15 @@ function Docs() {
                   expo-ondevice-ai
+
  •
+                  (isActive ? "active" : "")}
+                  onClick={closeSidebar}
+                >
+                  flutter_ondevice_ai
+
+
  •
+
@@ -401,6 +411,7 @@ function Docs() {
             } />
             } />
             } />
+            } />
             } />
diff --git a/packages/site/src/pages/docs/libraries/expo.tsx b/packages/site/src/pages/docs/libraries/expo.tsx
index 008f2de..f8461b8 100644
--- a/packages/site/src/pages/docs/libraries/expo.tsx
+++ b/packages/site/src/pages/docs/libraries/expo.tsx
@@ -112,6 +112,60 @@ if (capability.isSupported) {
+
    +

    Framework

    +

+              Under the hood, Locanara is a composable AI framework inspired by
+              LangChain. The built-in utils above are pre-built{" "}
+              Chains — but you can compose your
+              own multi-step AI workflows using the native SDK directly.
+

    +
      +
    • + + Chain + {" "} + - Composable building block for AI logic +
    • +
    • + + Pipeline + {" "} + - Chain multiple steps with type-safe DSL +
    • +
    • + + Memory + {" "} + - Conversation context (Buffer / Summary) +
    • +
    • + + Guardrail + {" "} + - Input/output validation and safety +
    • +
    • + + Session + {" "} + - Stateful conversation management +
    • +
    • + + Agent + {" "} + - Autonomous ReAct-style reasoning with tools +
    • +
    + {`// Example: Pipeline DSL (native SDK) +// Proofread → Translate in one pipeline +const result = await model.pipeline() + .proofread() + .translate({ to: 'ko' }) + .run('Hello wrold, how are you?');`} +
    +

    Model Management

    @@ -165,7 +219,10 @@ await deleteModel(models[0].modelId);`} ); diff --git a/packages/site/src/pages/docs/libraries/flutter.tsx b/packages/site/src/pages/docs/libraries/flutter.tsx new file mode 100644 index 0000000..59009b7 --- /dev/null +++ b/packages/site/src/pages/docs/libraries/flutter.tsx @@ -0,0 +1,262 @@ +import { Link } from "react-router-dom"; +import CodeBlock from "../../../components/docs/CodeBlock"; +import { SEO } from "../../../components/SEO"; +import PageNavigation from "../../../components/docs/PageNavigation"; + +function FlutterLibrary() { + return ( +

    + +

    flutter_ondevice_ai

    +

    + Flutter plugin for on-device AI using Locanara SDK. Supports iOS, + Android, and Web (Chrome Built-in AI) from a single Dart API. +

    + +
    + In Progress + iOS 17+ + Android 14+ + Web (Chrome 138+) +
    + +
    +

    Installation

    + {`flutter pub add flutter_ondevice_ai`} +
    + +
    +

    Requirements

    +
      +
    • Flutter 3.3+, Dart SDK >=3.3.0
    • +
    • + iOS 17+ (llama.cpp with GGUF models) / iOS 26+ (Apple Intelligence) +
    • +
• Android 14+ (Gemini Nano; Llama via ExecuTorch)
    • +
    • Web: Chrome 138+ with Gemini Nano enabled
    • +
    +
    + +
    +

    Quick Start

    + {`import 'package:flutter_ondevice_ai/flutter_ondevice_ai.dart'; + +final ai = FlutterOndeviceAi.instance; + +// Initialize +await ai.initialize(); + +// Check device support +final capability = await ai.getDeviceCapability(); +if (capability.isSupported) { + // Use on-device AI + final result = await ai.summarize('Long text to summarize...'); + print(result.summary); +}`} +
    + +
    +

    Available APIs

    +

    + All APIs are accessed via the FlutterOndeviceAi.instance{" "} + singleton. The API surface is identical to the native SDKs. See the{" "} + API Reference for detailed + documentation. +

    +
      +
    • + + getDeviceCapability() + {" "} + - Check device AI support +
    • +
    • + + summarize() + {" "} + - Text summarization +
    • +
    • + + classify() + {" "} + - Text classification +
    • +
    • + + extract() + {" "} + - Entity extraction +
    • +
    • + + chat() + {" "} + / chatStream() - Conversational AI +
    • +
    • + + translate() + {" "} + - Language translation +
    • +
    • + + rewrite() + {" "} + - Text rewriting +
    • +
    • + + proofread() + {" "} + - Grammar correction +
    • +
    +
    + +
    +

    Framework

    +

    + Under the hood, Locanara is a composable AI framework inspired by + LangChain. The built-in utils above are pre-built{" "} + Chains — but you can compose your + own multi-step AI workflows using the native SDK directly. +

    +
      +
    • + + Chain + {" "} + - Composable building block for AI logic +
    • +
    • + + Pipeline + {" "} + - Chain multiple steps with type-safe DSL +
    • +
    • + + Memory + {" "} + - Conversation context (Buffer / Summary) +
    • +
    • + + Guardrail + {" "} + - Input/output validation and safety +
    • +
    • + + Session + {" "} + - Stateful conversation management +
    • +
    • + + Agent + {" "} + - Autonomous ReAct-style reasoning with tools +
    • +
    + {`// Example: Pipeline DSL (native SDK) +// Proofread → Translate in one pipeline +let result = try await model.pipeline { + Proofread() + Translate(to: "ko") +}.run("Hello wrold, how are you?")`} +
    + +
    +

    Chat Streaming

    + {`final result = await ai.chatStream( + 'Tell me about on-device AI', + options: ChatStreamOptions( + onChunk: (chunk) { + // Real-time streaming + print(chunk.delta); + }, + ), +); +print(result.message);`} +
    + +
    +

    Model Management

    + {`// Browse available models +final models = await ai.getAvailableModels(); + +// Download with progress +await ai.downloadModel( + models.first.modelId, + onProgress: (progress) { + print('\${(progress.progress * 100).round()}%'); + }, +); + +// Load and switch engine +await ai.loadModel(models.first.modelId); + +// Check current engine +final engine = await ai.getCurrentEngine(); + +// Clean up +await ai.deleteModel(models.first.modelId);`} +
    + +
    +

    Web Support

    +

    + On web, the plugin uses Chrome Built-in AI (Gemini Nano) APIs + directly. Chrome 138+ is required with the following flags enabled: +

    +
      +
    • + chrome://flags/#optimization-guide-on-device-model +
    • +
    • + chrome://flags/#prompt-api-for-gemini-nano +
    • +
    • + + chrome://flags/#enable-experimental-web-platform-features + +
    • +
    +

    + See the Web Setup Guide for details. +

    +
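The availability check on web can be sketched as follows. This is an illustrative snippet, not the plugin's actual implementation; it assumes only that Chrome 138+ with the flags above exposes a global `LanguageModel` object, which is absent elsewhere.

```typescript
// Hedged sketch: detect whether Chrome's built-in Prompt API is present.
// Assumption: Chrome 138+ with the flags above exposes a global
// `LanguageModel`; in Node or older browsers it is simply undefined.
function builtInAIAvailable(): boolean {
  const lm = (globalThis as Record<string, unknown>)["LanguageModel"];
  return typeof lm !== "undefined";
}

// A page can fall back gracefully when the API is missing:
const mode = builtInAIAvailable() ? "on-device" : "unsupported";
```

Checking for the global up front lets the page degrade gracefully instead of throwing when a user has not enabled the flags.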
    + +
    +

    Source Code

    +

    + + github.com/hyodotdev/locanara/libraries/flutter_ondevice_ai + +

    +
    + + +
    + ); +} + +export default FlutterLibrary; diff --git a/packages/site/src/pages/docs/libraries/index.tsx b/packages/site/src/pages/docs/libraries/index.tsx index 53c7997..2ba5bbd 100644 --- a/packages/site/src/pages/docs/libraries/index.tsx +++ b/packages/site/src/pages/docs/libraries/index.tsx @@ -9,7 +9,7 @@ function LibrariesIndex() { title="Libraries" description="Third-party framework integrations for Locanara SDK." path="/docs/libraries" - keywords="expo, react native, on-device AI, locanara" + keywords="expo, react native, flutter, on-device AI, locanara" />

    Libraries

    @@ -27,12 +27,12 @@ function LibrariesIndex() { In Progress -

    -
    ⚛️
    -

    react-native-ondevice-ai

    -

    React Native module for on-device AI

    - Planned -
    + +
    🦋
    +

    flutter_ondevice_ai

    +

    Flutter plugin for on-device AI

    + In Progress +
    diff --git a/packages/site/src/pages/docs/libraries/react-native.tsx b/packages/site/src/pages/docs/libraries/react-native.tsx new file mode 100644 index 0000000..356f9e6 --- /dev/null +++ b/packages/site/src/pages/docs/libraries/react-native.tsx @@ -0,0 +1,187 @@ +import { Link } from "react-router-dom"; +import CodeBlock from "../../../components/docs/CodeBlock"; +import { SEO } from "../../../components/SEO"; +import PageNavigation from "../../../components/docs/PageNavigation"; + +function ReactNativeLibrary() { + return ( +
    + +

    react-native-ondevice-ai

    +

    + React Native module for on-device AI using Locanara SDK and{" "} + + Nitro Modules + + . For bare React Native apps without Expo. Expo users should use{" "} + expo-ondevice-ai instead. +

    + +
    + In Progress + iOS 17+ + Android 14+ +
    + +
    +

    Installation

    + {`npm install react-native-ondevice-ai react-native-nitro-modules +cd ios && pod install`} +
    + +
    +

    Requirements

    +
      +
    • React Native 0.76+
    • +
    • Nitro Modules
    • +
    • + iOS 17+ (llama.cpp with GGUF models) / iOS 26+ (Apple Intelligence) +
    • +
    • Android 14+ (Gemini Nano)
    • +
    +

    + Note: Web is not supported. Nitro Modules is a + native-only bridge. For web support, use{" "} + expo-ondevice-ai. +

    +
    + +
    +

    Quick Start

    + {`import { getDeviceCapability, summarize } from 'react-native-ondevice-ai'; + +// Check device support +const capability = await getDeviceCapability(); +if (capability.isSupported) { + // Use on-device AI + const result = await summarize('Long text to summarize...'); + console.log(result.summary); +}`} +
    + +
    +

    Available APIs

    +

    + This library exposes the same API as{" "} + expo-ondevice-ai. See the{" "} + API Reference for detailed + documentation. +

    +
      +
    • + + getDeviceCapability() + {" "} + - Check device AI support +
    • +
    • + + summarize() + {" "} + - Text summarization +
    • +
    • + + classify() + {" "} + - Text classification +
    • +
    • + + extract() + {" "} + - Entity extraction +
    • +
    • + + chat() + {" "} + - Conversational AI +
    • +
    • + + translate() + {" "} + - Language translation +
    • +
    • + + rewrite() + {" "} + - Text rewriting +
    • +
    • + + proofread() + {" "} + - Grammar correction +
    • +
    +
    + +
    +

    Chat Streaming

    + {`import { chatStream } from 'react-native-ondevice-ai'; + +const result = await chatStream('Tell me about on-device AI', { + onChunk: (chunk) => { + // Real-time streaming via Nitro listener pattern + console.log(chunk.delta); + }, +}); +console.log(result.message);`} +
    + +
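The `onChunk` contract above can be exercised with a small accumulator. This is a sketch for illustration: the chunk shape with a string `delta` field comes from the example, while the accumulator itself is hypothetical.

```typescript
// Sketch: collect streamed deltas into the final message text.
// Mirrors the onChunk callback in the example above; `delta` is the
// incremental string fragment each chunk carries.
interface Chunk {
  delta: string;
}

function makeAccumulator() {
  let text = "";
  return {
    onChunk(chunk: Chunk): void {
      text += chunk.delta; // append each streamed fragment in order
    },
    get message(): string {
      return text;
    },
  };
}
```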
    +

    Model Management

    + {`import { + getAvailableModels, downloadModel, loadModel, + getCurrentEngine, deleteModel +} from 'react-native-ondevice-ai'; + +// Browse available models (iOS) +const models = await getAvailableModels(); + +// Download with progress +await downloadModel(models[0].modelId, (progress) => { + console.log(\`\${Math.round(progress.progress * 100)}%\`); +}); + +// Load and switch engine +await loadModel(models[0].modelId); + +// Check current engine +const engine = await getCurrentEngine();`} +
    + +
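The percentage formatting done inline in the progress callback above can be factored into a tiny pure helper. A sketch, assuming only that `progress.progress` is a 0-to-1 fraction as the example shows.

```typescript
// Sketch: format a download-progress fraction (0..1) as a percentage,
// as done inline in the downloadModel callback above. Clamping guards
// against out-of-range values from partial or duplicate events.
function formatProgress(fraction: number): string {
  const clamped = Math.min(1, Math.max(0, fraction));
  return `${Math.round(clamped * 100)}%`;
}
```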
    +

    Source Code

    +

    + + github.com/hyodotdev/locanara/libraries/react-native-ondevice-ai + +

    +
    + + +
    + ); +} + +export default ReactNativeLibrary;