32 changes: 32 additions & 0 deletions .claude/guides/09-expo-ondevice-ai.md
@@ -228,6 +228,38 @@ bun web
- AI Status Banner → opens Model Selection Sheet for engine/model management
- Model Selection Sheet: download, load, delete GGUF models; switch engines

## Cross-Library iOS llama.cpp Bridge Comparison

All three libraries (expo, react-native, flutter) use the same `LocanaraLlamaBridge` isolation pattern for llama.cpp, but Flutter requires extra steps due to its framework linking model.

### Why the Bridge Exists

C++ interop is **viral** in Swift: any module that imports a C++ interop module must also enable it. Expo's `ExpoModulesCore-Swift.h` declares a `GenericTypedArray` class that collides with llama.cpp types, so the bridge pod isolates C++ interop from the framework headers.

### Key Differences by Library

| Aspect | Expo | React Native (bare) | Flutter |
| --------------------------------- | ------------------------------- | --------------------------- | ------------------------------------- |
| `use_frameworks!` | Not used (static libs) | Not used (static libs) | **Required** (`:linkage => :static`) |
| SPM `llama.framework` type | Static (linked into binary) | Static (linked into binary) | **Dynamic** (must be embedded) |
| Extra embed phase | Config plugin handles it | Not needed | **Required** (`embed_spm_frameworks`) |
| Bridge pod generation | Auto-generated by config plugin | Manual in example | Manual in example |
| Bridge podspec `static_framework` | Not needed | Not needed | **Required** (`true`) |

### Common Components (All Libraries)

1. **`LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`** — identical bridge engine implementing `InferenceEngine` + `LlamaCppBridgeProvider`
2. **`configure_llama_bridge(installer)`** — post_install hook adding SPM packages + C++ interop to bridge target
3. **`user_target_xcconfig`** — `-framework "llama"` linker flag + framework search paths

### Flutter-Only Requirements

1. **`use_frameworks! :linkage => :static`** — without it, the link step fails with hundreds of `Undefined symbol: _ggml_*` errors
2. **`embed_spm_frameworks`** — copies `llama.framework` from `BUILT_PRODUCTS_DIR` to `Runner.app/Frameworks/` (must run BEFORE "Thin Binary" phase)
3. **`static_framework = true`** in bridge podspec

See `12-flutter-ondevice-ai.md` for complete Flutter-specific details.

## Notes

- `enableLocalDev` requires `localPath` pointing to the monorepo root
171 changes: 171 additions & 0 deletions .claude/guides/11-react-native-ondevice-ai.md
@@ -0,0 +1,171 @@
# react-native-ondevice-ai (React Native Library)

## Overview

Location: `libraries/react-native-ondevice-ai/`

React Native module using **Nitro Modules** for bare React Native apps. Wraps the Locanara native SDKs with auto-generated JNI/C++ bridges. Expo users should use `expo-ondevice-ai` instead.

**Does NOT support web** — Nitro is native-only.

## Requirements

- React Native 0.76+
- Nitro Modules 0.22+
- iOS 17+ / Android API 26+
- Bun 1.1+

## Build Commands

```bash
cd libraries/react-native-ondevice-ai

bun install # Install dependencies
bun run nitrogen # Generate Nitro bridge code
bun run lint:tsc # TypeScript type check
bun run test # Run tests
```

## Project Structure

```text
libraries/react-native-ondevice-ai/
├── src/
│ ├── index.ts # Public API wrapper (type conversion, listener mgmt)
│ ├── types.ts # Public TypeScript type definitions
│ ├── specs/
│ │ └── OndeviceAi.nitro.ts # Nitro spec (SOURCE OF TRUTH for bridge codegen)
│ └── __tests__/ # Unit tests
│ └── __mocks__/ # Nitro module mocks
├── ios/
│ ├── HybridOndeviceAi.swift # iOS native implementation (uses Locanara chains)
│ ├── OndeviceAiHelper.swift # Option extractors, PrefilledMemory adapter
│ └── OndeviceAiSerialization.swift # Chain result conversion
├── android/
│ └── src/main/java/com/margelo/nitro/ondeviceai/
│ ├── HybridOndeviceAi.kt # Android native implementation
│ ├── OndeviceAiHelper.kt # Option extractors
│ └── OndeviceAiSerialization.kt
├── nitrogen/generated/ # Auto-generated bridge code (DO NOT EDIT)
├── NitroOndeviceAi.podspec # CocoaPods spec (depends on Locanara)
├── nitro.json # Nitro module configuration
├── example/ # Example React Native app
│ ├── src/screens/ # Feature/Device/Settings screens
│ ├── src/components/ # Feature demos, shared components
│ └── ios/
│ ├── LocanaraLlamaBridge/ # Bridge pod (C++ interop isolation)
│ └── Podfile
└── package.json
```

## How It Works

### Nitro Module Architecture

The library uses Nitro Modules for a **spec-first** native bridge:

```text
OndeviceAi.nitro.ts (Spec — source of truth)
↓ npx nitrogen
nitrogen/generated/ (C++ / JNI bridge code)
HybridOndeviceAi.swift (iOS) HybridOndeviceAi.kt (Android)
↓ ↓
src/index.ts (JS wrapper — converts types, manages listeners)
```
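The spec file driving this flow is an ordinary TypeScript interface that nitrogen consumes. As a rough sketch (the interface, option type, and stub below are illustrative, not the real contents of `OndeviceAi.nitro.ts`), the spec-to-wrapper flow looks like:

```typescript
// Illustrative sketch only — the real spec lives in src/specs/OndeviceAi.nitro.ts.
// A Nitro spec is a plain TypeScript interface; nitrogen reads it and generates
// the C++/JNI bridge plus the Swift/Kotlin protocol stubs.
interface SummarizeOptions {
  // Nitro requires optionals to use the `field?: Type | null` pattern.
  bulletCount?: number | null;
}

interface OndeviceAiSpec {
  summarize(text: string, options?: SummarizeOptions | null): Promise<string>;
}

// A stub stands in for the native hybrid object here; in the library the JS
// wrapper (src/index.ts) forwards to the real Nitro instance.
const stub: OndeviceAiSpec = {
  summarize: async (text, options) =>
    `${text.slice(0, 20)} (${options?.bulletCount ?? 3} bullets)`,
};

// Public wrapper: converts friendly input into the spec's shape.
async function summarize(
  text: string,
  options?: SummarizeOptions,
): Promise<string> {
  return stub.summarize(text, options ?? null);
}
```

The wrapper layer is where type conversion and defaulting happen, so the spec can stay as narrow as Nitro's codegen requires.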

### TypeScript → Native Chain Mapping

Same mapping as `expo-ondevice-ai`:

| TypeScript API | iOS Chain | Android |
| --------------------------- | ------------------------------------------ | -------------------- |
| `summarize(text, opts)` | `SummarizeChain(bulletCount:).run(text)` | ML Kit Summarization |
| `classify(text, opts)` | `ClassifyChain(categories:).run(text)` | Prompt API |
| `extract(text, opts)` | `ExtractChain(entityTypes:).run(text)` | Prompt API |
| `chat(message, opts)` | `ChatChain(memory:).run(message)` | Prompt API |
| `chatStream(message, opts)` | `ChatChain(memory:).streamRun(message)` | Prompt API |
| `translate(text, opts)` | `TranslateChain(source:target:).run(text)` | Prompt API |
| `rewrite(text, opts)` | `RewriteChain(style:).run(text)` | ML Kit Rewriting |
| `proofread(text, opts)` | `ProofreadChain().run(text)` | ML Kit Proofreading |

### Streaming (Listener Pattern)

Nitro uses explicit listener add/remove instead of EventEmitter:

```typescript
// JS wrapper manages the listener lifecycle
let listener;
if (onChunk) {
  listener = (chunk) => onChunk(convertChunk(chunk));
  AI.instance.addChatStreamListener(listener);
}
try {
  /* call stream API */
} finally {
  if (listener) AI.instance.removeChatStreamListener(listener);
}
```
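The same lifecycle can be factored into a small generic helper. This `withListener` function is an illustrative sketch, not part of the library API:

```typescript
// Illustrative helper: subscribes a listener, runs the streaming call, and
// guarantees removal even if the call throws.
type Listener<T> = (chunk: T) => void;

async function withListener<T, R>(
  add: (l: Listener<T>) => void,
  remove: (l: Listener<T>) => void,
  onChunk: Listener<T>,
  run: () => Promise<R>,
): Promise<R> {
  add(onChunk);
  try {
    return await run();
  } finally {
    // Always detach, mirroring the try/finally pattern in the wrapper.
    remove(onChunk);
  }
}
```

The explicit add/remove pair is what makes leaks visible at call sites — there is no hidden subscription registry as with an EventEmitter.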

### Nitro Constraints

- **Union types**: Must have 2+ values (single-value union = codegen error)
- **No `Record<K,V>`**: Use flat fields, convert in JS layer
- **All types in spec file**: Nitro codegen only reads the `.nitro.ts` file
- **Optional fields**: Use `field?: Type | null` pattern
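The `Record<K,V>` constraint shapes the JS wrapper directly. As an illustrative sketch (the `NativeExtractResult` shape and `toEntityMap` helper are hypothetical, not the library's actual types), a map the public API returns can be carried across the bridge as parallel flat arrays and rebuilt in JS:

```typescript
// Hypothetical flat shape returned by the native side: Nitro specs cannot use
// Record<K, V>, so the map is split into two parallel arrays.
interface NativeExtractResult {
  entityTypes: string[];  // e.g. ["person", "place"]
  entityValues: string[]; // parallel to entityTypes
}

// The JS layer rebuilds the Record the public API promises.
function toEntityMap(result: NativeExtractResult): Record<string, string> {
  const map: Record<string, string> = {};
  result.entityTypes.forEach((type, i) => {
    map[type] = result.entityValues[i];
  });
  return map;
}
```

Keeping this conversion in `src/index.ts` means the spec stays within Nitro's supported types while the public surface stays ergonomic.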

## iOS llama.cpp Bridge

Same `LocanaraLlamaBridge` pattern as `expo-ondevice-ai` — see the **Cross-Library iOS llama.cpp Bridge** section in `09-expo-ondevice-ai.md`.

**Key difference**: React Native (without Expo) does NOT use `use_frameworks!` by default, so static linking works naturally. The bridge pod and SPM integration follow the same `configure_llama_bridge` post_install pattern.

### CocoaPods Configuration

```ruby
# NitroOndeviceAi.podspec
s.dependency 'React-Core'
s.dependency 'React-jsi'
s.dependency 'React-callinvoker'
s.dependency 'Locanara'
```

## Spec-First Development Workflow

**When adding or modifying an API, follow this exact order:**

1. **Update Nitro spec** (`src/specs/OndeviceAi.nitro.ts`)
2. **Run nitrogen**: `npx nitrogen`
3. **Update native implementations** (iOS + Android)
4. **Update JS wrapper** (`src/index.ts`)
5. **Update public types** (`src/types.ts`)
6. **Update tests** + mocks
7. **Verify**: `npx nitrogen && npx tsc --noEmit && bun run test`

## Example App

```bash
cd libraries/react-native-ondevice-ai/example

# iOS
bun ios --device

# Android
bun android
```

### App Structure

- 3-tab navigation: Features, Device, Settings
- Feature list → tappable demo screens for each AI feature
- AI Status Banner → Model Selection Sheet for engine/model management

## API Parity

The `react-native-ondevice-ai` public API **MUST** be identical to `expo-ondevice-ai`. When modifying either library, update both.

## Notes

- Nitro-generated files in `nitrogen/generated/` must NEVER be edited manually
- The bridge pod is set up in the example app's `ios/LocanaraLlamaBridge/`
- No web support — Nitro is native-only
- Test on real devices (simulators have limited AI support)