diff --git a/.plan/adr-draft.md b/.plan/adr-draft.md new file mode 100644 index 00000000..fa89411c --- /dev/null +++ b/.plan/adr-draft.md @@ -0,0 +1,69 @@ +# ADR-00X: Password-encrypted server backup of root key with WebCrypto non-extractable import + +**Date:** 2025-12-17 +**Status:** Draft +**Decision owners:** TinyCongress core team + +## Context + +We maintain canonical cryptographic rules in Rust. The web UI must sign and verify Ed25519 messages using the same message encoding as the backend. + +For MVP recovery, we want a convenience feature: the user can store the root private key on the server encrypted under a user password. During recovery the client fetches the encrypted blob, derives a key from the password, decrypts, and restores the root key for signing. + +Concern: minimize subtle crypto implementation bugs and reduce key handling risks in the web client. + +## Decision + +1. **Canonical encoding and verification rules remain defined in Rust.** + + * Browser uses Rust-compiled-to-WASM (or a generated codec) for message canonicalization and any consensus-critical verification rules. +2. **Client-side signing uses WebCrypto Ed25519 when available.** + + * On recovery, after decrypting the root key, the client **imports it into WebCrypto as a `CryptoKey` with `extractable: false`** and uses `subtle.sign()` for signing. +3. **We treat `extractable: false` as footgun reduction, not a hard security boundary.** + + * It prevents accidental export via WebCrypto APIs and discourages app-level serialization of key bytes. + * It does not protect against XSS, malicious extensions, or malicious code running in-origin (which can still call `sign()`). +4. **Backup blob lifecycle:** + + * The server stores only the encrypted key blob and associated metadata (KDF params, salt, version). + * The client does not re-export the private key from WebCrypto. If re-backup is needed, we re-use the original encrypted blob or require an explicit “re-encrypt from plaintext” flow that re-derives from the password (accepting temporary plaintext exposure during that operation). + +## Consequences + +**Positive** + +* Reduces accidental leakage risks: no exporting, no logging, no app state persistence of decrypted bytes after import. +* Uses well-maintained browser crypto primitives for signing. +* Keeps protocol correctness centralized in Rust (canonical bytes-to-sign, verification semantics). + +**Negative / Limitations** + +* Decrypted key material exists in client memory at least briefly during recovery import. This remains vulnerable to XSS and hostile extensions. +* Non-extractable keys can still be abused by any code with access to the `CryptoKey` handle to sign arbitrary messages. +* Requires browser support for Ed25519 WebCrypto. A fallback path (WASM signing or alternate key type) may be needed. + +## Alternatives considered + +1. **Duplicate crypto in TypeScript and Rust.** Rejected due to high risk of subtle divergence in encoding/validation and library semantics. +2. **Rust/WASM for all signing and verification.** Not preferred for private key hygiene because keys live in JS/WASM memory and cannot be made meaningfully non-extractable. +3. **No server backup.** Stronger security but worse UX and recovery. +4. **WebAuthn/passkeys (device-bound keys) for recovery/signing.** Preferred future direction but out of scope for MVP. +5. **Social recovery / secret sharing.** Also future work, more complexity for MVP. 
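+
+As a concrete illustration of decision 2 above, a minimal sketch of the non-extractable import and signing path (helper names are illustrative; note that WebCrypto has no raw import format for Ed25519 *private* keys, so the decrypted 32-byte seed is wrapped in the fixed PKCS#8 header first):
+
+```typescript
+// Fixed PKCS#8 PrivateKeyInfo header for an Ed25519 seed (RFC 8410 OID 1.3.101.112).
+const PKCS8_ED25519_PREFIX = Uint8Array.from([
+  0x30, 0x2e, 0x02, 0x01, 0x00, 0x30, 0x05, 0x06,
+  0x03, 0x2b, 0x65, 0x70, 0x04, 0x22, 0x04, 0x20,
+]);
+
+// Import the decrypted seed as a non-extractable signing key, then discard the copies.
+async function importRootKey(seed: Uint8Array): Promise<CryptoKey> {
+  const pkcs8 = new Uint8Array(PKCS8_ED25519_PREFIX.length + seed.length);
+  pkcs8.set(PKCS8_ED25519_PREFIX);
+  pkcs8.set(seed, PKCS8_ED25519_PREFIX.length);
+  const key = await crypto.subtle.importKey('pkcs8', pkcs8, { name: 'Ed25519' }, false, ['sign']);
+  pkcs8.fill(0); // best-effort zeroization of the temporary copy
+  seed.fill(0);
+  return key;
+}
+
+// All subsequent signing goes through subtle.sign(); the key bytes are never exposed again.
+async function signCanonicalBytes(key: CryptoKey, message: Uint8Array): Promise<Uint8Array> {
+  return new Uint8Array(await crypto.subtle.sign({ name: 'Ed25519' }, key, message));
+}
+```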
+ +## Implementation notes + +* Encrypted backup format is versioned and includes KDF parameters and salt. +* Client performs decrypt in an isolated execution context when feasible (dedicated worker) and avoids storing plaintext bytes in app state. +* Best-effort zeroization of temporary buffers after import. +* Strict separation between: + + * **Canonicalization** (Rust/WASM) + * **Signing** (WebCrypto) + * **Verification semantics** (Rust canonical, optionally mirrored in UI via WASM) + +## Follow-ups + +* Add cross-environment test vectors: canonical payload bytes and known signatures. +* Define XSS hardening baseline (CSP, dependency controls) since client-side recovery inherently raises stakes. +* Evaluate WebAuthn-based recovery/signing post-MVP. diff --git a/.plan/key-recovery-impl-spec.md b/.plan/key-recovery-impl-spec.md new file mode 100644 index 00000000..6fdc7d11 --- /dev/null +++ b/.plan/key-recovery-impl-spec.md @@ -0,0 +1,733 @@ +# Key Recovery Implementation Specification + +**Related:** [ADR-006](../docs/decisions/006-webcrypto-key-recovery.md) | [ADR-007](../docs/decisions/007-zip215-verification.md) | [Signed Envelope Spec](../docs/interfaces/signed-envelope-spec.md) +**Status:** In Progress +**Last updated:** 2025-12-17 + +## Implementation Status + +| Phase | Status | Notes | +|-------|--------|-------| +| Phase 1: Foundation | ✅ Complete | Migration, API endpoints, repository layer, integration tests | +| Phase 2: WASM Canonicalization | ⬜ Not Started | | +| Phase 3: Crypto Worker | ⬜ Not Started | | +| Phase 4: Key Persistence | ⬜ Not Started | | +| Phase 5: UI Integration | ⬜ Not Started | | +| Phase 6: Hardening | ⬜ Not Started | Rate limiting still needed | + +## Overview + +This spec details the implementation of password-encrypted server backup for root keys with WebCrypto non-extractable import, as decided in ADR-006. + +### Goals + +1. Users can opt-in to server-side encrypted backup of their root private key +2. Recovery flow decrypts client-side and imports into WebCrypto as non-extractable +3. Signing uses WebCrypto Ed25519 with WASM fallback for unsupported browsers +4. Canonicalization remains in Rust/WASM per signed-envelope-spec + +### Non-Goals (MVP) + +- WebAuthn/passkey integration +- Social recovery / secret sharing +- Multi-device sync (beyond shared backup blob) +- Hardware key support + +--- + +## Dependencies + +### Backend (Cargo.toml) + +```toml +# Signing and verification (ZIP215-compliant per ADR-007) +ed25519-consensus = "2" + +# Encryption (for validation/re-encryption if needed) +aes-gcm = "0.10" +argon2 = "0.5" + +# WASM generation +wasm-bindgen = "0.2" +``` + +### Frontend (package.json) + +```json +{ + "dependencies": { + "@noble/hashes": "^2.0.1", // existing - for PBKDF2 fallback + "idb-keyval": "^6.2.1" // IndexedDB wrapper for CryptoKey storage + }, + "devDependencies": { + "@aspect-build/aspect-argon2": "^1.0.0" // Argon2 WASM build + } +} +``` + +**Note:** Signing/verification fallback uses the Rust/WASM module (same as verification), not `@noble/curves`. This ensures ZIP215 compliance per ADR-007. 
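+
+For orientation, a minimal sketch of the client-side decryption these dependencies support (PBKDF2 shown via WebCrypto's built-in implementation for brevity — the actual KDF wrapper may use `@noble/hashes` or the Argon2 WASM build listed above; helper names and signatures are illustrative, with the iteration count and ciphertext layout taken from the format sections below):
+
+```typescript
+// Derive a 256-bit AES-GCM key from the password (PBKDF2-SHA256 fallback path).
+async function deriveAesKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
+  const material = await crypto.subtle.importKey(
+    'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveKey']
+  );
+  return crypto.subtle.deriveKey(
+    { name: 'PBKDF2', hash: 'SHA-256', salt, iterations: 600_000 },
+    material,
+    { name: 'AES-GCM', length: 256 },
+    false,
+    ['decrypt']
+  );
+}
+
+// Decrypt the 48-byte ciphertext (32-byte root key + 16-byte GCM tag).
+// A wrong password surfaces as an AES-GCM authentication failure (rejected promise).
+async function decryptRootKey(
+  ciphertext: Uint8Array,
+  nonce: Uint8Array,
+  password: string,
+  salt: Uint8Array,
+): Promise<Uint8Array> {
+  const aesKey = await deriveAesKey(password, salt);
+  const plaintext = await crypto.subtle.decrypt({ name: 'AES-GCM', iv: nonce }, aesKey, ciphertext);
+  return new Uint8Array(plaintext); // import into WebCrypto and zeroize immediately
+}
+```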
+ +### Browser Requirements + +| Feature | Chrome | Firefox | Safari | Edge | +|---------|--------|---------|--------|------| +| Ed25519 WebCrypto | 113+ | 128+ | 17+ | 113+ | +| IndexedDB CryptoKey | Yes | Yes | Yes | Yes | +| Web Workers | Yes | Yes | Yes | Yes | + +--- + +## Database Schema + +### Migration: `XX_account_backups.sql` + +Backups use a separate table rather than columns on `accounts` for: +- No NULLs on accounts table (backup is optional) +- Clean domain separation +- Future extensibility (multiple backup methods, history) + +```sql +-- Encrypted backup storage for root keys +CREATE TABLE account_backups ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + account_id UUID NOT NULL REFERENCES accounts(id) ON DELETE CASCADE, + kid TEXT NOT NULL, -- denormalized for recovery lookup without join + encrypted_backup BYTEA NOT NULL, + salt BYTEA NOT NULL, + kdf_algorithm TEXT NOT NULL CHECK (kdf_algorithm IN ('argon2id', 'pbkdf2')), + version INTEGER NOT NULL DEFAULT 1, + created_at TIMESTAMPTZ NOT NULL DEFAULT now(), + + CONSTRAINT uq_account_backups_account UNIQUE (account_id), + CONSTRAINT uq_account_backups_kid UNIQUE (kid) +); + +-- Primary lookup path for recovery (by KID) +CREATE INDEX idx_account_backups_kid ON account_backups(kid); + +COMMENT ON TABLE account_backups IS 'Password-encrypted root key backups for account recovery'; +COMMENT ON COLUMN account_backups.kid IS 'Key ID (denormalized from accounts.root_kid for join-free lookup)'; +COMMENT ON COLUMN account_backups.encrypted_backup IS 'Binary envelope: version + KDF params + nonce + AES-256-GCM ciphertext'; +``` + +**Notes:** +- `kid` is denormalized to avoid joining `accounts` on every recovery lookup +- `ON DELETE CASCADE` ensures backup is removed when account is deleted +- `UNIQUE (account_id)` enforces 1:1 for MVP; can be relaxed for multiple backup methods later +- Nonce and KDF params are embedded in `encrypted_backup` blob (see Encrypted Backup Format) + +--- + +## API Endpoints + +### POST `/api/auth/backup` + +Create or update encrypted backup. + +**Request:** +```json +{ + "kid": "base64url-kid", + "encrypted_backup": "base64url-envelope" +} +``` + +**Note:** `encrypted_backup` contains the full binary envelope. The server parses this envelope to extract and index the `salt`, `kdf_algorithm`, and `version` fields. + +**Response:** `201 Created` or `200 OK` (update) +```json +{ + "kid": "base64url-kid", + "backup_created_at": "2025-12-17T00:00:00Z" +} +``` + +**Authorization:** Requires valid signed envelope proving ownership of the KID. + +### GET `/api/auth/backup/:kid` + +Retrieve encrypted backup for recovery. + +**Response:** `200 OK` +```json +{ + "encrypted_backup": "base64url-envelope", + "salt": "base64url-salt", + "kdf_algorithm": "argon2id", + "version": 1 +} +``` + +**Response:** `404 Not Found` if no backup exists. + +**Rate Limiting:** 5 requests/minute/IP. Returns `429 Too Many Requests` with `Retry-After` header. Failed decryption attempts (inferred from lack of subsequent authenticated signatures) trigger progressive delays to prevent brute-forcing. + +### DELETE `/api/auth/backup/:kid` + +Remove backup (requires signed envelope). 
+ +**Response:** `204 No Content` + +--- + +## Encrypted Backup Format + +### Envelope (binary, versioned) + +``` ++--------+--------+----------+-------+-----------+ +| Version| KDF ID | KDF Params| Nonce | Ciphertext| +| 1 byte | 1 byte | variable | 12 B | 32 B + 16 | ++--------+--------+----------+-------+-----------+ +``` + +| Field | Size | Description | +|-------|------|-------------| +| Version | 1 byte | Format version (`0x01`) | +| KDF ID | 1 byte | `0x01` = Argon2id, `0x02` = PBKDF2 | +| KDF Params | variable | Argon2: 12 bytes (m:4, t:4, p:4). PBKDF2: 4 bytes (iterations) | +| Salt | 16 bytes | Random salt for KDF | +| Nonce | 12 bytes | AES-GCM nonce | +| Ciphertext | 48 bytes | AES-256-GCM(private_key ‖ tag) | + +### Encryption Flow + +``` +password → KDF(password, salt, params) → 256-bit key +private_key (32 bytes) → AES-256-GCM(key, nonce) → ciphertext (48 bytes) +``` + +### KDF Parameters + +**Argon2id (preferred):** +- Memory: 19456 KiB (19 MiB) +- Iterations: 2 +- Parallelism: 1 +- Output: 32 bytes + +**PBKDF2-SHA256 (fallback):** +- Iterations: 600,000 +- Output: 32 bytes + +--- + +## Frontend Architecture + +### Critical: Verification Must Use WASM + +Per ADR-007, **all signature verification in the browser MUST use the Rust/WASM module** (`verify_ed25519`). WebCrypto's `crypto.subtle.verify()` does NOT implement ZIP215 semantics and MUST NOT be used for verification. + +| Operation | Allowed | Not Allowed | +|-----------|---------|-------------| +| Signing | WebCrypto `sign()` ✅ | — | +| Signing fallback | WASM `sign_ed25519()` ✅ | — | +| Verification | WASM `verify_ed25519()` ✅ | WebCrypto `verify()` ❌ | + +### File Structure + +``` +web/src/features/identity/ +├── keys/ +│ ├── crypto.ts # Existing - key generation +│ ├── types.ts # Existing - KeyPair interface +│ ├── webcrypto.ts # NEW - WebCrypto Ed25519 signing only +│ ├── fallback-signer.ts # NEW - WASM fallback for signing +│ ├── verifier.ts # NEW - WASM-only verification (ZIP215) +│ └── feature-detect.ts # NEW - Browser capability detection +├── recovery/ +│ ├── crypto.worker.ts # NEW - Isolated decryption worker +│ ├── backup-client.ts # NEW - API client for backup endpoints +│ ├── indexeddb.ts # NEW - CryptoKey persistence +│ ├── kdf.ts # NEW - Argon2id/PBKDF2 wrapper +│ └── types.ts # NEW - Recovery-specific types +└── components/ + ├── BackupSetup.tsx # NEW - Backup creation UI + └── RecoveryFlow.tsx # NEW - Recovery UI +``` + +### Crypto Worker (`crypto.worker.ts`) + +The worker handles all sensitive operations to prevent key exposure on main thread. + +```typescript +// Message types +type WorkerRequest = + | { type: 'decrypt'; backup: EncryptedBackup; password: string } + | { type: 'encrypt'; privateKey: Uint8Array; password: string; kdf: KdfType }; + +type WorkerResponse = + | { type: 'cryptokey'; handle: CryptoKey } // Transferred via postMessage + | { type: 'encrypted'; backup: EncryptedBackup } + | { type: 'error'; message: string }; + +// Worker implementation +self.onmessage = async (e: MessageEvent) => { + try { + if (e.data.type === 'decrypt') { + const { backup, password } = e.data; + + // 1. Derive key from password + const derivedKey = await deriveKey(password, backup.salt, backup.kdf, backup.kdfParams); + + // 2. Decrypt private key + const privateKeyBytes = await decryptAesGcm(backup.ciphertext, derivedKey, backup.nonce); + + // 3. 
Import into WebCrypto as non-extractable.
+      // WebCrypto's 'raw' format only covers Ed25519 *public* keys, so wrap the
+      // 32-byte seed in the fixed PKCS#8 header before import.
+      const PKCS8_ED25519_PREFIX = Uint8Array.from([
+        0x30, 0x2e, 0x02, 0x01, 0x00, 0x30, 0x05, 0x06,
+        0x03, 0x2b, 0x65, 0x70, 0x04, 0x22, 0x04, 0x20,
+      ]);
+      const pkcs8 = new Uint8Array(PKCS8_ED25519_PREFIX.length + privateKeyBytes.length);
+      pkcs8.set(PKCS8_ED25519_PREFIX);
+      pkcs8.set(privateKeyBytes, PKCS8_ED25519_PREFIX.length);
+      const cryptoKey = await crypto.subtle.importKey(
+        'pkcs8',
+        pkcs8,
+        { name: 'Ed25519' },
+        false, // extractable = false
+        ['sign']
+      );
+
+      // 4. Zeroize temporary buffers
+      privateKeyBytes.fill(0);
+      pkcs8.fill(0);
+
+      // 5. Send the CryptoKey handle to the main thread (structured clone;
+      // the key material itself stays non-extractable inside WebCrypto)
+      self.postMessage({ type: 'cryptokey', handle: cryptoKey });
+    }
+  } catch (err) {
+    self.postMessage({ type: 'error', message: 'Decryption failed' });
+  }
+};
+```
+
+### IndexedDB Persistence (`indexeddb.ts`)
+
+```typescript
+import { get, set, del } from 'idb-keyval';
+
+const STORE_KEY = 'tc-signing-key';
+
+interface StoredKey {
+  cryptoKey: CryptoKey;
+  kid: string;
+  expiresAt: number; // Unix timestamp (ms)
+}
+
+export async function storeSigningKey(cryptoKey: CryptoKey, kid: string, ttlMs: number): Promise<void> {
+  await set(STORE_KEY, {
+    cryptoKey,
+    kid,
+    expiresAt: Date.now() + ttlMs,
+  });
+}
+
+export async function getSigningKey(): Promise<StoredKey | null> {
+  const stored = await get<StoredKey>(STORE_KEY);
+  if (!stored) return null;
+  if (Date.now() > stored.expiresAt) {
+    await del(STORE_KEY);
+    return null;
+  }
+  return stored;
+}
+
+export async function clearSigningKey(): Promise<void> {
+  await del(STORE_KEY);
+}
+```
+
+### Feature Detection (`feature-detect.ts`)
+
+```typescript
+export interface CryptoCapabilities {
+  webCryptoEd25519: boolean;
+  indexedDbCryptoKey: boolean;
+  webWorkers: boolean;
+}
+
+export async function detectCapabilities(): Promise<CryptoCapabilities> {
+  const capabilities: CryptoCapabilities = {
+    webCryptoEd25519: false,
+    indexedDbCryptoKey: true, // Assume true, fall back gracefully
+    webWorkers: typeof Worker !== 'undefined',
+  };
+
+  // Test Ed25519 support
+  try {
+    await crypto.subtle.generateKey(
+      { name: 'Ed25519' },
+      false,
+      ['sign', 'verify']
+    );
+    capabilities.webCryptoEd25519 = true;
+  } catch {
+    capabilities.webCryptoEd25519 = false;
+  }
+
+  return capabilities;
+}
+
+export function requiresWasmFallback(caps: CryptoCapabilities): boolean {
+  return !caps.webCryptoEd25519;
+}
+```
+
+### Signing Interface (`webcrypto.ts`)
+
+```typescript
+export interface Signer {
+  sign(message: Uint8Array): Promise<Uint8Array>;
+  getPublicKey(): Promise<Uint8Array>;
+  kid: string;
+}
+
+export class WebCryptoSigner implements Signer {
+  constructor(
+    private cryptoKey: CryptoKey,
+    private publicKey: Uint8Array,
+    public kid: string
+  ) {}
+
+  async sign(message: Uint8Array): Promise<Uint8Array> {
+    const signature = await crypto.subtle.sign(
+      { name: 'Ed25519' },
+      this.cryptoKey,
+      message
+    );
+    return new Uint8Array(signature);
+  }
+
+  async getPublicKey(): Promise<Uint8Array> {
+    return this.publicKey;
+  }
+}
+
+export class WasmFallbackSigner implements Signer {
+  constructor(
+    private privateKey: Uint8Array, // Kept in memory (less secure)
+    private publicKey: Uint8Array,
+    public kid: string
+  ) {}
+
+  async sign(message: Uint8Array): Promise<Uint8Array> {
+    try {
+      // Uses Rust/WASM module for ZIP215 compliance (ADR-007)
+      const { sign_ed25519 } = await import('@tinycongress/crypto-wasm');
+      return sign_ed25519(message, this.privateKey);
+    } finally {
+      // Best-effort: strict explicit zeroization is hard in JS, but the reference
+      // should be cleared if this signer is short-lived.
+      // Note: the privateKey buffer persists as long as the signer instance.
+    }
+  }
+
+  async getPublicKey(): Promise<Uint8Array> {
+    return this.publicKey;
+  }
+}
+```
+
+---
+
+## WASM Module (Crypto)
+
+The WASM module provides canonicalization, signing, and verification. All three use Rust to ensure ZIP215 compliance (ADR-007).
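+
+For reference, a sketch of how the frontend consumes these exports (the `@tinycongress/crypto-wasm` package name follows the fallback-signer example above; binding names match the `#[wasm_bindgen]` functions defined next, and the shape of the `signer` object is defined by the signed-envelope spec):
+
+```typescript
+import { canonical_signing_bytes, verify_ed25519 } from '@tinycongress/crypto-wasm';
+
+// Verify an envelope signature entirely through the WASM module:
+// canonical bytes come from Rust (RFC 8785) and verification uses ZIP215 (ADR-007).
+// WebCrypto verify() is never used for this.
+export function verifyEnvelope(
+  payloadType: string,
+  payload: unknown,
+  signer: unknown,
+  signature: Uint8Array,
+  publicKey: Uint8Array,
+): boolean {
+  const message = canonical_signing_bytes(
+    payloadType,
+    JSON.stringify(payload),
+    JSON.stringify(signer),
+  );
+  return verify_ed25519(message, signature, publicKey);
+}
+```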
+ +### Rust Source (`service/src/wasm/crypto.rs`) + +```rust +use wasm_bindgen::prelude::*; +use ed25519_consensus::{SigningKey, VerificationKey, Signature}; +use serde_json::Value; + +/// Canonicalize envelope fields for signing (RFC 8785) +#[wasm_bindgen] +pub fn canonical_signing_bytes( + payload_type: &str, + payload_json: &str, + signer_json: &str, +) -> Result, JsError> { + let payload: Value = serde_json::from_str(payload_json)?; + let signer: Value = serde_json::from_str(signer_json)?; + + let signing_obj = serde_json::json!({ + "payload_type": payload_type, + "payload": payload, + "signer": signer, + }); + + let canonical = json_canonicalization::serialize(&signing_obj)?; + Ok(canonical.into_bytes()) +} + +/// Sign a message with Ed25519 (fallback when WebCrypto unavailable) +#[wasm_bindgen] +pub fn sign_ed25519(message: &[u8], private_key: &[u8]) -> Result, JsError> { + let key_bytes: [u8; 32] = private_key + .try_into() + .map_err(|_| JsError::new("Invalid private key length"))?; + let signing_key = SigningKey::from(key_bytes); + let signature = signing_key.sign(message); + Ok(signature.to_bytes().to_vec()) +} + +/// Verify an Ed25519 signature (ZIP215 semantics) +#[wasm_bindgen] +pub fn verify_ed25519(message: &[u8], signature: &[u8], public_key: &[u8]) -> Result { + let sig_bytes: [u8; 64] = signature + .try_into() + .map_err(|_| JsError::new("Invalid signature length"))?; + let key_bytes: [u8; 32] = public_key + .try_into() + .map_err(|_| JsError::new("Invalid public key length"))?; + + let sig = Signature::from(sig_bytes); + let vk = VerificationKey::try_from(key_bytes) + .map_err(|_| JsError::new("Invalid public key"))?; + + // ZIP215 verification + Ok(vk.verify(&sig, message).is_ok()) +} +``` + +### Build Configuration + +```toml +# service/Cargo.toml +[lib] +crate-type = ["cdylib", "rlib"] + +[dependencies] +wasm-bindgen = "0.2" +ed25519-consensus = "2" +serde_json = "1" +json-canonicalization = "0.5" + +[profile.release] +opt-level = "s" +lto = true +``` + +### Vite Integration + +```javascript +// web/vite.config.mjs +import wasm from 'vite-plugin-wasm'; + +export default { + plugins: [wasm()], + worker: { + format: 'es', + plugins: [wasm()], + }, +}; +``` + +--- + +## Implementation Phases + +### Phase 1: Foundation ✅ + +1. ✅ Add database migration for backup table +2. ✅ Implement backup API endpoints (no auth initially, add later) +3. ⬜ Add `ed25519-consensus` to backend for ZIP215-compliant signing/verification (deferred to Phase 2) +4. ⬜ Create cross-environment test vectors (deferred to Phase 2) + +**Deliverables:** +- ✅ Migration `04_account_backups.sql` (new table) +- ✅ `POST/GET/DELETE /auth/backup/:kid` endpoints +- ✅ Repository layer (`BackupRepo` trait + `PgBackupRepo` implementation) +- ✅ Unit tests for HTTP handlers (4 tests) +- ✅ Integration tests for repository (9 tests) +- ⬜ Test vectors in `service/tests/crypto_vectors.rs` (Phase 2) + +### Phase 2: WASM Canonicalization + +1. Create WASM crate for canonicalization +2. Build with `wasm-pack` +3. Integrate into frontend build +4. Verify canonical bytes match between Rust and WASM + +**Deliverables:** +- `service/src/wasm/` module +- `web/src/wasm/canonical.ts` bindings +- Integration tests + +### Phase 3: Crypto Worker + +1. Implement crypto worker with Argon2id/PBKDF2 +2. Add AES-256-GCM encryption/decryption +3. Implement WebCrypto Ed25519 import +4. 
Add feature detection + +**Deliverables:** +- `crypto.worker.ts` +- `kdf.ts` with Argon2/PBKDF2 +- `feature-detect.ts` +- Worker unit tests + +### Phase 4: Key Persistence + +1. Implement IndexedDB storage for CryptoKey +2. Add TTL and expiration handling +3. Implement key lifecycle (logout, clear) + +**Deliverables:** +- `indexeddb.ts` +- Session management integration +- E2E tests for persistence + +### Phase 5: UI Integration + +1. Build backup setup flow (during signup or settings) +2. Build recovery flow +3. Add fallback warnings for unsupported browsers +4. Add CSP headers + +**Deliverables:** +- `BackupSetup.tsx` +- `RecoveryFlow.tsx` +- CSP configuration in `index.html` + +### Phase 6: Hardening + +1. Add rate limiting to recovery endpoint +2. Audit for timing attacks +3. Add monitoring/logging for recovery attempts +4. Security review + +**Deliverables:** +- Rate limiter middleware +- Audit log events +- Security review document + +--- + +## Testing Strategy + +### Unit Tests + +| Component | Coverage | +|-----------|----------| +| KDF derivation | Argon2id and PBKDF2 with known vectors | +| AES-GCM | Encrypt/decrypt round-trip | +| Canonicalization | RFC 8785 compliance | +| Feature detection | Mock various browser capabilities | + +### Integration Tests + +| Scenario | Description | +|----------|-------------| +| Full backup/recovery | Create backup, clear state, recover | +| Cross-browser | Verify signatures from Chrome validate in Firefox | +| Fallback path | Test WASM signer when WebCrypto unavailable | +| Rate limiting | Verify 429 after threshold | + +### E2E Tests (Playwright) + +```typescript +test('backup and recovery flow', async ({ page }) => { + // 1. Sign up and create backup + await page.goto('/signup'); + await page.fill('[name="password"]', 'test-password-123'); + await page.click('text=Create Backup'); + + // 2. Clear local state (simulate new device) + await page.evaluate(() => indexedDB.deleteDatabase('keyval-store')); + + // 3. Recover + await page.goto('/recover'); + await page.fill('[name="kid"]', 'test-kid'); + await page.fill('[name="password"]', 'test-password-123'); + await page.click('text=Recover'); + + // 4. Verify can sign + await page.click('text=Sign Test Message'); + await expect(page.locator('.signature')).toBeVisible(); +}); +``` + +### Test Vectors + +Store in `service/tests/fixtures/crypto_vectors.json`: + +```json +{ + "ed25519": [ + { + "private_key": "base64url...", + "public_key": "base64url...", + "kid": "base64url...", + "message": "base64url...", + "signature": "base64url..." + } + ], + "backup": [ + { + "password": "test-password", + "salt": "base64url...", + "kdf": "argon2id", + "kdf_params": { "m": 19456, "t": 2, "p": 1 }, + "private_key": "base64url...", + "encrypted_backup": "base64url..." 
+ } + ] +} +``` + +--- + +## Security Considerations + +### Threat Model + +| Threat | Mitigation | Residual Risk | +|--------|------------|---------------| +| Server compromise | Keys encrypted client-side; server never sees plaintext | Attacker gets encrypted blobs, can attempt offline brute force | +| XSS | CSP, non-extractable keys | Attacker can call `sign()` while key in memory | +| Malicious extension | Non-extractable prevents export | Extension can still invoke signing | +| Weak password | Argon2id with high memory cost | User education; consider password strength meter | +| Timing attacks | Constant-time comparison for auth tags | Review crypto library implementations | + +### CSP Headers + +```html + + +``` + +### Audit Logging + +Log recovery attempts (without sensitive data): + +```rust +#[derive(Serialize)] +struct RecoveryAttemptLog { + kid: String, + ip_hash: String, // Hashed for privacy + success: bool, // Inferred from subsequent auth + timestamp: DateTime, +} +``` + +--- + +## Open Questions + +1. **Password strength requirements:** Minimum length? Complexity rules? Strength meter? +2. **Backup versioning:** How to handle algorithm upgrades? Force re-backup? +3. **Multi-device:** Should backup blob be device-specific or shared? +4. **Recovery phrase:** BIP39 mnemonic as alternative to password? Both? +5. **Session duration:** How long should CryptoKey persist in IndexedDB? + +--- + +## References + +- [ADR-006: WebCrypto Key Recovery](../docs/decisions/006-webcrypto-key-recovery.md) +- [Signed Envelope Spec](../docs/interfaces/signed-envelope-spec.md) +- [RFC 8785: JSON Canonicalization Scheme](https://datatracker.ietf.org/doc/html/rfc8785) +- [RFC 8032: Ed25519](https://datatracker.ietf.org/doc/html/rfc8032) +- [OWASP Password Storage Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html) +- [WebCrypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API) diff --git a/docs/decisions/006-webcrypto-key-recovery.md b/docs/decisions/006-webcrypto-key-recovery.md new file mode 100644 index 00000000..3efc1bdc --- /dev/null +++ b/docs/decisions/006-webcrypto-key-recovery.md @@ -0,0 +1,122 @@ +# ADR-006: Password-encrypted server backup of root key with WebCrypto non-extractable import + +**Date:** 2025-12-17 +**Status:** Draft +**Decision owners:** TinyCongress core team + +## Context + +We maintain canonical cryptographic rules in Rust. The web UI must sign and verify Ed25519 messages using the same message encoding as the backend. + +For MVP recovery, we want a convenience feature: the user can store the root private key on the server encrypted under a user password. During recovery the client fetches the encrypted blob, derives a key from the password, decrypts, and restores the root key for signing. + +Concern: minimize subtle crypto implementation bugs and reduce key handling risks in the web client. + +### Current state (as of 2025-12-17) + +* Frontend uses `@noble/curves` for Ed25519 key generation only. No signing implementation exists. +* Backend has SHA-256 for KID derivation but no Ed25519 signing library. +* No WASM crypto modules exist. Canonicalization and signing are not yet implemented. +* Keys are generated in component state and lost on unmount. No persistence or backup. +* Database stores only public key and KID. No encrypted backup storage. +* No CSP headers configured. + +## Decision + +1. 
**Canonical encoding and verification rules remain defined in Rust.** + + * Browser uses Rust-compiled-to-WASM (or a generated codec) for message canonicalization and any consensus-critical verification rules. +2. **Client-side signing uses WebCrypto Ed25519 when available.** + + * On recovery, after decrypting the root key, the client **imports it into WebCrypto as a `CryptoKey` with `extractable: false` and `keyUsages: ["sign"]`** (not `["sign", "verify"]`) and uses `subtle.sign()` for signing. + * **Browser requirements:** Ed25519 in WebCrypto requires Chrome 113+, Edge 113+, Safari 17+, Firefox 128+. At time of writing this covers ~92% of global browser usage. + * **Fallback strategy:** On unsupported browsers, signing falls back to the Rust/WASM module (same module used for verification). This path loses the `extractable: false` benefit but maintains functional parity. The UI should display a warning indicating reduced key isolation. +3. **We treat `extractable: false` as footgun reduction, not a hard security boundary.** + + * It prevents accidental export via WebCrypto APIs and discourages app-level serialization of key bytes. + * It does not protect against XSS, malicious extensions, or malicious code running in-origin (which can still call `sign()`). +4. **Backup blob lifecycle:** + + * The server stores only the encrypted key blob and associated metadata (KDF params, salt, version). + * The client does not re-export the private key from WebCrypto. If re-backup is needed (e.g., password change), the user must re-enter their recovery phrase/seed to derive the key material again. The original encrypted blob can be re-used if only extending to additional devices. + +## Consequences + +**Positive** + +* Reduces accidental leakage risks: no exporting, no logging, no app state persistence of decrypted bytes after import. +* Uses well-maintained browser crypto primitives for signing. +* Keeps protocol correctness centralized in Rust (canonical bytes-to-sign, verification semantics). + +**Negative / Limitations** + +* Decrypted key material exists in client memory at least briefly during recovery import. This remains vulnerable to XSS and hostile extensions. +* Non-extractable keys can still be abused by any code with access to the `CryptoKey` handle to sign arbitrary messages. +* Requires browser support for Ed25519 WebCrypto (~92% coverage). Fallback to WASM signing loses `extractable: false` benefit. +* Worker isolation requirement means `CryptoKey` handle management adds complexity (transferability, lifetime across worker restarts). +* Significant implementation work required: WASM module for canonicalization, Web Worker, IndexedDB persistence layer, new API endpoints, database migration. + +## Alternatives considered + +1. **Duplicate crypto in TypeScript and Rust.** Rejected due to high risk of subtle divergence in encoding/validation and library semantics. +2. **Rust/WASM for all signing and verification.** Not preferred for private key hygiene because keys live in JS/WASM memory and cannot be made meaningfully non-extractable. +3. **No server backup.** Stronger security but worse UX and recovery. +4. **WebAuthn/passkeys (device-bound keys) for recovery/signing.** Preferred future direction but out of scope for MVP. +5. **Social recovery / secret sharing.** Also future work, more complexity for MVP. + +## Implementation notes + +* **Encryption format:** AES-256-GCM with a versioned envelope containing KDF parameters, salt, and nonce. 
+* **KDF:** Argon2id (preferred) with OWASP-recommended parameters (m=19456 KiB, t=2, p=1). PBKDF2-SHA256 with 600,000 iterations as fallback where Argon2 is unavailable. +* **Worker isolation (required):** Decryption and key import MUST occur in a dedicated Web Worker. The plaintext key bytes never transit to the main thread. The Worker imports directly into WebCrypto and returns only the `CryptoKey` handle (which is transferable via `postMessage`). +* Best-effort zeroization of temporary buffers after import (TypedArray overwrite). +* **Decryption failure handling:** + * Wrong password → AES-GCM authentication tag fails → generic "incorrect password or corrupted backup" error (no distinction to avoid oracle attacks). + * Corrupted blob → same error message. Client may offer retry or manual recovery phrase entry. +* Strict separation between: + + * **Canonicalization** (Rust/WASM) + * **Signing** (WebCrypto when available, WASM fallback) + * **Verification** (Rust/WASM only — per [ADR-007](007-zip215-verification.md), ZIP215 semantics require WASM; WebCrypto `verify()` cannot be used) + +* **CryptoKey session persistence:** After import, the `CryptoKey` handle is stored in IndexedDB (which supports structured clone of non-extractable keys). On page reload, retrieve from IndexedDB rather than re-decrypting. + +* **Key lifecycle:** + * Logout: Clear `CryptoKey` from IndexedDB and Worker memory. + * Tab close: `CryptoKey` persists in IndexedDB until explicit logout or TTL expiry. + * Session expiry: IndexedDB entry has TTL; expired keys require re-recovery. + * Worker restart: Retrieve `CryptoKey` from IndexedDB; no re-decryption needed. + +* **Server-side rate limiting:** Recovery endpoint (`GET /auth/backup/:kid`) rate-limited to 5 attempts per minute per IP. Failed decryption attempts (inferred from lack of subsequent authenticated request) may trigger progressive delays. + +* **Database schema:** Backups stored in separate `account_backups` table (not columns on `accounts`) for cleaner separation, no NULLs, and future extensibility (multiple backup methods, history). + ```sql + CREATE TABLE account_backups ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + account_id UUID NOT NULL REFERENCES accounts(id) ON DELETE CASCADE, + kid TEXT NOT NULL, -- denormalized for recovery lookup + encrypted_backup BYTEA NOT NULL, + salt BYTEA NOT NULL, + kdf_algorithm TEXT NOT NULL CHECK (kdf_algorithm IN ('argon2id', 'pbkdf2')), + version INTEGER NOT NULL DEFAULT 1, + created_at TIMESTAMPTZ NOT NULL DEFAULT now(), + UNIQUE (account_id), + UNIQUE (kid) + ); + ``` + +## Related decisions + +* [ADR-007: ZIP215 verification](007-zip215-verification.md) — All verification uses ZIP215 semantics via WASM. +* [Signed Envelope Spec](../interfaces/signed-envelope-spec.md) — Defines envelope structure and canonicalization. + +## Follow-ups + +* Add cross-environment test vectors: canonical payload bytes and known signatures. +* Define XSS hardening baseline (CSP, dependency controls) since client-side recovery inherently raises stakes. +* Evaluate WebAuthn-based recovery/signing post-MVP. +* Define canonical message format (bytes-to-sign structure) for Ed25519 signatures. +* Add backend dependencies: `ed25519-consensus` (for ZIP215 verification), `aes-gcm`, `argon2` crates. +* Build WASM crypto module with `wasm-pack` for canonicalization. +* Implement browser feature detection for Ed25519 WebCrypto support. +* Add Vite worker build configuration for dedicated crypto Worker. 
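+
+As an appendix-style sketch, the main-thread side of the recovery flow the implementation notes above describe (import path, endpoint path, worker file name, message shapes, and the TTL value are all illustrative; session duration remains an open question):
+
+```typescript
+import { storeSigningKey } from '../recovery/indexeddb'; // IndexedDB persistence (see impl spec)
+
+// Fetch the encrypted blob, let the dedicated Worker decrypt + import it,
+// then persist the non-extractable CryptoKey handle in IndexedDB with a TTL.
+async function recoverRootKey(kid: string, password: string): Promise<CryptoKey> {
+  const res = await fetch(`/auth/backup/${encodeURIComponent(kid)}`);
+  if (!res.ok) throw new Error('No backup found for this key ID');
+  const backup = await res.json();
+
+  const worker = new Worker(new URL('./crypto.worker.ts', import.meta.url), { type: 'module' });
+  const cryptoKey: CryptoKey = await new Promise((resolve, reject) => {
+    worker.onmessage = (e: MessageEvent) => {
+      if (e.data.type === 'cryptokey') resolve(e.data.handle);
+      // Wrong password and corrupted blob are deliberately indistinguishable.
+      else reject(new Error('Incorrect password or corrupted backup'));
+    };
+    worker.postMessage({ type: 'decrypt', backup, password });
+  });
+  worker.terminate();
+
+  const TTL_MS = 12 * 60 * 60 * 1000; // placeholder session TTL
+  await storeSigningKey(cryptoKey, kid, TTL_MS);
+  return cryptoKey;
+}
+```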
diff --git a/docs/decisions/007-zip215-verification.md b/docs/decisions/007-zip215-verification.md new file mode 100644 index 00000000..67ddb55d --- /dev/null +++ b/docs/decisions/007-zip215-verification.md @@ -0,0 +1,90 @@ +# ADR-007: Adopt ZIP215 semantics for Ed25519 verification + +**Date:** 2025-12-17 +**Status:** Draft +**Decision owners:** TinyCongress core team + +## Context + +TinyCongress relies on Ed25519 signatures for identity, endorsements, and protocol actions. Verification correctness must be stable across implementations and environments (Rust backend, browser UI, potential third-party nodes). + +Ed25519 has known edge cases where different libraries disagree on signature validity, especially around: + +* Non-canonical encodings. +* Small-order points. +* Cofactor-related behavior. + +Divergent verification rules can cause consensus splits, replay acceptance mismatches, or "valid in one place, invalid in another" failures. + +## Decision + +We adopt **ZIP215** semantics as the canonical rule set for Ed25519 signature verification. + +* All consensus-critical verification in TinyCongress MUST follow ZIP215. +* The Rust backend uses a ZIP215-compliant verifier (`ed25519-consensus` crate). +* Any non-Rust environment (UI, third-party tooling) must either: + + * Delegate verification to Rust (e.g., via WASM), or + * Prove equivalence to ZIP215 via test vectors. + +## Rationale + +* ZIP215 is explicitly designed to **eliminate cross-implementation divergence** in Ed25519 verification. +* It defines acceptance rules that are stable even in adversarial or heterogeneous environments. +* It avoids historical footguns where "stricter" or "looser" interpretations silently disagree. +* This matches our long-term goal of open participation and third-party interoperability. + +## Consequences + +**Positive** + +* Eliminates an entire class of consensus and interoperability bugs. +* Makes verification behavior explicit and documented. +* Supports future decentralization and third-party nodes safely. + +**Negative** + +* ZIP215 accepts some signatures that older "strict" verifiers may reject. +* Some popular Ed25519 libraries do not default to ZIP215 semantics and must be wrapped or replaced. +* Slightly more conceptual overhead for contributors unfamiliar with Ed25519 edge cases. +* Browser verification cannot use WebCrypto `verify()` — must always delegate to WASM. + +## Alternatives considered + +1. **RFC 8032-style strict verification only.** + Rejected due to known divergence between implementations and lack of consensus safety guarantees. +2. **Library-default verification behavior.** + Rejected because defaults vary and change over time. +3. **Ad-hoc "stricter than strict" rules.** + Rejected as non-standard and likely to cause interoperability failures. +4. **Hybrid approach (WASM for consensus, WebCrypto for UI hints).** + Rejected due to risk of UI showing "valid" when consensus would reject, and maintenance burden of two paths. + +## Implementation notes + +* **Signing is not affected:** ZIP215 concerns verification only. Signing via WebCrypto Ed25519 produces RFC 8032-compliant signatures, which are valid under ZIP215. +* **Browser verification must use WASM:** WebCrypto `verify()` does not implement ZIP215. All verification in the browser MUST go through the Rust/WASM module. +* Use `ed25519-consensus` crate (not `ed25519-dalek`) for ZIP215-compliant verification. 
+* Canonical test vectors (valid and invalid) are generated from Rust and run in CI across: + + * Backend verification. + * Browser verification via WASM. +* Documentation explicitly states: "Signature validity is defined by ZIP215, not by library defaults." + +## Related decisions + +* [ADR-006: WebCrypto key recovery](006-webcrypto-key-recovery.md) — Signing uses WebCrypto; verification uses WASM per this ADR. +* [Signed Envelope Spec](../interfaces/signed-envelope-spec.md) — Envelope verification must use ZIP215. + +## Follow-ups + +* Publish a short "Why ZIP215" doc for contributors. +* Add fuzz and regression tests for malformed keys and signatures. +* Update signed-envelope-spec.md to reference ZIP215 requirement. +* Generate ZIP215-specific test vectors (small-order points, non-canonical S values). + +## References + +* [ZIP215 Specification](https://zips.z.cash/zip-0215) +* [ed25519-consensus crate](https://crates.io/crates/ed25519-consensus) +* [Taming the many EdDSAs](https://eprint.iacr.org/2020/1244) — Academic paper on Ed25519 verification variants diff --git a/service/migrations/04_account_backups.sql b/service/migrations/04_account_backups.sql new file mode 100644 index 00000000..9f2ea5bc --- /dev/null +++ b/service/migrations/04_account_backups.sql @@ -0,0 +1,28 @@ +-- Encrypted backup storage for root keys +-- Separate table (not columns on accounts) for: +-- - No NULLs on accounts table (backup is optional) +-- - Clean domain separation +-- - Future extensibility (multiple backup methods, history) + +CREATE TABLE IF NOT EXISTS account_backups ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + account_id UUID NOT NULL REFERENCES accounts(id) ON DELETE CASCADE, + kid TEXT NOT NULL, -- denormalized from accounts.root_kid for join-free lookup + encrypted_backup BYTEA NOT NULL, -- binary envelope: version + KDF params + nonce + ciphertext + salt BYTEA NOT NULL, -- KDF salt (16 bytes) + kdf_algorithm TEXT NOT NULL CHECK (kdf_algorithm IN ('argon2id', 'pbkdf2')), + version INTEGER NOT NULL DEFAULT 1, + created_at TIMESTAMPTZ NOT NULL DEFAULT now(), + + CONSTRAINT uq_account_backups_account UNIQUE (account_id), + CONSTRAINT uq_account_backups_kid UNIQUE (kid) +); + +-- Primary lookup path for recovery (by KID, no join needed) +CREATE INDEX idx_account_backups_kid ON account_backups(kid); + +COMMENT ON TABLE account_backups IS 'Password-encrypted root key backups for account recovery'; +COMMENT ON COLUMN account_backups.kid IS 'Key ID (denormalized from accounts.root_kid for join-free lookup)'; +COMMENT ON COLUMN account_backups.encrypted_backup IS 'Binary envelope: version + KDF params + nonce + AES-256-GCM ciphertext'; +COMMENT ON COLUMN account_backups.salt IS 'Random salt for KDF (16 bytes)'; +COMMENT ON COLUMN account_backups.kdf_algorithm IS 'KDF used: argon2id (preferred) or pbkdf2 (fallback)'; diff --git a/service/src/identity/http/backup.rs b/service/src/identity/http/backup.rs new file mode 100644 index 00000000..3d258673 --- /dev/null +++ b/service/src/identity/http/backup.rs @@ -0,0 +1,334 @@ +//! 
HTTP handlers for encrypted key backup + +use std::sync::Arc; + +use axum::{ + extract::{Extension, Path}, + http::StatusCode, + response::IntoResponse, + Json, +}; +use chrono::{DateTime, Utc}; +use serde::{Deserialize, Serialize}; + +use crate::identity::repo::{BackupRepo, BackupRepoError}; +use tc_crypto::{decode_base64url_native as decode_base64url, encode_base64url}; + +/// Create backup request payload +#[derive(Debug, Deserialize)] +pub struct CreateBackupRequest { + /// Key ID (must match an existing account's `root_kid`) + pub kid: String, + /// Base64url-encoded encrypted backup envelope + pub encrypted_backup: String, +} + +/// Create backup response +#[derive(Debug, Serialize)] +pub struct CreateBackupResponse { + pub kid: String, + pub created_at: DateTime, +} + +/// Get backup response +#[derive(Debug, Serialize)] +pub struct GetBackupResponse { + pub encrypted_backup: String, + pub salt: String, + pub kdf_algorithm: String, + pub version: i32, +} + +/// Error response +#[derive(Debug, Serialize)] +pub struct ErrorResponse { + pub error: String, +} + +/// Parsed backup envelope metadata +struct EnvelopeMetadata { + version: u8, + kdf_algorithm: &'static str, + salt: Vec, +} + +/// Parse and validate backup envelope, extracting metadata +fn parse_envelope(backup_bytes: &[u8]) -> Result { + // Minimum: 1 + 1 + 4 + 16 + 12 + 48 = 82 bytes + if backup_bytes.len() < 82 { + return Err("Encrypted backup envelope too small"); + } + + let version = backup_bytes[0]; + if version != 1 { + return Err("Unsupported backup version"); + } + + let kdf_id = backup_bytes[1]; + let kdf_algorithm = match kdf_id { + 1 => "argon2id", + 2 => "pbkdf2", + _ => return Err("Unknown KDF algorithm"), + }; + + // Extract salt based on KDF type + // Argon2: params at bytes 2-13 (12 bytes), salt at 14-29 (16 bytes) + // PBKDF2: params at bytes 2-5 (4 bytes), salt at 6-21 (16 bytes) + let salt_offset = if kdf_id == 1 { 14 } else { 6 }; + let salt = backup_bytes[salt_offset..salt_offset + 16].to_vec(); + + Ok(EnvelopeMetadata { + version, + kdf_algorithm, + salt, + }) +} + +/// Convert `BackupRepoError` to HTTP response +fn backup_error_response(e: BackupRepoError) -> (StatusCode, Json) { + match e { + BackupRepoError::DuplicateAccount => ( + StatusCode::CONFLICT, + Json(ErrorResponse { + error: "Backup already exists for this account".to_string(), + }), + ), + BackupRepoError::DuplicateKid => ( + StatusCode::CONFLICT, + Json(ErrorResponse { + error: "Backup already exists for this key ID".to_string(), + }), + ), + BackupRepoError::AccountNotFound => ( + StatusCode::NOT_FOUND, + Json(ErrorResponse { + error: "Account not found".to_string(), + }), + ), + BackupRepoError::NotFound => ( + StatusCode::NOT_FOUND, + Json(ErrorResponse { + error: "Backup not found".to_string(), + }), + ), + BackupRepoError::Database(db_err) => { + tracing::error!("Backup operation failed: {}", db_err); + ( + StatusCode::INTERNAL_SERVER_ERROR, + Json(ErrorResponse { + error: "Internal server error".to_string(), + }), + ) + } + } +} + +/// Handle create backup request +/// +/// POST /auth/backup +pub async fn create_backup( + Extension(repo): Extension>, + Json(req): Json, +) -> impl IntoResponse { + // Validate KID format + let kid = req.kid.trim(); + if kid.is_empty() || kid.len() > 64 { + return bad_request("Invalid kid format"); + } + + // Decode encrypted backup envelope + let Ok(backup_bytes) = decode_base64url(&req.encrypted_backup) else { + return bad_request("Invalid base64url encoding for encrypted_backup"); + }; + + // Parse 
envelope metadata + let metadata = match parse_envelope(&backup_bytes) { + Ok(m) => m, + Err(msg) => return bad_request(msg), + }; + + // TODO: Add account lookup by kid or require account_id in request + let account_id = uuid::Uuid::nil(); // Placeholder + + match repo + .create( + account_id, + kid, + &backup_bytes, + &metadata.salt, + metadata.kdf_algorithm, + i32::from(metadata.version), + ) + .await + { + Ok(created) => ( + StatusCode::CREATED, + Json(CreateBackupResponse { + kid: created.kid, + created_at: created.created_at, + }), + ) + .into_response(), + Err(e) => backup_error_response(e).into_response(), + } +} + +/// Helper to create bad request response +fn bad_request(msg: &str) -> axum::response::Response { + ( + StatusCode::BAD_REQUEST, + Json(ErrorResponse { + error: msg.to_string(), + }), + ) + .into_response() +} + +/// Handle get backup request +/// +/// GET /auth/backup/:kid +pub async fn get_backup( + Extension(repo): Extension>, + Path(kid): Path, +) -> impl IntoResponse { + match repo.get_by_kid(&kid).await { + Ok(backup) => ( + StatusCode::OK, + Json(GetBackupResponse { + encrypted_backup: encode_base64url(&backup.encrypted_backup), + salt: encode_base64url(&backup.salt), + kdf_algorithm: backup.kdf_algorithm, + version: backup.version, + }), + ) + .into_response(), + Err(e) => backup_error_response(e).into_response(), + } +} + +/// Handle delete backup request +/// +/// DELETE /auth/backup/:kid +pub async fn delete_backup( + Extension(repo): Extension>, + Path(kid): Path, +) -> impl IntoResponse { + // TODO: Add authentication - require signed envelope proving ownership + match repo.delete_by_kid(&kid).await { + Ok(()) => StatusCode::NO_CONTENT.into_response(), + Err(e) => backup_error_response(e).into_response(), + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::identity::repo::mock::MockBackupRepo; + use crate::identity::repo::{BackupRecord, CreatedBackup}; + use axum::{ + body::Body, + http::{Request, StatusCode}, + routing::{delete, get, post}, + Router, + }; + use chrono::Utc; + use tower::ServiceExt; + use uuid::Uuid; + + fn test_router(repo: Arc) -> Router { + Router::new() + .route("/auth/backup", post(create_backup)) + .route("/auth/backup/{kid}", get(get_backup)) + .route("/auth/backup/{kid}", delete(delete_backup)) + .layer(Extension(repo)) + } + + #[tokio::test] + async fn test_get_backup_success() { + let mock_repo = Arc::new(MockBackupRepo::new()); + mock_repo.set_get_result(Ok(BackupRecord { + id: Uuid::new_v4(), + account_id: Uuid::new_v4(), + kid: "test-kid".to_string(), + encrypted_backup: vec![1u8; 100], + salt: vec![2u8; 16], + kdf_algorithm: "argon2id".to_string(), + version: 1, + created_at: Utc::now(), + })); + let app = test_router(mock_repo); + + let response = app + .oneshot( + Request::builder() + .method("GET") + .uri("/auth/backup/test-kid") + .body(Body::empty()) + .expect("request builder"), + ) + .await + .expect("response"); + + assert_eq!(response.status(), StatusCode::OK); + } + + #[tokio::test] + async fn test_get_backup_not_found() { + let mock_repo = Arc::new(MockBackupRepo::new()); + mock_repo.set_get_result(Err(BackupRepoError::NotFound)); + let app = test_router(mock_repo); + + let response = app + .oneshot( + Request::builder() + .method("GET") + .uri("/auth/backup/nonexistent") + .body(Body::empty()) + .expect("request builder"), + ) + .await + .expect("response"); + + assert_eq!(response.status(), StatusCode::NOT_FOUND); + } + + #[tokio::test] + async fn test_delete_backup_success() { + let mock_repo = 
Arc::new(MockBackupRepo::new()); + mock_repo.set_delete_result(Ok(())); + let app = test_router(mock_repo); + + let response = app + .oneshot( + Request::builder() + .method("DELETE") + .uri("/auth/backup/test-kid") + .body(Body::empty()) + .expect("request builder"), + ) + .await + .expect("response"); + + assert_eq!(response.status(), StatusCode::NO_CONTENT); + } + + #[tokio::test] + async fn test_delete_backup_not_found() { + let mock_repo = Arc::new(MockBackupRepo::new()); + mock_repo.set_delete_result(Err(BackupRepoError::NotFound)); + let app = test_router(mock_repo); + + let response = app + .oneshot( + Request::builder() + .method("DELETE") + .uri("/auth/backup/nonexistent") + .body(Body::empty()) + .expect("request builder"), + ) + .await + .expect("response"); + + assert_eq!(response.status(), StatusCode::NOT_FOUND); + } +} diff --git a/service/src/identity/http/mod.rs b/service/src/identity/http/mod.rs index db33bb7e..5bc8a2b3 100644 --- a/service/src/identity/http/mod.rs +++ b/service/src/identity/http/mod.rs @@ -1,9 +1,15 @@ //! HTTP handlers for identity system +mod backup; + use std::sync::Arc; use axum::{ - extract::Extension, http::StatusCode, response::IntoResponse, routing::post, Json, Router, + extract::Extension, + http::StatusCode, + response::IntoResponse, + routing::{delete, get, post}, + Json, Router, }; use serde::{Deserialize, Serialize}; use uuid::Uuid; @@ -33,7 +39,11 @@ pub struct ErrorResponse { /// Create identity router pub fn router() -> Router { - Router::new().route("/auth/signup", post(signup)) + Router::new() + .route("/auth/signup", post(signup)) + .route("/auth/backup", post(backup::create_backup)) + .route("/auth/backup/{kid}", get(backup::get_backup)) + .route("/auth/backup/{kid}", delete(backup::delete_backup)) } /// Handle signup request diff --git a/service/src/identity/repo/backup.rs b/service/src/identity/repo/backup.rs new file mode 100644 index 00000000..1875d2d5 --- /dev/null +++ b/service/src/identity/repo/backup.rs @@ -0,0 +1,333 @@ +//! Backup repository for encrypted key storage operations + +use async_trait::async_trait; +use chrono::{DateTime, Utc}; +use sqlx::PgPool; +use uuid::Uuid; + +/// Backup creation/retrieval result +#[derive(Debug, Clone)] +pub struct BackupRecord { + pub id: Uuid, + pub account_id: Uuid, + pub kid: String, + pub encrypted_backup: Vec, + pub salt: Vec, + pub kdf_algorithm: String, + pub version: i32, + pub created_at: DateTime, +} + +/// Backup creation result (subset of fields) +#[derive(Debug, Clone)] +pub struct CreatedBackup { + pub id: Uuid, + pub kid: String, + pub created_at: DateTime, +} + +/// Error types for backup operations +#[derive(Debug, thiserror::Error)] +pub enum BackupRepoError { + #[error("backup already exists for this account")] + DuplicateAccount, + #[error("backup already exists for this key ID")] + DuplicateKid, + #[error("backup not found")] + NotFound, + #[error("referenced account does not exist")] + AccountNotFound, + #[error("database error: {0}")] + Database(#[from] sqlx::Error), +} + +/// Repository trait for backup operations +/// +/// This trait abstracts database operations to enable unit testing +/// handlers with mock implementations. +#[async_trait] +pub trait BackupRepo: Send + Sync { + /// Create a new encrypted backup for an account + /// + /// # Errors + /// + /// Returns `BackupRepoError::DuplicateAccount` if account already has a backup. + /// Returns `BackupRepoError::DuplicateKid` if kid is already registered. 
+ /// Returns `BackupRepoError::AccountNotFound` if `account_id` doesn't exist. + async fn create( + &self, + account_id: Uuid, + kid: &str, + encrypted_backup: &[u8], + salt: &[u8], + kdf_algorithm: &str, + version: i32, + ) -> Result; + + /// Retrieve a backup by key ID + /// + /// # Errors + /// + /// Returns `BackupRepoError::NotFound` if no backup exists for this kid. + async fn get_by_kid(&self, kid: &str) -> Result; + + /// Delete a backup by key ID + /// + /// # Errors + /// + /// Returns `BackupRepoError::NotFound` if no backup exists for this kid. + async fn delete_by_kid(&self, kid: &str) -> Result<(), BackupRepoError>; +} + +/// `PostgreSQL` implementation of [`BackupRepo`] +pub struct PgBackupRepo { + pool: PgPool, +} + +impl PgBackupRepo { + #[must_use] + pub const fn new(pool: PgPool) -> Self { + Self { pool } + } +} + +#[async_trait] +impl BackupRepo for PgBackupRepo { + async fn create( + &self, + account_id: Uuid, + kid: &str, + encrypted_backup: &[u8], + salt: &[u8], + kdf_algorithm: &str, + version: i32, + ) -> Result { + let id = Uuid::new_v4(); + let now = Utc::now(); + + let result = sqlx::query( + r" + INSERT INTO account_backups (id, account_id, kid, encrypted_backup, salt, kdf_algorithm, version, created_at) + VALUES ($1, $2, $3, $4, $5, $6, $7, $8) + ", + ) + .bind(id) + .bind(account_id) + .bind(kid) + .bind(encrypted_backup) + .bind(salt) + .bind(kdf_algorithm) + .bind(version) + .bind(now) + .execute(&self.pool) + .await; + + match result { + Ok(_) => Ok(CreatedBackup { + id, + kid: kid.to_string(), + created_at: now, + }), + Err(e) => { + if let sqlx::Error::Database(db_err) = &e { + // Check for constraint violations + if let Some(constraint) = db_err.constraint() { + match constraint { + "uq_account_backups_account" => { + return Err(BackupRepoError::DuplicateAccount) + } + "uq_account_backups_kid" => return Err(BackupRepoError::DuplicateKid), + "account_backups_account_id_fkey" => { + return Err(BackupRepoError::AccountNotFound) + } + _ => {} + } + } + } + Err(BackupRepoError::Database(e)) + } + } + } + + async fn get_by_kid(&self, kid: &str) -> Result { + let result = sqlx::query_as::< + _, + ( + Uuid, + Uuid, + String, + Vec, + Vec, + String, + i32, + DateTime, + ), + >( + r" + SELECT id, account_id, kid, encrypted_backup, salt, kdf_algorithm, version, created_at + FROM account_backups + WHERE kid = $1 + ", + ) + .bind(kid) + .fetch_optional(&self.pool) + .await?; + + match result { + Some(( + id, + account_id, + kid, + encrypted_backup, + salt, + kdf_algorithm, + version, + created_at, + )) => Ok(BackupRecord { + id, + account_id, + kid, + encrypted_backup, + salt, + kdf_algorithm, + version, + created_at, + }), + None => Err(BackupRepoError::NotFound), + } + } + + async fn delete_by_kid(&self, kid: &str) -> Result<(), BackupRepoError> { + let result = sqlx::query( + r" + DELETE FROM account_backups + WHERE kid = $1 + ", + ) + .bind(kid) + .execute(&self.pool) + .await?; + + if result.rows_affected() == 0 { + return Err(BackupRepoError::NotFound); + } + + Ok(()) + } +} + +#[cfg(any(test, feature = "test-utils"))] +#[allow(clippy::expect_used)] +pub mod mock { + //! Mock implementation for testing + + use super::{async_trait, BackupRecord, BackupRepo, BackupRepoError, CreatedBackup, Utc, Uuid}; + use std::sync::Mutex; + + /// Mock backup repository for unit tests. + pub struct MockBackupRepo { + /// Preset result to return from `create()`. + pub create_result: Mutex>>, + /// Preset result to return from `get_by_kid()`. 
+ pub get_result: Mutex>>, + /// Preset result to return from `delete_by_kid()`. + pub delete_result: Mutex>>, + } + + impl MockBackupRepo { + /// Create a new mock repository. + #[must_use] + pub const fn new() -> Self { + Self { + create_result: Mutex::new(None), + get_result: Mutex::new(None), + delete_result: Mutex::new(None), + } + } + + /// Set the result that `create()` will return. + /// + /// # Panics + /// + /// Panics if the internal mutex is poisoned. + pub fn set_create_result(&self, result: Result) { + *self.create_result.lock().expect("lock poisoned") = Some(result); + } + + /// Set the result that `get_by_kid()` will return. + /// + /// # Panics + /// + /// Panics if the internal mutex is poisoned. + pub fn set_get_result(&self, result: Result) { + *self.get_result.lock().expect("lock poisoned") = Some(result); + } + + /// Set the result that `delete_by_kid()` will return. + /// + /// # Panics + /// + /// Panics if the internal mutex is poisoned. + pub fn set_delete_result(&self, result: Result<(), BackupRepoError>) { + *self.delete_result.lock().expect("lock poisoned") = Some(result); + } + } + + impl Default for MockBackupRepo { + fn default() -> Self { + Self::new() + } + } + + #[async_trait] + impl BackupRepo for MockBackupRepo { + async fn create( + &self, + _account_id: Uuid, + kid: &str, + _encrypted_backup: &[u8], + _salt: &[u8], + _kdf_algorithm: &str, + _version: i32, + ) -> Result { + self.create_result + .lock() + .expect("lock poisoned") + .take() + .unwrap_or_else(|| { + Ok(CreatedBackup { + id: Uuid::new_v4(), + kid: kid.to_string(), + created_at: Utc::now(), + }) + }) + } + + async fn get_by_kid(&self, kid: &str) -> Result { + self.get_result + .lock() + .expect("lock poisoned") + .take() + .unwrap_or_else(|| { + Ok(BackupRecord { + id: Uuid::new_v4(), + account_id: Uuid::new_v4(), + kid: kid.to_string(), + encrypted_backup: vec![0u8; 48], + salt: vec![0u8; 16], + kdf_algorithm: "argon2id".to_string(), + version: 1, + created_at: Utc::now(), + }) + }) + } + + async fn delete_by_kid(&self, _kid: &str) -> Result<(), BackupRepoError> { + self.delete_result + .lock() + .expect("lock poisoned") + .take() + .unwrap_or(Ok(())) + } + } +} diff --git a/service/src/identity/repo/mod.rs b/service/src/identity/repo/mod.rs index 926e7132..14e3d5dd 100644 --- a/service/src/identity/repo/mod.rs +++ b/service/src/identity/repo/mod.rs @@ -1,13 +1,16 @@ //! 
Repository layer for identity persistence pub mod accounts; +pub mod backup; pub use accounts::{ create_account_with_executor, AccountRepo, AccountRepoError, CreatedAccount, PgAccountRepo, }; +pub use backup::{BackupRecord, BackupRepo, BackupRepoError, CreatedBackup, PgBackupRepo}; // Re-export mock for use in tests across the crate and integration tests #[cfg(any(test, feature = "test-utils"))] pub mod mock { pub use super::accounts::mock::MockAccountRepo; + pub use super::backup::mock::MockBackupRepo; } diff --git a/service/src/main.rs b/service/src/main.rs index f3723752..54a55429 100644 --- a/service/src/main.rs +++ b/service/src/main.rs @@ -24,7 +24,10 @@ use tinycongress_api::{ db::setup_database, graphql::{graphql_handler, graphql_playground, MutationRoot, QueryRoot}, http::{build_security_headers, security_headers_middleware}, - identity::{self, repo::PgAccountRepo}, + identity::{ + self, + repo::{PgAccountRepo, PgBackupRepo}, + }, rest::{self, ApiDoc}, }; use tower_http::cors::{AllowOrigin, Any, CorsLayer}; @@ -73,6 +76,8 @@ async fn main() -> Result<(), anyhow::Error> { // Create repositories let account_repo: Arc = Arc::new(PgAccountRepo::new(pool.clone())); + let backup_repo: Arc = + Arc::new(PgBackupRepo::new(pool.clone())); // Build CORS layer from config let cors_origins = &config.cors.allowed_origins; @@ -130,6 +135,7 @@ async fn main() -> Result<(), anyhow::Error> { .layer(Extension(schema)) .layer(Extension(pool.clone())) .layer(Extension(account_repo)) + .layer(Extension(backup_repo)) .layer(Extension(build_info)) .layer( CorsLayer::new() diff --git a/service/tests/backup_repo_tests.rs b/service/tests/backup_repo_tests.rs new file mode 100644 index 00000000..e2ee3949 --- /dev/null +++ b/service/tests/backup_repo_tests.rs @@ -0,0 +1,267 @@ +//! Integration tests for backup repository. +//! +//! Tests the `BackupRepo` trait implementation against a real database. 
+
+mod common;
+
+use common::test_db::{get_test_db, run_test};
+use sqlx::query;
+use tinycongress_api::identity::repo::{BackupRepo, BackupRepoError, PgBackupRepo};
+use uuid::Uuid;
+
+/// Helper to create an account for testing backups (backups require a valid account_id)
+async fn create_test_account(pool: &sqlx::PgPool) -> Uuid {
+    let account_id = Uuid::new_v4();
+    let kid = format!("test-kid-{}", Uuid::new_v4());
+
+    query(
+        r"
+        INSERT INTO accounts (id, username, root_pubkey, root_kid, created_at)
+        VALUES ($1, $2, $3, $4, now())
+        ",
+    )
+    .bind(account_id)
+    .bind(format!("user-{}", account_id))
+    .bind("dGVzdC1wdWJrZXk") // base64url encoded test pubkey
+    .bind(&kid)
+    .execute(pool)
+    .await
+    .expect("Failed to create test account");
+
+    account_id
+}
+
+/// Test creating a backup successfully
+#[test]
+fn test_create_backup_success() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let account_id = create_test_account(db.pool()).await;
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        let result = repo
+            .create(account_id, &kid, &encrypted_backup, &salt, "argon2id", 1)
+            .await;
+
+        assert!(result.is_ok(), "Should create backup successfully");
+        let created = result.expect("backup created");
+        assert_eq!(created.kid, kid);
+    });
+}
+
+/// Test that duplicate account backup returns error
+#[test]
+fn test_create_backup_duplicate_account() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let account_id = create_test_account(db.pool()).await;
+        let kid1 = format!("backup-kid-{}", Uuid::new_v4());
+        let kid2 = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        // First backup should succeed
+        repo.create(account_id, &kid1, &encrypted_backup, &salt, "argon2id", 1)
+            .await
+            .expect("First backup should succeed");
+
+        // Second backup for same account should fail
+        let result = repo
+            .create(account_id, &kid2, &encrypted_backup, &salt, "argon2id", 1)
+            .await;
+
+        assert!(
+            matches!(result, Err(BackupRepoError::DuplicateAccount)),
+            "Should return DuplicateAccount error"
+        );
+    });
+}
+
+/// Test that duplicate kid returns error
+#[test]
+fn test_create_backup_duplicate_kid() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let account_id1 = create_test_account(db.pool()).await;
+        let account_id2 = create_test_account(db.pool()).await;
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        // First backup should succeed
+        repo.create(account_id1, &kid, &encrypted_backup, &salt, "argon2id", 1)
+            .await
+            .expect("First backup should succeed");
+
+        // Backup with same kid for different account should fail
+        let result = repo
+            .create(account_id2, &kid, &encrypted_backup, &salt, "pbkdf2", 1)
+            .await;
+
+        assert!(
+            matches!(result, Err(BackupRepoError::DuplicateKid)),
+            "Should return DuplicateKid error"
+        );
+    });
+}
+
+/// Test that invalid account_id returns error
+#[test]
+fn test_create_backup_account_not_found() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let fake_account_id = Uuid::new_v4();
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        let result = repo
+            .create(
+                fake_account_id,
+                &kid,
+                &encrypted_backup,
+                &salt,
+                "argon2id",
+                1,
+            )
+            .await;
+
+        assert!(
+            matches!(result, Err(BackupRepoError::AccountNotFound)),
+            "Should return AccountNotFound error"
+        );
+    });
+}
+
+/// Test retrieving a backup by kid
+#[test]
+fn test_get_backup_by_kid() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let account_id = create_test_account(db.pool()).await;
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        repo.create(account_id, &kid, &encrypted_backup, &salt, "argon2id", 1)
+            .await
+            .expect("Should create backup");
+
+        let result = repo.get_by_kid(&kid).await;
+
+        assert!(result.is_ok(), "Should retrieve backup");
+        let backup = result.expect("backup retrieved");
+        assert_eq!(backup.kid, kid);
+        assert_eq!(backup.account_id, account_id);
+        assert_eq!(backup.encrypted_backup, encrypted_backup);
+        assert_eq!(backup.salt, salt);
+        assert_eq!(backup.kdf_algorithm, "argon2id");
+        assert_eq!(backup.version, 1);
+    });
+}
+
+/// Test retrieving non-existent backup returns NotFound
+#[test]
+fn test_get_backup_not_found() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let result = repo.get_by_kid("nonexistent-kid").await;
+
+        assert!(
+            matches!(result, Err(BackupRepoError::NotFound)),
+            "Should return NotFound error"
+        );
+    });
+}
+
+/// Test deleting a backup by kid
+#[test]
+fn test_delete_backup_by_kid() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let account_id = create_test_account(db.pool()).await;
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        repo.create(account_id, &kid, &encrypted_backup, &salt, "argon2id", 1)
+            .await
+            .expect("Should create backup");
+
+        // Delete should succeed
+        let delete_result = repo.delete_by_kid(&kid).await;
+        assert!(delete_result.is_ok(), "Should delete backup");
+
+        // Verify it's gone
+        let get_result = repo.get_by_kid(&kid).await;
+        assert!(
+            matches!(get_result, Err(BackupRepoError::NotFound)),
+            "Backup should be deleted"
+        );
+    });
+}
+
+/// Test deleting non-existent backup returns NotFound
+#[test]
+fn test_delete_backup_not_found() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+
+        let result = repo.delete_by_kid("nonexistent-kid").await;
+
+        assert!(
+            matches!(result, Err(BackupRepoError::NotFound)),
+            "Should return NotFound error"
+        );
+    });
+}
+
+/// Test that deleting account cascades to backup
+#[test]
+fn test_cascade_delete_on_account_removal() {
+    run_test(async {
+        let db = get_test_db().await;
+        let repo = PgBackupRepo::new(db.pool().clone());
+        let pool = db.pool();
+
+        let account_id = create_test_account(pool).await;
+        let kid = format!("backup-kid-{}", Uuid::new_v4());
+        let encrypted_backup = vec![1u8; 100];
+        let salt = vec![2u8; 16];
+
+        repo.create(account_id, &kid, &encrypted_backup, &salt, "argon2id", 1)
+            .await
+            .expect("Should create backup");
+
+        // Delete the account
+        query("DELETE FROM accounts WHERE id = $1")
+            .bind(account_id)
+            .execute(pool)
+            .await
+            .expect("Should delete account");
+
+        // Backup should be gone due to CASCADE
+        let result = repo.get_by_kid(&kid).await;
+        assert!(
+            matches!(result, Err(BackupRepoError::NotFound)),
+            "Backup should be cascade deleted"
+        );
+    });
+}
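
A minimal usage sketch of the mock repository's preset-result behaviour, for reviewers; it is illustrative and not part of the patch above. It assumes the `test-utils` feature is enabled (so `mock::MockBackupRepo` is re-exported) and that a Tokio test runtime is available; the test name is hypothetical.

```rust
// Illustrative only: a preset result is consumed once, then the mock falls
// back to its built-in Ok default.
use tinycongress_api::identity::repo::{mock::MockBackupRepo, BackupRepo, BackupRepoError};

#[tokio::test]
async fn mock_preset_result_is_consumed_once() {
    let repo = MockBackupRepo::new();

    // The preset error is returned exactly once (the mock `take()`s it)...
    repo.set_get_result(Err(BackupRepoError::NotFound));
    assert!(matches!(
        repo.get_by_kid("missing-kid").await,
        Err(BackupRepoError::NotFound)
    ));

    // ...after which the default Ok(BackupRecord) is returned again.
    let record = repo.get_by_kid("any-kid").await.expect("default record");
    assert_eq!(record.kid, "any-kid");
}
```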