In Scope: API Gateway, ML Serving, Web Dashboard, Async Workers, CLI, PostgreSQL+PostGIS+TimescaleDB, Redis, MinIO, Neo4j, Kafka, Airflow, MLflow, Climate Intelligence, scaffolded domain services.
Deferred to v2.0+: Federated Learning, Digital Twin, RL Policy, WASM, Flink, Spark+Sedona, Triton.
1.3 System Architecture Diagram
```mermaid
graph TB
    subgraph External Data
        SAT[Satellite<br/>Sentinel Landsat MODIS]
        WX[Weather<br/>ERA5 GFS]
        BIO[Biodiversity<br/>GBIF eBird]
        AQ[Air Quality<br/>OpenAQ TROPOMI]
    end
    subgraph Ingestion
        SH[STAC Harvester]
        AP[API Pollers]
        SG[Sensor Gateway]
    end
    subgraph Event Bus
        KF[[Kafka + Schema Registry]]
    end
    subgraph Processing
        AF[Airflow DAGs]
        WK[Dramatiq Workers]
    end
    subgraph Domain Services
        CS[Climate Intelligence]
        BS[Biodiversity Sentinel]
        HS[Health Shield]
        FS[Food Security]
        ES[Resource Equity]
    end
    subgraph ML Layer
        FA[FastAPI ML API]
        MF[MLflow Registry]
        FT[Feature Store]
        AG[Agent Orchestrator]
    end
    subgraph Gateway
        NS[NestJS API<br/>REST+GraphQL+WS]
    end
    subgraph Clients
        WB[Next.js 15 Dashboard]
        CL[Python CLI]
    end
    subgraph Storage
        PG[(PostgreSQL 16<br/>PostGIS TimescaleDB pgvector)]
        RD[(Redis 7)]
        NE[(Neo4j 5)]
        MO[(MinIO)]
    end
    SAT --> SH
    WX --> AP
    BIO --> AP
    AQ --> AP
    SH --> KF
    AP --> KF
    SG --> KF
    KF --> AF
    KF --> WK
    AF --> PG
    AF --> MO
    WK --> PG
    CS --> PG
    CS --> RD
    CS --> FA
    BS --> NE
    BS --> PG
    FA --> MF
    FA --> FT
    AG --> CS
    AG --> NE
    NS --> CS
    NS --> BS
    NS --> HS
    NS --> FS
    NS --> ES
    NS --> FA
    NS --> AG
    WB --> NS
    CL --> NS
```
1.4 Key Architectural Decisions
ADR-001: Polyglot Monorepo
Decision: Single monorepo with TypeScript for API/frontend and Python for ML/data.
✅ Shared types via OpenAPI code generation
✅ Single CI/CD pipeline; atomic cross-service changes
```mermaid
graph LR
    subgraph Ingest
        S1[STAC Harvester]
        S2[CDS API - ERA5]
        S3[GBIF Client]
        S4[OpenAQ Client]
    end
    subgraph Bus
        K[Kafka Topics]
    end
    subgraph Process
        AF[Airflow DAGs]
        WK[Workers]
    end
    subgraph Store
        PG[(PostGIS)]
        TS[(TimescaleDB)]
        MN[(MinIO)]
        NE[(Neo4j)]
        RD[(Redis)]
    end
    subgraph Serve
        API[REST + GraphQL]
        WS[WebSocket]
        ML[ML Inference]
    end
    S1 & S2 & S3 & S4 --> K
    K --> AF & WK
    AF --> PG & TS & MN
    WK --> NE & RD
    PG & TS --> API
    RD --> WS
    MN --> API
    NE --> API
    PG --> ML
```
3.2 Database Schema Design
3.2.1 Core Schema — PostgreSQL + PostGIS
```sql
CREATE SCHEMA eco_core;

CREATE TABLE eco_core.regions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    h3_index VARCHAR(15) NOT NULL,
    h3_resolution SMALLINT NOT NULL,
    geometry GEOMETRY(MultiPolygon, 4326) NOT NULL,
    area_km2 DOUBLE PRECISION,
    country_iso3 CHAR(3),
    admin_level SMALLINT,
    properties JSONB DEFAULT '{}'
);
CREATE INDEX idx_regions_h3 ON eco_core.regions (h3_index);
CREATE INDEX idx_regions_geom ON eco_core.regions USING GIST (geometry);

CREATE TABLE eco_core.data_sources (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    source_type VARCHAR(50) NOT NULL,
    provider VARCHAR(255),
    license VARCHAR(100),
    stac_collection_id VARCHAR(255),
    config JSONB DEFAULT '{}',
    is_active BOOLEAN DEFAULT TRUE,
    last_sync_at TIMESTAMPTZ
);

CREATE TABLE eco_core.catalog_items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    stac_id VARCHAR(255) NOT NULL,
    collection_id VARCHAR(255) NOT NULL,
    source_id UUID REFERENCES eco_core.data_sources(id),
    geometry GEOMETRY(Geometry, 4326) NOT NULL,
    bbox DOUBLE PRECISION[4],
    datetime_start TIMESTAMPTZ NOT NULL,
    datetime_end TIMESTAMPTZ,
    h3_cells VARCHAR(15)[],
    properties JSONB NOT NULL DEFAULT '{}',
    assets JSONB NOT NULL DEFAULT '{}'
);
CREATE INDEX idx_catalog_geom ON eco_core.catalog_items USING GIST (geometry);
CREATE INDEX idx_catalog_h3 ON eco_core.catalog_items USING GIN (h3_cells);

CREATE TABLE eco_core.users (
    id UUID PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    role VARCHAR(50) DEFAULT 'viewer',
    api_key_hash VARCHAR(255),
    quota_tier VARCHAR(50) DEFAULT 'free',
    preferences JSONB DEFAULT '{}'
);

CREATE TABLE eco_core.alerts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    alert_type VARCHAR(100) NOT NULL,
    domain VARCHAR(50) NOT NULL,
    severity VARCHAR(20) NOT NULL,
    title VARCHAR(500) NOT NULL,
    description TEXT,
    geometry GEOMETRY(Geometry, 4326),
    h3_index VARCHAR(15),
    valid_from TIMESTAMPTZ NOT NULL,
    valid_until TIMESTAMPTZ,
    metadata JSONB DEFAULT '{}'
);
```
3.2.2 Climate Schema — TimescaleDB Hypertables
```sql
CREATE SCHEMA eco_climate;

CREATE TABLE eco_climate.observations (
    id BIGSERIAL,
    h3_index VARCHAR(15) NOT NULL,
    observed_at TIMESTAMPTZ NOT NULL,
    variable VARCHAR(50) NOT NULL,
    value DOUBLE PRECISION NOT NULL,
    unit VARCHAR(20) NOT NULL,
    quality_flag SMALLINT DEFAULT 0,
    PRIMARY KEY (id, observed_at)
);
SELECT create_hypertable('eco_climate.observations', 'observed_at',
    chunk_time_interval => INTERVAL '1 day');

CREATE TABLE eco_climate.forecasts (
    id BIGSERIAL,
    model_name VARCHAR(100) NOT NULL,
    run_id UUID NOT NULL,
    h3_index VARCHAR(15) NOT NULL,
    forecast_time TIMESTAMPTZ NOT NULL,
    lead_hours INTEGER NOT NULL,
    variable VARCHAR(50) NOT NULL,
    value_mean DOUBLE PRECISION NOT NULL,
    value_p10 DOUBLE PRECISION,
    value_p50 DOUBLE PRECISION,
    value_p90 DOUBLE PRECISION,
    unit VARCHAR(20) NOT NULL,
    PRIMARY KEY (id, forecast_time)
);
SELECT create_hypertable('eco_climate.forecasts', 'forecast_time',
    chunk_time_interval => INTERVAL '1 day');

CREATE TABLE eco_climate.anomalies (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    h3_index VARCHAR(15) NOT NULL,
    detected_at TIMESTAMPTZ NOT NULL,
    variable VARCHAR(50) NOT NULL,
    anomaly_score DOUBLE PRECISION NOT NULL,
    baseline_mean DOUBLE PRECISION NOT NULL,
    baseline_std DOUBLE PRECISION NOT NULL,
    observed_value DOUBLE PRECISION NOT NULL,
    severity VARCHAR(20) NOT NULL,
    alert_id UUID REFERENCES eco_core.alerts(id)
);

CREATE TABLE eco_climate.scenarios (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    scenario_name VARCHAR(50) NOT NULL,
    model_name VARCHAR(100) NOT NULL,
    h3_index VARCHAR(15) NOT NULL,
    year INTEGER NOT NULL,
    month SMALLINT,
    variable VARCHAR(50) NOT NULL,
    value_mean DOUBLE PRECISION NOT NULL,
    value_p5 DOUBLE PRECISION,
    value_p95 DOUBLE PRECISION,
    unit VARCHAR(20) NOT NULL,
    UNIQUE (scenario_name, model_name, h3_index, year, month, variable)
);

-- Continuous aggregates
CREATE MATERIALIZED VIEW eco_climate.observations_daily
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', observed_at) AS bucket,
    h3_index, variable,
    AVG(value) AS avg_val, MIN(value) AS min_val,
    MAX(value) AS max_val, COUNT(*) AS samples
FROM eco_climate.observations
GROUP BY bucket, h3_index, variable;
```
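The anomalies table stores a z-score against a climatological baseline (`baseline_mean`, `baseline_std`). A minimal Python sketch of how such a score and severity label might be derived; the severity thresholds here are illustrative assumptions, not values taken from the schema:

```python
# Illustrative anomaly scoring against a climatological baseline.
# Severity thresholds are assumptions for this sketch, not from the schema.
from dataclasses import dataclass


@dataclass
class Anomaly:
    anomaly_score: float  # z-score vs. baseline
    baseline_mean: float
    baseline_std: float
    observed_value: float
    severity: str


def score_anomaly(observed: float, baseline_mean: float, baseline_std: float) -> Anomaly:
    """Compute a z-score and map its magnitude to a severity label."""
    z = (observed - baseline_mean) / baseline_std
    abs_z = abs(z)
    if abs_z >= 4:
        severity = "critical"
    elif abs_z >= 3:
        severity = "high"
    elif abs_z >= 2:
        severity = "moderate"
    else:
        severity = "low"
    return Anomaly(z, baseline_mean, baseline_std, observed, severity)


a = score_anomaly(observed=42.0, baseline_mean=30.0, baseline_std=4.0)
# z = 3.0, severity = "high"
```

A row like this would feed `eco_climate.anomalies` and, above a severity cutoff, create a linked row in `eco_core.alerts` via `alert_id`.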
3.2.3 Domain Schemas
```sql
-- Biodiversity
CREATE SCHEMA eco_biodiversity;
CREATE TABLE eco_biodiversity.species (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    scientific_name VARCHAR(255) NOT NULL UNIQUE,
    common_name VARCHAR(255), taxonomy JSONB NOT NULL,
    iucn_status VARCHAR(20), gbif_taxon_key BIGINT
);
CREATE TABLE eco_biodiversity.occurrences (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    species_id UUID REFERENCES eco_biodiversity.species(id),
    source VARCHAR(50) NOT NULL, observed_at TIMESTAMPTZ NOT NULL,
    location GEOMETRY(Point, 4326) NOT NULL,
    h3_index VARCHAR(15) NOT NULL, properties JSONB DEFAULT '{}'
);

-- Health
CREATE SCHEMA eco_health;
CREATE TABLE eco_health.air_quality (
    id BIGSERIAL, h3_index VARCHAR(15) NOT NULL,
    measured_at TIMESTAMPTZ NOT NULL,
    pollutant VARCHAR(20) NOT NULL, value DOUBLE PRECISION NOT NULL,
    unit VARCHAR(20) NOT NULL, aqi INTEGER,
    PRIMARY KEY (id, measured_at)
);
SELECT create_hypertable('eco_health.air_quality', 'measured_at',
    chunk_time_interval => INTERVAL '1 day');

-- Food Security
CREATE SCHEMA eco_food;
CREATE TABLE eco_food.yield_forecasts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    region_id UUID REFERENCES eco_core.regions(id),
    crop_type VARCHAR(50) NOT NULL, harvest_year INTEGER NOT NULL,
    yield_mean_t_ha DOUBLE PRECISION NOT NULL,
    yield_ci_low DOUBLE PRECISION, yield_ci_high DOUBLE PRECISION
);

-- Equity
CREATE SCHEMA eco_equity;
CREATE TABLE eco_equity.justice_scores (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    h3_index VARCHAR(15) NOT NULL, computed_at TIMESTAMPTZ NOT NULL,
    overall_score DOUBLE PRECISION NOT NULL, breakdown JSONB DEFAULT '{}'
);

-- ML Feature Store
CREATE SCHEMA eco_ml;
CREATE TABLE eco_ml.features (
    id BIGSERIAL, entity_id VARCHAR(255) NOT NULL,
    entity_type VARCHAR(50) NOT NULL, feature_set VARCHAR(100) NOT NULL,
    computed_at TIMESTAMPTZ NOT NULL, features JSONB NOT NULL,
    embedding vector(768), PRIMARY KEY (id, computed_at)
);
SELECT create_hypertable('eco_ml.features', 'computed_at');
CREATE INDEX idx_feat_emb ON eco_ml.features
    USING hnsw (embedding vector_cosine_ops);
```
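The HNSW index on `embedding` accelerates approximate nearest-neighbour search; pgvector's `vector_cosine_ops` orders rows by cosine distance (1 − cosine similarity). A pure-Python sketch of the metric and the query the index serves (toy 2-dimensional vectors rather than the schema's 768):

```python
# What the HNSW index accelerates: cosine-distance nearest neighbours.
# pgvector's vector_cosine_ops orders by 1 - cosine similarity.
import math


def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)


def nearest(query: list[float], rows: list[tuple], k: int = 2) -> list[tuple]:
    """rows: (entity_id, embedding) pairs; return the k closest to query."""
    return sorted(rows, key=lambda r: cosine_distance(query, r[1]))[:k]


rows = [("cell-a", [1.0, 0.0]), ("cell-b", [0.0, 1.0]), ("cell-c", [1.0, 0.1])]
top = nearest([1.0, 0.0], rows, k=2)
# closest is "cell-a" (distance 0), then "cell-c"
```

In SQL the equivalent query would be `ORDER BY embedding <=> $1 LIMIT k`, with the index making it approximate but fast.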
```mermaid
graph LR
    A[Hot: Redis+PostGIS<br/>0-7d] --> B[Warm: TimescaleDB compressed<br/>7d-1yr]
    B --> C[Cold: MinIO<br/>1-10yr]
    C --> D[Archive: Cloud<br/>10yr+]
    style A fill:#e74c3c,color:#fff
    style B fill:#f39c12,color:#fff
    style C fill:#3498db,color:#fff
    style D fill:#95a5a6,color:#fff
```
TimescaleDB lifecycle:

| Table | Chunk interval | Compress after | Retention |
|---|---|---|---|
| eco_climate.observations | 1 day | 7 days | 5 years |
| eco_climate.forecasts | 1 day | 30 days | 1 year |
| eco_health.air_quality | 1 day | 7 days | 3 years |
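The hot/warm/cold/archive tiers above can be expressed as a simple age-based routing rule. A sketch with the day boundaries taken from the diagram; the routing function itself is an illustration, not part of the codebase:

```python
# Illustrative tier selection for the data lifecycle above.
# Day boundaries mirror the diagram; the function itself is an assumption.
from datetime import datetime, timedelta, timezone


def storage_tier(observed_at: datetime, now: datetime) -> str:
    """Pick the storage tier for a record based on its age."""
    age = now - observed_at
    if age <= timedelta(days=7):
        return "hot"        # Redis + PostGIS
    if age <= timedelta(days=365):
        return "warm"       # TimescaleDB compressed chunks
    if age <= timedelta(days=3650):
        return "cold"       # MinIO object storage
    return "archive"        # cloud archive


now = datetime(2025, 1, 1, tzinfo=timezone.utc)
storage_tier(now - timedelta(days=3), now)    # "hot"
storage_tier(now - timedelta(days=90), now)   # "warm"
```

In practice TimescaleDB's compression and retention policies (the table above) implement the hot-to-warm and warm-to-cold transitions automatically.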
4. ML/AI Architecture
4.1 ML System Diagram
```mermaid
graph TB
    subgraph Training
        EXP[MLflow Tracking]
        TR[PyTorch 2.x Trainer]
        EV[Evaluation Framework]
        HP[Optuna Tuning]
    end
    subgraph Registry
        REG[MLflow Model Registry]
        ST[None -> Staging -> Production]
    end
    subgraph Inference
        OX[ONNX Runtime CPU]
        GP[PyTorch GPU]
        EN[Ensemble Engine]
        UQ[Uncertainty Quantification]
    end
    subgraph Serving
        FP[FastAPI ML API]
        BA[Batch: Airflow]
        SK[Stream: Kafka Consumer]
    end
    TR --> EXP
    HP --> TR
    TR --> EV --> REG --> ST
    ST --> OX & GP
    OX & GP --> EN --> UQ
    UQ --> FP & BA & SK
```

Promotion: None → Staging (shadow traffic) → Production (A/B) → Archived
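The promotion lattice can be captured as a tiny state machine. The transition rules below are an assumption inferred from the stage names above (e.g. that a Staging model may be archived without reaching Production):

```python
# Sketch of the registry promotion lattice:
# None -> Staging -> Production -> Archived.
# Allowed transitions are an assumption drawn from the stage names above.
ALLOWED_TRANSITIONS = {
    "None": {"Staging"},
    "Staging": {"Production", "Archived"},
    "Production": {"Archived"},
    "Archived": set(),
}


def promote(current: str, target: str) -> str:
    """Advance a model version one stage, rejecting illegal jumps."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target


stage = promote("None", "Staging")      # shadow traffic begins
stage = promote(stage, "Production")    # after the A/B test passes
```

Encoding the lattice explicitly keeps promotions auditable: a CI job can reject any registry update that is not a legal edge in this graph.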
Promotion Sequence
```mermaid
sequenceDiagram
    participant D as Developer
    participant T as Training Pipeline
    participant M as MLflow
    participant E as Evaluator
    participant S as Staging
    participant P as Production
    D->>T: Submit training config
    T->>T: Train with PyTorch
    T->>M: Log metrics and artifacts
    T->>E: Run evaluation suite
    E->>E: Domain metrics and bias audit
    alt Passes
        E->>M: Register model version
        M->>S: Promote to Staging
        S->>P: Promote to Production after A/B
    else Fails
        E-->>D: Failure report
    end
```
```mermaid
graph TB
    subgraph Clients
        WB[Web Dashboard]
        CL[CLI / SDK]
        EX[External Apps]
    end
    subgraph NestJS Gateway Port 3000
        AU[Auth: JWT + API Key]
        RL[Rate Limiter]
        CC[Cache Interceptor]
        RS[REST Controllers]
        GQ[GraphQL Resolvers]
        WG[WebSocket Gateway]
    end
    subgraph FastAPI ML Port 8000
        MA[Auth Middleware]
        MR[ML Endpoints]
    end
    WB & CL & EX --> AU
    AU --> RL --> CC
    CC --> RS & GQ & WG
    RS -->|proxy| MA --> MR
```
5.2 Endpoint Taxonomy
Climate — `/api/v1/climate`

| Method | Path | Description |
|---|---|---|
| GET | `/observations` | Query by H3, time, variable |
| GET | `/observations/{h3}/timeseries` | Time series for a cell |
| GET | `/forecasts` | Weather forecasts, 1-14 days |
| GET | `/anomalies` | Detected anomalies |
| GET | `/scenarios` | CMIP6 SSP projections |
| POST | `/scenarios/explore` | Interactive scenario exploration |
| GET | `/downscaling/{h3}` | Downscaled climate data |

Biodiversity — `/api/v1/biodiversity`

| Method | Path | Description |
|---|---|---|
| GET | `/species` | Species search |
| GET | `/species/{id}/distribution` | Distribution map |
| GET | `/occurrences` | Occurrence records |
| GET | `/habitat/{h3}` | Habitat quality |
| GET | `/deforestation/alerts` | Forest loss alerts |

Health — `/api/v1/health`

| Method | Path | Description |
|---|---|---|
| GET | `/air-quality` | AQ observations |
| GET | `/air-quality/{h3}/forecast` | AQ forecast |
| GET | `/vector-risk` | Disease vector maps |

Food — `/api/v1/food`

| Method | Path | Description |
|---|---|---|
| GET | `/crop-monitoring` | Crop conditions |
| GET | `/yield-forecast` | Yield predictions |
| GET | `/drought-index` | SPI/SPEI indices |

Equity — `/api/v1/equity`

| Method | Path | Description |
|---|---|---|
| GET | `/justice-scores` | Environmental justice scores |
| GET | `/justice-scores/{h3}` | Score breakdown |

Cross-Cutting

| Method | Path | Description |
|---|---|---|
| GET | `/catalog/collections` | STAC collections |
| GET | `/catalog/search` | STAC item search |
| POST | `/agents/query` | Agent natural-language query |
| GET | `/alerts` | Active alerts |
| WS | `/ws/alerts` | Real-time alert stream |

ML API (internal) — `/ml/v1`

| Method | Path | Description |
|---|---|---|
| POST | `/predict/climate/forecast` | Forecast inference |
| POST | `/predict/biodiversity/sdm` | Species distribution |
| POST | `/predict/health/airquality` | AQ prediction |
| POST | `/predict/food/yield` | Yield prediction |
| GET | `/models` | Deployed models list |
| GET | `/models/{name}/info` | Model card |
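The taxonomy above can be exercised with a trivial client-side URL builder. A sketch; the base host is an illustrative assumption, and only the paths come from the tables:

```python
# Hypothetical client-side URL builder for the endpoint taxonomy above.
# BASE is an assumed host; the /api/v1 prefix and paths come from the tables.
from urllib.parse import urlencode

BASE = "https://api.example.org"  # illustrative, not a real deployment


def endpoint(path: str, **params) -> str:
    """Build a full gateway URL, appending sorted query parameters."""
    url = f"{BASE}/api/v1{path}"
    if params:
        return f"{url}?{urlencode(sorted(params.items()))}"
    return url


endpoint("/climate/observations", h3_index="8928308280fffff", variable="t2m")
endpoint("/alerts")  # no query parameters
```

Sorting the parameters makes the resulting URLs deterministic, which plays well with the gateway's cache interceptor.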
5.3 Authentication and Authorization
```mermaid
sequenceDiagram
    participant C as Client
    participant GW as API Gateway
    participant AUTH as Supabase Auth
    participant DB as PostgreSQL
    C->>GW: Request + JWT or API Key
    alt JWT
        GW->>AUTH: Verify signature
        AUTH-->>GW: User claims
    else API Key
        GW->>DB: Lookup hash
        DB-->>GW: User + quota
    end
    GW->>GW: RBAC + rate limit check
    alt Authorized
        GW-->>C: 200 Response
    else Denied
        GW-->>C: 401 or 403
    end
```
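In the API-key branch, the gateway compares a hash of the presented key against `eco_core.users.api_key_hash`. A sketch of that check; the choice of SHA-256 is an assumption for illustration (the schema only specifies a hash column):

```python
# Sketch of the API-key verification branch of the auth flow.
# SHA-256 is an assumed hash; the schema only specifies api_key_hash.
import hashlib
import secrets


def hash_api_key(key: str) -> str:
    """Hash that would be stored in users.api_key_hash at key creation."""
    return hashlib.sha256(key.encode()).hexdigest()


def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return secrets.compare_digest(hash_api_key(presented), stored_hash)


stored = hash_api_key("ek_live_demo")   # illustrative key
verify_api_key("ek_live_demo", stored)  # True
verify_api_key("wrong-key", stored)     # False
```

Storing only the hash means a database leak does not expose usable keys; the plaintext key is shown to the user once at creation.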
```mermaid
sequenceDiagram
    participant SH as STAC Harvester
    participant K as Kafka
    participant ETL as ETL Worker
    participant PG as PostGIS
    participant MO as MinIO
    participant AD as Anomaly Detector
    participant AL as Alert Service
    participant WS as WebSocket
    participant UI as Dashboard
    SH->>K: SatelliteDataIngested
    K->>ETL: Consume
    ETL->>MO: Store processed raster
    ETL->>PG: Insert observations
    ETL->>K: DataProcessed
    K->>AD: Run anomaly detection
    AD->>K: AnomalyDetected
    K->>AL: Create alert
    AL->>PG: Store alert
    AL->>WS: Broadcast
    WS->>UI: Push to clients
```
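The flow above is plain topic fan-out: each event type has one or more consumers that react and may emit follow-up events. An in-memory sketch of that routing, using topic names from the diagram (the `MiniBus` class is a stand-in for illustration, not a real component):

```python
# Pure-Python sketch of the topic -> consumer routing in the flow above.
# MiniBus is an in-memory stand-in for Kafka, for illustration only.
from collections import defaultdict


class MiniBus:
    """Toy event bus: publish delivers an event to every subscriber of a topic."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)


bus = MiniBus()
log = []
# Topic names taken from the sequence diagram above.
bus.subscribe("SatelliteDataIngested", lambda e: log.append(("etl", e)))
bus.subscribe("DataProcessed", lambda e: log.append(("anomaly-detector", e)))

bus.publish("SatelliteDataIngested", {"scene": "demo"})
# log now holds one entry: the ETL worker consumed the ingest event
```

The real system gets durability, ordering, and replay from Kafka; the routing shape is the same.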
6.4 Event Flow — Model Retraining
```mermaid
sequenceDiagram
    participant AF as Airflow
    participant K as Kafka
    participant TR as Trainer
    participant MLF as MLflow
    participant EV as Evaluator
    participant FA as FastAPI ML
    AF->>TR: Launch training
    TR->>MLF: Log metrics
    TR->>EV: Run eval suite
    alt Passes
        EV->>MLF: Register model
        EV->>K: TrainingCompleted
        MLF->>K: ModelDeployed
        K->>FA: Hot-reload model
    else Fails
        EV->>K: TrainingFailed
    end
```
6.5 Event Flow — User Agent Query
```mermaid
sequenceDiagram
    participant U as User
    participant API as NestJS
    participant OR as Orchestrator
    participant CA as Climate Agent
    participant BA as Bio Agent
    participant KG as Knowledge Graph
    participant ML as ML API
    U->>API: POST /agents/query
    API->>OR: Route query
    OR->>OR: Decompose tasks
    par Climate
        OR->>CA: Climate question
        CA->>ML: Forecast request
        ML-->>CA: Result
        CA-->>OR: Climate answer
    and Biodiversity
        OR->>BA: Bio question
        BA->>KG: Species lookup
        KG-->>BA: Graph data
        BA-->>OR: Bio answer
    end
    OR->>OR: Synthesize
    OR-->>API: Combined response
    API-->>U: AgentResponse
```
7. Multi-Agent System Architecture
7.1 Agent Architecture
```mermaid
graph TB
    USER[User Query] --> ORCH
    subgraph Agent System
        ORCH[Orchestrator<br/>Decompose + Synthesize]
        CA[Climate Agent]
        BA[Biodiversity Agent]
        HA[Health Agent]
        FA_A[Food Agent]
        EA[Equity Agent]
        KGA[KG Agent]
    end
    subgraph Tools
        DT[Data Query Tools]
        MT[ML Inference Tools]
        GT[Geospatial Tools]
        VT[Visualization Tools]
        LT[Literature RAG Tools]
    end
    ORCH --> CA & BA & HA & FA_A & EA & KGA
    CA & BA & HA & FA_A & EA --> DT & MT & GT & VT
    KGA --> LT & DT
```
- Execute: domain agents run concurrently with tools
- Reflect: evaluate sub-results for consistency
- Synthesize: combine into a coherent response with uncertainty
- Cite: attach sources and provenance
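The execute/reflect/synthesize/cite phases can be sketched as a single function. The agent behaviour below is a stub for illustration (the real agents call tools and LLMs, and run concurrently):

```python
# Minimal sketch of the orchestrator loop described above.
# Agents here are stub callables; the real ones run concurrently with tools.
def run_query(question: str, agents: dict) -> dict:
    # Execute: every domain agent answers the (decomposed) question
    results = {name: agent(question) for name, agent in agents.items()}
    # Reflect: drop sub-results that came back empty or inconsistent
    results = {name: ans for name, ans in results.items() if ans}
    # Synthesize: combine the surviving answers into one response
    answer = " ".join(results.values())
    # Cite: attach provenance -- which agents contributed
    return {"answer": answer, "sources": sorted(results)}


agents = {
    "climate": lambda q: "Warming trend detected.",
    "biodiversity": lambda q: "",  # no relevant data -> dropped in Reflect
}
run_query("How does warming affect local species?", agents)
```

The reflect step here is deliberately crude (non-empty check); the document's version evaluates sub-results for consistency and carries uncertainty through synthesis.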
7.5 Tool Registry
```python
# packages/agents/src/ecotrack_agents/tools/registry.py
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolDefinition:
    name: str
    description: str
    parameters: dict[str, Any]
    function: Callable | None  # bound to an implementation at startup
    domains: list[str]
    timeout_seconds: int = 30


TOOL_REGISTRY: dict[str, ToolDefinition] = {
    "query_climate_observations": ToolDefinition(
        name="query_climate_observations",
        description="Query climate observations by H3 cell, variable, time",
        parameters={"h3_index": "str", "variable": "str",
                    "start_time": "datetime", "end_time": "datetime"},
        function=None, domains=["climate", "health", "food"]),
    "run_species_distribution_model": ToolDefinition(
        name="run_species_distribution_model",
        description="Run SDM for a species under current or projected climate",
        parameters={"species_id": "str", "scenario": "str", "year": "int"},
        function=None, domains=["biodiversity"]),
    "query_knowledge_graph": ToolDefinition(
        name="query_knowledge_graph",
        description="Execute Cypher query on environmental knowledge graph",
        parameters={"query": "str", "params": "dict"},
        function=None, domains=["knowledge_graph", "climate", "biodiversity"]),
    "generate_map_visualization": ToolDefinition(
        name="generate_map_visualization",
        description="Generate a map from geospatial data",
        parameters={"data": "GeoJSON", "style": "dict"},
        function=None,
        domains=["climate", "biodiversity", "health", "food", "equity"]),
}
```
8. Security Architecture
8.1 Authentication Flow
```mermaid
sequenceDiagram
    participant C as Client
    participant GW as API Gateway
    participant SA as Supabase Auth
    participant DB as PostgreSQL
    C->>GW: Request + Bearer token
    GW->>SA: Verify JWT
    SA-->>GW: User claims + role
    GW->>GW: RBAC check
    GW->>GW: Rate limit check
    GW->>DB: Query with RLS
    DB-->>GW: Filtered results
    GW-->>C: Response
```
```mermaid
graph TB
    subgraph Apps
        A1[NestJS API]
        A2[FastAPI ML]
        A3[Workers]
    end
    subgraph Collection
        OT[OpenTelemetry SDK]
        PE[Prometheus Exporters]
        LD[Structured Log Drivers]
    end
    subgraph Storage
        PS[(Prometheus)]
        LS[(Loki)]
        JS[(Jaeger)]
    end
    subgraph Viz
        GR[Grafana Dashboards]
    end
    subgraph Alert
        AM[Alertmanager]
        SL[Slack / PagerDuty]
    end
    A1 & A2 & A3 --> OT & PE & LD
    OT --> JS
    PE --> PS
    LD --> LS
    PS & LS & JS --> GR
    PS --> AM --> SL
```
```mermaid
graph TB
    LB[Load Balancer] --> BLUE[Blue: Current v1.2.0]
    LB -.-> GREEN[Green: New v1.3.0]
    subgraph Deployment Steps
        S1[1. Deploy Green alongside Blue]
        S2[2. Run smoke tests on Green]
        S3[3. Switch traffic to Green]
        S4[4. Monitor for 15 minutes]
        S5[5. Tear down Blue or rollback]
    end
    S1 --> S2 --> S3 --> S4 --> S5
```
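The switch in steps 3 and 5 is just swapping which slot the load balancer routes to. A sketch of that decision; the health-check boolean stands in for the real smoke tests, and the router dict is an illustration:

```python
# Sketch of the blue-green switch above: smoke-test green, flip or roll back.
# The green_healthy flag stands in for real smoke tests; router is illustrative.
def blue_green_deploy(router: dict, green_healthy: bool) -> str:
    """router holds 'live' and 'idle' slots; returns the version now serving."""
    if green_healthy:
        # Steps 2-3: smoke tests passed, switch traffic to green
        router["live"], router["idle"] = router["idle"], router["live"]
    # else step 5 (rollback): keep traffic on blue, green stays idle
    return router["live"]


router = {"live": "blue:v1.2.0", "idle": "green:v1.3.0"}
blue_green_deploy(router, green_healthy=True)   # green now serves traffic
```

Because the old version stays deployed in the idle slot during the 15-minute monitoring window, rollback is the same swap in reverse rather than a redeploy.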
11. Interface Contracts
11.1 Data Pipeline → ML Engine
```python
# Interface: DataPipeline produces features; ML Engine consumes them
from dataclasses import dataclass
from datetime import datetime

import numpy as np


@dataclass
class FeatureVector:
    """Output from data pipeline, input to ML models."""
    entity_id: str                 # H3 cell ID, species ID, etc.
    entity_type: str               # "h3_cell", "species", "region"
    feature_set: str               # "climate_grid_daily"
    timestamp: datetime
    values: dict[str, float]       # Named feature values
    embedding: np.ndarray | None   # Optional pre-computed embedding (768-dim)


@dataclass
class TrainingDataset:
    """Dataset prepared by pipeline for ML training."""
    dataset_id: str
    name: str
    version: str
    feature_set: str
    split: str                     # "train", "val", "test"
    num_samples: int
    temporal_range: tuple[datetime, datetime]
    spatial_extent: dict           # GeoJSON bbox
    storage_path: str              # MinIO path
    checksum: str                  # SHA256


@dataclass
class PredictionRequest:
    """ML Engine input for inference."""
    model_name: str
    model_version: str
    features: list[FeatureVector]
    return_uncertainty: bool = True


@dataclass
class PredictionResult:
    """ML Engine output from inference."""
    model_name: str
    model_version: str
    predictions: list[dict[str, float]]         # [{variable: value, ...}]
    uncertainty: list[dict[str, float]] | None  # [{variable_std, variable_p10, ...}]
    latency_ms: float
    mlflow_run_id: str
```
11.2 ML Engine → API Layer
```python
# Interface: ML API serves predictions; NestJS API Gateway consumes them
from datetime import datetime

from pydantic import BaseModel, Field


class MLPredictionRequest(BaseModel):
    """Request from API Gateway to ML API."""
    model_name: str
    domain: str
    h3_index: str
    variables: list[str]
    time_range: dict = Field(description="start and end datetime")
    scenario: str | None = None
    return_uncertainty: bool = True


class MLPredictionResponse(BaseModel):
    """Response from ML API to API Gateway."""
    model_name: str
    model_version: str
    h3_index: str
    predictions: list[dict]   # Time-indexed predictions
    uncertainty: dict | None  # Uncertainty bands
    metadata: dict = Field(default_factory=dict)
    latency_ms: float
    cached: bool = False


class ModelInfoResponse(BaseModel):
    """Model metadata for API consumers."""
    name: str
    version: str
    domain: str
    architecture: str
    description: str
    input_features: list[str]
    output_variables: list[str]
    metrics: dict[str, float]
    training_date: datetime
    dataset_version: str
```