Lightweight, modular demo showing how to turn a single RGB image into structured scene facts, run LangGraph agents for risk/decision/explanation, and surface an auditable action.
- OpenCV-based deterministic heuristics for visibility, rough obstacle detection, and traffic-light color.
- Normalized JSON scene schema with no raw image data passed to agents.
- LangGraph orchestration of three agents (risk, decision, explanation) with safety-first rules.
- Streamlit UI and Typer CLI; both feed the same pipeline.
- Pytest coverage with synthetic scenes; no external models or API keys required.
Setup:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Run the CLI on an image, launch the Streamlit UI, or run the tests:

```bash
python main.py run path/to/image.png
streamlit run streamlit_app.py
pytest
```

How it works:

- `vt_action.vision.analyze_image` uses OpenCV to derive structured facts (objects, traffic light, visibility, environment).
- `vt_action.schema.normalize_scene` enforces the schema and deterministic ordering.
- `vt_action.graph.run_graph` executes the LangGraph nodes:
  - Risk agent (hazard + qualitative level)
  - Decision agent (PROCEED/SLOW_DOWN/STOP)
  - Explanation agent (concise, fact-backed rationale)
- `vt_action.pipeline.evaluate_image` ties everything together for the UI/CLI/tests.
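The three rule-based agents can be sketched roughly as plain functions over the scene facts. The rules, thresholds, and function names below are illustrative assumptions, not the project's actual code; the real nodes are wired together by LangGraph in `vt_action.graph`:

```python
def risk_agent(scene: dict) -> dict:
    """Derive hazards and a qualitative risk level from scene facts (sketch)."""
    hazards = [o["kind"] for o in scene.get("objects", []) if o["kind"] == "pedestrian"]
    if scene.get("traffic_light") == "red" or hazards:
        level = "high"
    elif scene.get("visibility") != "clear":
        level = "medium"
    else:
        level = "low"
    return {"hazards": hazards, "level": level}

def decision_agent(risk: dict, scene: dict) -> str:
    """Safety-first mapping: STOP on high risk, SLOW_DOWN on medium, else PROCEED."""
    if risk["level"] == "high":
        return "STOP"
    if risk["level"] == "medium":
        return "SLOW_DOWN"
    return "PROCEED"

def explanation_agent(risk: dict, action: str) -> str:
    """Concise, fact-backed rationale referencing the detected hazards."""
    reasons = ", ".join(risk["hazards"]) or "no hazards detected"
    return f"{action}: risk={risk['level']} ({reasons})"

scene = {"objects": [{"kind": "pedestrian"}], "traffic_light": "green", "visibility": "clear"}
risk = risk_agent(scene)
action = decision_agent(risk, scene)
print(explanation_agent(risk, action))  # STOP: risk=high (pedestrian)
```

Because each agent is a pure function of the normalized JSON, the whole graph is deterministic and easy to cover with synthetic scenes in pytest.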
OpenAI API keys are not required; the graph uses rule-based agents for clarity.
