"The future of AI coding isn't just larger context windows—it's smarter context retrieval."
I am an architect solving the "Context Precision" problem in AI software engineering.
While others focus on stuffing more code into an LLM, my focus is on Repository Graph RAG—building the "GPS" for codebases. My goal is to enable AI to navigate complex, cross-module dependencies and understand architectural impact with surgical precision and minimal token usage.
High-Precision Code Review via Contextual Retrieval
I built LlamaPReview to prove that less is more: by retrieving only the relevant dependency graph, we can outperform massive context windows.
- The Metric: Achieved a 61% Signal-to-Noise Ratio (3x industry average) by filtering out irrelevant code noise.
- The Evidence: Caught a critical transaction bug in Vanna.ai (20K stars) that required tracing logic across multiple hidden modules—something standard "diff-based" AI missed entirely.
- The Product: A validated SaaS solution trusted by 4,000+ repositories.
👉 Visit Product Site | 📊 Read the Signal-to-Noise Analysis
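As a sketch of the metric above: signal-to-noise can be computed as the fraction of review comments that are actionable (signal) out of all comments emitted. The exact formula here is my illustrative assumption; see the linked analysis for the published definition.

```python
def signal_to_noise(actionable: int, total: int) -> float:
    """Fraction of actionable review comments out of all comments emitted.

    Illustrative formula only; the published Signal-to-Noise analysis
    may define the ratio differently.
    """
    if total == 0:
        return 0.0
    return actionable / total

# e.g. 61 actionable findings out of 100 total comments
ratio = signal_to_noise(61, 100)
print(f"{ratio:.0%}")  # prints "61%"
```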
The Retrieval Infrastructure Layer
To build a graph, you first need high-fidelity data. I open-sourced the retrieval engine that powers my experiments.
- Role: A production-grade library designed to fetch and structure GitHub data specifically for RAG pipelines.
- Capability: Bridges the gap between raw Git objects and AI-ready context.
```python
from llama_github import GithubRAG

# Initialize the retriever (constructor arguments shown are illustrative;
# see the library's README for the full configuration options)
github_rag = GithubRAG(
    github_access_token="<your_github_token>",
    openai_api_key="<your_openai_key>",
)

# Efficiently retrieve cross-module context without cloning the entire repo
context = github_rag.retrieve_context(
    "How does the payment service impact the user schema?"
)
```

Occupying the "Code Understanding" Ecological Niche
LlamaPReview was just the first application. My long-term strategy is to build the definitive Repository Knowledge Graph that serves as the backend for all autonomous coding agents.
- The Problem: Flat text search (Standard RAG) loses the relationships between classes, methods, and data flows.
- The Solution: A traversable graph that allows LLMs to "hop" through dependencies.
- The Value:
  - Token Efficiency: Retrieves the context an agent actually needs with roughly 5% of the tokens required by full-context approaches.
  - Impact Analysis: Instantly identifies how a change in `Module A` breaks `Module Z` without reading the files in between.
  - Scalability: The only viable path for AI to understand million-line monoliths.
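The "hop" idea above can be sketched with a plain adjacency map: starting from a changed module, a breadth-first traversal collects every downstream dependent without ever loading unrelated files. The module names and dependency map below are hypothetical, chosen only to illustrate the traversal.

```python
from collections import deque

# Hypothetical dependency graph: an entry "a": ["b"] means "b depends on a".
DEPENDENTS = {
    "module_a": ["module_c"],
    "module_c": ["module_z"],
    "module_z": [],
    "module_x": ["module_y"],  # unrelated branch, never visited
    "module_y": [],
}

def impacted_modules(changed: str, graph: dict[str, list[str]]) -> list[str]:
    """Breadth-first hop through the dependency graph, returning every
    module transitively affected by a change to `changed`."""
    seen, order, queue = {changed}, [], deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                order.append(dependent)
                queue.append(dependent)
    return order

print(impacted_modules("module_a", DEPENDENTS))  # ['module_c', 'module_z']
```

Only the three modules on the impact path are touched; the unrelated `module_x` branch costs zero tokens, which is the source of the efficiency claim above.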
I document my research on the next generation of AI architecture.
- Case Study: Catching the "Invisible" Bug — Real-world evidence: How we found a critical logic error in a 20k-star repo that standard "Diff-based" AI missed entirely.
- The Signal-to-Noise Ratio in AI Code Review — A new evaluation framework: Why simply increasing context window size often leads to lower quality reviews.
- (Coming Soon) The Inconsistency Problem — Why the same AI tool works perfectly on Monday but fails on Tuesday: A deep dive into "Context Instability."
- (Coming Soon) The End of Guesswork: Repository Graph RAG — Moving beyond probabilistic search to deterministic, graph-based dependency analysis for 100% consistent context.
| Core Intelligence | Graph & Data |
| --- | --- |
I am building the infrastructure that will power the next decade of AI development tools.
Building the GPS for the world's code.

