Manage, compose, test, and evolve AI personas.

```bash
uv pip install -e .
```

Turn AI personas from throwaway prompt strings into managed, testable, self-improving assets.
| Without prsna | With prsna |
|---|---|
| Prompts scattered across files | `persona ls` — versioned library |
| Made-up traits | Real person data via Exa |
| "Seems right?" | Consistency score: 73% |
| Manual prompt tweaking | GEPA auto-optimizes |
| Persona degrades over chat | Drift detection + refresh |
| Static forever | Self-learning from interactions |
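The "consistency score" above is produced by prsna's DSPy-backed `persona test`. Purely to illustrate the idea — this toy heuristic is not prsna's actual metric, and every name in it is hypothetical — a scorer might check what fraction of sampled responses use the persona's speech patterns while avoiding out-of-character phrasing:

```python
def consistency_score(responses, required_patterns, banned_phrases):
    """Toy consistency metric: share of responses that use at least one
    persona speech pattern and avoid all banned phrases."""
    def on_persona(text):
        text = text.lower()
        has_pattern = any(p.lower() in text for p in required_patterns)
        is_clean = not any(b.lower() in text for b in banned_phrases)
        return has_pattern and is_clean
    return sum(on_persona(r) for r in responses) / len(responses)

samples = [
    "The evidence suggests this could work.",
    "It's worth noting that we lack data.",
    "Definitely true, trust me.",
]
score = consistency_score(
    samples,
    required_patterns=["the evidence suggests", "it's worth noting"],
    banned_phrases=["definitely", "trust me"],
)
print(f"Consistency: {score:.0%}")  # → Consistency: 67%
```

Real fidelity scoring uses an LLM judge rather than substring matching, but the output shape is the same: a single percentage you can track across persona versions.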
```bash
# Bootstrap a persona with AI
persona create "skeptical investigative journalist"

# Or base it on a real person
persona create --like "Marc Andreessen" "tech investor"

# Chat with it
persona chat journalist

# Check consistency
persona test journalist

# Let it learn from interactions
persona learn journalist --apply
```

```python
from prsna import Persona, bootstrap_from_description, LLMError

# Load and chat
vc = Persona.load("~/.prsna/personas/tech-investor.yaml")
response = vc.chat("Should I raise now?")

# Multi-turn conversation
with vc.conversation() as conv:
    print(conv.send("I have $50k MRR"))
    print(conv.send("Should I raise?"))

# Streaming
for chunk in vc.stream("Tell me about market timing"):
    print(chunk, end="", flush=True)

# Synthetic user generation (for testing chatbots)
angry = Persona(**bootstrap_from_description("frustrated customer"))
test_input = angry.as_user("asking about refund policy")
bot_response = my_chatbot(test_input)

# Batch generation
responses = vc.generate(["prompt1", "prompt2", "prompt3"])

# Error handling
try:
    response = vc.chat("Hello")
except LLMError as e:
    print(f"LLM failed: {e}")
```

```bash
# GLOBAL FLAGS
persona --version                  # Show version
persona --json ls                  # Output as JSON (for scripting)
persona --quiet ls                 # Minimal output (names only)

# CREATE
persona init scientist             # Empty template (manual edit)
persona create "description"       # AI-generated from description
persona create --like "Person"     # Based on a real person (Exa)
persona create --role "Job Title"  # Based on a job role

# MANAGE
persona ls                         # List all personas
persona show scientist             # Show details
persona edit scientist             # Open in $EDITOR
persona rm scientist               # Delete

# COMPOSE
persona mix scientist comedian --as science-comedian

# ENRICH
persona enrich scientist --query "MIT AI researcher"

# TEST & OPTIMIZE
persona test scientist --samples 10         # DSPy consistency check
persona optimize scientist --iterations 50  # GEPA prompt evolution
persona drift scientist "response text"     # Check a single response

# LEARN & IMPROVE
persona learn scientist --apply     # Learn from logged interactions
persona critique scientist --apply  # Self-critique and improve

# USE
persona chat scientist                     # Interactive REPL
persona ask scientist "question"           # One-shot
echo "question" | persona ask scientist -  # Pipe from stdin

# EXPORT
persona export scientist --to eliza   # PersonaKit/Eliza
persona export scientist --to v2      # Character Card V2
persona export scientist --to ollama  # Ollama Modelfile
persona export scientist --to hub     # PERSONA HUB format
```

```yaml
name: scientist
version: 1
description: A curious research scientist who values evidence
traits:
  - curious
  - methodical
  - precise
  - humble about uncertainty
voice:
  tone: academic
  vocabulary: technical
  patterns:
    - "The evidence suggests..."
    - "It's worth noting that..."
boundaries:
  - Never claim certainty without data
  - Acknowledge limitations
examples:
  - user: "Is this true?"
    assistant: "The current evidence suggests..."
dynamic:
  source: exa
  query: "Dr. Jane Smith MIT"
  refresh: weekly
providers:
  default: gpt-4o-mini
```

```
┌──────────────────────────────────────────────────────────────────────┐
│                              LIFECYCLE                               │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  CREATE ──▶ ENRICH ──▶ TEST ──▶ OPTIMIZE ──▶ USE ──▶ LEARN           │
│    │          │         │          │          │        │             │
│    ▼          ▼         ▼          ▼          ▼        ▼             │
│  Bootstrap  Exa        DSPy     GEPA         Chat/Ask  Analyze       │
│  from desc  people     fidelity evolve       with      interactions  │
│  or person  search     scoring  prompt       drift     & improve     │
│                                              detect                  │
│                                                                      │
│                  ◀──── CONTINUOUS IMPROVEMENT ────▶                  │
└──────────────────────────────────────────────────────────────────────┘
```
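Programmatically, a persona file like the schema shown above maps onto typed objects. The sketch below is illustrative only — prsna's real models are pydantic-based, and these dataclass names are hypothetical — but it shows how the core fields might be validated after parsing the YAML:

```python
from dataclasses import dataclass, field

@dataclass
class Voice:
    tone: str = "neutral"
    vocabulary: str = "plain"
    patterns: list = field(default_factory=list)

@dataclass
class PersonaSpec:
    name: str
    version: int = 1
    description: str = ""
    traits: list = field(default_factory=list)
    voice: Voice = field(default_factory=Voice)
    boundaries: list = field(default_factory=list)

    @classmethod
    def from_dict(cls, data: dict) -> "PersonaSpec":
        """Validate a parsed persona dict (e.g. the output of yaml.safe_load)."""
        if "name" not in data:
            raise ValueError("persona spec must include a name")
        return cls(
            name=data["name"],
            version=int(data.get("version", 1)),
            description=data.get("description", ""),
            traits=list(data.get("traits", [])),
            voice=Voice(**data.get("voice", {})),
            boundaries=list(data.get("boundaries", [])),
        )

spec = PersonaSpec.from_dict({
    "name": "scientist",
    "description": "A curious research scientist who values evidence",
    "traits": ["curious", "methodical"],
    "voice": {"tone": "academic", "vocabulary": "technical"},
})
print(spec.voice.tone)  # → academic
```

Because `Voice(**...)` rejects unknown keys with a `TypeError`, typos in a persona file fail loudly at load time instead of silently shaping behavior later.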
Built on techniques from recent persona research:
| Paper | Technique Used |
|---|---|
| Scaling Synthetic Data with 1B Personas | Persona Hub integration |
| Measuring Persona Drift | Drift detection |
| Persona Vectors | Consistency monitoring |
| Self-Improving Agents | Learning from interactions |
| PersonaGym | Fidelity evaluation |
| RoleLLM | Role-conditioned tuning |
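Drift detection (the Measuring Persona Drift row) boils down to scoring how far a new response has wandered from the persona's established voice; prsna does this with LLM-based scoring. As a hypothetical, stdlib-only stand-in that shows the shape of the computation — not prsna's algorithm — vocabulary containment against known on-persona samples works:

```python
import re

def _words(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def drift_score(response: str, reference_samples: list) -> float:
    """0.0 = response vocabulary fully on-persona, 1.0 = completely drifted."""
    reference = set().union(*(_words(s) for s in reference_samples))
    words = _words(response)
    if not words or not reference:
        return 1.0
    # Fraction of response words never seen in on-persona samples.
    return 1.0 - len(words & reference) / len(words)

reference = [
    "The evidence suggests caution here.",
    "It's worth noting that the data is limited.",
]
print(drift_score("The evidence suggests we wait.", reference))  # → 0.4
print(drift_score("LOL just YOLO it, who cares", reference))     # → 1.0
```

A real implementation compares embeddings or asks a judge model; `persona drift` exposes this kind of per-response check from the CLI.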
- typer — CLI framework
- dspy — LLM programming & signatures
- gepa — Genetic-Pareto prompt optimization
- litellm — Multi-provider LLM calls (via centralized llm.py)
- exa-py — People search enrichment
- pydantic — Data validation
- rich — Terminal formatting
```bash
OPENAI_API_KEY=sk-...  # Required for most features
EXA_API_KEY=...        # Required for enrich, create --like
```

```bash
cd ~/Projects/prsna
uv sync --dev
uv run persona --help
uv run pytest
ruff check src/
```

Personas are the new prompts. As AI systems get more capable, the bottleneck shifts from "can it do X?" to "does it behave consistently as Y?"
prsna treats personas as first-class software artifacts:
- Versioned — Track changes, roll back
- Testable — Measure consistency, catch regressions
- Composable — Mix traits, extend bases
- Evolvable — Learn from use, self-improve
- Portable — Export to any format/platform
prsna = Git + npm + CI for AI personas