A modern chat application built with FastAPI and PydanticAI, featuring streaming AI responses, code execution, web search capabilities, and rich markdown rendering.
An experimental Deno-based code execution sandbox, exposed as an MCP server, is also in development.
This project demonstrates a full-stack chat application that leverages OpenAI's GPT-4o-mini model through the PydanticAI framework. The application includes a FastAPI backend with SQLite persistence and a dynamic frontend that supports real-time streaming responses, syntax highlighting, Mermaid diagrams, mathematical equations, and more.
- Streaming Responses: Real-time streaming of AI responses with debounced updates
- Web Search: Built-in web search tool for accessing current information
- Code Execution: Sandboxed code execution environment for running Python code
- Conversation History: Persistent chat history stored in SQLite database
- Markdown Support: Full GitHub Flavored Markdown (GFM) including tables
- Syntax Highlighting: Dark-themed code blocks with syntax highlighting for multiple languages (Python, JavaScript, Java, Rust, HTML, etc.)
- Copy to Clipboard: One-click copy button for all code blocks
- Mermaid Diagrams: Render flowcharts, sequence diagrams, and other visualizations
- Mathematical Equations: LaTeX/MathJax support for mathematical notation
- Tables: Formatted tables with proper borders and styling
- Sticky Toolbar: Always-visible header with app branding
- Responsive Design: Adapts to different screen sizes (75vw max-width)
- Clear Conversation: Button to clear visible conversation without deleting history
- New Chat: Button to start fresh by clearing both UI and database history
- Loading Spinner: Visual feedback during AI response generation
- Custom Background: Configurable background image
```
agent-chat/
├── chat_app.py       # FastAPI backend server
├── chat_app.html     # HTML UI with embedded styles
├── chat_app.ts       # TypeScript frontend logic
├── static/           # Static assets
│   └── assets/       # Images and backgrounds
├── pyproject.toml    # Python dependencies
└── README.md         # This file
```
### FastAPI Application
- Lifespan management for database connections
- Static file serving for assets
- RESTful endpoints for chat operations
#### Endpoints

- `GET /` - Serve main HTML page
- `GET /chat_app.ts` - Serve TypeScript source (transpiled in browser)
- `GET /chat/` - Retrieve all stored messages
- `POST /chat/` - Submit new chat message and stream AI response
- `POST /chat/clear` - Clear all stored chat history
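As a quick smoke test, the streaming endpoint can be exercised from Python's standard library. The `prompt` form field matches the FormData the frontend sends; the exact shape of each streamed JSON line is an implementation detail of the app, so this sketch only parses and prints whatever objects arrive:

```python
import json
import urllib.parse
import urllib.request

def parse_ndjson(chunk: str) -> list[dict]:
    """Split a chunk of newline-delimited JSON into message dicts."""
    return [json.loads(line) for line in chunk.splitlines() if line.strip()]

def stream_chat(prompt: str, base_url: str = "http://localhost:8000") -> None:
    """POST a prompt to /chat/ and print each streamed message as it arrives."""
    data = urllib.parse.urlencode({"prompt": prompt}).encode()
    with urllib.request.urlopen(f"{base_url}/chat/", data=data) as resp:
        for raw_line in resp:  # the response body is one JSON object per line
            for message in parse_ndjson(raw_line.decode()):
                print(message)

if __name__ == "__main__":
    stream_chat("Write a haiku about FastAPI")
```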
### Database Layer
- Asynchronous SQLite wrapper using ThreadPoolExecutor
- Stores conversation history as JSON-encoded ModelMessages
- Methods: `add_messages()`, `get_messages()`, `clear_messages()`
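The wrapper pattern can be sketched with the standard library alone. This is a simplified reconstruction, not the app's exact code: a single worker thread owns the connection (satisfying sqlite3's same-thread requirement) while `run_in_executor` keeps the event loop unblocked:

```python
import asyncio
import json
import sqlite3
from concurrent.futures import ThreadPoolExecutor

class Database:
    """Async-friendly SQLite wrapper: every blocking call runs on one
    dedicated worker thread."""

    def __init__(self, path: str = ":memory:") -> None:
        self._executor = ThreadPoolExecutor(max_workers=1)
        # Create the connection on the worker thread that will own it.
        self._con = self._executor.submit(sqlite3.connect, path).result()
        self._executor.submit(
            self._con.execute,
            "CREATE TABLE IF NOT EXISTS messages "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, message_list TEXT)",
        ).result()

    async def _run(self, fn, *args):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, fn, *args)

    async def add_messages(self, messages: list[dict]) -> None:
        await self._run(
            self._con.execute,
            "INSERT INTO messages (message_list) VALUES (?)",
            (json.dumps(messages),),
        )
        await self._run(self._con.commit)

    async def get_messages(self) -> list[dict]:
        cursor = await self._run(
            self._con.execute, "SELECT message_list FROM messages ORDER BY id"
        )
        rows = await self._run(cursor.fetchall)
        return [msg for (blob,) in rows for msg in json.loads(blob)]

    async def clear_messages(self) -> None:
        await self._run(self._con.execute, "DELETE FROM messages")
        await self._run(self._con.commit)
```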
### AI Agent Configuration
- Model: OpenAI GPT-4o-mini
- Tools: Web search preview, code execution
- Settings: Streaming enabled, code execution outputs included
### TypeScript Logic (transpiled in-browser)
- Fetches and processes streaming responses
- Handles newline-delimited JSON messages
- Manages conversation UI updates
- Processes Mermaid diagram blocks
- Initializes syntax highlighting
- Implements copy-to-clipboard functionality
### Markdown Processing
- Marked.js for markdown parsing
- marked-highlight extension for code blocks
- marked-table extension for GFM tables
- Custom preprocessing for Mermaid diagrams
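The app performs the Mermaid preprocessing in TypeScript; the idea can be sketched in Python for brevity. The `<pre class="mermaid">` target element and the regex are illustrative assumptions, not the app's exact transformation:

````python
import re

# Matches a fenced ```mermaid block and captures its body.
MERMAID_FENCE = re.compile(r"```mermaid\n(.*?)\n```", re.DOTALL)

def preprocess_mermaid(markdown: str) -> str:
    """Swap ```mermaid fences for <pre class="mermaid"> blocks so the markdown
    parser leaves the diagram source intact for Mermaid to render client-side."""
    return MERMAID_FENCE.sub(
        lambda m: f'<pre class="mermaid">{m.group(1)}</pre>', markdown
    )
````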
### UI Components
- Conversation area with role-based styling
- Form with text input and dual buttons (Clear/Send)
- Toolbar with New Chat button
- Loading spinner with smooth transitions
- Error display area
- Python 3.11 or higher
- UV package manager (recommended) or pip
- OpenAI API key
- Clone the repository:

```bash
git clone https://github.com/darenr/agent-chat.git
cd agent-chat
```

- Install dependencies:

```bash
uv sync
```

- Set up your OpenAI API key:

```bash
export OPENAI_API_KEY='your-api-key-here'
```

- (Optional) Configure Logfire for observability:

```bash
export LOGFIRE_TOKEN='your-logfire-token'
```

Start the server with auto-reload:

```bash
uv run -m chat_app
```

The application will be available at http://localhost:8000
- Send a Message: Type your question in the input field and click "Send" or press Enter
- View Response: AI responses stream in real-time with formatted content
- Copy Code: Click the "Copy" button in the top-right of any code block
- Clear Screen: Click "Clear" to remove visible messages (history preserved)
- New Chat: Click "New Chat" in the toolbar to clear all history and start fresh
### Code Generation

```
User: Write a Python function to calculate Fibonacci numbers
AI: [Generates syntax-highlighted code with copy button]
```

### Web Search

```
User: What are the latest developments in AI?
AI: [Uses web search tool to provide current information]
```

### Code Execution

```
User: Calculate the sum of squares from 1 to 100
AI: [Executes code and shows result]
```

### Diagrams

```
User: Show me a flowchart of the login process
AI: [Generates Mermaid diagram that renders visually]
```
- User submits form → TypeScript prevents default submission
- POST to `/chat/` with FormData containing the prompt
- Backend creates user message and streams it to the client
- Backend runs PydanticAI agent with chat history
- Agent streams output tokens back to client
- Client updates UI incrementally as chunks arrive
- Complete messages saved to database for future context
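The streaming half of this flow can be sketched server-side as a generator that emits newline-delimited JSON. The field names below are illustrative assumptions, not the app's actual wire format:

```python
import json
from typing import Iterator

def stream_ndjson(token_chunks: Iterator[str]) -> Iterator[bytes]:
    """Emit one newline-terminated JSON object per incoming chunk. Each line
    carries the full text so far, so the client can simply re-render the
    message instead of stitching deltas together."""
    text = ""
    for chunk in token_chunks:
        text += chunk
        yield (json.dumps({"role": "model", "content": text}) + "\n").encode()
```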
Messages are stored in `.chat_app_messages.sqlite` as JSON-encoded arrays:

```
{
  "id": int,           # Auto-increment primary key
  "message_list": str  # JSON array of ModelMessage objects
}
```

- Code execution runs in a sandboxed environment (PydanticAI built-in)
- No user authentication (add for production use)
- Database stored locally (not suitable for multi-user deployments)
- API keys should be environment variables, not committed to code
- Edit CSS in the `<style>` block of `chat_app.html`
- Modify toolbar color: `#toolbar { background-color: #3a3632 }`
- Change syntax theme: update the Highlight.js CDN link to a different theme
- Adjust model: change `OpenAIResponsesModel("gpt-4o-mini")` to another model
- Modify tools: add/remove items in the `builtin_tools` array
- Tune streaming: adjust the `debounce_by` parameter in `result.stream_output()`
- Replace `/static/assets/green-bg.jpg` with your own image
- Update the CSS: `body { background-image: url('...') }`
- `fastapi` - Web framework
- `pydantic-ai` - AI agent framework
- `uvicorn` - ASGI server
- `logfire` - Observability (optional)
- Bootstrap 5.3.8 - UI framework
- Highlight.js 11.9.0 - Syntax highlighting
- Mermaid 10 - Diagram rendering
- Marked.js 15.0.0 - Markdown parsing
- MathJax 3 - Mathematical equations
- TypeScript 5.6.3 - In-browser transpilation
- Minimal build tooling (TypeScript transpiled in-browser)
- Single-file components where possible
- Progressive enhancement
- Streaming-first architecture
- User authentication and multi-user support
- Export conversations to markdown/PDF
- Voice input/output
- Image generation and analysis
- Custom system prompts
- Conversation branching
- Share conversation links
See the LICENSE file for details.
Contributions welcome! Please open an issue or submit a pull request.
- Built with PydanticAI by Pydantic
- Uses OpenAI models
- UI powered by Bootstrap