This is the official frontend user interface component for NeMo Agent Toolkit, an open-source library for building AI agents and workflows.
This project builds upon the work of:
- chatbot-ui by Mckay Wrigley
- chatbot-ollama by Ivan Fioravanti
- 🎨 Modern and responsive user interface
- 🔄 Real-time streaming responses
- 🤝 Human-in-the-loop workflow support
- 🌓 Light/Dark theme
- 🔌 WebSocket and HTTP API integration
- 🐳 Docker support
- NeMo Agent Toolkit installed and configured
- Git
- Node.js (v18 or higher)
- npm or Docker
Clone the repository:

```bash
git clone [email protected]:NVIDIA/NeMo-Agent-Toolkit-UI.git
cd NeMo-Agent-Toolkit-UI
```

Install dependencies:

```bash
npm ci
```

Start the development server:

```bash
npm run dev
```

The application will be available at http://localhost:3000.
```bash
# Build the Docker image
docker build -t nemo-agent-toolkit-ui .

# Run the container with environment variables from .env
# Ensure the .env file is present before running this command.
# Skip --env-file .env if no overrides are needed.
docker run --env-file .env -p 3000:3000 nemo-agent-toolkit-ui
```

The application supports configuration via environment variables in a `.env` file:
Application Configuration:
- `NEXT_PUBLIC_NAT_WORKFLOW` - Application workflow name displayed in the UI
- `NEXT_PUBLIC_NAT_BACKEND_ADDRESS` - Required - Backend server address without protocol (e.g., `127.0.0.1:8000` or `api.example.com`)
  - Used for both HTTP API and WebSocket connections
  - Protocols are added automatically: `http`/`ws` in development, `https`/`wss` in production
- `NEXT_PUBLIC_NAT_DEFAULT_ENDPOINT` - Default endpoint selection
MCP Configuration:
- `NEXT_PUBLIC_MCP_PATH` - MCP client API path (defaults to `/mcp/client/tool/list`)
  - Note: Uses the same server as `NEXT_PUBLIC_SERVER_URL`

Feature Toggles:
- `NEXT_PUBLIC_NAT_WEB_SOCKET_DEFAULT_ON` - Enable WebSocket mode by default (`true`/`false`)
- `NEXT_PUBLIC_NAT_CHAT_HISTORY_DEFAULT_ON` - Enable chat history persistence by default (`true`/`false`)
- `NEXT_PUBLIC_NAT_RIGHT_MENU_OPEN` - Show the right menu panel by default (`true`/`false`)
- `NEXT_PUBLIC_NAT_ENABLE_INTERMEDIATE_STEPS` - Show AI reasoning steps by default (`true`/`false`)
- `NEXT_PUBLIC_NAT_ADDITIONAL_VIZ_DEFAULT_ON` - Enable view settings and toggles that are not part of the core functionality (`true`/`false`)
Optional Configuration:
- `NAT_BACKEND_URL` - Advanced - Override the HTTP API backend URL for production (e.g., `http://nat-backend-internal:8000`)
  - Only set this if your internal API routing differs from the public WebSocket address
  - If not set, automatically derived from `NEXT_PUBLIC_NAT_BACKEND_ADDRESS`
  - Example use case: internal Docker network for the API, public domain for the WebSocket
- `NAT_DEFAULT_MODEL` - Default AI model identifier for server-side rendering
- `NAT_MAX_FILE_SIZE_STRING` - Maximum file upload size for all operations (e.g., `5mb`, `10mb`, `1gb`)
- `NODE_ENV` - Environment mode (`development`/`production`); affects security settings
- `NEXT_TELEMETRY_DISABLED` - Disable Next.js telemetry data collection (`1` to disable)
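For example, a minimal `.env` for local development against a backend at `127.0.0.1:8000` might look like the following; the variable names are from the list above, but the values are illustrative:

```
# Required: backend address without protocol (http/ws are added automatically in development)
NEXT_PUBLIC_NAT_BACKEND_ADDRESS=127.0.0.1:8000

# Optional: UI defaults (illustrative values)
NEXT_PUBLIC_NAT_WORKFLOW="Simple Calculator"
NEXT_PUBLIC_NAT_WEB_SOCKET_DEFAULT_ON=false
NEXT_PUBLIC_NAT_ENABLE_INTERMEDIATE_STEPS=true

# Optional: disable Next.js telemetry
NEXT_TELEMETRY_DISABLED=1
```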
Settings can be configured by selecting the Settings icon in the bottom-left corner of the home page.
Appearance:
Theme: Switch between Light and Dark mode
API Configuration:
HTTP Endpoint: Select the API endpoint type:
- Chat Completions – Streaming - Real-time OpenAI Chat Completions compatible API endpoint with streaming responses
- Chat Completions – Non-Streaming - Standard OpenAI Chat Completions compatible API endpoint
- Generate – Streaming - Text generation with streaming
- Generate – Non-Streaming - Standard text generation
- Context-Aware RAG – Non-Streaming (Experimental) - Experimental integration with a Context-Aware RAG backend

Optional Generation Parameters: OpenAI Chat Completions compatible JSON parameters that can be added to the request body (available for chat endpoints); see the example below.
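For instance, standard OpenAI-style parameters such as the following could be supplied; the names and values here are illustrative, not an exhaustive list:

```json
{
  "temperature": 0.7,
  "top_p": 0.9,
  "max_tokens": 1024
}
```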
WebSocket Configuration: The WebSocket path defaults to `websocket`.

WebSocket Schema: Select the schema for real-time connections:
- Chat Completions – Streaming - Streaming chat over WebSocket
- Chat Completions – Non-Streaming - Non-streaming chat over WebSocket
- Generate – Streaming - Streaming generation over WebSocket
- Generate – Non-Streaming - Non-streaming generation over WebSocket

Note: For intermediate results streaming, use Chat Completions – Streaming (`/chat/stream`) or Generate – Streaming (`/generate/stream`).
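As an illustration, with a workflow such as the simple calculator example below served locally on port 8000, the streaming chat endpoint can also be exercised directly. This is a sketch assuming an OpenAI Chat Completions compatible request body, which the chat endpoints expect:

```bash
# Stream a chat completion from the workflow server (-N disables curl's output buffering)
curl -N -X POST http://127.0.0.1:8000/chat/stream \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is 4 + 4?"}]}'
```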
Note: This is an experimental feature.
The live data streaming feature allows visualization of real-time text updates across multiple streams. This is useful for monitoring ongoing processes or displaying live transcription and other streaming data.
For more detail, see the README for live data streaming.
- Set up NeMo Agent Toolkit following the getting started guide
- Start the workflow by following the Getting Started Examples:

```bash
nat serve --config_file=examples/getting_started/simple_calculator/configs/config.yml
```

- Interact with the chat interface by prompting the agent with the message:

```
Is 4 + 4 greater than the current hour of the day?
```
- Set up NeMo Agent Toolkit following the getting started guide
- Start the workflow by following the HITL Example:

```bash
nat serve --config_file=examples/HITL/simple_calculator_hitl/configs/config-hitl.yml
```

- Enable WebSocket mode in the settings panel for bidirectional real-time communication between the client and server.
- Send the following prompt:

```
Can you process my input and display the result for the given prompt: How are you today?
```

- Enter your response when prompted.
- Monitor the result.
The UI supports both HTTP requests (OpenAI Chat compatible) and WebSocket connections for server communication. For detailed information about WebSocket messaging integration, please refer to the WebSocket Documentation in the NeMo Agent Toolkit documentation.
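For a quick manual check of the WebSocket side, a generic client such as wscat can open a raw connection against the default `websocket` path. This is a sketch assuming a local backend on `127.0.0.1:8000`; the message schema itself is defined in the WebSocket Documentation:

```bash
# Open a raw WebSocket connection to the backend (default path: websocket)
npx wscat -c ws://127.0.0.1:8000/websocket
```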
This project is licensed under the MIT License - see the LICENSE file for details. The project includes code from chatbot-ui and chatbot-ollama, which are also MIT licensed.