feat: Agent Blocks for composable agent #11055
base: feat/loop-subgraph
Conversation
Add WhileLoop, AgentStep, ExecuteTool, and ThinkTool components:
- WhileLoop: Flow control component with initial_state input for
MessageHistory integration and loop feedback support
- AgentStep: LLM reasoning step with conditional routing (ai_message
or tool_calls output based on model response)
- ExecuteTool: Executes tool calls with parallel execution and
timeout support, outputs updated conversation as DataFrame
- ThinkTool: Optional tool that lets the model reason step-by-step
Supporting infrastructure:
- LCModelComponent base class for model components
- Message and tool execution utilities
- Updated serve_app to use execute_graph_with_capture
- EventManager improvements for streaming
- Frontend cleanEdges fix for group_outputs components
These components enable visual agent loops:
ChatInput → WhileLoop → AgentStep → [tool_calls] → ExecuteTool → WhileLoop
↓ [ai_message]
ChatOutput
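The loop above can be sketched in plain Python. This is a hypothetical illustration of the control flow the diagram describes, not Langflow's actual API: AgentStep calls the model, ExecuteTool runs any requested tools, and WhileLoop re-enters until the model answers with a plain ai_message.

```python
def agent_loop(user_input, call_model, execute_tools, max_turns=10):
    """Illustrative sketch of the WhileLoop -> AgentStep -> ExecuteTool cycle."""
    conversation = [("user", user_input)]
    for _ in range(max_turns):
        response = call_model(conversation)          # AgentStep
        tool_calls = response.get("tool_calls")
        if not tool_calls:                           # [ai_message] branch
            return response["ai_message"]            # routed to ChatOutput
        results = execute_tools(tool_calls)          # ExecuteTool
        conversation.append(("tool", results))       # fed back into WhileLoop
    raise RuntimeError("loop exceeded max_turns without a final answer")
```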
Add Agent Blocks category to the frontend UI:
- Add nodeColors entry with violet color (#7C3AED)
- Add nodeColorsName mapping
- Add SIDEBAR_CATEGORIES entry after Flow Control
- Add categoryIcons with Blocks icon
- Add nodeIconToDisplayIconMap entry
Introduces comprehensive unit and contract tests validating the agent loop's graph execution, LLM invocation, streaming behavior, and tool-call lifecycle. The tests ensure a single AI message is updated per response, tools are notified immediately during streaming with a parent message, and tool entries are reused across execution to avoid duplicates. Reliability and UX improve by formalizing event sequencing (accessing → executed → final), enforcing one message ID per response, and preventing duplicate message events. Performance benefits from minimizing DB updates during streaming and tightening tool-content updates. Relates to feature X to enhance user experience and optimize performance.
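The sequencing contract those tests enforce can be expressed as a small checker. A hedged sketch with illustrative field names (`state`, `message_id` are assumptions, not the actual event schema): every tool call should progress accessing → executed → final, and all events in one response should share a single message ID.

```python
EXPECTED_STATES = ["accessing", "executed", "final"]

def check_event_sequence(events):
    """Return True if events follow the expected order and share one message ID."""
    states = [e["state"] for e in events]
    message_ids = {e["message_id"] for e in events}
    return states == EXPECTED_STATES and len(message_ids) == 1
```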
- Check model_message.text instead of lf_message.text for an unconsumed generator (lf_message loses the generator reference after serialization via model_dump)
- Remove redundant aggregation in the fallback loop; aggregation already happens inside stream_and_capture() via the nonlocal aggregated_chunk
- Fixes corrupted tool_calls (e.g., 'calculatorcalculator') when running without an event_manager
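The first fix rests on a general Python fact worth illustrating: a serialized copy of a message no longer holds the live generator, so the "is this stream unconsumed?" check must run on the original object. A minimal stand-in (no pydantic; `"".join(...)` plays the role of the materialization that model_dump performs):

```python
import inspect

def token_stream():
    yield "Hello"
    yield " world"

# The live generator lives on the model-side message object.
model_text = token_stream()

# Serialization materializes fields into plain data, so the
# serialized copy no longer references the generator.
serialized_text = "".join(token_stream())

assert inspect.isgenerator(model_text)        # correct place to check
assert not inspect.isgenerator(serialized_text)
```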
…rokenEdges
Fixed handle reconstruction to match how NodeOutputParameter builds handles:
- Regular outputs use [selectedType]
- Loop outputs with allows_loop=true use [selectedType, ...loop_types]
This fixes loop connections being incorrectly removed as invalid when loading flows with While Loop and Execute Tool components. Added comprehensive unit tests for both functions covering edge validation, loop edges, group_outputs handling, and hidden field filtering.
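The handle-type rule is simple enough to state in a few lines. An illustrative Python rendering (the real fix lives in the frontend's edge-cleaning code, so names here are assumptions):

```python
def build_handle_types(selected_type, allows_loop=False, loop_types=()):
    """Sketch of the rule: loop outputs also accept the loop feedback types."""
    if allows_loop:
        return [selected_type, *loop_types]
    return [selected_type]
```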
- Add SharedContextEventsDisplay component for rendering shared context updates
- Add sharedContextStore for managing shared context state
- Update ContentDisplay to include shared context events
- Add SharedContextEventOutput type to chat types
- Update buildUtils to handle shared context events
- Fix extract_loop_output to handle dict outputs from model_dump()
- Add loop subgraph execution infrastructure
- Preserve component configuration across subgraph deepcopy
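The dict-handling fix above amounts to a fallback over two shapes of output. A hedged sketch (the key name `value` and the helper's signature are assumptions for illustration):

```python
def extract_loop_output(output, key="value"):
    """Handle both model objects and plain dicts produced by model_dump()."""
    if isinstance(output, dict):
        return output.get(key)
    return getattr(output, key, None)
```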
Note
This is a Proof of Concept PR to validate the agent blocks architecture. Once the approach is established, this will be broken into smaller, more modular PRs for easier review. (This PR has ~9k new lines but most of them are tests 😅)
Summary
Introduces Agent Blocks - a set of composable primitives for building agent workflows visually:
Key Features
- Conditional routing: AgentStep emits ai_message or tool_calls depending on the model response
Architecture
Memory Integration
AgentLoop automatically retrieves conversation history from Langflow's session memory:
- message_history - Optional DataFrame input (takes precedence if provided)
- n_messages - Number of messages to retrieve (default: 100)
- context_id - Optional context ID for memory isolation
Benchmark Results (vs other agent frameworks)
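The precedence rule for these inputs can be sketched as follows. A hypothetical illustration, assuming a stand-in `fetch_session_messages` for Langflow's session-memory lookup: an explicit message_history wins; otherwise the loop pulls from session memory.

```python
def fetch_session_messages(limit=100, context_id=None):
    # Stand-in for Langflow's session-memory lookup (hypothetical).
    return [f"session-msg-{i}" for i in range(limit)]

def resolve_history(message_history=None, n_messages=100, context_id=None):
    """Explicit input takes precedence; otherwise retrieve from session memory."""
    if message_history is not None:
        return message_history
    return fetch_session_messages(limit=n_messages, context_id=context_id)
```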
50 iterations, 100% success rate, using gpt-4o-mini:
Runtime differences between Langflow, LangGraph, and Pydantic-AI are within API latency variance (~18ms). Langflow's AgentLoopComponent performs on par with the best agent frameworks.
Test plan