diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/.gitignore b/01-tutorials/01-fundamentals/09-configuration-loader/.gitignore
new file mode 100644
index 00000000..10b44aec
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/.gitignore
@@ -0,0 +1,2 @@
+*.txt
+*.md
\ No newline at end of file
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/01-tool-loading.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/01-tool-loading.ipynb
new file mode 100644
index 00000000..d576c9e8
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/01-tool-loading.ipynb
@@ -0,0 +1,163 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "8e394456",
+ "metadata": {},
+ "source": [
+ "# Loading Tools from Configuration Files\n",
+ "\n",
+ "This notebook demonstrates how to use the **ToolConfigLoader** to dynamically load and instantiate tools from YAML configuration files. This approach enables:\n",
+ "\n",
+ "- **Declarative Tool Management**: Define tools in configuration files rather than hardcoding them\n",
+ "- **Dynamic Tool Loading**: Load tools at runtime based on configuration\n",
+ "- **Flexible Tool Composition**: Mix custom tools with pre-built tools from strands-tools\n",
+ "\n",
+ "## What You'll Learn\n",
+ "\n",
+ "1. How to define tools in YAML configuration files\n",
+ "2. Using ToolConfigLoader to load custom and pre-built tools\n",
+ "3. Executing loaded tools with proper parameters\n",
+ "4. Best practices for tool configuration management\n",
+ "\n",
+ "## Prerequisites\n",
+ "\n",
+ "- Python 3.10 or later\n",
+ "- strands-agents and strands-agents-tools packages\n",
+ "- Basic understanding of YAML configuration files\n",
+ "\n",
+ "Let's explore tool loading from configuration!"
+ ]
+ },
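+ {
+ "cell_type": "markdown",
+ "id": "tools-config-sketch",
+ "metadata": {},
+ "source": [
+ "Before loading anything, it helps to see the shape of the configuration. The exact schema comes from the experimental ToolConfigLoader; the sketch below is a hypothetical `tools.strands.yml` for illustration only (field names may differ in your installed version):\n",
+ "\n",
+ "```yaml\n",
+ "tools:\n",
+ "  - name: weather                  # custom tool from a local module (hypothetical fields)\n",
+ "    module: weather_tool\n",
+ "  - name: file_write               # pre-built tool from strands-tools\n",
+ "    module: strands_tools.file_write\n",
+ "```"
+ ]
+ },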
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f7b62fd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install Strands using pip\n",
+ "!pip install strands-agents strands-agents-tools PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6763a5b8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "import sys\n",
+ "\n",
+ "with open('./configs/tools.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)\n",
+ "print(config)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "51dba503",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from strands.experimental.config_loader.tools.tool_config_loader import ToolConfigLoader\n",
+ "\n",
+ "tool_loader = ToolConfigLoader()\n",
+ "weather = tool_loader.load_tool(tool=config[\"tools\"][0])\n",
+ "\n",
+ "print(weather)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e19068ac",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Invoke the loaded tool directly as a callable\n",
+ "response = weather()\n",
+ "print(response)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f5cb951a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the pre-built file_write tool from strands-tools via the same config\n",
+ "tool_loader = ToolConfigLoader()\n",
+ "file_write = tool_loader.load_tool(tool=config[\"tools\"][1])\n",
+ "\n",
+ "print(file_write)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "demo_explanation",
+ "metadata": {},
+ "source": [
+ "## Demonstrating the file_write Tool\n",
+ "\n",
+ "Now let's use the `file_write` tool to create a simple text file. This demonstrates how to:\n",
+ "\n",
+ "1. Create a proper `ToolUse` request with the required parameters\n",
+ "2. Execute the tool asynchronously using the `stream` method\n",
+ "3. Handle the tool's response and verify the operation\n",
+ "\n",
+ "The `file_write` tool requires two parameters:\n",
+ "- `path`: The file path where content should be written\n",
+ "- `content`: The text content to write to the file"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b131235e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "# Set environment variable to bypass interactive prompts\n",
+ "os.environ['BYPASS_TOOL_CONSENT'] = 'true'\n",
+ "\n",
+ "# Create a ToolUse request for the file_write tool\n",
+ "tool_use = {\n",
+ " 'toolUseId': 'demo-file-write',\n",
+ " 'name': 'file_write',\n",
+ " 'input': {\n",
+ " 'path': 'hello-strands.txt',\n",
+ " 'content': 'Hello Strands!'\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Collect all streamed events from the tool; Jupyter supports top-level await\n",
+ "[result async for result in file_write.stream(tool_use, {})]"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/02-agent-loading.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/02-agent-loading.ipynb
new file mode 100644
index 00000000..5946c426
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/02-agent-loading.ipynb
@@ -0,0 +1,153 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "8e394456",
+ "metadata": {},
+ "source": [
+ "# Loading Agents from Configuration Files\n",
+ "\n",
+ "This notebook demonstrates how to create and configure Strands Agents using **YAML configuration files** and the **AgentConfigLoader**. This declarative approach provides several advantages:\n",
+ "\n",
+ "- **Configuration-Driven Development**: Define agent behavior, tools, and models in external config files\n",
+ "- **Environment Flexibility**: Easily switch between different configurations for development, testing, and production\n",
+ "- **Maintainability**: Separate agent logic from configuration, making updates easier\n",
+ "- **Reusability**: Share and version control agent configurations independently\n",
+ "\n",
+ "## What You'll Learn\n",
+ "\n",
+ "1. How to define agent configurations in YAML files\n",
+ "2. Loading agents using the AgentConfigLoader from configuration files or dictionaries\n",
+ "3. Configuring models, system prompts, and tools declaratively\n",
+ "4. Best practices for agent configuration management\n",
+ "\n",
+ "## Prerequisites\n",
+ "\n",
+ "- Python 3.10 or later\n",
+ "- AWS account configured with appropriate permissions for Bedrock\n",
+ "- strands-agents package installed\n",
+ "- Basic understanding of YAML syntax\n",
+ "\n",
+ "Let's build agents from configuration!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f7b62fd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install Strands using pip\n",
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fda69d85",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "79a45632",
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "source": [
+ "\n",
+ "## Creating an Agent from Configuration\n",
+ "\n",
+ "Let's examine how to define and load a weather agent using YAML configuration and the AgentConfigLoader.\n",
+ "\n",
+ "### 1. Weather Agent Configuration:\n",
+ "\n",
+ "The configuration file defines:\n",
+ "- **Model**: Specifies which LLM to use (Claude 3.7 Sonnet via Amazon Bedrock)\n",
+ "- **System Prompt**: Sets the agent's behavior and capabilities\n",
+ "- **Tools**: Lists the tools available to the agent (weather_tool.weather)\n",
+ "\n",
+ "This creates a specialized weather assistant that can:\n",
+ "- Answer weather-related queries using the weather tool\n",
+ "- Perform simple calculations as specified in the system prompt\n",
+ "- Maintain consistent behavior across different environments\n",
+ "\n",
+ "\n",
+ "\n",
+ ""
+ ]
+ },
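+ {
+ "cell_type": "markdown",
+ "id": "agent-config-sketch",
+ "metadata": {},
+ "source": [
+ "As a rough illustration, a minimal `weather-agent.strands.yml` might follow the shape below; the field names are hypothetical, so treat the file in `./configs/` as the source of truth:\n",
+ "\n",
+ "```yaml\n",
+ "agent:\n",
+ "  model: us.anthropic.claude-3-7-sonnet-20250219-v1:0   # Bedrock model id (assumed)\n",
+ "  system_prompt: You are a weather assistant that can also do simple calculations.\n",
+ "  tools:\n",
+ "    - weather_tool.weather\n",
+ "```"
+ ]
+ },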
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f35b45a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "with open('./configs/weather-agent.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)\n",
+ "print(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0d7592da",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from strands.experimental.config_loader.agent import AgentConfigLoader\n",
+ "\n",
+ "# Create the config loader\n",
+ "loader = AgentConfigLoader()\n",
+ "\n",
+ "# Load agent from dictionary config\n",
+ "weather_agent = loader.load_agent(config)\n",
+ "\n",
+ "print(weather_agent)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b61d3792",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "response = weather_agent(\"What is the weather today?\")\n",
+ "print(response)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/03-agents-as-tools.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/03-agents-as-tools.ipynb
new file mode 100644
index 00000000..e2764571
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/03-agents-as-tools.ipynb
@@ -0,0 +1,181 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "8e394456",
+ "metadata": {},
+ "source": [
+ "# Agents as Tools: Multi-Agent Orchestration from Configuration\n",
+ "\n",
+ "This notebook demonstrates how to create **sophisticated multi-agent systems** using configuration files, where specialized agents are exposed as tools to an orchestrator agent. This pattern enables:\n",
+ "\n",
+ "- **Specialized Agent Tools**: Define domain-specific agents (research, product recommendations, travel planning) as reusable tools\n",
+ "- **Intelligent Routing**: An orchestrator agent automatically selects the appropriate specialist based on user queries\n",
+ "- **Scalable Architecture**: Add new specialized agents without modifying existing code\n",
+ "- **Configuration-Driven Multi-Agent Systems**: Define complex agent hierarchies in YAML\n",
+ "\n",
+ "## What You'll Learn\n",
+ "\n",
+ "1. How to define agents as tools in configuration files\n",
+ "2. Creating an orchestrator agent that routes queries to specialists\n",
+ "3. Configuring agent tool schemas and input validation\n",
+ "4. Building scalable multi-agent systems declaratively\n",
+ "5. Combining agent tools with traditional function tools\n",
+ "\n",
+ "## Prerequisites\n",
+ "\n",
+ "- Python 3.10 or later\n",
+ "- AWS account configured with appropriate permissions for Bedrock\n",
+ "- strands-agents and strands-agents-tools packages\n",
+ "- Understanding of multi-agent system concepts\n",
+ "\n",
+ "Let's build intelligent agent orchestration!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f7b62fd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install Strands using pip\n",
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fda69d85",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "79a45632",
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "source": [
+ "\n",
+ "## Multi-Agent Orchestration Pattern\n",
+ "\n",
+ "Let's examine the agents-as-tools configuration that creates a sophisticated multi-agent system.\n",
+ "\n",
+ "### 1. Orchestrator Agent with Specialized Agent Tools:\n",
+ "\n",
+ "The configuration defines:\n",
+ "- **Orchestrator Agent**: Routes queries to appropriate specialist agents\n",
+ "- **Research Assistant**: Handles factual research queries with citations\n",
+ "- **Product Recommendation Assistant**: Provides personalized product suggestions\n",
+ "- **Trip Planning Assistant**: Creates detailed travel itineraries\n",
+ "- **Research & Summary Assistant**: Performs research and creates concise summaries\n",
+ "- **Traditional Tools**: Includes file_write for output generation\n",
+ "\n",
+ "This creates an intelligent system where:\n",
+ "- The orchestrator analyzes user queries and selects the most appropriate specialist\n",
+ "- Each specialist agent has focused expertise and tailored system prompts\n",
+ "- Agents can be combined with traditional function tools seamlessly\n",
+ "- The entire system is defined declaratively in configuration\n",
+ "\n",
+ "\n",
+ "\n",
+ ""
+ ]
+ },
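+ {
+ "cell_type": "markdown",
+ "id": "agents-as-tools-sketch",
+ "metadata": {},
+ "source": [
+ "To make the pattern concrete, an agents-as-tools configuration might look roughly like the sketch below. The schema here is hypothetical; the actual `agents-as-tools.strands.yml` in `./configs/` is authoritative:\n",
+ "\n",
+ "```yaml\n",
+ "agent:\n",
+ "  system_prompt: Route each query to the most suitable specialist agent.\n",
+ "  tools:\n",
+ "    - agent:                             # a specialist agent exposed as a tool (hypothetical)\n",
+ "        name: research_assistant\n",
+ "        system_prompt: Answer factual research queries with citations.\n",
+ "    - strands_tools.file_write           # a traditional function tool\n",
+ "```"
+ ]
+ },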
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f35b45a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "with open('./configs/agents-as-tools.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)\n",
+ "print(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0d7592da",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from strands.experimental.config_loader.agent import AgentConfigLoader\n",
+ "\n",
+ "# Create the config loader\n",
+ "loader = AgentConfigLoader()\n",
+ "\n",
+ "# Load agent from dictionary config\n",
+ "orchestrator = loader.load_agent(config)\n",
+ "\n",
+ "print(orchestrator)\n",
+ "\n",
+ "print(\"Orchestrator agent loaded successfully!\")\n",
+ "\n",
+ "print(f\"Tools: {orchestrator.tool_names}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b61d3792",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example: E-commerce Customer Service System\n",
+ "import os\n",
+ "\n",
+ "customer_query = (\n",
+ " \"I'm looking for hiking boots. Write the final response to current directory.\"\n",
+ ")\n",
+ "\n",
+ "os.environ[\"BYPASS_TOOL_CONSENT\"] = \"true\"\n",
+ "\n",
+ "# The orchestrator automatically determines this requires multiple specialized agents\n",
+ "response = orchestrator(customer_query)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9ea734b9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(response)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/04-swarm-loading.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/04-swarm-loading.ipynb
new file mode 100644
index 00000000..e49c00ff
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/04-swarm-loading.ipynb
@@ -0,0 +1,254 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Loading Swarms from Configuration\n",
+ "\n",
+ "The SwarmConfigLoader enables you to create multi-agent swarms from YAML configuration files, making it easy to define, version, and manage complex agent teams declaratively.\n",
+ "\n",
+ "## What is a Swarm?\n",
+ "\n",
+ "A Swarm is a collaborative multi-agent system where specialized agents work together autonomously to solve complex tasks through:\n",
+ "\n",
+ "* **Autonomous coordination** - Agents decide when to hand off to other specialists\n",
+ "* **Shared context** - All agents have access to the full conversation history\n",
+ "* **Specialized roles** - Each agent has distinct expertise and capabilities\n",
+ "* **Emergent intelligence** - The team achieves better results than individual agents\n",
+ "\n",
+ "## Configuration-Driven Swarms\n",
+ "\n",
+ "Instead of creating agents programmatically, you can define your entire swarm team in a YAML configuration file. This approach provides:\n",
+ "\n",
+ "- **Declarative definition** - Define agent roles, prompts, and parameters in YAML\n",
+ "- **Version control** - Track changes to your agent team configurations\n",
+ "- **Easy deployment** - Load different swarm configurations for different environments\n",
+ "- **Maintainability** - Modify agent behavior without changing code"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Prerequisites\n",
+ "\n",
+ "- Python 3.10 or later\n",
+ "- AWS account configured with appropriate permissions for Amazon Bedrock\n",
+ "- Basic understanding of YAML configuration files"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install Strands using pip\n",
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import Markdown, display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load environment variables\n",
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Examining the Swarm Configuration\n",
+ "\n",
+ "Let's first look at our swarm configuration file to understand how agents are defined:"
+ ]
+ },
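+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For orientation, a swarm configuration follows roughly the shape sketched below. The top-level keys (`swarm`, `agents`, `max_handoffs`, `execution_timeout`) match what the next cell reads; the per-agent fields are hypothetical:\n",
+ "\n",
+ "```yaml\n",
+ "swarm:\n",
+ "  max_handoffs: 20\n",
+ "  execution_timeout: 900\n",
+ "  agents:\n",
+ "    - name: researcher\n",
+ "      system_prompt: You research topics thoroughly and hand off to the writer.\n",
+ "    - name: writer\n",
+ "      system_prompt: You turn research notes into clear, engaging content.\n",
+ "```"
+ ]
+ },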
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "# Load the swarm configuration\n",
+ "with open('./configs/swarm.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)\n",
+ " \n",
+ "print(\"Swarm Configuration Structure:\")\n",
+ "print(f\"Agents: {len(config['swarm']['agents'])}\")\n",
+ "print(f\"Max Handoffs: {config['swarm']['max_handoffs']}\")\n",
+ "print(f\"Execution Timeout: {config['swarm']['execution_timeout']}s\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Loading a Swarm from Configuration\n",
+ "\n",
+ "The simplest way to create a swarm from configuration is using the SwarmConfigLoader:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from strands.experimental.config_loader.swarm import SwarmConfigLoader\n",
+ "\n",
+ "# Create the config loader\n",
+ "loader = SwarmConfigLoader()\n",
+ "\n",
+ "# Load from dictionary config\n",
+ "swarm = loader.load_swarm(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"Swarm Created Successfully\")\n",
+ "print(f\"Total Agents: {len(swarm.nodes)}\")\n",
+ "print(f\"Max Handoffs: {swarm.max_handoffs}\")\n",
+ "print(f\"Max Iterations: {swarm.max_iterations}\")\n",
+ "print(f\"Node Timeout: {swarm.node_timeout}s\")\n",
+ "print(f\"Execution Timeout: {swarm.execution_timeout}s\")\n",
+ "print(f\"Repetitive Handoff Detection Window: {swarm.repetitive_handoff_detection_window}\")\n",
+ "print(f\"Repetitive Handoff Min Unique Agents: {swarm.repetitive_handoff_min_unique_agents}\")\n",
+ "print(\"\\nAgent Roles:\")\n",
+ "for key, value in swarm.nodes.items():\n",
+ " print(f\"- {key}: {value.executor.description}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Executing the Swarm\n",
+ "\n",
+ "Now let's put our swarm to work on a collaborative task that benefits from multiple perspectives:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Execute the swarm on a complex task\n",
+ "task = \"Create a blog post explaining Agentic AI and then create a summary for a social media post.\"\n",
+ "\n",
+ "print(\"Executing Task:\")\n",
+ "print(f\"Task: {task}\")\n",
+ "\n",
+ "result = swarm(task)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Analyzing Swarm Results\n",
+ "\n",
+ "Let's examine how the agents collaborated and what they accomplished:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display execution summary\n",
+ "print(\"Execution Summary:\")\n",
+ "print(f\"Status: {result.status}\")\n",
+ "print(f\"Total Iterations: {result.execution_count}\")\n",
+ "print(f\"Execution Time: {result.execution_time}ms\")\n",
+ "print(f\"Tokens Used: {result.accumulated_usage['totalTokens']}\")\n",
+ "\n",
+ "print(\"\\nAgent Collaboration Flow:\")\n",
+ "for i, node in enumerate(result.node_history, 1):\n",
+ " print(f\"{i}. {node.node_id}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Final Swarm Output\n",
+ "for agent_name in result.results.keys():\n",
+ " print(f\"- {agent_name}\")\n",
+ " display(Markdown(str(result.results[agent_name].result)))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Key Benefits of Configuration-Driven Swarms\n",
+ "\n",
+ "1. **Declarative Definition**: Define complex agent teams in readable YAML files\n",
+ "2. **Version Control**: Track changes to agent configurations over time\n",
+ "3. **Environment Flexibility**: Use different configurations for development, testing, and production\n",
+ "4. **Easy Maintenance**: Modify agent behavior without changing application code\n",
+ "5. **Reproducibility**: Ensure consistent swarm behavior across deployments\n",
+ "\n",
+ "## When to Use Configuration-Driven Swarms\n",
+ "\n",
+ "- **Complex multi-step tasks** requiring diverse expertise\n",
+ "- **Content creation workflows** needing research, creativity, and review\n",
+ "- **Analysis projects** benefiting from multiple perspectives\n",
+ "- **Production deployments** where configuration management is important\n",
+ "- **Team collaboration** where non-developers need to modify agent behavior\n",
+ "\n",
+ "## Next Steps\n",
+ "\n",
+ "- Experiment with different agent configurations in the YAML file\n",
+ "- Try adjusting swarm parameters like `max_handoffs` and timeouts\n",
+ "- Create custom swarm configurations for your specific use cases\n",
+ "- Explore the [Strands Documentation](https://strandsagents.com/) for advanced swarm patterns"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/05-graph-loading.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/05-graph-loading.ipynb
new file mode 100644
index 00000000..ec83a719
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/05-graph-loading.ipynb
@@ -0,0 +1,431 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Loading Graphs from Configuration\n",
+ "\n",
+ "The GraphConfigLoader enables you to create multi-agent graphs from YAML configuration files, making it easy to define, version, and manage complex agent workflows declaratively. The GraphBuilder.from_config() method provides a convenient interface to the underlying GraphConfigLoader.\n",
+ "\n",
+ "## What is a Graph?\n",
+ "\n",
+ "A Graph is a structured multi-agent system where agents are connected through explicit dependencies and conditions:\n",
+ "\n",
+ "* **Directed workflow** - Agents execute in a specific order based on dependencies\n",
+ "* **Conditional routing** - Dynamic paths based on intermediate results\n",
+ "* **Parallel execution** - Independent branches can run concurrently\n",
+ "* **Deterministic flow** - Predictable execution patterns for complex workflows\n",
+ "\n",
+ "## Configuration-Driven Graphs\n",
+ "\n",
+ "Instead of building graphs programmatically, you can define your entire workflow in a YAML configuration file. This approach provides:\n",
+ "\n",
+ "- **Declarative definition** - Define nodes, edges, and conditions in YAML\n",
+ "- **Version control** - Track changes to your workflow configurations\n",
+ "- **Easy deployment** - Load different graph configurations for different environments\n",
+ "- **Maintainability** - Modify workflow behavior without changing code"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Prerequisites\n",
+ "\n",
+ "- Python 3.10 or later\n",
+ "- AWS account configured with appropriate permissions for Amazon Bedrock\n",
+ "- Basic understanding of YAML configuration files"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install Strands using pip\n",
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import Markdown, display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load environment variables\n",
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Example 1: Simple Graph - Basic Processing\n",
+ "\n",
+ "Let's start with a simple graph configuration that demonstrates basic processing with one coordinator leading two specialists:"
+ ]
+ },
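+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For reference, a graph configuration looks roughly like the sketch below. The keys shown (`graph_id`, `nodes`, `edges`, `entry_points`, `from_node`, `to_node`) are the ones the following cells read; the node fields are hypothetical:\n",
+ "\n",
+ "```yaml\n",
+ "graph:\n",
+ "  graph_id: simple-processing\n",
+ "  entry_points:\n",
+ "    - coordinator\n",
+ "  nodes:\n",
+ "    - node_id: coordinator\n",
+ "      system_prompt: Break the task down and delegate to specialists.\n",
+ "    - node_id: expert\n",
+ "      system_prompt: Provide a concise expert analysis.\n",
+ "  edges:\n",
+ "    - from_node: coordinator\n",
+ "      to_node: expert\n",
+ "```"
+ ]
+ },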
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "# Load the simple graph configuration\n",
+ "with open('./configs/graph-simple.strands.yml', 'r') as file:\n",
+ " simple_config = yaml.safe_load(file)\n",
+ " \n",
+ "print(\"Simple Graph Configuration Structure:\")\n",
+ "print(f\"Graph ID: {simple_config['graph']['graph_id']}\")\n",
+ "print(f\"Nodes: {len(simple_config['graph']['nodes'])}\")\n",
+ "print(f\"Edges: {len(simple_config['graph']['edges'])}\")\n",
+ "print(f\"Entry Points: {simple_config['graph']['entry_points']}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from strands.experimental.config_loader.graph.graph_config_loader import GraphConfigLoader\n",
+ "\n",
+ "# Load simple graph using GraphConfigLoader.load_graph() method\n",
+ "config_loader = GraphConfigLoader()\n",
+ "simple_graph = config_loader.load_graph(config=simple_config)\n",
+ "\n",
+ "print(\"Simple Graph Created Successfully\")\n",
+ "print(f\"Total Nodes: {len(simple_graph.nodes)}\")\n",
+ "print(f\"Entry Points: {[node.node_id for node in simple_graph.entry_points]}\")\n",
+ "print(\"\\nNode Roles:\")\n",
+ "for node_id, node in simple_graph.nodes.items():\n",
+ " print(f\" - {node_id}: {getattr(node.executor, 'name', 'Agent')}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Execute the simple graph\n",
+ "task = \"Analyze the impact of remote work on employee productivity. Provide a short analysis, no need for follow-ups.\"\n",
+ "\n",
+ "print(\"Executing Simple Graph:\")\n",
+ "print(f\"Task: {task}\")\n",
+ "\n",
+ "result = simple_graph(task)\n",
+ "\n",
+ "print(\"\\nExecution Summary:\")\n",
+ "print(f\"Status: {result.status}\")\n",
+ "print(f\"Total nodes: {result.total_nodes}\")\n",
+ "print(f\"Completed nodes: {result.completed_nodes}\")\n",
+ "print(f\"Execution time: {result.execution_time}ms\")\n",
+ "\n",
+ "print(\"\\nExecution Order:\")\n",
+ "for node in result.execution_order:\n",
+ " print(f\"- {node.node_id}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display results from specific nodes\n",
+ "print(\"Expert Analysis:\")\n",
+ "display(Markdown(str(result.results[\"expert\"].result)))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Example 2: Parallel Graph - Parallel Processing\n",
+ "\n",
+ "Now let's load a more complex graph that demonstrates parallel processing with multiple experts feeding into a final risk analyst:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the parallel graph configuration\n",
+ "with open('./configs/graph-parallel.strands.yml', 'r') as file:\n",
+ " parallel_config = yaml.safe_load(file)\n",
+ " \n",
+ "print(\"Parallel Graph Configuration Structure:\")\n",
+ "print(f\"Graph ID: {parallel_config['graph']['graph_id']}\")\n",
+ "print(f\"Nodes: {len(parallel_config['graph']['nodes'])}\")\n",
+ "print(f\"Edges: {len(parallel_config['graph']['edges'])}\")\n",
+ "print(f\"Entry Points: {parallel_config['graph']['entry_points']}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load parallel graph using GraphConfigLoader.load_graph() method\n",
+ "config_loader = GraphConfigLoader()\n",
+ "parallel_graph = config_loader.load_graph(config=parallel_config)\n",
+ "\n",
+ "\n",
+ "print(\"Parallel Graph Created Successfully\")\n",
+ "print(f\"Total Nodes: {len(parallel_graph.nodes)}\")\n",
+ "print(f\"Entry Points: {[node.node_id for node in parallel_graph.entry_points]}\")\n",
+ "print(\"\\nNode Roles:\")\n",
+ "for node_id, node in parallel_graph.nodes.items():\n",
+ " print(f\" - {node_id}: {getattr(node.executor, 'name', 'Agent')}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Execute the parallel graph\n",
+ "task = \"Our company is considering launching a new AI-powered customer service platform. Initial investment is $2M with projected 3-year ROI of 150%. What's your assessment?\"\n",
+ "\n",
+ "print(\"Executing Parallel Graph:\")\n",
+ "print(f\"Task: {task}\")\n",
+ "\n",
+ "result = parallel_graph(task)\n",
+ "\n",
+ "print(\"\\nExecution Summary:\")\n",
+ "print(f\"Status: {result.status}\")\n",
+ "print(f\"Total nodes: {result.total_nodes}\")\n",
+ "print(f\"Completed nodes: {result.completed_nodes}\")\n",
+ "print(f\"Execution time: {result.execution_time}ms\")\n",
+ "\n",
+ "print(\"\\nExecution Order:\")\n",
+ "for node in result.execution_order:\n",
+ " print(f\"- {node.node_id}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display results from each expert\n",
+ "print(\"Financial Expert Analysis:\")\n",
+ "display(Markdown(str(result.results[\"finance_expert\"].result)))\n",
+ "\n",
+ "print(\"\\nTechnical Expert Analysis:\")\n",
+ "display(Markdown(str(result.results[\"tech_expert\"].result)))\n",
+ "\n",
+ "print(\"\\nMarket Expert Analysis:\")\n",
+ "display(Markdown(str(result.results[\"market_expert\"].result)))\n",
+ "\n",
+ "print(\"\\nRisk Analyst Final Assessment:\")\n",
+ "display(Markdown(str(result.results[\"risk_analyst\"].result)))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Example 3: Conditional Graph - Branching with Conditions\n",
+ "\n",
+ "Finally, let's load a graph that demonstrates conditional routing based on classification:"
+ ]
+ },
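+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A conditional edge adds a `condition` block naming a predicate function, matching the fields the next cell prints (`from_node`, `to_node`, `condition.function`). The function path below is hypothetical:\n",
+ "\n",
+ "```yaml\n",
+ "edges:\n",
+ "  - from_node: classifier\n",
+ "    to_node: technical_specialist\n",
+ "    condition:\n",
+ "      function: conditions.is_technical\n",
+ "```"
+ ]
+ },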
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the conditional graph configuration\n",
+ "with open('./configs/graph-conditional.strands.yml', 'r') as file:\n",
+ " conditional_config = yaml.safe_load(file)\n",
+ " \n",
+ "print(\"Conditional Graph Configuration Structure:\")\n",
+ "print(f\"Graph ID: {conditional_config['graph']['graph_id']}\")\n",
+ "print(f\"Nodes: {len(conditional_config['graph']['nodes'])}\")\n",
+ "print(f\"Edges: {len(conditional_config['graph']['edges'])}\")\n",
+ "print(f\"Entry Points: {conditional_config['graph']['entry_points']}\")\n",
+ "print(\"\\nConditional Edges:\")\n",
+ "for edge in conditional_config['graph']['edges']:\n",
+ " if edge.get('condition'):\n",
+ " print(f\"- {edge['from_node']} -> {edge['to_node']} (condition: {edge['condition']['function']})\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load conditional graph using GraphConfigLoader.load_graph() method\n",
+ "config_loader = GraphConfigLoader()\n",
+ "conditional_graph = config_loader.load_graph(config=conditional_config)\n",
+ "\n",
+ "print(\"✅ Conditional Graph Created Successfully\")\n",
+ "print(f\"Total Nodes: {len(conditional_graph.nodes)}\")\n",
+ "print(f\"Entry Points: {[node.node_id for node in conditional_graph.entry_points]}\")\n",
+ "print(\"\\nNode Roles:\")\n",
+ "for node_id, node in conditional_graph.nodes.items():\n",
+ " print(f\" - {node_id}: {getattr(node.executor, 'name', 'Agent')}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Execute the conditional graph with a technical request\n",
+ "technical_task = \"Provide a report on the technical aspects of working from home, outlining things to consider and key risk factors\"\n",
+ "\n",
+ "print(\"Executing Conditional Graph (Technical Request):\")\n",
+ "print(f\"Task: {technical_task}\")\n",
+ "\n",
+ "result = conditional_graph(technical_task)\n",
+ "\n",
+ "print(\"\\nExecution Summary:\")\n",
+ "print(f\"Status: {result.status}\")\n",
+ "print(f\"Total nodes: {result.total_nodes}\")\n",
+ "print(f\"Completed nodes: {result.completed_nodes}\")\n",
+ "print(f\"Execution time: {result.execution_time}ms\")\n",
+ "\n",
+ "print(\"\\nExecution Order:\")\n",
+ "for node in result.execution_order:\n",
+ " print(f\"- {node.node_id}\")\n",
+ "\n",
+ "print(\"\\nClassifier Result:\")\n",
+ "display(Markdown(str(result.results[\"classifier\"].result)))\n",
+ "\n",
+ "# Display the final result\n",
+ "if \"technical_report\" in result.results:\n",
+ " print(\"\\nTechnical Expert Report:\")\n",
+ " display(Markdown(str(result.results[\"technical_report\"].result)))\n",
+ "elif \"business_report\" in result.results:\n",
+ " print(\"\\nBusiness Expert Report:\")\n",
+ " display(Markdown(str(result.results[\"business_report\"].result)))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Execute the conditional graph with a business request\n",
+ "business_task = \"Provide a report on the business impact of working from home, outlining things to consider and key risk factors\"\n",
+ "\n",
+ "print(\"Executing Conditional Graph (Business Request):\")\n",
+ "print(f\"Task: {business_task}\")\n",
+ "\n",
+ "result = conditional_graph(business_task)\n",
+ "\n",
+ "print(\"\\nExecution Summary:\")\n",
+ "print(f\"Status: {result.status}\")\n",
+ "print(f\"Total nodes: {result.total_nodes}\")\n",
+ "print(f\"Completed nodes: {result.completed_nodes}\")\n",
+ "print(f\"Execution time: {result.execution_time}ms\")\n",
+ "\n",
+ "print(\"\\nExecution Order:\")\n",
+ "for node in result.execution_order:\n",
+ " print(f\"- {node.node_id}\")\n",
+ "\n",
+ "print(\"\\nClassifier Result:\")\n",
+ "display(Markdown(str(result.results[\"classifier\"].result)))\n",
+ "\n",
+ "# Display the final result\n",
+ "if \"technical_report\" in result.results:\n",
+ " print(\"\\nTechnical Expert Report:\")\n",
+ " display(Markdown(str(result.results[\"technical_report\"].result)))\n",
+ "elif \"business_report\" in result.results:\n",
+ " print(\"\\nBusiness Expert Report:\")\n",
+ " display(Markdown(str(result.results[\"business_report\"].result)))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Key Benefits of Configuration-Driven Graphs\n",
+ "\n",
+ "1. **Declarative Definition**: Define complex workflows in readable YAML files\n",
+ "2. **Version Control**: Track changes to workflow configurations over time\n",
+ "3. **Environment Flexibility**: Use different configurations for development, testing, and production\n",
+ "4. **Easy Maintenance**: Modify workflow behavior without changing application code\n",
+ "5. **Reproducibility**: Ensure consistent graph behavior across deployments\n",
+ "6. **Visual Clarity**: YAML structure makes workflow dependencies clear\n",
+ "\n",
+ "## When to Use Configuration-Driven Graphs\n",
+ "\n",
+ "- **Structured workflows** with clear dependencies and execution order\n",
+ "- **Conditional processing** where different paths are taken based on results\n",
+ "- **Parallel processing** where independent tasks can run concurrently\n",
+ "- **Production deployments** where configuration management is important\n",
+ "- **Complex decision trees** requiring multiple specialized agents\n",
+ "- **Audit trails** where execution order and dependencies must be tracked\n",
+ "\n",
+ "## Graph vs Swarm: When to Use Which?\n",
+ "\n",
+ "**Use Graphs when:**\n",
+ "- You need predictable, structured workflows\n",
+ "- Dependencies between agents are clear\n",
+ "- You want parallel processing capabilities\n",
+ "- Conditional routing is required\n",
+ "\n",
+ "**Use Swarms when:**\n",
+ "- You need autonomous agent collaboration\n",
+ "- Agents should decide when to hand off to others\n",
+ "- The workflow is more exploratory and adaptive\n",
+ "- You want emergent behavior from agent interactions\n",
+ "\n",
+ "## Next Steps\n",
+ "\n",
+ "- Experiment with different graph configurations in the YAML files\n",
+ "- Try creating custom conditional functions for routing\n",
+ "- Combine graphs and swarms for hybrid workflows\n",
+ "- Explore the [Strands Documentation](https://strandsagents.com/) for advanced graph patterns"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/06-structured-output-config.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/06-structured-output-config.ipynb
new file mode 100644
index 00000000..dd9c0324
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/06-structured-output-config.ipynb
@@ -0,0 +1,402 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "introduction",
+ "metadata": {},
+ "source": [
+ "# Structured Output Configuration Tutorial\n",
+ "\n",
+ "This tutorial demonstrates how to configure agents with structured output schemas using YAML configuration files.\n",
+ "\n",
+ "## What You'll Learn\n",
+ "- Define structured output schemas in YAML configuration\n",
+ "- Use inline schemas, external files, and existing Pydantic models\n",
+ "- Configure validation and error handling\n",
+ "- Build a complete business intelligence pipeline\n",
+ "\n",
+ "## Prerequisites\n",
+ "- Basic understanding of Pydantic models\n",
+ "- Familiarity with YAML configuration\n",
+ "- AWS credentials configured for Bedrock access"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1d41681d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "setup-imports",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "import json\n",
+ "from pathlib import Path\n",
+ "from datetime import datetime\n",
+ "from IPython.display import display, Markdown, JSON\n",
+ "\n",
+ "# Load environment variables\n",
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()\n",
+ "\n",
+ "# Import Strands components\n",
+ "from strands.experimental.config_loader.agent import AgentConfigLoader\n",
+ "\n",
+ "print(\"✅ Environment setup complete\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "simple-config",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Simple structured output configuration\n",
+ "simple_config = {\n",
+ " \"schemas\": [\n",
+ " {\n",
+ " \"name\": \"UserProfile\",\n",
+ " \"schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"name\": {\"type\": \"string\", \"description\": \"User's full name\"},\n",
+ " \"email\": {\"type\": \"string\"},\n",
+ " \"age\": {\"type\": \"integer\", \"minimum\": 0}\n",
+ " },\n",
+ " \"required\": [\"name\", \"email\"]\n",
+ " }\n",
+ " }\n",
+ " ],\n",
+ " \"agent\": {\n",
+ " \"name\": \"user_extractor\",\n",
+ " \"model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n",
+ " \"system_prompt\": \"Extract user information from text.\",\n",
+ " \"structured_output\": \"UserProfile\"\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Load agent\n",
+ "loader = AgentConfigLoader()\n",
+ "agent = loader.load_agent(simple_config)\n",
+ "\n",
+ "print(\"✅ Loaded agent with structured output\")\n",
+ "print(f\"Schema: {agent._structured_output_schema.__name__}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "demo-extraction",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Demo structured output extraction\n",
+ "sample_text = \"Hi, I'm John Doe, 30 years old. You can reach me at john.doe@example.com\"\n",
+ "\n",
+ "print(\"Sample Text:\")\n",
+ "print(sample_text)\n",
+ "\n",
+ "print(\"\\nExtracting structured data...\")\n",
+ "result = agent.structured_output(f\"Extract user information: {sample_text}\")\n",
+ "\n",
+ "print(\"\\n✅ Extraction completed!\")\n",
+ "print(f\"Name: {result.name}\")\n",
+ "print(f\"Email: {result.email}\")\n",
+ "print(f\"Age: {result.age}\")\n",
+ "\n",
+ "# Show full structured data\n",
+ "display(JSON(result.model_dump(), expanded=True))"
+ ]
+ },
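+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `result` object behaves like a Pydantic model generated from the `UserProfile` JSON Schema, which is why `result.name` and `result.model_dump()` work. A roughly equivalent hand-written model (a sketch for intuition only, not the loader's actual generated class):\n",
+ "\n",
+ "```python\n",
+ "from typing import Optional\n",
+ "from pydantic import BaseModel\n",
+ "\n",
+ "class UserProfile(BaseModel):\n",
+ "    name: str                   # required in the schema\n",
+ "    email: str                  # required in the schema\n",
+ "    age: Optional[int] = None   # optional; the schema also sets minimum: 0\n",
+ "```"
+ ]
+ },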
+ {
+ "cell_type": "markdown",
+ "id": "advanced-config",
+ "metadata": {},
+ "source": [
+ "## Advanced Configuration with Validation\n",
+ "\n",
+ "Now let's explore more advanced structured output configuration with validation and error handling."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "advanced-schema",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Advanced configuration with validation\n",
+ "advanced_config = {\n",
+ " \"schemas\": [\n",
+ " {\n",
+ " \"name\": \"ProductAnalysis\",\n",
+ " \"schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"product_name\": {\"type\": \"string\"},\n",
+ " \"price\": {\"type\": \"number\", \"minimum\": 0},\n",
+ " \"rating\": {\"type\": \"number\", \"minimum\": 1, \"maximum\": 5},\n",
+ " \"category\": {\n",
+ " \"type\": \"string\", \n",
+ " \"enum\": [\"electronics\", \"clothing\", \"books\", \"home\"]\n",
+ " },\n",
+ " \"pros\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n",
+ " \"cons\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n",
+ " \"recommendation\": {\n",
+ " \"type\": \"string\",\n",
+ " \"enum\": [\"highly_recommended\", \"recommended\", \"neutral\", \"not_recommended\"]\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"product_name\", \"price\", \"rating\", \"category\", \"recommendation\"]\n",
+ " }\n",
+ " }\n",
+ " ],\n",
+ " \"agent\": {\n",
+ " \"name\": \"product_analyzer\",\n",
+ " \"model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n",
+ " \"system_prompt\": \"Analyze product information and provide structured insights.\",\n",
+ " \"structured_output\": {\n",
+ " \"schema\": \"ProductAnalysis\",\n",
+ " \"validation\": {\n",
+ " \"strict\": True,\n",
+ " \"allow_extra_fields\": False\n",
+ " },\n",
+ " \"error_handling\": {\n",
+ " \"retry_on_validation_error\": True,\n",
+ " \"max_retries\": 2\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Load advanced agent with new loader instance\n",
+ "advanced_loader = AgentConfigLoader()\n",
+ "advanced_agent = advanced_loader.load_agent(advanced_config)\n",
+ "print(\"✅ Loaded advanced agent with validation\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "demo-product-analysis",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Demo product analysis\n",
+ "product_review = \"\"\"\n",
+ "I recently bought the Sony WH-1000XM4 headphones for $299. \n",
+ "These are wireless noise-canceling headphones that deliver exceptional sound quality.\n",
+ "\n",
+ "Pros:\n",
+ "- Excellent noise cancellation\n",
+ "- Great battery life (30+ hours)\n",
+ "- Comfortable for long listening sessions\n",
+ "- Quick charge feature\n",
+ "\n",
+ "Cons:\n",
+ "- Expensive compared to competitors\n",
+ "- Touch controls can be finicky\n",
+ "- Bulky design\n",
+ "\n",
+ "Overall, I'd rate these 4.5/5 stars. Highly recommended for audiophiles and frequent travelers.\n",
+ "\"\"\"\n",
+ "\n",
+ "print(\"Product Review:\")\n",
+ "print(product_review[:200] + \"...\")\n",
+ "\n",
+ "print(\"\\nAnalyzing product...\")\n",
+ "analysis = advanced_agent.structured_output(f\"Analyze this product review: {product_review}\")\n",
+ "\n",
+ "print(\"\\n✅ Analysis completed!\")\n",
+ "print(f\"Product: {analysis.product_name}\")\n",
+ "print(f\"Price: ${analysis.price}\")\n",
+ "print(f\"Rating: {analysis.rating}/5\")\n",
+ "print(f\"Category: {analysis.category}\")\n",
+ "print(f\"Recommendation: {analysis.recommendation}\")\n",
+ "print(f\"Pros: {', '.join(analysis.pros)}\")\n",
+ "print(f\"Cons: {', '.join(analysis.cons)}\")\n",
+ "\n",
+ "# Show full structured data\n",
+ "display(JSON(analysis.model_dump(), expanded=True))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "shared-schemas",
+ "metadata": {},
+ "source": [
+ "## Shared Schema Registry\n",
+ "\n",
+ "You can also share schemas across multiple agents by defining them in a single configuration:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "shared-config",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Configuration with shared schemas\n",
+ "shared_config = {\n",
+ " \"schemas\": [\n",
+ " {\n",
+ " \"name\": \"ContactInfo\",\n",
+ " \"schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"name\": {\"type\": \"string\"},\n",
+ " \"email\": {\"type\": \"string\"},\n",
+ " \"phone\": {\"type\": \"string\"},\n",
+ " \"company\": {\"type\": \"string\"}\n",
+ " },\n",
+ " \"required\": [\"name\", \"email\"]\n",
+ " }\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"TaskInfo\",\n",
+ " \"schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"title\": {\"type\": \"string\"},\n",
+ " \"priority\": {\"type\": \"string\", \"enum\": [\"low\", \"medium\", \"high\", \"urgent\"]},\n",
+ " \"due_date\": {\"type\": \"string\"},\n",
+ " \"assignee\": {\"type\": \"string\"}\n",
+ " },\n",
+ " \"required\": [\"title\", \"priority\"]\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "# Create multiple agents using shared schemas\n",
+ "shared_loader = AgentConfigLoader()\n",
+ "\n",
+ "# Load schemas first\n",
+ "shared_loader._load_global_schemas(shared_config[\"schemas\"])\n",
+ "\n",
+ "# Create contact extractor agent\n",
+ "contact_agent_config = {\n",
+ " \"agent\": {\n",
+ " \"name\": \"contact_extractor\",\n",
+ " \"model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n",
+ " \"system_prompt\": \"Extract contact information from text.\",\n",
+ " \"structured_output\": \"ContactInfo\"\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Create task extractor agent\n",
+ "task_agent_config = {\n",
+ " \"agent\": {\n",
+ " \"name\": \"task_extractor\",\n",
+ " \"model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n",
+ " \"system_prompt\": \"Extract task information from text.\",\n",
+ " \"structured_output\": \"TaskInfo\"\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "contact_agent = shared_loader.load_agent(contact_agent_config)\n",
+ "task_agent = shared_loader.load_agent(task_agent_config)\n",
+ "\n",
+ "print(\"✅ Created multiple agents with shared schemas\")\n",
+ "print(f\"Available schemas: {list(shared_loader.schema_registry.list_schemas().keys())}\")\n",
+ "print(f\"Contact agent schema: {contact_agent._structured_output_schema.__name__}\")\n",
+ "print(f\"Task agent schema: {task_agent._structured_output_schema.__name__}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "summary",
+ "metadata": {},
+ "source": [
+ "## Summary\n",
+ "\n",
+ "In this tutorial, we've learned how to:\n",
+ "\n",
+ "✅ **Configure structured output schemas** using JSON Schema syntax in agent configurations\n",
+ "\n",
+ "✅ **Extract structured data** from unstructured text using configured agents\n",
+ "\n",
+ "✅ **Apply validation and constraints** to ensure data quality and consistency\n",
+ "\n",
+ "✅ **Handle complex nested data structures** with multiple fields and validation rules\n",
+ "\n",
+ "✅ **Share schemas across multiple agents** using a shared schema registry\n",
+ "\n",
+ "## Key Patterns\n",
+ "\n",
+ "### Single Agent with Schema\n",
+ "```python\n",
+ "config = {\n",
+ " \"schemas\": [{\"name\": \"MySchema\", \"schema\": {...}}],\n",
+ "    \"agent\": {..., \"structured_output\": \"MySchema\"}\n",
+ "}\n",
+ "agent = AgentConfigLoader().load_agent(config)\n",
+ "```\n",
+ "\n",
+ "### Multiple Agents with Shared Schemas\n",
+ "```python\n",
+ "loader = AgentConfigLoader()\n",
+ "loader._load_global_schemas(shared_schemas)\n",
+ "agent1 = loader.load_agent({\"agent\": {..., \"structured_output\": \"Schema1\"}})\n",
+ "agent2 = loader.load_agent({\"agent\": {..., \"structured_output\": \"Schema2\"}})\n",
+ "```\n",
+ "\n",
+ "### Advanced Validation\n",
+ "```python\n",
+ "config = {\n",
+ "    \"agent\": {\n",
+ "        ...,\n",
+ "        \"structured_output\": {\n",
+ "            \"schema\": \"MySchema\",\n",
+ "            \"validation\": {\"strict\": True},\n",
+ "            \"error_handling\": {\"max_retries\": 3}\n",
+ "        }\n",
+ "    }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "## Next Steps\n",
+ "\n",
+ "- Explore external schema files for complex, reusable schemas\n",
+ "- Build multi-agent systems with shared schema registries\n",
+ "- Integrate with databases and external APIs using structured output\n",
+ "- Deploy structured output agents in production environments\n",
+ "\n",
+ "## Additional Resources\n",
+ "\n",
+ "- [Structured Output Documentation](../../../sdk-python/src/strands/experimental/config_loader/agent/STRUCTURED-OUTPUT.md)\n",
+ "- [Pydantic Documentation](https://docs.pydantic.dev/)\n",
+ "- [JSON Schema Specification](https://json-schema.org/)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/07-swarms-as-tools.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/07-swarms-as-tools.ipynb
new file mode 100644
index 00000000..27066869
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/07-swarms-as-tools.ipynb
@@ -0,0 +1,256 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Swarms as Tools\n",
+ "\n",
+ "This notebook demonstrates how to use the Agent(config=) constructor to load an agent configuration that includes a swarm as a tool. This pattern allows you to create coordinator agents that can delegate complex multi-agent tasks to specialized research teams.\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "In this example, we'll:\n",
+ "1. Load an agent configuration that includes a research team swarm as a tool\n",
+ "2. Use the Agent(config=) constructor to create the coordinator\n",
+ "3. Demonstrate how the coordinator delegates research tasks to the swarm\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "# Load the swarms-as-tools configuration\n",
+ "with open('./configs/swarms-as-tools.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the research coordinator agent using AgentConfigLoader\n",
+ "from strands.experimental.config_loader.agent import AgentConfigLoader\n",
+ "\n",
+ "loader = AgentConfigLoader()\n",
+ "research_coordinator = loader.load_agent(config)\n",
+ "\n",
+ "print(\"Research coordinator agent loaded successfully!\") \n",
+ "\n",
+ "print(f\"Tools: {research_coordinator.tool_names}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test the swarm tool with a comprehensive research request\n",
+ "print(\"Testing research coordinator with comprehensive research request...\\n\")\n",
+ "\n",
+ "research_request = \"\"\"\n",
+ "I need comprehensive research on 'Quantum Computing Applications in Healthcare'. \n",
+ "Please provide detailed analysis including:\n",
+ "- Current applications and use cases\n",
+ "- Future potential and emerging opportunities\n",
+ "- Technical challenges and limitations\n",
+ "- Key players and recent developments\n",
+ "- Timeline for practical implementation\n",
+ "\n",
+ "This research will be used for a strategic investment decision.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = research_coordinator(research_request)\n",
+ "\n",
+ "print(\"=== RESEARCH COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with different research parameters - basic depth\n",
+ "print(\"Testing research coordinator with basic depth analysis...\\n\")\n",
+ "\n",
+ "basic_request = \"\"\"\n",
+ "Research 'Sustainable Energy Storage Solutions' with basic depth analysis. \n",
+ "Focus on the most promising technologies and their commercial viability. \n",
+ "I need this for a preliminary market assessment.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = research_coordinator(basic_request)\n",
+ "\n",
+ "print(\"=== RESEARCH COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with a more specific research focus\n",
+ "print(\"Testing research coordinator with specific research focus...\\n\")\n",
+ "\n",
+ "specific_request = \"\"\"\n",
+ "I need expert-level research on 'Artificial Intelligence in Financial Services' \n",
+ "focusing specifically on:\n",
+ "- Risk management applications\n",
+ "- Regulatory compliance challenges\n",
+ "- Current implementations by major banks\n",
+ "- ROI and cost-benefit analysis\n",
+ "\n",
+ "Please provide comprehensive analysis with current implementations and \n",
+ "regulatory considerations. This is for a fintech startup's business plan.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = research_coordinator(specific_request)\n",
+ "\n",
+ "print(\"=== RESEARCH COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with a technology trend analysis\n",
+ "print(\"Testing research coordinator with technology trend analysis...\\n\")\n",
+ "\n",
+ "trend_request = \"\"\"\n",
+ "Conduct comprehensive research on 'Edge Computing Market Trends 2024-2025'. \n",
+ "I need analysis covering:\n",
+ "- Market size and growth projections\n",
+ "- Key technology drivers and innovations\n",
+ "- Major vendors and competitive landscape\n",
+ "- Industry adoption patterns\n",
+ "- Investment opportunities and risks\n",
+ "\n",
+ "Format the output as a detailed report suitable for executive presentation.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = research_coordinator(trend_request)\n",
+ "\n",
+ "print(\"=== RESEARCH COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with a competitive analysis request\n",
+ "print(\"Testing research coordinator with competitive analysis...\\n\")\n",
+ "\n",
+ "competitive_request = \"\"\"\n",
+ "Research the competitive landscape for 'Cloud-based Data Analytics Platforms'. \n",
+ "Focus on:\n",
+ "- Top 5 market leaders and their key differentiators\n",
+ "- Pricing strategies and business models\n",
+ "- Emerging challengers and disruptive technologies\n",
+ "- Market gaps and opportunities\n",
+ "\n",
+ "Provide recommendations for a new entrant's positioning strategy.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = research_coordinator(competitive_request)\n",
+ "\n",
+ "print(\"=== RESEARCH COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Key Takeaways\n",
+ "\n",
+ "This notebook demonstrates how the AgentConfigLoader pattern with swarms-as-tools enables:\n",
+ "\n",
+ "1. **Declarative Multi-Agent Configuration**: Define complex swarm workflows in YAML without code\n",
+ "2. **Automatic Swarm Tool Loading**: The swarm is automatically converted to a callable tool\n",
+ "3. **Intelligent Task Delegation**: The coordinator delegates research tasks to the specialized team\n",
+ "4. **Collaborative Processing**: Multiple agents work together through defined handoff strategies\n",
+ "\n",
+ "The research coordinator agent automatically:\n",
+ "- Analyzes research requests and delegates to the research team swarm\n",
+ "- Coordinates the sequential workflow: Researcher → Analyst → Synthesizer\n",
+ "- Handles different research depths (basic, comprehensive, expert)\n",
+ "- Produces structured, high-quality research outputs\n",
+ "- Adapts output format based on requirements (report, summary, analysis)\n",
+ "\n",
+ "The research team swarm automatically:\n",
+ "- **Researcher**: Gathers comprehensive information and data\n",
+ "- **Analyst**: Analyzes findings and identifies key insights\n",
+ "- **Synthesizer**: Combines insights into coherent, actionable reports\n",
+ "\n",
+ "This pattern is particularly powerful for:\n",
+ "- Complex research and analysis tasks requiring multiple perspectives\n",
+ "- Strategic planning and market analysis\n",
+ "- Competitive intelligence and trend analysis\n",
+ "- Investment research and due diligence\n",
+ "- Any scenario requiring collaborative expertise and structured workflows\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/08-graphs-as-tools.ipynb b/01-tutorials/01-fundamentals/09-configuration-loader/08-graphs-as-tools.ipynb
new file mode 100644
index 00000000..4602e878
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/08-graphs-as-tools.ipynb
@@ -0,0 +1,398 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Graphs as Tools\n",
+ "\n",
+ "This notebook demonstrates how to use the AgentConfigLoader to load an agent configuration that includes a graph as a tool. This pattern allows you to create coordinator agents that can delegate complex sequential workflows to structured document processing pipelines.\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "In this example, we'll:\n",
+ "1. Load an agent configuration that includes a document processing graph as a tool\n",
+ "2. Use the AgentConfigLoader to create the coordinator\n",
+ "3. Demonstrate how the coordinator delegates document processing tasks to the graph\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install strands-agents strands-agents-tools python-dotenv PyYAML"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yaml\n",
+ "\n",
+ "# Load the graphs-as-tools configuration\n",
+ "with open('./configs/graphs-as-tools.strands.yml', 'r') as file:\n",
+ " config = yaml.safe_load(file)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the document processing coordinator agent using AgentConfigLoader\n",
+ "from strands.experimental.config_loader.agent import AgentConfigLoader\n",
+ "\n",
+ "loader = AgentConfigLoader()\n",
+ "document_coordinator = loader.load_agent(config)\n",
+ "\n",
+ "print(\"Document processing coordinator agent loaded successfully!\")\n",
+ "\n",
+ "print(f\"Tools: {document_coordinator.tool_names}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define sample document for testing\n",
+ "healthcare_document = \"\"\"\n",
+ "# Artificial Intelligence in Modern Healthcare\n",
+ "\n",
+ "Artificial Intelligence (AI) is revolutionizing healthcare through various applications including \n",
+ "diagnostic imaging, drug discovery, personalized treatment plans, and predictive analytics. \n",
+ "Machine learning algorithms can now detect diseases earlier and more accurately than traditional methods.\n",
+ "\n",
+ "## Key Applications\n",
+ "\n",
+ "### Diagnostic Imaging\n",
+ "AI-powered imaging systems can identify tumors, fractures, and other abnormalities with remarkable \n",
+ "precision. Deep learning models trained on millions of medical images can often spot patterns \n",
+ "that human radiologists might miss.\n",
+ "\n",
+ "### Drug Discovery\n",
+ "Traditional drug discovery takes 10-15 years and costs billions of dollars. AI is accelerating \n",
+ "this process by predicting molecular behavior, identifying promising compounds, and optimizing \n",
+ "clinical trial designs.\n",
+ "\n",
+ "### Personalized Medicine\n",
+ "By analyzing genetic data, medical history, and lifestyle factors, AI can help create personalized \n",
+ "treatment plans that are more effective and have fewer side effects.\n",
+ "\n",
+ "## Key Benefits\n",
+ "- Improved diagnostic accuracy (up to 95% in some imaging applications)\n",
+ "- Reduced treatment costs (estimated 20-30% savings in some areas)\n",
+ "- Personalized patient care based on individual characteristics\n",
+ "- Accelerated drug development (reducing timelines by 2-3 years)\n",
+ "- Enhanced clinical decision support for healthcare providers\n",
+ "\n",
+ "## Challenges and Considerations\n",
+ "- Data privacy and security concerns\n",
+ "- Regulatory approval processes\n",
+ "- Integration with existing healthcare systems\n",
+ "- Need for healthcare professional training\n",
+ "- Ensuring AI transparency and explainability\n",
+ "\n",
+ "The future of healthcare will likely see even deeper integration of AI technologies, \n",
+ "leading to more precise, efficient, and accessible medical care for patients worldwide.\n",
+ "\"\"\"\n",
+ "\n",
+ "print(\"Healthcare document prepared for processing.\")\n",
+ "print(f\"Document length: {len(healthcare_document)} characters\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test the graph tool with comprehensive document analysis\n",
+ "print(\"Testing document coordinator with comprehensive analysis...\\n\")\n",
+ "\n",
+ "analysis_request = f\"\"\"\n",
+ "Please process this healthcare document and generate a comprehensive analysis:\n",
+ "\n",
+ "{healthcare_document}\n",
+ "\n",
+ "I need a detailed analysis that includes key themes, insights, and recommendations \n",
+ "for healthcare technology investment decisions.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = document_coordinator(analysis_request)\n",
+ "\n",
+ "print(\"=== DOCUMENT COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with different output format - summary\n",
+ "print(\"Testing document coordinator with summary generation...\\n\")\n",
+ "\n",
+ "summary_request = f\"\"\"\n",
+ "Please process this healthcare document and generate a concise summary:\n",
+ "\n",
+ "{healthcare_document}\n",
+ "\n",
+ "I need a brief summary suitable for executive briefing that captures the main points \n",
+ "and key takeaways about AI in healthcare.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = document_coordinator(summary_request)\n",
+ "\n",
+ "print(\"=== DOCUMENT COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with a different document type - technical specification\n",
+ "technical_document = \"\"\"\n",
+ "# API Specification: User Management Service v2.1\n",
+ "\n",
+ "## Overview\n",
+ "The User Management Service provides RESTful APIs for managing user accounts, authentication, \n",
+ "and authorization in our enterprise platform. This version introduces enhanced security features \n",
+ "and improved performance optimizations.\n",
+ "\n",
+ "## Authentication & Security\n",
+ "All endpoints require Bearer token authentication with JWT tokens. Tokens expire after 24 hours \n",
+ "and must be refreshed using the refresh token endpoint. All communications must use HTTPS.\n",
+ "\n",
+ "## Core Endpoints\n",
+ "\n",
+ "### POST /api/v2/users\n",
+ "Creates a new user account with enhanced validation.\n",
+ "\n",
+ "**Request Body:**\n",
+ "```json\n",
+ "{\n",
+ " \"username\": \"string (3-50 chars, alphanumeric + underscore)\",\n",
+ " \"email\": \"string (valid email format)\",\n",
+ " \"password\": \"string (min 12 chars, complexity requirements)\",\n",
+ " \"role\": \"admin|manager|user|viewer\",\n",
+ " \"department\": \"string (optional)\",\n",
+ " \"metadata\": \"object (optional custom fields)\"\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "**Response:** 201 Created\n",
+ "```json\n",
+ "{\n",
+ " \"id\": \"uuid\",\n",
+ " \"username\": \"string\",\n",
+ " \"email\": \"string\",\n",
+ " \"role\": \"string\",\n",
+ " \"department\": \"string\",\n",
+ " \"created_at\": \"ISO 8601 timestamp\",\n",
+ " \"status\": \"active|pending|suspended\"\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "### GET /api/v2/users/{id}\n",
+ "Retrieves detailed user information by ID with role-based field filtering.\n",
+ "\n",
+ "**Query Parameters:**\n",
+ "- `include_metadata`: boolean (default: false)\n",
+ "- `include_permissions`: boolean (default: false)\n",
+ "\n",
+ "**Response:** 200 OK\n",
+ "```json\n",
+ "{\n",
+ " \"id\": \"uuid\",\n",
+ " \"username\": \"string\",\n",
+ " \"email\": \"string\",\n",
+ " \"role\": \"string\",\n",
+ " \"department\": \"string\",\n",
+ " \"created_at\": \"ISO 8601 timestamp\",\n",
+ " \"last_login\": \"ISO 8601 timestamp\",\n",
+ " \"status\": \"string\",\n",
+ " \"permissions\": \"array (if requested)\",\n",
+ " \"metadata\": \"object (if requested)\"\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "## Rate Limiting & Performance\n",
+ "- Authenticated users: 1000 requests per hour\n",
+ "- Admin users: 5000 requests per hour\n",
+ "- Bulk operations: 100 requests per hour\n",
+ "- Response caching: 5 minutes for user data\n",
+ "\n",
+ "## Error Handling\n",
+ "All errors follow RFC 7807 Problem Details format with detailed error codes and descriptions.\n",
+ "\n",
+ "## Monitoring & Logging\n",
+ "All API calls are logged with request IDs for tracing. Performance metrics are available \n",
+ "through the /health and /metrics endpoints.\n",
+ "\"\"\"\n",
+ "\n",
+ "print(\"Testing document coordinator with technical documentation...\\n\")\n",
+ "\n",
+ "tech_analysis_request = f\"\"\"\n",
+ "Please process this technical API specification document and generate a comprehensive analysis:\n",
+ "\n",
+ "{technical_document}\n",
+ "\n",
+ "I need a technical analysis that covers the API design quality, security considerations, \n",
+ "and recommendations for implementation and integration.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = document_coordinator(tech_analysis_request)\n",
+ "\n",
+ "print(\"=== DOCUMENT COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Test with business document processing\n",
+ "business_document = \"\"\"\n",
+ "# Q3 2024 Business Performance Report\n",
+ "\n",
+ "## Executive Summary\n",
+ "Our company achieved strong performance in Q3 2024, with revenue growth of 18% year-over-year \n",
+ "and significant improvements in operational efficiency. Key highlights include successful \n",
+ "product launches, expanded market presence, and enhanced customer satisfaction scores.\n",
+ "\n",
+ "## Financial Performance\n",
+ "- Total Revenue: $12.4M (+18% YoY, +8% QoQ)\n",
+ "- Gross Profit: $8.1M (65% margin, +2% from Q2)\n",
+ "- Operating Expenses: $6.2M (+12% YoY due to expansion)\n",
+ "- Net Income: $1.9M (15% margin, +25% YoY)\n",
+ "- Cash Flow: $2.3M positive (+40% from Q2)\n",
+ "\n",
+ "## Customer Metrics\n",
+ "- New Customers: 1,247 (+22% QoQ)\n",
+ "- Customer Retention: 91% (+3% from Q2)\n",
+ "- Net Promoter Score: 68 (+5 points from Q2)\n",
+ "- Average Customer Lifetime Value: $15,600 (+8% YoY)\n",
+ "- Support Ticket Resolution: 94% within 24 hours\n",
+ "\n",
+ "## Product Development\n",
+ "Successfully launched two major product features:\n",
+ "1. Advanced Analytics Dashboard - adopted by 78% of enterprise customers\n",
+ "2. Mobile Application v3.0 - 45% increase in mobile engagement\n",
+ "\n",
+ "## Market Expansion\n",
+ "- Entered 3 new geographic markets (APAC region)\n",
+ "- Established partnerships with 5 major distributors\n",
+ "- Increased market share by 2.3% in core segments\n",
+ "\n",
+ "## Challenges & Risks\n",
+ "- Increased competition in core markets\n",
+ "- Supply chain disruptions affecting 15% of operations\n",
+ "- Talent acquisition challenges in technical roles\n",
+ "- Regulatory changes in 2 key markets\n",
+ "\n",
+ "## Q4 Outlook\n",
+ "Projecting continued growth with revenue target of $14.2M for Q4, driven by seasonal demand \n",
+ "and new product launches. Focus areas include operational scaling and market consolidation.\n",
+ "\"\"\"\n",
+ "\n",
+ "print(\"Testing document coordinator with business report analysis...\\n\")\n",
+ "\n",
+ "business_request = f\"\"\"\n",
+ "Please process this business performance report and generate insights and recommendations:\n",
+ "\n",
+ "{business_document}\n",
+ "\n",
+ "I need a business analysis that identifies key performance drivers, potential risks, \n",
+ "and strategic recommendations for the next quarter.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = document_coordinator(business_request)\n",
+ "\n",
+ "print(\"=== DOCUMENT COORDINATOR RESPONSE ===\")\n",
+ "print(response)\n",
+ "print(\"=\" * 50)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Key Takeaways\n",
+ "\n",
+ "This notebook demonstrates how the AgentConfigLoader pattern with graphs-as-tools enables:\n",
+ "\n",
+ "1. **Declarative Workflow Configuration**: Define complex sequential processing workflows in YAML without code\n",
+ "2. **Automatic Graph Tool Loading**: The graph is automatically converted to a callable tool\n",
+ "3. **Intelligent Task Delegation**: The coordinator delegates document processing tasks to the structured workflow\n",
+ "4. **Sequential Processing**: Documents flow through a structured pipeline with conditional logic\n",
+ "\n",
+ "The document processing coordinator agent automatically:\n",
+ "- Analyzes document processing requests and delegates to the document processor graph\n",
+ "- Coordinates the sequential workflow: Validator → Analyzer → Processor → Synthesizer\n",
+ "- Handles different document types (healthcare, technical, business)\n",
+ "- Produces structured, high-quality analysis outputs\n",
+ "- Adapts processing based on document type and requested output format\n",
+ "\n",
+ "The document processing graph automatically:\n",
+ "- **Validator**: Checks document format, quality, and classifies document type\n",
+ "- **Analyzer**: Extracts key themes, topics, and structural information\n",
+ "- **Processor**: Applies specialized processing based on document type and requirements\n",
+ "- **Synthesizer**: Combines all findings into coherent, actionable outputs\n",
+ "\n",
+ "This pattern is particularly powerful for:\n",
+ "- Content analysis and document processing pipelines\n",
+ "- Quality assurance workflows with validation steps\n",
+ "- Multi-stage analysis requiring different specialized processing\n",
+ "- Document classification and routing systems\n",
+ "- Any scenario requiring structured, sequential processing with conditional logic\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "dev",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/README.md b/01-tutorials/01-fundamentals/09-configuration-loader/README.md
new file mode 100644
index 00000000..d7ab579c
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/README.md
@@ -0,0 +1,98 @@
+# Configuration Loader Tutorial
+
+This tutorial demonstrates how to use Strands Agents' experimental configuration system to load and manage agents, tools, swarms, and graphs programmatically from YAML files or dictionary configurations.
+
+## Prerequisites
+
+- Python 3.10 or later
+- Strands Agents SDK installed
+- Basic understanding of Python dictionaries
+
+## Overview
+
+The experimental configuration loader system allows you to define agents, tools, swarms, and graphs programmatically using dictionary configurations. This approach provides:
+
+- **Programmatic Configuration**: Define your agents and workflows using Python dictionaries
+- **Reusability**: Share and reuse configurations across projects
+- **Modularity**: Compose complex systems from simple components
+- **Dynamic Loading**: Create configurations at runtime
+
+**Note**: This is an experimental feature.
+
+## Tutorial Notebooks
+
+Run these notebooks in order to learn the configuration system:
+
+### Core Configuration Loading
+1. **01-tool-loading.ipynb** - Load individual tools using ToolConfigLoader
+2. **02-agent-loading.ipynb** - Load agents with tools and system prompts using AgentConfigLoader
+3. **04-swarm-loading.ipynb** - Load multi-agent swarms using SwarmConfigLoader
+4. **05-graph-loading.ipynb** - Load workflow graphs with conditions and routing using GraphConfigLoader
+
+### Advanced Patterns
+5. **03-agents-as-tools.ipynb** - Use agents as tools within other agents
+6. **06-structured-output-config.ipynb** - Configure agents with structured output schemas
+7. **07-swarms-as-tools.ipynb** - Use entire swarms as tools
+8. **08-graphs-as-tools.ipynb** - Use workflow graphs as tools
+
+## Configuration Files
+
+The `configs/` directory contains example configurations for:
+- Individual tools
+- Agent configurations
+- Swarm definitions
+- Graph workflows
+- Composite configurations (agents/swarms/graphs as tools)
+
+## Running the Examples
+
+### Prerequisites
+Make sure you have the required dependencies installed:
+
+```bash
+pip install strands-agents strands-agents-tools PyYAML
+```
+
+### Running Notebooks
+1. Start Jupyter Lab or Jupyter Notebook:
+ ```bash
+ jupyter lab
+ # or
+ jupyter notebook
+ ```
+
+2. Open any notebook file (`.ipynb`) and run the cells sequentially
+
+3. Each notebook is self-contained and includes:
+ - Configuration examples using dictionary-based configs
+ - Code to load and use the configurations with config loaders
+ - Explanations of key concepts
+
+### Key Files
+- **weather_tool.py** - Example custom tool implementation
+- **configs/** - Directory containing configuration examples
+- **workflow/** - Directory containing workflow-specific configurations
+- **SCHEMA-PLAN.md** - Documentation of the schema validation system (experimental)
+- **README-schema-validation.md** - Detailed schema validation guide (experimental)
+
+## Configuration Loaders
+
+The tutorial demonstrates the experimental configuration loader classes:
+
+- **AgentConfigLoader**: Load agents from dictionary configurations
+- **ToolConfigLoader**: Load tools by identifier or from multi-agent configurations
+- **SwarmConfigLoader**: Load swarms from dictionary configurations
+- **GraphConfigLoader**: Load graphs from dictionary configurations
+
+Each loader provides programmatic configuration management with caching and validation.
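
To make the loader workflow concrete, here is a minimal sketch. The dictionary shape mirrors the YAML files in `configs/`; the import path and `load()` method name are assumptions, since the experimental API may differ:

```python
# A dictionary configuration mirroring the shape of configs/agents-as-tools.strands.yml.
agent_config = {
    "agent": {
        "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        "system_prompt": "You are a research assistant. Cite sources when possible.",
        # Tool entries may be identifiers (e.g. "strands_tools.file_write")
        # or nested agent/swarm/graph tool configurations.
        "tools": [],
    }
}

# Hypothetical loader usage -- import path and method name are assumptions:
# from strands.experimental import AgentConfigLoader
# agent = AgentConfigLoader().load(agent_config)
```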
+
+## Next Steps
+
+After completing this tutorial, you'll understand how to:
+- Create dictionary configurations for all Strands components
+- Load and use configurations programmatically with config loaders
+- Compose complex multi-agent systems
+- Use agents, swarms, and graphs as reusable tools
+- Work with the experimental configuration system
+
+For more advanced topics, explore the other tutorial sections in the samples repository.
\ No newline at end of file
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/agents-as-tools.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/agents-as-tools.strands.yml
new file mode 100644
index 00000000..9ea57836
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/agents-as-tools.strands.yml
@@ -0,0 +1,85 @@
+# Agents as Tools Configuration
+# This configuration defines an orchestrator agent with specialized agent tools
+# Based on the examples from 03-agent-as-tools.ipynb
+
+# Orchestrator agent configuration that uses specialized agent tools
+agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are an assistant that routes queries to specialized agents:
+    - For research questions and factual information → Use the research_assistant tool
+    - For product recommendations and shopping advice → Use the product_recommendation_assistant tool
+    - For travel planning and itineraries → Use the trip_planning_assistant tool
+    - For research that needs summarization → Use the research_and_summary_assistant tool
+    - For simple questions not requiring specialized knowledge → Answer directly
+
+ Always select the most appropriate tool based on the user's query.
+ tools:
+ # Research Assistant Agent Tool
+ - name: research_assistant
+ description: "Process and respond to research-related queries with factual information and citations"
+ agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a specialized research assistant. Focus only on providing factual, well-sourced information in response to research questions. Always cite your sources when possible."
+ tools: []
+ input_schema:
+ type: object
+ properties:
+ query:
+ type: string
+ description: "A research question requiring factual information"
+ required:
+ - query
+
+ # Product Recommendation Assistant Agent Tool
+ - name: product_recommendation_assistant
+ description: "Handle product recommendation queries by suggesting appropriate products"
+ agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a specialized product recommendation assistant. Provide personalized product suggestions based on user preferences. Always cite your sources."
+ tools: []
+ input_schema:
+ type: object
+ properties:
+ query:
+ type: string
+ description: "A product inquiry with user preferences"
+ required:
+ - query
+
+ # Trip Planning Assistant Agent Tool
+ - name: trip_planning_assistant
+ description: "Create travel itineraries and provide travel advice"
+ agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a specialized travel planning assistant. Create detailed travel itineraries based on user preferences."
+ tools: []
+ input_schema:
+ type: object
+ properties:
+ query:
+ type: string
+ description: "A travel planning request with destination and preferences"
+ required:
+ - query
+
+ # Sequential Research and Summary Agent Tool (demonstrates agent chaining)
+ - name: research_and_summary_assistant
+ description: "Research a topic and provide a concise summary of findings"
+ agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are a research and summarization specialist. For the topic '{topic}':
+ 1. First gather comprehensive research information
+ 2. Then create a concise, well-structured summary
+ Focus on key points, main arguments, and critical data while maintaining accuracy.
+ tools: []
+ args:
+ topic:
+ description: "Research topic to investigate and summarize"
+ type: "string"
+ required: true
+
+  # Pre-built file-write tool from strands-tools, loaded by identifier
+ - strands_tools.file_write
\ No newline at end of file
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-conditional.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-conditional.strands.yml
new file mode 100644
index 00000000..615302da
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-conditional.strands.yml
@@ -0,0 +1,67 @@
+# Conditional Graph Configuration - Branching with Conditions
+# Based on the conditional branching example from the graph tutorial
+# samples/01-tutorials/02-multi-agent-systems/03-graph-agent/graph.ipynb
+
+graph:
+ graph_id: "conditional_routing_workflow"
+ name: "Conditional Routing Workflow"
+ description: "Conditional branching that routes requests to technical or business experts based on classification"
+
+ # Graph execution configuration
+ max_node_executions: null
+ execution_timeout: null
+ node_timeout: null
+ reset_on_revisit: false
+
+ # Nodes (agents)
+ nodes:
+ - node_id: "classifier"
+ type: "agent"
+ config:
+ name: "classifier"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are an agent responsible for classifying the report request. Return only a Technical or Business classification."
+ tools: []
+
+ - node_id: "technical_report"
+ type: "agent"
+ config:
+ name: "technical_expert"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are a technical expert who provides a short summary from a technical perspective"
+ tools: []
+
+ - node_id: "business_report"
+ type: "agent"
+ config:
+ name: "business_expert"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are a business expert who provides a short summary from a business perspective"
+ tools: []
+
+ # Edges with conditional routing
+ edges:
+ - from_node: "classifier"
+ to_node: "technical_report"
+ condition:
+ type: "function"
+ module: "workflow.conditions"
+ function: "is_technical"
+ description: "Routes to technical expert if classification contains 'technical'"
+
+ - from_node: "classifier"
+ to_node: "business_report"
+ condition:
+ type: "function"
+ module: "workflow.conditions"
+ function: "is_business"
+ description: "Routes to business expert if classification contains 'business'"
+
+ # Entry points
+ entry_points:
+ - "classifier"
+
+ metadata:
+ version: "1.0.0"
+ created_from: "graph tutorial conditional branching example"
+ tags: ["conditional", "routing", "classification", "technical", "business"]
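
The edges above reference `workflow.conditions.is_technical` and `is_business`, which are not included in this diff. A plausible sketch of those functions follows; the exact shape of the state object Strands passes to edge conditions is an assumption here, so adjust the attribute access to the real API:

```python
# workflow/conditions.py -- plausible sketch; the actual file is not shown in this diff.
# Assumption: an edge condition receives a graph state whose .results maps node IDs
# to their outputs, and returns True when the edge should be traversed.

def _classifier_output(state) -> str:
    # Pull the classifier node's output text, defaulting to empty if it hasn't run.
    return str(state.results.get("classifier", "")).lower()

def is_technical(state) -> bool:
    # Route to the technical expert when the classifier answered "Technical".
    return "technical" in _classifier_output(state)

def is_business(state) -> bool:
    # Route to the business expert when the classifier answered "Business".
    return "business" in _classifier_output(state)
```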
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-parallel.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-parallel.strands.yml
new file mode 100644
index 00000000..a22978c1
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-parallel.strands.yml
@@ -0,0 +1,75 @@
+# Parallel Graph Configuration - Parallel Processing
+# Based on the parallel processing example from the graph tutorial
+# samples/01-tutorials/02-multi-agent-systems/03-graph-agent/graph.ipynb
+
+graph:
+ graph_id: "parallel_assessment_workflow"
+ name: "Parallel Assessment Workflow"
+ description: "Parallel processing with multiple experts feeding into a final risk analyst"
+
+ # Graph execution configuration
+ max_node_executions: null
+ execution_timeout: null
+ node_timeout: null
+ reset_on_revisit: false
+
+ # Nodes (agents)
+ nodes:
+ - node_id: "finance_expert"
+ type: "agent"
+ config:
+ name: "financial_advisor"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a financial advisor focused on cost-benefit analysis, budget implications, and ROI calculations. Engage with other experts to build comprehensive financial perspectives."
+ tools: []
+
+ - node_id: "tech_expert"
+ type: "agent"
+ config:
+ name: "technical_architect"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a technical architect who evaluates feasibility, implementation challenges, and technical risks. Collaborate with other experts to ensure technical viability."
+ tools: []
+
+ - node_id: "market_expert"
+ type: "agent"
+ config:
+ name: "market_researcher"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a market researcher who analyzes market conditions, user needs, and competitive landscape. Work with other experts to validate market opportunities."
+ tools: []
+
+ - node_id: "risk_analyst"
+ type: "agent"
+ config:
+ name: "risk_analyst"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You are a risk analyst who identifies potential risks, mitigation strategies, and compliance issues. Collaborate with other experts to ensure comprehensive risk assessment."
+ tools: []
+
+ # Edges (dependencies) - Creates parallel processing pattern
+ edges:
+ - from_node: "finance_expert"
+ to_node: "tech_expert"
+ condition: null
+
+ - from_node: "finance_expert"
+ to_node: "market_expert"
+ condition: null
+
+ - from_node: "tech_expert"
+ to_node: "risk_analyst"
+ condition: null
+
+ - from_node: "market_expert"
+ to_node: "risk_analyst"
+ condition: null
+
+ # Entry points
+ entry_points:
+ - "finance_expert"
+
+ metadata:
+ version: "1.0.0"
+ created_from: "graph tutorial parallel processing example"
+ tags: ["parallel", "assessment", "financial", "technical", "market", "risk"]
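
The four edges above encode a fan-out/fan-in pattern: the finance expert feeds two parallel branches that converge on the risk analyst. A quick way to see the execution "waves" (nodes whose dependencies are all satisfied can run in parallel) is a small topological layering, sketched here independently of the Strands runtime:

```python
# Standalone sketch of the dependency layering implied by the edges above;
# this is illustrative only, not the Strands scheduler.
edges = [
    ("finance_expert", "tech_expert"),
    ("finance_expert", "market_expert"),
    ("tech_expert", "risk_analyst"),
    ("market_expert", "risk_analyst"),
]

def waves(edges):
    nodes = {n for edge in edges for n in edge}
    # A node depends on every edge source that points at it.
    deps = {n: {src for src, dst in edges if dst == n} for n in nodes}
    done, layers = set(), []
    while len(done) < len(nodes):
        ready = sorted(n for n in nodes - done if deps[n] <= done)
        layers.append(ready)
        done |= set(ready)
    return layers

print(waves(edges))
# [['finance_expert'], ['market_expert', 'tech_expert'], ['risk_analyst']]
```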
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-simple.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-simple.strands.yml
new file mode 100644
index 00000000..d6e3dcdc
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graph-simple.strands.yml
@@ -0,0 +1,59 @@
+# Simple Graph Configuration - Basic Processing
+# Based on the basic processing example from the graph tutorial
+# samples/01-tutorials/02-multi-agent-systems/03-graph-agent/graph.ipynb
+
+graph:
+ graph_id: "simple_research_workflow"
+ name: "Simple Research Workflow"
+ description: "Basic processing with one coordinator leading two specialists"
+
+ # Graph execution configuration
+ max_node_executions: null
+ execution_timeout: null
+ node_timeout: null
+ reset_on_revisit: false
+
+ # Nodes (agents)
+ nodes:
+ - node_id: "team_lead"
+ type: "agent"
+ config:
+ name: "coordinator"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are a research team leader coordinating specialists. Provide a short analysis; no follow-ups needed."
+ tools: []
+
+ - node_id: "analyst"
+ type: "agent"
+ config:
+ name: "data_analyst"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are a data analyst coordinating with specialists. Provide a short analysis; no follow-ups needed."
+ tools: []
+
+ - node_id: "expert"
+ type: "agent"
+ config:
+ name: "domain_expert"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+      system_prompt: "You are a domain expert coordinating with specialists. Provide a short analysis; no follow-ups needed."
+ tools: []
+
+ # Edges (dependencies)
+ edges:
+ - from_node: "team_lead"
+ to_node: "analyst"
+ condition: null
+
+ - from_node: "team_lead"
+ to_node: "expert"
+ condition: null
+
+ # Entry points
+ entry_points:
+ - "team_lead"
+
+ metadata:
+ version: "1.0.0"
+ created_from: "graph tutorial basic processing example"
+ tags: ["simple", "coordination", "research"]
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/graphs-as-tools.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graphs-as-tools.strands.yml
new file mode 100644
index 00000000..611c92ca
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/graphs-as-tools.strands.yml
@@ -0,0 +1,121 @@
+# Graphs as Tools Configuration
+# A document-processing coordinator agent whose tool is a sequential graph workflow
+agent:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a document processing coordinator that can handle complex document workflows.
+ When you receive a document processing request, use the document processor graph to analyze
+ and process the content through a structured workflow.
+
+ The document processing workflow includes:
+ - Validation: Checks document format and content
+ - Analysis: Extracts key information and themes
+ - Processing: Applies specific processing based on document type
+ - Synthesis: Combines results into final output
+
+ Use the document processor for any complex document analysis tasks that require
+ structured processing and quality assurance.
+
+ tools:
+ - name: "document_processor"
+ description: "A structured workflow for processing and analyzing documents of various types"
+ graph:
+ nodes:
+ - node_id: "validator"
+ type: "agent"
+ config:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a document validator. Your role is to:
+ 1. Check if the document is properly formatted and readable
+ 2. Identify the document type (article, report, specification, etc.)
+ 3. Assess the document quality and completeness
+ 4. Flag any issues that might affect processing
+
+ Provide a validation status and document type classification.
+ temperature: 0.1
+
+ - node_id: "analyzer"
+ type: "agent"
+ config:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a document analyzer. Your role is to:
+ 1. Extract key themes, topics, and main points from the document
+ 2. Identify important entities, dates, and factual information
+ 3. Analyze document structure and organization
+ 4. Summarize the core content and purpose
+
+ Provide detailed analysis of document content and structure.
+ temperature: 0.2
+
+ - node_id: "processor"
+ type: "agent"
+ config:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a document processor. Your role is to:
+ 1. Apply specific processing based on document type and requirements
+ 2. Extract or transform content according to the requested output format
+ 3. Perform specialized analysis (technical, business, academic, etc.)
+ 4. Prepare content for final synthesis
+
+ Adapt your processing approach based on document type and output requirements.
+ temperature: 0.3
+
+ - node_id: "synthesizer"
+ type: "agent"
+ config:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a document synthesizer. Your role is to:
+ 1. Combine validation, analysis, and processing results
+ 2. Create the final output in the requested format
+ 3. Ensure consistency and coherence across all findings
+ 4. Provide actionable insights and recommendations when appropriate
+
+ Generate comprehensive, well-structured final outputs.
+ temperature: 0.4
+
+ edges:
+ - from_node: "validator"
+ to_node: "analyzer"
+ condition:
+ type: "expression"
+ expression: "state.results.get('validator', {}).get('status') == 'valid'"
+ description: "Proceed to analysis only if document validation passes"
+
+ - from_node: "analyzer"
+ to_node: "processor"
+ condition:
+ type: "always"
+ description: "Always proceed to processing after analysis"
+
+ - from_node: "processor"
+ to_node: "synthesizer"
+ condition:
+ type: "always"
+ description: "Always proceed to synthesis after processing"
+
+ entry_points: ["validator"]
+
+ input_schema:
+ type: "object"
+ properties:
+ document:
+ type: "string"
+ description: "The document content to process and analyze"
+ output_format:
+ type: "string"
+ enum: ["summary", "analysis", "report", "insights", "recommendations"]
+ default: "analysis"
+ description: "Desired format for the final output"
+ processing_type:
+ type: "string"
+ enum: ["general", "technical", "business", "academic", "legal"]
+ default: "general"
+ description: "Type of specialized processing to apply"
+ include_metadata:
+ type: "boolean"
+ default: false
+ description: "Whether to include document metadata in the output"
+ required: ["document"]
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarm.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarm.strands.yml
new file mode 100644
index 00000000..0538176e
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarm.strands.yml
@@ -0,0 +1,72 @@
+# Swarm Configuration based on the swarm tutorial
+# This configuration recreates the blog writing swarm from the tutorial
+# samples/01-tutorials/02-multi-agent-systems/02-swarm-agent/swarm.ipynb
+
+swarm:
+ # Core swarm parameters (matching the tutorial configuration)
+ max_handoffs: 20 # Maximum handoffs to agents and users
+ max_iterations: 20 # Maximum node executions within the swarm
+ execution_timeout: 900.0 # Total execution timeout in seconds (15 minutes)
+ node_timeout: 300.0 # Individual node timeout in seconds (5 minutes)
+ repetitive_handoff_detection_window: 8 # Number of recent nodes to check for repetitive handoffs
+ repetitive_handoff_min_unique_agents: 3 # Minimum unique agents required in recent sequence
+
+ # Specialized agents for blog writing collaboration (using original tutorial prompts)
+ agents:
+ - name: "research_agent"
+ description: "Research Agent specializing in gathering and analyzing information"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are a Research Agent specializing in gathering and analyzing information.
+ Your role in the swarm is to provide factual information and research insights on the topic.
+ You should focus on providing accurate data and identifying key aspects of the problem.
+ When receiving input from other agents, evaluate if their information aligns with your research.
+
+ When you need help from other specialists or have completed your research, use the available tools to coordinate with other agents.
+ tools: []
+ handoff_conditions:
+ - condition: "research_complete"
+ target_agent: "creative_agent"
+
+ - name: "creative_agent"
+ description: "Creative Agent specializing in generating innovative solutions"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are a Creative Agent specializing in generating innovative solutions.
+ Your role in the swarm is to think outside the box and propose creative approaches.
+ You should build upon information from other agents while adding your unique creative perspective.
+ Focus on novel approaches that others might not have considered.
+ tools: []
+ handoff_conditions:
+ - condition: "creative_ideas_generated"
+ target_agent: "critical_agent"
+
+ - name: "critical_agent"
+ description: "Critical Agent specializing in analyzing proposals and finding flaws"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are a Critical Agent specializing in analyzing proposals and finding flaws.
+ Your role in the swarm is to evaluate solutions proposed by other agents and identify potential issues.
+ You should carefully examine proposed solutions, find weaknesses or oversights, and suggest improvements.
+ Be constructive in your criticism while ensuring the final solution is robust.
+ tools: []
+ handoff_conditions:
+ - condition: "analysis_complete"
+ target_agent: "summarizer_agent"
+ - condition: "needs_more_research"
+ target_agent: "research_agent"
+ - condition: "needs_creative_input"
+ target_agent: "creative_agent"
+
+ - name: "summarizer_agent"
+ description: "Summarizer Agent specializing in synthesizing information"
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: |
+ You are a Summarizer Agent specializing in synthesizing information.
+ Your role in the swarm is to gather insights from all agents and create a cohesive final solution.
+ You should combine the best ideas and address the criticisms to create a comprehensive response.
+ Focus on creating a clear, actionable summary that addresses the original query effectively.
+ tools: []
+ handoff_conditions:
+ - condition: "needs_refinement"
+ target_agent: "critical_agent"
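A quick way to sanity-check a swarm configuration like the one above is to verify that every `target_agent` in a `handoff_conditions` entry refers to a defined agent. A minimal sketch using PyYAML on an inline two-agent excerpt in the same shape (illustrative only; the loader may perform its own validation):

```python
import yaml

# Minimal two-agent excerpt in the same shape as swarm.strands.yml,
# used to check that handoff targets reference defined agents.
snippet = """
swarm:
  agents:
    - name: research_agent
      handoff_conditions:
        - condition: research_complete
          target_agent: creative_agent
    - name: creative_agent
      handoff_conditions: []
"""

cfg = yaml.safe_load(snippet)
agents = cfg["swarm"]["agents"]
names = {a["name"] for a in agents}

# Collect any handoff targets that do not match a defined agent name.
dangling = [
    h["target_agent"]
    for a in agents
    for h in a.get("handoff_conditions") or []
    if h["target_agent"] not in names
]
print(dangling)  # → []
```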
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarms-as-tools.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarms-as-tools.strands.yml
new file mode 100644
index 00000000..364a640d
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/swarms-as-tools.strands.yml
@@ -0,0 +1,93 @@
+agent:
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a research coordinator that can delegate complex research tasks to specialized research teams.
+ When you receive a research request, use the research team swarm to conduct comprehensive analysis.
+
+ The research team consists of multiple specialized agents working together:
+ - Researcher: Gathers initial information and data
+ - Analyst: Analyzes findings and identifies patterns
+ - Synthesizer: Combines insights into coherent reports
+
+ Use the research team for any complex research tasks that require multiple perspectives and deep analysis.
+
+ tools:
+ - name: "research_team"
+ description: "A collaborative research team that conducts comprehensive analysis on any topic"
+ swarm:
+ agents:
+ - name: "researcher"
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a research specialist focused on information gathering. Your role is to:
+ 1. Collect comprehensive information about the research topic
+ 2. Identify key sources, studies, and data points
+ 3. Gather multiple perspectives on the topic
+ 4. Organize findings for further analysis
+
+ Always provide detailed, factual information with context about sources and reliability.
+ temperature: 0.2
+
+ - name: "analyst"
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are an analytical specialist who processes research findings. Your role is to:
+ 1. Analyze data and information provided by the researcher
+ 2. Identify patterns, trends, and key insights
+ 3. Evaluate the significance and implications of findings
+ 4. Highlight gaps or areas needing additional research
+
+ Focus on critical thinking and drawing meaningful conclusions from the data.
+ temperature: 0.3
+
+ - name: "synthesizer"
+ model: "us.amazon.nova-pro-v1:0"
+ system_prompt: |
+ You are a synthesis specialist who creates comprehensive reports. Your role is to:
+ 1. Combine research findings and analysis into coherent reports
+ 2. Structure information in a logical, easy-to-understand format
+ 3. Provide executive summaries and key takeaways
+ 4. Ensure the final output addresses the original research question
+
+ Create well-organized, professional reports that are actionable and insightful.
+ temperature: 0.4
+
+ handoff_strategy: "sequential"
+ max_handoffs: 3
+ entry_agent: "researcher"
+
+ handoff_conditions:
+ - from: "researcher"
+ to: "analyst"
+ condition:
+ type: "always"
+ description: "Always hand off to analyst after research is complete"
+
+ - from: "analyst"
+ to: "synthesizer"
+ condition:
+ type: "always"
+ description: "Always hand off to synthesizer after analysis is complete"
+
+ input_schema:
+ type: "object"
+ properties:
+ topic:
+ type: "string"
+ description: "The research topic or question to investigate"
+ depth:
+ type: "string"
+ enum: ["basic", "comprehensive", "expert"]
+ default: "comprehensive"
+ description: "Depth of research and analysis required"
+ focus_areas:
+ type: "array"
+ items:
+ type: "string"
+ description: "Specific areas or aspects to focus on during research"
+ output_format:
+ type: "string"
+ enum: ["report", "summary", "analysis", "recommendations"]
+ default: "report"
+ description: "Desired format for the final output"
+ required: ["topic"]
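The `input_schema` above declares defaults for the optional fields. As a rough sketch of how such defaults could be applied to a tool invocation payload (illustrative only; the actual loader's validation behavior may differ):

```python
# Hand-copied subset of the input_schema above, for illustration.
schema = {
    "properties": {
        "topic": {"type": "string"},
        "depth": {"enum": ["basic", "comprehensive", "expert"], "default": "comprehensive"},
        "output_format": {"enum": ["report", "summary", "analysis", "recommendations"], "default": "report"},
    },
    "required": ["topic"],
}

def apply_defaults(schema, payload):
    """Reject payloads missing required keys, then fill in schema defaults."""
    for key in schema.get("required", []):
        if key not in payload:
            raise ValueError(f"missing required field: {key}")
    out = dict(payload)
    for key, spec in schema["properties"].items():
        if key not in out and "default" in spec:
            out[key] = spec["default"]
    return out

result = apply_defaults(schema, {"topic": "quantum error correction"})
print(result["depth"], result["output_format"])  # → comprehensive report
```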
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/tools.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/tools.strands.yml
new file mode 100644
index 00000000..691c76c3
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/tools.strands.yml
@@ -0,0 +1,4 @@
+tools:
+ - weather_tool.weather
+ - strands_tools.file_write
+ - strands_tools.editor.editor
\ No newline at end of file
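Entries in `tools.strands.yml` are dotted paths whose final segment names the tool attribute and whose prefix names the module. The loader resolves these itself; as a rough sketch of the splitting step (not the actual ToolConfigLoader internals):

```python
import yaml

# Inline copy of the tools.strands.yml contents above.
config_text = """
tools:
  - weather_tool.weather
  - strands_tools.file_write
  - strands_tools.editor.editor
"""

config = yaml.safe_load(config_text)

def split_tool_path(path):
    """Split 'pkg.module.attr' into (module_path, attr_name)."""
    module_path, _, attr = path.rpartition(".")
    return module_path, attr

pairs = [split_tool_path(p) for p in config["tools"]]
print(pairs)
# → [('weather_tool', 'weather'), ('strands_tools', 'file_write'), ('strands_tools.editor', 'editor')]
```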
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/configs/weather-agent.strands.yml b/01-tutorials/01-fundamentals/09-configuration-loader/configs/weather-agent.strands.yml
new file mode 100644
index 00000000..630980d5
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/configs/weather-agent.strands.yml
@@ -0,0 +1,5 @@
+agent:
+ model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ system_prompt: "You're a helpful assistant. You can do simple math calculation, and tell the weather."
+ tools:
+ - weather_tool.weather
\ No newline at end of file
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/weather_tool.py b/01-tutorials/01-fundamentals/09-configuration-loader/weather_tool.py
new file mode 100644
index 00000000..6b520165
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/weather_tool.py
@@ -0,0 +1,7 @@
+from strands import tool
+
+# Create a custom tool
+@tool
+def weather():
+ """ Get weather """ # Dummy implementation
+ return "stormy"
\ No newline at end of file
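Once `weather_tool.py` is on the Python path, a dotted entry like `weather_tool.weather` can be resolved at runtime with `importlib`. This is an illustrative sketch, not the ToolConfigLoader implementation, demonstrated on a standard-library path so it runs anywhere:

```python
import importlib

def resolve_dotted_path(path):
    """Resolve 'module.sub.attr' to the named object.

    Tries progressively shorter module prefixes until an import succeeds,
    then walks the remaining segments as attributes.
    """
    parts = path.split(".")
    for i in range(len(parts) - 1, 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"cannot resolve {path!r}")

# Demonstrated on a standard-library path; a loader could resolve
# "weather_tool.weather" the same way once the module is importable.
join = resolve_dotted_path("os.path.join")
```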
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/workflow/__init__.py b/01-tutorials/01-fundamentals/09-configuration-loader/workflow/__init__.py
new file mode 100644
index 00000000..eca922cb
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/workflow/__init__.py
@@ -0,0 +1 @@
+"""Workflow utilities for graph configuration examples."""
diff --git a/01-tutorials/01-fundamentals/09-configuration-loader/workflow/conditions.py b/01-tutorials/01-fundamentals/09-configuration-loader/workflow/conditions.py
new file mode 100644
index 00000000..46202b93
--- /dev/null
+++ b/01-tutorials/01-fundamentals/09-configuration-loader/workflow/conditions.py
@@ -0,0 +1,36 @@
+"""Condition functions for graph routing.
+
+This module contains condition functions used by the conditional graph
+configuration to determine routing between nodes based on classification results.
+"""
+
+def is_technical(state):
+ """Check if the classifier result indicates a technical classification.
+
+ Args:
+ state: GraphState containing execution results
+
+ Returns:
+ bool: True if the classification contains 'technical', False otherwise
+ """
+ classifier_result = state.results.get("classifier")
+ if not classifier_result:
+ return False
+ result_text = str(classifier_result.result)
+ return "technical" in result_text.lower()
+
+
+def is_business(state):
+ """Check if the classifier result indicates a business classification.
+
+ Args:
+ state: GraphState containing execution results
+
+ Returns:
+ bool: True if the classification contains 'business', False otherwise
+ """
+ classifier_result = state.results.get("classifier")
+ if not classifier_result:
+ return False
+ result_text = str(classifier_result.result)
+ return "business" in result_text.lower()