Replies: 3 comments 6 replies
-
For me, "deployment" means running agent(s) remotely, not on my laptop. I like to think of it as a continuum of deployment types, from a single standalone agent all the way to a managed, multi-agent enterprise platform. For what it's worth, most of the use cases I've seen are closer to the former than the latter.
-
I've been following the discussions in this thread and I'm very interested in contributing to the development of the platform managing beeAI's agents, tools, and infrastructure components. I'm currently working on a proposal focused on deployment on Kubernetes and would like to share it with you all in the near future for feedback. What would be the best place to share the proposal? Should I open a PR, or just add markup to this thread?
-
Jan suggested I post my proposal in this discussion.

Kubernetes Operator for Agentic Platform Component Management

1. Proposal

This document presents a design for a Kubernetes Operator to automate the lifecycle management of agentic platform components within a Kubernetes cluster. The operator will manage two primary Custom Resources: the Platform CR, which defines the overall composition of an agentic platform, and the Component CR, which represents an individual deployable entity. The operator implements controllers for these CRs that handle automated creation, updating, and deletion of the underlying Kubernetes resources. It will facilitate building components from source code using Tekton pipelines and deploying these components using various methods such as direct Kubernetes manifests, Helm charts, and the Operator Lifecycle Manager (OLM).

2. Goals
3. Proposed Design

The following diagram illustrates the high-level architecture of the Component Operator:

```mermaid
graph TD;
  subgraph Kubernetes
    direction RL
    Operator[Agentic Platform Operator]
    PlatformCRD["Platform CRD"]
    ComponentCRD["Component CRD"]
    AgentComp[Agent Component]
    ToolComp[Tool Component]
    InfraComp[Infrastructure Component]
    HelmInstaller[Helm Installer]
    OLMDeployer[OLM Deployer]
    PlatformCRD --> |References| ComponentCRD
    ComponentCRD --> AgentComp
    ComponentCRD --> ToolComp
    ComponentCRD --> InfraComp
    Operator -- Reconciles --> PlatformCRD
    Operator -- Reconciles --> ComponentCRD
    AgentComp --> |Creates| AgentService[Service]
    AgentComp --> |Creates| AgentDeployment[Deployment]
    ToolComp --> |Creates| ToolService[Service]
    ToolComp --> |Creates| ToolDeployment[Deployment]
    InfraComp --> |Processed by| HelmInstaller
    InfraComp --> |Processed by| OLMDeployer
    OLMDeployer --> |Creates| OLMSubscription[OLM Subscription]
    OLMDeployer --> |Creates| PrometheusOperator[Prometheus Operator]
    PrometheusOperator --> |Creates| CustomResource[Custom Resource]
  end
```
The architecture diagram illustrates the key components of the system and their interactions:
- Platform CRD: the central resource definition, which models a complete platform as a collection of components.
- Component CRD: represents the various deployable entities, supporting different component types through a union pattern (Agent, Tool, Infrastructure).
- Component Operator: the core controller that reconciles Component resources and orchestrates the deployment process.
- Tekton Pipeline: manages the build process for components that require building from source code, consisting of three main tasks:
The initial design offers a basic, built-in pipeline that automates the essential steps to build images from source code. However, Tekton's true strength lies in its extensibility. While this default pipeline serves as a convenient starting point, the operator will evolve to accommodate more complex, user-defined Tekton pipelines. This allows for advanced workflows, such as those seen in Red Hat Trusted Application Pipelines, which incorporate practices like creating Bill of Materials (BOM) manifests and performing signed builds to enhance software security.

Deployment Methods:
This architecture provides a cohesive approach to managing diverse components while accommodating the deployment strategies best suited to each component type.

4. Platform Definition

The Platform CR defines the overall composition of an agentic platform, referencing the infrastructure, tools, and agents that make up the platform.

Example:

apiVersion: agentic.example.com/v1alpha1
kind: Platform
metadata:
name: research-platform
spec:
description: "Research Agentic Platform"
# Infrastructure components required by the platform
infrastructure:
- name: redis-cache-component
componentReference:
name: redis
namespace: redis-system
- name: postgres-component
componentReference:
name: postgresql
namespace: postgres-system
- name: prometheus-component
componentReference:
name: prometheus
namespace: prometheus-system
# Tools required by the platform
tools:
- name: mcp-server-component
componentReference:
name: mcp-server
namespace: mcp-system
- name: dashboard-component
componentReference:
name: dashboard
namespace: agentic-platform
# Agents that will run on the platform
agents:
- name: research-agent-component
componentReference:
name: research-agent
namespace: my-agents
- name: assistant-agent-component
componentReference:
name: assistant-agent
namespace: my-agents
# Global configurations that apply to all components
globalConfig:
namespace: agentic-platform
annotations:
platform.agentic.example.com/version: "1.0.0"
labels:
environment: development
status:

5. Component Types

The Component CRD supports three primary types of components: Agents, Tools, and Infrastructure. Each type has its own specific configuration within the spec field. Only one type can be defined in a Component CR instance; this is enforced by a Kubernetes validating webhook.

5.1 Agent Component

Agent components represent AI agents designed to perform specific tasks within the agentic platform. Each agent can be configured with unique attributes and can be built from source code if necessary.

Example:

apiVersion: agentic.platform.dev/v1alpha1
kind: Component
metadata:
name: research-agent
spec:
agentComponent:
buildSpec:
sourceRepository: "github.com/example/agents.git"
sourceRevision: "main"
sourceSubfolder: "research-agent"
repoUser: "git-user"
buildOutput:
image: "research-agent"
imageTag: "v1.0.0"
imageRegistry: "ghcr.io/example"
description: "A research agent for information gathering"
deployerSpec:
name: research-agent
namespace: my-agents
kubernetes:
imageSpec:
image: "research-agent"
imageTag: "v1.0.0"
imageRegistry: "ghcr.io/example"
secret: $(IMAGE_REPO_SECRET)
resources:
limits:
cpu: "1"
memory: "2Gi"
requests:
cpu: "500m"
memory: "1Gi"
serviceType: "ClusterIP"
env:
- name: LLM_MODEL
value: "llama3.2:70b"
- name: LLM_URL
value: "http://llm-service:11434"
- name: "IMAGE_REPO_SECRET"
valueFrom:
secretKeyRef:
name: "ghcr-token-secret"
key: "token"
deployAfterBuild: true
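The "exactly one component type" rule described above (and encoded as a CEL XValidation rule in section 6.1) is straightforward to mirror in webhook code. A minimal sketch with stand-in types; the real CRD structs carry many more fields:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the component types in section 6.
type AgentComponent struct{}
type ToolComponent struct{}
type InfraComponent struct{}

// ComponentSpec follows the union pattern: pointers, all optional.
type ComponentSpec struct {
	Agent *AgentComponent
	Tool  *ToolComponent
	Infra *InfraComponent
}

// validateUnion mirrors the CEL rule on ComponentSpec: exactly one of
// agentComponent, toolComponent, infraComponent may be set.
func validateUnion(s ComponentSpec) error {
	count := 0
	if s.Agent != nil {
		count++
	}
	if s.Tool != nil {
		count++
	}
	if s.Infra != nil {
		count++
	}
	if count != 1 {
		return errors.New("exactly one component type must be specified")
	}
	return nil
}

func main() {
	ok := ComponentSpec{Agent: &AgentComponent{}}
	bad := ComponentSpec{Agent: &AgentComponent{}, Tool: &ToolComponent{}}
	fmt.Println(validateUnion(ok) == nil)  // true
	fmt.Println(validateUnion(bad) == nil) // false
}
```

Running the same check both as a CEL rule and in the webhook gives defense in depth: the CEL rule rejects bad objects even when the webhook is unavailable.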
5.2 Tool Component

Tool components are services that agents can use to interact with external systems or to provide specific functionality, such as Model Context Protocol (MCP) servers.

Example:

apiVersion: agentic.platform.dev/v1alpha1
kind: Component
metadata:
name: mcp-server
spec:
toolComponent:
toolType: "MCP"
buildSpec:
sourceRepository: "github.com/example/mcp-server.git"
sourceRevision: "main"
repoUser: "git-user"
buildOutput:
image: "mcp-server"
imageTag: "v1.0.0"
imageRegistry: "ghcr.io/example"
description: "MCP Server"
deployerSpec:
name: weather-mcp-server
namespace: my-mcps
kubernetes:
imageSpec:
image: "mcp-server"
imageTag: "v1.0.0"
imageRegistry: "ghcr.io/example"
secret: $(IMAGE_REPO_SECRET)
resources:
limits:
cpu: "2"
memory: "4Gi"
requests:
cpu: "1"
memory: "2Gi"
serviceType: "ClusterIP"
deployAfterBuild: true
env:
- name: PORT
value: "10000"
- name: "IMAGE_REPO_SECRET"
valueFrom:
secretKeyRef:
name: "ghcr-token-secret"
key: "token"
5.3 Infrastructure Component

Infrastructure components provide the foundational services required by agents and tools, such as databases, caches, observability and metrics stacks, etc. Such services can be deployed with Helm charts or OLM.

Example with Helm:

apiVersion: agentic.platform.dev/v1alpha1
kind: Component
metadata:
name: redis-cache
spec:
infraComponent:
infraType: "Cache"
infraProvider: "Redis"
version: "7.0"
description: "Redis cache for agents"
deployerSpec:
name: redis
namespace: redis-system
helm:
chartName: "redis"
chartVersion: "17.3.14"
chartRepository: "https://charts.bitnami.com/bitnami"
releaseName: "redis-cache"
secret: $(REDIS_SECRET)
env:
- name: PORT
value: "10000"
- name: "REDIS_SECRET"
valueFrom:
secretKeyRef:
name: "redis-secret"
key: "secret"
Example with OLM:

apiVersion: agentic.platform.dev/v1alpha1
kind: Component
metadata:
name: prometheus
spec:
infraComponent:
infraType: "Metrics"
infraProvider: "Prometheus"
version: "0.50.0"
description: "Prometheus for the agentic platform"
deployerSpec:
name: prometheus
namespace: prometheus-system
olm:
catalog: "certified-operators"
package: "prometheus-operator"
channel: "stable"
version: "0.50.0"
approvalStrategy: "Automatic"
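Across the examples above, the operator chooses a deployment path based on which member of the deployer union is set: Kubernetes manifests for agents and tools, Helm or OLM for infrastructure. A simplified dispatch sketch, with the union types abridged from section 6.6:

```go
package main

import "fmt"

// Abridged deployer union from section 6.6.
type HelmSpec struct{ ChartName string }
type KubernetesSpec struct{ Image string }
type OlmSpec struct{ Package string }

type DeployerSpec struct {
	Helm       *HelmSpec
	Kubernetes *KubernetesSpec
	Olm        *OlmSpec
}

// deployMethod returns the deployment path the controller would take,
// dispatching on whichever union member is non-nil. A validating
// webhook would reject specs where zero or several members are set.
func deployMethod(d DeployerSpec) string {
	switch {
	case d.Helm != nil:
		return "helm"
	case d.Kubernetes != nil:
		return "kubernetes"
	case d.Olm != nil:
		return "olm"
	default:
		return "invalid"
	}
}

func main() {
	redis := DeployerSpec{Helm: &HelmSpec{ChartName: "redis"}}
	prometheus := DeployerSpec{Olm: &OlmSpec{Package: "prometheus-operator"}}
	fmt.Println(deployMethod(redis), deployMethod(prometheus)) // helm olm
}
```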
6. Component Type Definitions

This section details the Go struct definitions for the Component CRD specification and status. These definitions are typically used with Kubernetes controller-runtime.

6.1 Component Spec

type ComponentSpec struct {
// Component Types
// +kubebuilder:validation:XValidation:rule="(has(self.agentComponent) ? 1 : 0) + (has(self.toolComponent) ? 1 : 0) + (has(self.infraComponent) ? 1 : 0) == 1",message="Exactly one component type must be specified"
// Union pattern: only one of the following components should be specified.
Agent *AgentComponent `json:"agentComponent,omitempty"`
// MCP Servers, Utilities, etc
Tool *ToolComponent `json:"toolComponent,omitempty"`
// Redis, Postgresql, Prometheus, etc
Infra *InfraComponent `json:"infraComponent,omitempty"`
// --------------------------
// Common fields for all component types
// Description is a human-readable description of the component
// +optional
Description string `json:"description,omitempty"`
// Deployment strategy for the component: Helm, K8s manifest(deployments), OLM (operators)
Deployer DeployerSpec `json:"deployerSpec"`
// Dependencies defines other components this agent depends on
// +optional
Dependencies []DependencySpec `json:"dependencies,omitempty"`
}

6.2 Agent Component

type AgentComponent struct {
// Agent specific attributes
// Build configuration for building the agent from source
// +optional
Build *BuildSpec `json:"buildSpec,omitempty"`
}

6.3 Tool Component

type ToolComponent struct {
// tool specific attributes
// Build configuration for building the tool from source
// +optional
Build *BuildSpec `json:"buildSpec,omitempty"`
// ToolType specifies the type of tool
// MCP;Utility
ToolType string `json:"toolType"`
}

6.4 Infrastructure Component

type InfraComponent struct {
// Infra specific attributes
// InfraType specifies the type of infrastructure
// Database;Cache;Queue;StorageService;SearchEngine
InfraType string `json:"infraType"`
// InfraProvider specifies the infrastructure provider
// PostgreSQL;MySQL;MongoDB;Redis;Kafka;ElasticSearch;MinIO
InfraProvider string `json:"infraProvider"`
// Version specifies the version of the infrastructure component
Version string `json:"version"`
// SecretRef reference to secrets containing credentials
// +optional
SecretRef *corev1.LocalObjectReference `json:"secretRef,omitempty"`
}

6.5 Dependency Specification

// DependencySpec defines a dependency on another component
type DependencySpec struct {
// Name is the name of the component
Name string `json:"name"`
// Kind is the kind of the component
// +kubebuilder:validation:Enum=Agent;Tool;Infra
Kind string `json:"kind"`
// Version is the version of the component
// +optional
Version string `json:"version,omitempty"`
}

6.6 Deployer Specification

// DeployerSpec defines how to deploy a component
type DeployerSpec struct {
// Union pattern: only one of the following specs should be set. Enforced by a validating webhook.
Helm *HelmSpec `json:"helmSpec,omitempty"`
Kubernetes *KubernetesSpec `json:"kubernetesSpec,omitempty"`
Olm *OlmSpec `json:"olmSpec,omitempty"`
// Common deployment settings
// Name of the k8s resource
// +optional
Name string `json:"name,omitempty"`
// Namespace to deploy to, defaults to the namespace of the CR
// +optional
Namespace string `json:"namespace,omitempty"`
// Environment variables for the component
// +optional
Env []corev1.EnvVar `json:"env,omitempty"`
// DeployAfterBuild indicates whether to automatically deploy the component after build
// +optional
DeployAfterBuild bool `json:"deployAfterBuild,omitempty"`
}
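The Namespace default described above ("defaults to the namespace of the CR") reduces to a small helper in the reconciler; a sketch:

```go
package main

import "fmt"

// effectiveNamespace applies the DeployerSpec rule: an explicitly set
// Namespace wins; otherwise resources land in the Component CR's own
// namespace.
func effectiveNamespace(specNamespace, crNamespace string) string {
	if specNamespace != "" {
		return specNamespace
	}
	return crNamespace
}

func main() {
	fmt.Println(effectiveNamespace("redis-system", "default"))  // explicit namespace wins
	fmt.Println(effectiveNamespace("", "agentic-platform"))     // falls back to the CR's namespace
}
```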
// OlmSpec defines OLM operator deployment configuration
type OlmSpec struct {
...
}
// HelmSpec defines Helm deployment configuration
type HelmSpec struct {
...
}
// KubernetesSpec defines K8s deployment configuration
type KubernetesSpec struct {
...
}

6.7 Build Specification

// BuildSpec defines how to build a component from source
type BuildSpec struct {
// SourceRepository is the Git repository URL
SourceRepository string `json:"sourceRepository"`
// SourceRevision is the Git revision (branch, tag, commit)
SourceRevision string `json:"sourceRevision"`
// SourceSubfolder is the folder within the repository containing the source
// +optional
SourceSubfolder string `json:"sourceSubfolder,omitempty"`
// RepoUser is the username in the Git repository containing the source
// +kubebuilder:validation:Required
RepoUser string `json:"repoUser"`
// SourceCredentials is a reference to a secret containing Git credentials
// +optional
SourceCredentials *corev1.LocalObjectReference `json:"sourceCredentials,omitempty"`
// BuildArgs are arguments to pass to the build process
// +optional
BuildArgs []BuildArg `json:"buildArgs,omitempty"`
// BuildOutput specifies where to store build artifacts
// +optional
BuildOutput *BuildOutput `json:"buildOutput,omitempty"`
// CleanupAfterBuild indicates whether to automatically cleanup after build
// +optional
CleanupAfterBuild bool `json:"cleanupAfterBuild,omitempty"`
}
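Assuming the conventional registry/image:tag composition (the proposal does not spell the rule out, so this is an assumption), the three BuildOutput fields defined below resolve to the full image reference that the build pipeline pushes and the deployer later pulls:

```go
package main

import "fmt"

// Abridged BuildOutput from section 6.7: the three required fields
// that compose into a full image reference.
type BuildOutput struct {
	Image         string
	ImageTag      string
	ImageRegistry string
}

// imageRef assembles registry/image:tag. Under this assumed rule, the
// research-agent example in section 5.1 yields
// "ghcr.io/example/research-agent:v1.0.0".
func imageRef(o BuildOutput) string {
	return fmt.Sprintf("%s/%s:%s", o.ImageRegistry, o.Image, o.ImageTag)
}

func main() {
	out := BuildOutput{Image: "research-agent", ImageTag: "v1.0.0", ImageRegistry: "ghcr.io/example"}
	fmt.Println(imageRef(out)) // ghcr.io/example/research-agent:v1.0.0
}
```

Keeping the buildSpec and deployerSpec imageSpec fields aligned (as in the examples) ensures the deployer pulls exactly what the pipeline pushed.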
// BuildArg defines a build argument
type BuildArg struct {
// Name of the build argument
Name string `json:"name"`
// Value of the build argument
Value string `json:"value"`
}
// BuildOutput defines where to store build artifacts
type BuildOutput struct {
// Image is the name of the image to build
// +kubebuilder:validation:Required
Image string `json:"image"`
// ImageTag is the tag to apply to the built image
// +kubebuilder:validation:Required
ImageTag string `json:"imageTag"`
// ImageRegistry is the container registry where the image will be pushed
// +kubebuilder:validation:Required
ImageRegistry string `json:"imageRegistry"`
}

7. Platform Type Definitions

7.1 Platform Spec

// PlatformSpec defines the desired state of a Platform
type PlatformSpec struct {
// Description of the platform
Description string `json:"description,omitempty"`
// Infrastructure components required by the platform
Infrastructure []PlatformComponentRef `json:"infrastructure,omitempty"`
// Tools required by the platform
Tools []PlatformComponentRef `json:"tools,omitempty"`
// Agents that will run on the platform
Agents []PlatformComponentRef `json:"agents,omitempty"`
// Global configurations that apply to all components
GlobalConfig GlobalConfig `json:"globalConfig,omitempty"`
}

7.2 PlatformComponentRef

// PlatformComponentRef defines a reference to a component
type PlatformComponentRef struct {
// Name of the component in the platform
Name string `json:"name"`
// Reference to the component resource
ComponentReference ComponentReference `json:"componentReference"`
}

7.3 ComponentReference

// ComponentReference identifies a component resource
type ComponentReference struct {
// Name of the component resource
Name string `json:"name"`
// Kind of the component (Component)
Kind string `json:"kind"`
// API version of the component
// +optional
APIVersion string `json:"apiVersion,omitempty"`
// Namespace of the component
// +optional
Namespace string `json:"namespace,omitempty"`
}

7.4 GlobalConfig

// GlobalConfig defines global configuration for all components
type GlobalConfig struct {
// Namespace for all components
// +optional
Namespace string `json:"namespace,omitempty"`
// Annotations to apply to all components
// +optional
Annotations map[string]string `json:"annotations,omitempty"`
// Labels to apply to all components
// +optional
Labels map[string]string `json:"labels,omitempty"`
}

8. Implementation Details
-
One of the key considerations, as pointed out in https://github.com/orgs/i-am-bee/discussions/428, is whether a Docker-based solution for the platform is sufficient or whether we need something more robust. This is inevitably related to how users typically expect the agents and the platform to be deployed.
Definition of Deployment
We still need to understand users' needs and expectations around "Deployment." Here are some key use cases we've been considering.
Local Installation
The user installs the platform locally to explore the available agents or integrate their own agents to test multi-agent workflows. That's how the platform was originally envisioned to be used.
Individual Agent Deployment to Existing Infrastructure
The user prefers deploying a specific agent to their existing infrastructure. Containerization supports this use case well, and communication is handled via the Agent Communication Protocol (ACP).
Multi-Agent Deployment to Existing Infrastructure
The user wants to deploy multiple agents to their existing infrastructure. Again, containerization aligns with this use case, and robust inter-agent communication (introduced via ACP) is crucial.
Deployment of the Entire Platform
The user deploys the full platform: the agent catalog, the platform server, and all out-of-the-box agents. Users may also include their own or GitHub-hosted agents.
One real-world example could be a small company that wants to share the entire platform internally so that all team members can experiment with and collaborate on the agents being shared.
Key distinction:
Potential Paths
We've been thinking about some potential ideas we can further explore:
Docker-based Solution
The platform only depends on Docker being available on the machine and uses it to manage the lifecycle of agents, where an agent is essentially a Docker container. The system interacts with the containers via the Docker API.
Lima VM-based Solution
The platform installs and depends on its own Lima VM, which then serves as an isolated environment to run the dockerized solution. This offers more control over the environment and networking.
Kubernetes-Based Solution
The platform would run inside a Kubernetes cluster. This isolated environment provides robust control for running containerized workloads, offering the highest flexibility and scalability. This might work really well with a turnkey mode for production deployments.
Call To Action
Please share your thoughts on potential use cases and expectations when it comes to "Deployment."