79 changes: 79 additions & 0 deletions src/content/docs/agents/api-reference/agents-api.mdx
@@ -947,6 +947,14 @@ type UseAgentChatOptions = Omit<
Parameters<typeof useChat>[0] & {
// Agent connection from useAgent
agent: ReturnType<typeof useAgent>;
// Client-side tools that can be executed in the browser
tools?: Record<string, AITool>;
// List of tool names that require human confirmation before execution
toolsRequiringConfirmation?: string[];
// Enable automatic continuation after client tool execution
// When true, the server will automatically continue the conversation
// after a client tool returns a result, creating a seamless experience
autoContinueAfterToolResult?: boolean;
},
"fetch"
>;
@@ -1008,6 +1016,77 @@ function useAgentChat(options: UseAgentChatOptions): {
};
```

##### Client-defined tools

You can define tools that execute on the client side using the `tools` option. Client-side tools allow you to:

- Access browser APIs and local resources
- Execute logic without server round-trips
- Implement human-in-the-loop patterns with approval workflows

Client tools are defined using the `AITool` type and can optionally include an `execute` function for client-side execution:

```ts
import type { AITool } from "agents/ai-react";

const clientTools: Record<string, AITool> = {
// Client-side tool with execution logic
getLocalTime: {
description: "get the local time for a specified location",
parameters: {
type: "object",
properties: {
location: { type: "string" }
},
required: ["location"]
},
execute: async (input) => {
const { location } = input as { location: string };
// Client-side logic here
return new Date().toLocaleTimeString();
}
},
// Server-side tool reference (no execute function)
getWeatherInformation: {
description: "Get weather information for a city"
// No parameters or execute - handled by server
}
};
```

##### Tool confirmation workflows

Use `toolsRequiringConfirmation` to specify which tools need user approval before execution:

```ts
const { messages, addToolResult } = useAgentChat({
agent,
tools: clientTools,
toolsRequiringConfirmation: ["getLocalTime", "getWeatherInformation"]
});
```

When a tool requiring confirmation is called, you can handle the approval in your UI by checking for `part.state === "input-available"` and calling `addToolResult` with the user's decision.
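
For illustration, a minimal confirmation UI could look like the sketch below. This is a hedged sketch rather than the SDK's API: the `ToolConfirmations` component, the message part shape, the `"Yes, confirmed."` / `"No, denied."` outputs, and the `addToolResult` argument shape are assumptions that may differ across AI SDK versions.

```tsx
// Hypothetical sketch: minimal types standing in for the AI SDK's real ones.
type ToolPart = {
  type: string;
  state?: string;
  toolCallId?: string;
  input?: unknown;
};
type ChatMessage = { id: string; parts: ToolPart[] };

function ToolConfirmations({
  message,
  addToolResult
}: {
  message: ChatMessage;
  // Assumed argument shape; some AI SDK versions use { toolCallId, result }.
  addToolResult: (args: {
    tool: string;
    toolCallId: string;
    output: string;
  }) => void;
}) {
  return (
    <>
      {message.parts.map((part) => {
        // Only prompt for tool calls that are awaiting user confirmation
        if (
          part.type !== "tool-getWeatherInformation" ||
          part.state !== "input-available" ||
          !part.toolCallId
        ) {
          return null;
        }
        const { toolCallId } = part;
        return (
          <div key={toolCallId}>
            <p>Run getWeatherInformation with {JSON.stringify(part.input)}?</p>
            <button
              onClick={() =>
                addToolResult({
                  tool: "getWeatherInformation",
                  toolCallId,
                  output: "Yes, confirmed."
                })
              }
            >
              Approve
            </button>
            <button
              onClick={() =>
                addToolResult({
                  tool: "getWeatherInformation",
                  toolCallId,
                  output: "No, denied."
                })
              }
            >
              Reject
            </button>
          </div>
        );
      })}
    </>
  );
}
```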

##### Auto-continuation after tool results

By default, client-executed tools require a new request to continue the conversation. Enable `autoContinueAfterToolResult` to make client tools behave like server tools:

```ts
const { messages, addToolResult } = useAgentChat({
agent,
tools: clientTools,
autoContinueAfterToolResult: true
});
```

When enabled:

1. Client executes the tool and sends the result to the server
2. Server automatically calls `onChatMessage()` to continue
3. The LLM continuation is merged into the same assistant message
4. User sees a single, seamless response

<TypeScriptExample>

```tsx
85 changes: 66 additions & 19 deletions src/content/docs/agents/guides/human-in-the-loop.mdx
@@ -114,7 +114,7 @@
inputSchema: z.object({ location: z.string() }),
execute: async ({ location }) => {
console.log(`Getting local time for ${location}`);
await new Promise((res) => setTimeout(res, 2000));
return "10am";
}
});
@@ -125,7 +125,7 @@
inputSchema: z.object({ location: z.string() }),
execute: async ({ location }) => {
console.log(`Getting local news for ${location}`);
await new Promise((res) => setTimeout(res, 2000));
return `${location} kittens found drinking tea this last weekend`;
}
});
@@ -141,15 +141,33 @@
};

// Export AITool format for client-side use
// AITool uses JSON Schema (not Zod) because it needs to be serialized over the wire.
// Only tools with `execute` need `parameters` - they get extracted and sent to the server.
// Tools without `execute` are server-side only and just need description for display.
export const clientTools: Record<string, AITool> = {
getLocalTime: getLocalTimeTool as AITool,
getLocalTime: {
description: "get the local time for a specified location",
parameters: {
type: "object",
properties: {
location: { type: "string" }
},
required: ["location"]
},
execute: async (input) => {
const { location } = input as { location: string };
console.log(`Getting local time for ${location}`);
await new Promise((res) => setTimeout(res, 2000));
return "10am";
}
},
// Server-side tools: no execute, no parameters needed (schema lives on server)
getWeatherInformation: {
description: getWeatherInformationTool.description,
inputSchema: getWeatherInformationTool.inputSchema
description:
"Get the current weather information for a specific city. Always use this tool when the user asks about weather."
},
getLocalNews: {
description: getLocalNewsTool.description,
inputSchema: getLocalNewsTool.inputSchema
description: "get local news for a specified location"
}
};
````
@@ -249,18 +267,44 @@
const startTime = Date.now();
const lastMessage = this.messages[this.messages.length - 1];

// Check if the last message contains tool confirmations
if (hasToolConfirmation(lastMessage)) {
// Process tool confirmations using UI stream
const stream = createUIMessageStream({
execute: async ({ writer }) => {
await processToolCalls(
{ writer, messages: this.messages, tools },
{ getWeatherInformation }
);
// Process tool confirmations - execute the tool and update messages
const updatedMessages = await processToolCalls(
{ messages: this.messages, tools },
{ getWeatherInformation }
);

// Update the agent's messages with the actual tool results
// This replaces "Yes, confirmed." with the actual tool output
this.messages = updatedMessages;
await this.persistMessages(this.messages);

// Now continue with streamText so the LLM can respond to the tool result
const result = streamText({
messages: convertToModelMessages(this.messages),
model: openai("gpt-4o"),
onFinish,
tools,
stopWhen: stepCountIs(5)
});

return result.toUIMessageStreamResponse({
messageMetadata: ({ part }) => {
if (part.type === "start") {
return {
model: "gpt-4o",
createdAt: Date.now(),
messageCount: this.messages.length
};
}
if (part.type === "finish") {
return {
responseTime: Date.now() - startTime,
totalTokens: part.totalUsage?.totalTokens
};
}
}
});
return createUIMessageStreamResponse({ stream });
}

// Normal message flow - stream AI response
@@ -330,7 +374,9 @@
agent,
experimental_automaticToolResolution: true,
toolsRequiringConfirmation,
tools: clientTools satisfies Record<string, AITool>
tools: clientTools satisfies Record<string, AITool>,
// Enable server auto-continuation after tool results for seamless UX
autoContinueAfterToolResult: true
});

const [input, setInput] = useState("");
@@ -528,11 +574,12 @@

### Message streaming with confirmations

The agent uses the Vercel AI SDK's streaming capabilities:
The agent uses the Vercel AI SDK streaming capabilities combined with client-side tool execution:

- `createUIMessageStream` creates a stream for processing tool confirmations.
- `streamText` handles normal AI responses with tool calls.
- The `hasToolConfirmation` function detects when a message contains a tool confirmation response.
- When `autoContinueAfterToolResult: true` is enabled, tool results trigger automatic server continuation.
- `streamText` handles both normal AI responses and post-tool-execution continuations.
- The `hasToolConfirmation` function detects when a message contains a tool confirmation response (a sketch follows below).
- After tool execution, the server updates messages with actual tool results and continues the LLM response seamlessly.
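
As a rough sketch (the real helper in the example project may differ), `hasToolConfirmation` could inspect the latest message's tool parts for a confirmation decision. The `"Yes, confirmed."` / `"No, denied."` sentinel strings and the minimal message types below are assumptions for illustration.

```ts
// Minimal message shapes for illustration; real types come from the AI SDK.
type MessagePart = {
  type: string;
  state?: string;
  output?: unknown;
};
type ChatMessage = { role: string; parts: MessagePart[] };

// Sentinel outputs assumed to be what the client sends on approve/deny.
const CONFIRMATION_OUTPUTS = ["Yes, confirmed.", "No, denied."];

function hasToolConfirmation(message?: ChatMessage): boolean {
  if (!message) return false;
  return message.parts.some(
    (part) =>
      part.type.startsWith("tool-") &&
      part.state === "output-available" &&
      typeof part.output === "string" &&
      CONFIRMATION_OUTPUTS.includes(part.output)
  );
}
```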

### State persistence
