A minimal but complete AI chatbot starter built with Vercel AI SDK v5, Next.js 15, and Google Gemini. This template demonstrates tool calling (function calling) in a clean, educational way—perfect for Design Engineers learning about LLM integrations.
This template demonstrates:
- How tool calling works - The LLM can call functions to get external data
- Multi-step reasoning - The LLM can chain multiple tools together
- Streaming responses - Real-time updates as the LLM generates text
- Type safety - Full TypeScript types for messages and tool results
- Frontend integration - How to render tool results in the UI
The Vercel AI SDK v5 makes it easy to build AI applications with:
- First-class support for tool calling with simple APIs
- Built-in streaming support for real-time experiences
- Type-safe definitions with Zod integration
- Seamless integration with Next.js and React
- Support for multiple AI providers including Google Gemini, OpenAI, Anthropic, and more
In this template, we implement an "intergalactic weather assistant" with two tools:
- `weather` - Gets weather data for any location (returns mock data)
- `whatToWear` - Suggests futuristic equipment based on the weather
When you ask "What's the weather on Mars?", the LLM:
1. Calls the `weather` tool with `{ location: "Mars" }`
2. Receives the weather data
3. Calls the `whatToWear` tool to generate suggestions
4. Synthesizes everything into a conversational response
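Since the template's `weather` tool returns mock data, its execute step can be pictured as a pure function. This is an illustrative sketch only; the field names and mock logic here are assumptions, not the template's actual implementation:

```typescript
// Hypothetical sketch of a mock weather lookup. The real template's
// result shape may differ; this only illustrates the idea of a tool
// returning deterministic mock data the LLM can reason over.
type MockWeather = {
  location: string;
  temperature: number; // assumed unit: degrees Celsius
  condition: string;
};

function mockWeather(location: string): MockWeather {
  // Derive a stable pseudo-random value from the location name so the
  // same planet always gets the same "weather"
  const seed = [...location].reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
  const conditions = ["dust storm", "clear skies", "methane rain"];
  return {
    location,
    temperature: (seed % 120) - 60, // mock range: -60..59
    condition: conditions[seed % conditions.length],
  };
}

console.log(mockWeather("Mars"));
```

Because the function is deterministic, the LLM's second tool call (`whatToWear`) can build on a stable result for the same location.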
- Real-time chat interface with user and AI messages
- Support for streaming responses
- Auto-scroll to latest message
- Clean, minimal UI using shadcn/ui
- Vercel AI SDK v5 with latest patterns
- Integration with Google's Gemini 2.0 Flash model
- Multi-step tool calling with `stopWhen`
- Type-safe tool definitions with Zod
The template includes two example tools that demonstrate the full tool-calling flow. The code is heavily documented to help you understand how it works and create your own tools.
- Framework: Next.js 15
- AI SDK: `ai` (v5), `@ai-sdk/react`, `@ai-sdk/google`
- Styling: Tailwind CSS
- Type Safety: TypeScript
- Schema Validation: Zod
- UI Components: shadcn/ui
- Vercel AI SDK v5 Tool Calling Documentation - Complete guide to tools
- streamText API Reference - Backend streaming API
- useChat Hook Reference - Frontend React hook
- Google AI Studio - Get your free API key
- shadcn/ui Documentation - UI component customization
- AI Elements - Free shadcn-compatible AI components by Vercel
- 21st.dev - Library of shadcn-compatible components
While this template uses Google's Gemini model by default, you can easily switch to other providers. The Vercel AI SDK v5 supports:
- OpenAI (GPT-5 etc.)
- Anthropic (Claude 4.5 Sonnet, etc.)
- Hugging Face
- Azure OpenAI
- Cohere
- And many others
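Switching providers is typically a one-line change where the model is created. The sketch below is illustrative: the model IDs are placeholders, and you should check each provider package's documentation for current model names:

```typescript
// Provider swap sketch; the model IDs below are illustrative
// placeholders, not guaranteed-current names.
import { google } from "@ai-sdk/google";
// import { openai } from "@ai-sdk/openai";
// import { anthropic } from "@ai-sdk/anthropic";

// Default: Google Gemini
const model = google("gemini-2.0-flash");

// Swap providers by changing this single line, e.g.:
// const model = openai("gpt-4o");
// const model = anthropic("claude-sonnet-4-5");
```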
```
├── components/
│   └── ui/              # Reusable UI components
├── app/
│   ├── page.tsx         # Main chat interface
│   └── api/
│       └── chat/
│           └── route.ts # API route for chat functionality
```
1. Set up your environment:
   - Copy `.env.example` to `.env.local`
   - Get your free Google Gemini API key from Google AI Studio
   - Add your API key to `.env.local`

2. Install dependencies:

   ```bash
   npm install
   # or
   yarn install
   # or
   pnpm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   # or
   yarn dev
   # or
   pnpm dev
   ```

4. Open http://localhost:3000 in your browser
To add a new tool, extend the `tools` object in `app/api/chat/route.ts`:

```typescript
tools: {
  yourNewTool: tool({
    description: "Description of your tool",
    inputSchema: z.object({
      // Define your input parameters using Zod
      param1: z.string().describe("Description of param1"),
      param2: z.number().optional(),
    }),
    execute: async ({ param1, param2 }) => {
      // Implement your tool logic
      return { result: "..." };
    },
  }),
}
```

Then create a component to render the tool result in `app/page.tsx`:

```tsx
// Add this in the MessagePart component
if (
  toolInvocation.toolName === "yourNewTool" &&
  toolInvocation.state === "result"
) {
  return <YourToolResult result={toolInvocation.result} />;
}
```

This template uses Google Gemini by default, but you can easily switch to other providers. See the AI SDK Providers documentation for all supported providers and the parameters to use with each model.
Backend Flow (`app/api/chat/route.ts`)
1. Receive messages from the frontend via POST request
2. Call `streamText()` with model, messages, system prompt, and tools
3. LLM generates a response, potentially calling tools
4. Tools execute and return results
5. LLM uses tool results to continue generation (up to 5 steps with `stopWhen: stepCountIs(5)`)
6. Stream the response back to the client
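The backend steps above correspond roughly to this sketch of the chat route. It is simplified from the template's actual `app/api/chat/route.ts` (the tool definitions are elided), not a drop-in replacement:

```typescript
// Simplified sketch of the chat route; see the template's
// app/api/chat/route.ts for the full, documented version.
import { google } from "@ai-sdk/google";
import {
  streamText,
  convertToModelMessages,
  stepCountIs,
  type UIMessage,
} from "ai";

export async function POST(req: Request) {
  // 1. Receive UI messages from the frontend
  const { messages }: { messages: UIMessage[] } = await req.json();

  // 2. Call streamText() with model, messages, system prompt, and tools
  const result = streamText({
    model: google("gemini-2.0-flash"),
    system: "You are an intergalactic weather assistant.",
    messages: convertToModelMessages(messages),
    // 5. Allow up to 5 tool-calling steps before stopping
    stopWhen: stepCountIs(5),
    tools: {
      /* weather and whatToWear tool definitions go here */
    },
  });

  // 6. Stream UI message chunks back to the useChat hook on the client
  return result.toUIMessageStreamResponse();
}
```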
Frontend Flow (`app/page.tsx`)
1. `useChat()` hook manages chat state and API communication
2. User types a message and submits the form
3. `sendMessage()` sends the message to the `/api/chat` endpoint
4. Streaming response updates the `messages` array in real time
5. Messages are rendered, including text and tool invocations
6. Custom components render tool results (`WeatherResult`, `WeatherWear`)
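Stripped down to its essentials, the frontend flow above looks something like this sketch. The template's real `app/page.tsx` adds tool-result rendering, auto-scroll, and styling on top:

```tsx
// Minimal sketch of the chat page; simplified from the template.
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  // useChat() manages chat state and API communication
  const { messages, sendMessage } = useChat();
  const [input, setInput] = useState("");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        // sendMessage() POSTs to /api/chat by default
        sendMessage({ text: input });
        setInput("");
      }}
    >
      {messages.map((m) => (
        <div key={m.id}>
          {/* Render text parts; tool invocations get custom components */}
          {m.parts.map((part, i) =>
            part.type === "text" ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
    </form>
  );
}
```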
Messages have a `parts` array that can contain:

```typescript
{
  type: "text",
  text: "The weather on Mars..."
}
```

```typescript
{
  type: "tool-invocation",
  toolInvocation: {
    toolName: "weather",
    toolCallId: "call_123",
    state: "result", // or "pending"
    input: { location: "Mars" },
    result: { location: "Mars", temperature: 72 }
  }
}
```

Try these prompts to see tool calling in action:
- "What's the weather on Mars?"
- "Tell me about the weather on Jupiter and what I should wear"
- "I'm going to Saturn's rings, what's the weather like and what equipment do I need?"
- "Compare the weather on Venus and Neptune"
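The `parts` array shapes shown earlier can be modeled as a small discriminated union. This is a simplified sketch (the SDK's real types carry more states and fields), with a helper that concatenates a message's text parts:

```typescript
// Simplified model of the message-part shapes; the actual AI SDK
// types are richer than this sketch.
type TextPart = { type: "text"; text: string };
type ToolInvocationPart = {
  type: "tool-invocation";
  toolInvocation: {
    toolName: string;
    toolCallId: string;
    state: "pending" | "result";
    input: Record<string, unknown>;
    result?: Record<string, unknown>;
  };
};
type MessagePart = TextPart | ToolInvocationPart;

// Concatenate all text parts, skipping tool invocations
function messageText(parts: MessagePart[]): string {
  return parts
    .filter((p): p is TextPart => p.type === "text")
    .map((p) => p.text)
    .join("");
}

const parts: MessagePart[] = [
  { type: "text", text: "The weather on Mars..." },
  {
    type: "tool-invocation",
    toolInvocation: {
      toolName: "weather",
      toolCallId: "call_123",
      state: "result",
      input: { location: "Mars" },
      result: { location: "Mars", temperature: 72 },
    },
  },
];
console.log(messageText(parts));
```

Narrowing on the `type` field like this is how the UI decides whether to render plain text or a custom tool-result component.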
- The template uses TypeScript for type safety throughout
- Tool parameters are validated using Zod schemas
- Messages stream in real-time for better UX
- UI components use Tailwind CSS for styling
- The chat interface automatically scrolls to the latest message
- Code is heavily documented to help Design Engineers learn
- Implementation is kept minimal to make it easy to understand and extend
- Making it your own with custom tools, creative ideas, and visual design
- Better typography
- Improving chat UX and ergonomics: keyboard shortcuts, loading states for components, etc.
- Adding micro-interactions, animations, and transitions
- Implementing a more advanced tool-calling system
- Overall making the interface more engaging and interactive
This project is MIT licensed.