
Support Tool Calling with Output Schema in ai.chat Configuration #1578

Open
karandevhub opened this issue Dec 31, 2024 · 5 comments
Comments

@karandevhub

Is your feature request related to a problem? Please describe.

Currently, the tool calling functionality in Genkit does not work when an output schema is defined in the ai.chat configuration. This creates a limitation where we have to remove the output schema to enable tool usage, which affects the structured handling of responses.

Describe the solution you'd like

The ai.chat configuration should support tool calling seamlessly even when an output schema is defined. This would allow both structured response validation (via the output schema) and tool usage to coexist without requiring modifications or compromises.

Describe alternatives you've considered

Removing the output schema entirely to enable tool calling.
Manually restructuring the logic to validate outputs separately after tool usage.
Both alternatives are suboptimal, as they either sacrifice response validation or add unnecessary complexity to the implementation.

Additional context

Here is an example illustrating the issue:

When using the following configuration:

```javascript
const chat = ai.chat({
  system: `You are a concise Legal AI assistant specializing in Indian law. Follow these guidelines strictly:
    ...
  `,
  temperature: 0,
  tools: [documentRetrievalTool],
  output: { schema: llmOutputSchema }, // This breaks tool usage
});
```
@ashtonthomas

I am having this same issue with ai.generate() as part of a Flow.

From reading the Trace in the Genkit Dev UI, it appears that the output schema is enforced as soon as the model returns the initial response containing the toolRequest (which breaks the call).

I would have expected the output schema not to be enforced until the final response from the model (i.e., after the model requests the tool and the subsequent calls to the model are made).
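The behavior I'd expect can be sketched as follows (a minimal stand-alone sketch with stand-in names like `generateTurn` and `runTool`, not Genkit's actual internals): the tool loop runs to completion, and schema validation is applied only to the final text turn, never to an intermediate toolRequest turn.

```typescript
type ModelTurn =
  | { kind: "toolRequest"; tool: string; input: unknown }
  | { kind: "text"; text: string };

// Hypothetical stand-ins for the model and the tool runner.
function generateTurn(history: string[]): ModelTurn {
  // First turn asks for a tool; the follow-up turn answers in JSON.
  return history.length === 0
    ? { kind: "toolRequest", tool: "documentRetrievalTool", input: { query: "q" } }
    : { kind: "text", text: '{"answer":"42"}' };
}
function runTool(_tool: string, _input: unknown): string {
  return "retrieved document text";
}

function generateWithSchema(validate: (o: unknown) => boolean): unknown {
  const history: string[] = [];
  for (let i = 0; i < 10; i++) {
    const turn = generateTurn(history);
    if (turn.kind === "toolRequest") {
      // Do NOT validate here: a toolRequest is an intermediate turn.
      history.push(runTool(turn.tool, turn.input));
      continue;
    }
    // Only the final text turn is parsed and validated against the schema.
    const parsed = JSON.parse(turn.text);
    if (!validate(parsed)) throw new Error("final output failed schema check");
    return parsed;
  }
  throw new Error("too many tool round-trips");
}
```

With this ordering, the toolRequest turn passes through untouched and only the model's closing answer has to conform to the schema.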

@ashtonthomas

ashtonthomas commented Jan 8, 2025

Perhaps related Issue: #889

Related prior PR: #542

@ashtonthomas

I was able to copy the output prompt message from a different Trace where I removed the tool but had the output schema.

I then pasted this string at the bottom of my prompt:

Output should be in JSON format and conform to the following schema:\n\n\`\`\`\n{\"type\":\"object\",\"properties\":{\"title\":{\"type\":\"string\",...

I then re-added the tool but removed the output schema from the ai.generate call.

Surprisingly, this actually worked all the way through. That is, I was able to cast the output of the response:

const result = response.output as z.infer<typeof TestOutputSchema>;

So this is a hacky workaround (and perhaps provides some more context), but it may unblock me for now, assuming this is a bug and I haven't goofed something else along the way.
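The workaround above can be sketched end to end (helper names like `schemaSuffix` and `parseModelJson` are mine, not Genkit API): build the schema instruction yourself, append it to the prompt, drop `output` from the ai.generate call, then parse and cast the final response text.

```typescript
const fence = "\u0060".repeat(3); // three backticks

const outputSchemaJson = JSON.stringify({
  type: "object",
  properties: { title: { type: "string" } },
});

// Suffix mimicking the instruction Genkit injects when output.schema is set.
const schemaSuffix =
  "Output should be in JSON format and conform to the following schema:\n\n" +
  fence + "\n" + outputSchemaJson + "\n" + fence;

// Strip an optional code fence from the model's reply and parse the JSON.
function parseModelJson(text: string): unknown {
  const fenced = text.match(
    new RegExp(fence + "(?:json)?\\s*([\\s\\S]*?)" + fence)
  );
  return JSON.parse((fenced ? fenced[1] : text).trim());
}
```

The parsed value can then be cast as in the comment above (`response.output as z.infer<typeof TestOutputSchema>`), with the caveat that nothing validates it at runtime.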

@karandevhub (Author)

karandevhub commented Jan 8, 2025

I also tried putting the schema directly in the prompt and adding the tool inside the ai.chat call, and it worked for me as well!

Output Schema:

```json
{
  "answer": "string",
  "document_name": "string or null",
  "context": "string or null",
  "follow_up_questions": ["string"],
  "document_url": "string or null"
}
```

Ensure the output strictly matches this schema. Do not add any additional fields or deviate from this structure.
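Since the schema now lives only in the prompt, nothing enforces it at runtime. A narrow type guard matching the schema text above can restore that check before the cast (an illustrative helper, not part of Genkit):

```typescript
interface LlmOutput {
  answer: string;
  document_name: string | null;
  context: string | null;
  follow_up_questions: string[];
  document_url: string | null;
}

// Runtime check mirroring the schema embedded in the prompt.
function isLlmOutput(o: unknown): o is LlmOutput {
  if (typeof o !== "object" || o === null) return false;
  const r = o as Record<string, unknown>;
  const strOrNull = (v: unknown) => typeof v === "string" || v === null;
  return (
    typeof r.answer === "string" &&
    strOrNull(r.document_name) &&
    strOrNull(r.context) &&
    Array.isArray(r.follow_up_questions) &&
    r.follow_up_questions.every((q) => typeof q === "string") &&
    strOrNull(r.document_url)
  );
}
```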

@ashtonthomas

This issue may now be resolved: #889 (comment)
