Conversation

@vansh-nagar
Contributor

Fixes #418 ([Feature] Optimize get_shape_library tool with LLM-based shape filtering)
Implements LLM-based filtering to reduce shape library context size for large libraries.

@DayuanJiang
Owner

DayuanJiang commented Dec 28, 2025

Hey, thanks for tackling this!

I found one big issue: using `globalThis` to pass the user prompt is going to cause problems in production. Since `globalThis` is shared across all requests, if two users hit the API at the same time, one user's prompt could end up being used for the other's shape filtering.

The good news is that AI SDK 6 already gives you `messages` in the tool's `execute` function, so you can just do:

```ts
execute: async ({ library }, { messages }) => {
    // Most recent user message in the conversation history.
    const lastUserMessage = [...messages].reverse().find((m) => m.role === "user")
    // Content may be a plain string or an array of typed parts, so handle both.
    const content = lastUserMessage?.content
    const userPrompt = typeof content === "string"
        ? content
        : content?.find((p: any) => p.type === "text")?.text ?? ""
    // ...
}
```

No global state needed.

A few other things:

  1. If `generateText` fails or parsing returns nothing, the user gets zero shapes back, which is worse than the full list. Wrap it in try/catch and fall back to returning the full content (first sketch below).
  2. You might want to check that the shapes the LLM returns actually exist in the original list, since it could hallucinate names (also covered in the first sketch below).
  3. AI SDK has `Output.object()` with Zod schemas, which handles JSON parsing and validation automatically; that might be cleaner than manual `JSON.parse` with a regex fallback (second sketch below).
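
For 1 and 2, roughly this shape (a sketch only; `filterShapesWithLLM`, `getShapeNames`, and `renderShapes` are stand-ins for whatever your helpers are actually called):

```ts
execute: async ({ library }, { messages }) => {
    // ...userPrompt extracted from messages as shown above...
    const fullContent = renderShapes(library, getShapeNames(library))
    try {
        // Hypothetical helper wrapping your generateText call + parsing.
        const suggested = await filterShapesWithLLM(library, userPrompt)
        // Guard against hallucinated names: keep only shapes that exist.
        const known = new Set(getShapeNames(library))
        const valid = suggested.filter((name) => known.has(name))
        // An empty filtered result is worse than no filtering at all.
        return valid.length > 0 ? renderShapes(library, valid) : fullContent
    } catch {
        // LLM call or parsing failed: fall back to the full library.
        return fullContent
    }
}
```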
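And for 3, here's the same idea via `generateObject`, which bakes in the schema validation (the `Output.object()` route on `generateText` works similarly); the model and schema here are just placeholders:

```ts
import { generateObject } from "ai"
import { z } from "zod"

// Schema-validated output: the SDK parses and validates the JSON for you,
// so there's no manual JSON.parse or regex fallback.
const { object } = await generateObject({
    model, // whichever model instance you're already using
    schema: z.object({
        shapes: z.array(z.string()).describe("Names of the relevant shapes"),
    }),
    prompt: `Which shapes in this library are relevant to: ${userPrompt}`,
})
// object.shapes is a typed string[] validated against the schema.
```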

Let me know if you want help with any of this!
