
[Question] Is DSPy _designed_ to allow me to export optimized prompt templates/programs? (How to use DSPy with other frameworks) #8043


Closed
Taytay opened this issue Apr 2, 2025 · 2 comments
Labels: enhancement (New feature or request)


Taytay commented Apr 2, 2025

What feature would you like to see?

I want to use DSPy to perform optimizations on my LLM programs, but I am not sure how to use my optimized prompts (programs) with other frameworks. In particular, should I expect to be able to use DSPy to format my prompts/messages so that I can extract and use them elsewhere, or am I really swimming upstream if I do that?

My current conclusion is that I can do this through inspection, but it's not exactly a design goal to export prompts and templates.

I am currently using the InspectAI framework to perform evaluations of my LLM programs. It (naturally) has its own abstractions around calling LLMs and scoring the results, and we have built some infrastructure around these assumptions. Two of the nice things it provides out of the gate are provider-based prompt caching, which results in much lower API bills, and permanent compressed storage of eval runs and results, which makes it easy both to evaluate "prompts" and to extract structured results from hundreds of thousands of calls at a later time. It would be nice to use DSPy to perform my optimization, but use the prompts and examples it comes up with in the context of Inspect.

My fear is that by using DSPy, I am going to be "locked in" to its way of both formatting the prompt/messages AND then calling the LLM. In my brief investigations (reading issues, source, etc.), it appears that I might be able to extract the ingredients for a particular prompt from a saved JSON file, but assembling them back into a templated prompt is not exactly a "first-class citizen" of the framework.

In researching this question, I see some very recent activity designed to improve the architecture of the Adapters: #7996
That makes me think that I am not doing something completely odd by expecting to be able to use DSPy predictors/adapters to extract formatted prompts that I then manipulate and send to my LLMs using my own custom mechanisms, but I wanted to ask before going too far down this road.

(I also see, in some of the examples, the use of "inspect_history" as a mechanism to extract the prompts textually, but that feels a bit "hardcoded".)

Would you like to contribute?

  • Yes, I'd like to help implement this.
  • No, I just want to request it.

Additional Context

No response

Taytay added the enhancement label Apr 2, 2025
okhat (Collaborator) commented Apr 2, 2025

Yes, definitely swimming upstream. DSPy is a programming model, not an optimizer.

That programming model includes things like controlling LM behavior (signatures), composing and scaling LM behaviors (modules), and adjusting LM behavior (optimizers). These pieces are not meant to be decoupled. They all rely on the idea that signatures + inputs are managed and translated (perhaps in rather sophisticated or LM-dependent ways) by adapters under the hood.

tl;dr We strongly recommend using DSPy as a programming model, not as an optimizer to extract strings from. The whole point is to avoid managing strings.

Taytay (Author) commented Apr 3, 2025

Understood. That's clarifying! Thank you!

okhat closed this as completed Apr 3, 2025