
Conversation

tninja commented Nov 26, 2025

@karthink PTAL Thanks

Addressing #1158

* Initial plan

* Add gptel-request-sync function for synchronous LLM requests

Co-authored-by: tninja <[email protected]>

* Address code review feedback for gptel-request-sync

Co-authored-by: tninja <[email protected]>

* Improve gptel-request-sync stability: FSM-based completion, disable tools

Co-authored-by: tninja <[email protected]>

* Add :sync argument to gptel-request instead of separate gptel-request-sync

Co-authored-by: tninja <[email protected]>

* Remove gptel-request-timeout parameter, hardcode 30s timeout

Co-authored-by: tninja <[email protected]>

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: tninja <[email protected]>
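
For context, the usage this PR enables is the call shown in the discussion below; it blocks until the full response arrives or the hardcoded 30-second timeout from the last commit expires:

;; Blocks Emacs until the response is complete (or 30 s elapse).
(gptel-request "What is 2+2?" :sync t)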
karthink (Owner) commented
@tninja Thanks for the PR. Was this generated by Copilot, BTW? This is a good start, but there are several problems and unhandled edge cases in the code, since many more parts of the request-sending process can be asynchronous, such as context collection and the prompt transform functions.

tninja (Author) commented Nov 28, 2025

@karthink Thanks for the reply. Do you mean that context collection and the prompt transform functions are still asynchronous and need to be converted to synchronous versions? I assumed these two functions run locally and wouldn’t take much time, so using either sync or async might not make a significant difference at the moment. I also imagine that most of the time the code is actually waiting for the LLM to generate the full response.

I agree this solution is not perfect yet. The code was generated by Copilot after several iterations. I have tested it and it works on my side, e.g. (gptel-request "What is 2+2?" :sync t). One possible, more principled improvement would be to set stream=False for the ChatGPT API so that the API itself returns the response in a single synchronous call. However, this would require investigation and code changes for different LLM vendors, and we would need to verify the behavior for each of them, which may take time.

Perhaps we can add a TODO comment in the code to note a better future version?
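
For reference, here is a minimal sketch of the blocking pattern under discussion, built only on the public gptel-request :callback interface and assuming the non-streaming code path; the name my/gptel-ask-sync, the polling interval, and the error handling are illustrative and not part of the patch:

(require 'gptel)

(defun my/gptel-ask-sync (prompt &optional timeout)
  "Send PROMPT with `gptel-request' and block until the response arrives.
TIMEOUT defaults to 30 seconds.  Return the response string, or signal an
error on failure or timeout."
  (let ((result nil)
        (done nil)
        (deadline (+ (float-time) (or timeout 30))))
    (gptel-request prompt
      :callback (lambda (response _info)
                  ;; RESPONSE is the full text on success, nil on failure,
                  ;; since streaming is left off.
                  (setq result response
                        done t)))
    ;; Poll, letting Emacs process network output while we wait.
    (while (and (not done) (< (float-time) deadline))
      (accept-process-output nil 0.1))
    (cond ((not done) (error "gptel: synchronous request timed out"))
          ((not (stringp result)) (error "gptel: request failed"))
          (t result))))

The PR itself folds this waiting logic into gptel-request behind the :sync keyword rather than exposing a separate wrapper, per the commit list above.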
