Version v0.9.8.5

@karthink released this 11 Jun 09:20 · 125 commits to master since this release

Version 0.9.8.5 adds support for new Gemini, Anthropic and OpenAI models, and for AWS Bedrock and other providers. It brings better MCP support and a redesigned tools menu, support for "presets" and quick ways to invoke them, context inclusion via Org/Markdown links, and better handling of LLM "reasoning" content.

Additionally, the gptel-request pipeline is now fully asynchronous and tweakable, making it easy to add support for RAG steps or other prompt transformations.

Breaking changes

  • gptel-org-branching-context is now a global variable. It was buffer-local by default in past releases.

  • The following models have been removed from the default ChatGPT backend:

    • o1-preview: use o1 instead.
    • gpt-4-turbo-preview: use gpt-4o or gpt-4-turbo instead.
    • gpt-4-32k, gpt-4-0125-preview and gpt-4-1106-preview: use gpt-4o or gpt-4 instead.

    Alternatively, you can add these models back to the backend in your personal configuration:

   (push 'gpt-4-turbo-preview
         (gptel-backend-models (gptel-get-backend "ChatGPT")))
  • Only relevant if you use gptel-request in your elisp code (interactive gptel usage is unaffected): gptel-request now takes a new, optional :transforms argument. Any prompt modifications (like adding context to requests) must now be specified via this argument. See the definition of gptel-send for an example.
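As a minimal sketch of the new argument, assuming a transform is a function that runs in a temporary buffer containing the prompt text and can edit it in place (the helper name below is illustrative; check gptel-request's docstring for the exact calling convention of transform functions):

```elisp
;; Illustrative transform: prepend a line of context to the outgoing prompt.
(defun my/add-context-header ()
  "Prepend a line of project context to the prompt being sent."
  (goto-char (point-min))
  (insert "Project: my-app (Emacs Lisp)\n\n"))

(gptel-request "Summarize the attached context."
  :transforms (list #'my/add-context-header)
  :callback (lambda (response _info)
              (when (stringp response)
                (message "LLM: %s" response))))
```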

New models and backends

  • Add support for gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o3 and o4-mini.

  • Add support for gemini-2.5-pro-exp-03-25, gemini-2.5-flash-preview-04-17, gemini-2.5-pro-preview-05-06 and gemini-2.5-pro-preview-06-05.

  • Add support for claude-sonnet-4-20250514 and claude-opus-4-20250514.

  • Add support for AWS Bedrock models. You can create an AWS Bedrock gptel backend with gptel-make-bedrock, which see. Please note: AWS Bedrock support requires Curl 8.5.0 or higher.
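A hedged sketch of creating a Bedrock backend; the keyword arguments shown here are assumptions, so consult gptel-make-bedrock's docstring for the exact signature:

```elisp
;; Requires Curl 8.5.0 or higher. The :region and :stream keywords
;; are assumed -- verify against gptel-make-bedrock's docstring.
(gptel-make-bedrock "Bedrock"
  :region "us-east-1"  ; your AWS region
  :stream t)
```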

  • You can now create an xAI backend with gptel-make-xai, which see. (xAI was supported before but the model configuration is now handled for you by this function.)

  • Add support for GitHub Copilot Chat. See the README and gptel-make-gh-copilot. Please note: this is only the chat component of GitHub Copilot. Copilot’s completion-at-point (tab-completion) functionality is not supported by gptel.

  • Add support for Sambanova. This is an OpenAI-compatible API, so you can create a backend with gptel-make-openai; see the README for details.

  • Add support for Mistral Le Chat. This is an OpenAI-compatible API, so you can create a backend with gptel-make-openai; see the README for details.
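Both of these OpenAI-compatible providers follow the same registration pattern. A sketch for Mistral Le Chat (the :host value and model name are assumptions; the README has the exact settings):

```elisp
(gptel-make-openai "MistralLeChat"
  :host "api.mistral.ai"            ; assumed endpoint -- verify in the README
  :endpoint "/v1/chat/completions"
  :stream t
  :key #'gptel-api-key              ; or a string, or a function returning the key
  :models '(mistral-small-latest))  ; illustrative model name
```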

New features and UI changes

  • gptel can access MCP server tools by integrating with the mcp.el package, which is at https://github.com/lizqwerscott/mcp.el. (mcp.el is available on MELPA.) To help with the integration, two new commands are provided: gptel-mcp-connect and gptel-mcp-disconnect. You can use these to start MCP servers selectively and add tools to gptel. These commands are also available from gptel’s tools menu.

    These commands are currently not autoloaded by gptel. To access them, require the gptel-integrations feature.
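A minimal setup sketch (assumes the mcp.el package is installed and its servers are configured):

```elisp
;; Not autoloaded: load the integration feature first.
(require 'gptel-integrations)
;; Then, interactively:
;;   M-x gptel-mcp-connect     -- start MCP servers and add their tools to gptel
;;   M-x gptel-mcp-disconnect  -- stop servers and remove their tools
;; Both commands are also available from gptel's tools menu.
```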

  • Tools now run in the buffer from which the request originates. This can be significant when tools read or manipulate Emacs’ state.

  • The tools menu (gptel-tools) has been redesigned. It now displays tool categories and associated tools in two columns, and it should scale better to any number of tools. As a bonus, the new menu requires half as many keystrokes as before to enable individual tools or toggle categories.

  • You can now define “presets”, which are a bundle of gptel options, such as the backend, model, system message, included tools, temperature and so on. This set of options can be applied together, making it easy to switch between different tasks using gptel. From gptel’s transient menu, you can save the current configuration as a preset or apply another one. Presets can be applied globally, buffer-locally or for the next request only. To persist presets across Emacs sessions, define presets in your configuration using gptel-make-preset.
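A sketch of defining a persistent preset in your configuration; the exact set of supported keys is documented in gptel-make-preset, and the ones below are typical gptel options that may differ in name:

```elisp
;; Illustrative preset bundling a backend, model, system message and
;; temperature. Key names are assumptions -- see gptel-make-preset.
(gptel-make-preset 'coding
  :description "A preset for writing code"
  :backend "ChatGPT"
  :model 'gpt-4.1
  :system "You are an expert programmer. Reply with code only."
  :temperature 0.3)
```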

  • When using gptel-send from anywhere in Emacs, you can now include a “cookie” of the form @preset-name in the prompt text to apply that preset before sending. The preset is applied for that request only. This is an easy way to specify models, tools, system messages (etc) on the fly. In chat buffers the preset cookie is fontified and available for completion via completion-at-point.

  • For scripting purposes, provide a gptel-with-preset macro to create an environment with a preset applied.
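A usage sketch, assuming a preset named coding has been defined via gptel-make-preset (the quoting convention for the preset name may differ; check the macro's docstring):

```elisp
;; Run a one-off request with the `coding' preset applied in this scope only.
(gptel-with-preset coding
  (gptel-request "Write a function to reverse a list in Elisp."
    :callback (lambda (response _info) (message "%s" response))))
```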

  • Links to plain-text files in chat buffers can be followed, and their contents included with the request. Using Org or Markdown links is an easy, intuitive, persistent and buffer-local way to specify context. To enable this behavior, turn on gptel-track-media. This is a pre-existing option that also controls whether image/document links are followed and sent (when the model supports it).
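For example, to enable this:

```elisp
;; Follow links in chat buffers and include their contents with the request.
;; Also controls whether image/document links are sent (model permitting).
(setq gptel-track-media t)
;; Then, in an Org chat buffer, a link like
;;   [[file:notes/design.org][design notes]]
;; will have its contents included in the request (filename illustrative).
```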

  • The current kill can be added to gptel’s context. To enable this, turn on gptel-expert-commands and use gptel’s transient menu.

  • gptel now supports handling reasoning/thinking blocks in responses from Gemini models. This is controlled by gptel-include-reasoning, just as it is for other APIs.

  • A new hook gptel-prompt-transform-functions is provided for arbitrary transformations of the prompt prior to sending a request. This hook runs in a temporary buffer containing the text to be sent. Any aspect of the request (the text, destination, request parameters, response handling preferences) can be modified buffer-locally here. These hook functions can be asynchronous.
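A sketch of such a transform (the function name is illustrative): since the hook runs in a temporary buffer containing the text to be sent, a hook function can simply edit the buffer.

```elisp
;; Illustrative transform: strip trailing whitespace from the outgoing prompt.
(defun my/strip-trailing-whitespace ()
  "Remove trailing whitespace from the prompt before it is sent."
  (save-excursion
    (goto-char (point-min))
    (while (re-search-forward "[ \t]+$" nil t)
      (replace-match ""))))

(add-hook 'gptel-prompt-transform-functions #'my/strip-trailing-whitespace)
```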

  • The user option gptel-use-curl can now be used to specify a Curl path.

  • The new option gptel-curl-extra-args can be used to specify extra arguments to the Curl command used for the request. This is the global version of the gptel-backend-specific :curl-args slot, which can be used to specify Curl arguments when using a specific backend.
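For example (the path and proxy settings below are illustrative):

```elisp
;; gptel-use-curl now also accepts a path to the Curl executable:
(setq gptel-use-curl "/opt/homebrew/bin/curl")
;; Extra arguments passed to every Curl invocation, across all backends:
(setq gptel-curl-extra-args '("--proxy" "socks5://localhost:9050"))
```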

Notable bug fixes

  • Fix more Org markup conversion edge cases involving nested Markdown delimiters.

What's Changed

  • elpaignore: Add .github by @pabl0 in #699
  • gptel-transient: Return to menu with just RET by @pabl0 in #695
  • Enable Markdown formatting for "o3-mini" by removing nosystem capability by @Inkbottle007 in #702
  • README: Tweak list of alternatives by @pabl0 in #707
  • elpaignore: Oopsie daisy by @pabl0 in #708
  • gptel: Ensure restoring state does not flag the buffer modified by @pabl0 in #711
  • README: Clarify backend configuration and registration by @spudlyo in #715
  • Fix typo in README by @DivineDominion in #717
  • README: Restructure authinfo explanation by @karthink in #718
  • gptel-rewrite: Diff buffers, not files by @pabl0 in #731
  • gptel-rewrite: Make rewrite work on Emacs < 29 again by @pabl0 in #721
  • gptel-rewrite: Ensure kill-buffer does not ask for confirmation by @pabl0 in #733
  • README: Clarify installation instructions by @pabl0 in #732
  • prevent invalid duplication of tool_call messages by @tcahill in #734
  • fix(gptel): add missing newline after end_tool marker by @LuciusChen in #735
  • Add gemini-2.5-pro-exp-03-25 to gptel-gemini.el by @surenkov in #742
  • gptel-rewrite: Use correct form of with-demoted-errors by @pabl0 in #745
  • gptel-ollama: Fix docstring by @pabl0 in #752
  • README: Install without submodule in Doom by @real-or-random in #758
  • Allow tool use if gptel-use-tools was non-nil at request time by @kmontag in #755
  • UTF-8 encoding issue in ChatGPT buffer by @kayomarz in #754
  • README: Code to register backend for Mistral Le Chat, per #703 by @ligon in #766
  • gptel-rewrite: Make other modes not clobber rewrite overlay by @pabl0 in #744
  • gptel--mode-description-alist: add OCaml by @mlemerre in #771
  • gptel-openai-extras: Add support for Github Copilot Chat by @kiennq in #767
  • allow customize-variable to set temperature to nil by @nleve in #777
  • [README] provide update-to-date Gemini setting by @nohzafk in #778
  • chore: add gemini 2.5 pro model for gh copilot chat by @tianrui-wei in #779
  • Fix pairing of details tags by @Arclite in #780
  • docs: update xAI backend with Grok 3 model variants by @axelknock in #775
  • gptel-gemini: Add support for Gemini 2.5 Pro Preview by @benthamite in #781
  • gptel--gh-models: add new models by @kiennq in #789
  • Fix max tokens for o4-mini by @orge-dev in #791
  • dir-locals: Set some project specific local variables by @pabl0 in #712
  • Add gemini-2.5-flash-preview-04-17 model to gptel-gemini.el by @surenkov in #793
  • Improve gptel-org-set-topic to not include tags in the default value by @akirak in #801
  • Readme: added sambanova documentation by @wlauppe in #803
  • fix: Filter unsupported JSON Schema attributes out of Gemini tool calls by @necaris in #827
  • Add gemini-2.5-pro-preview-05-06 model to gptel-gemini.el by @surenkov in #829
  • gptel-gemini: Fix filtering of unsupported attributes in tool calls by @necaris in #834
  • gptel-gh: Add media capability to GitHub Copilot gpt-4.1 model by @huonw in #836
  • fix: arg-values default value by @xlarsx in #835
  • add missing in README.org by @nano-o in #846
  • gptel-openai: Prevent null tool-calls from causing error by @ragnard in #830
  • send parameter in request when args is nil by @longlene in #818
  • gptel-gh: add claude-sonnet/opus-4 by @kiennq in #860
  • C-y can be used to add the current kill to the context by @jwiegley in #832
  • Update gemini-2.5-flash-preview to latest model code. by @bjodah in #856
  • GPTEL changes for bedrock by @akssri in #871
  • gptel: Silence byte-compilation warnings by @pabl0 in #657
  • gptel-gh: only sent vision-request header when there's media by @kiennq in #878
  • Add Claude 3.7 Sonnet model ID by @mobatmedia in #887
  • Add gemini-2.5-pro-preview-06-05 model to gptel-gemini.el by @surenkov in #897
  • README: Update link by @samihda in #888

Full Changelog: v0.9.8...v0.9.8.5