# -*- mode: org; -*-

- * 0.9.8 2025-03-11
+ * 0.9.8 2025-03-13

Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI,
Perplexity, and DeepSeek models, introduces LLM tool use/function
@@ -13,6 +13,15 @@ feature and control of LLM "reasoning" content.
- ~gemini-pro~ has been removed from the list of Gemini models, as
  this model is no longer supported by the Gemini API.

+ - Sending an active region in Org mode will now apply Org
+   mode-specific rules to the text, such as branching context.
+
+ - The following obsolete variables and functions have been removed:
+   - ~gptel-send-menu~: Use ~gptel-menu~ instead.
+   - ~gptel-host~: Use ~gptel-make-openai~ instead.
+   - ~gptel-playback~: Use ~gptel-stream~ instead.
+   - ~gptel--debug~: Use ~gptel-log-level~ instead.
+
** New models and backends

- Add support for several new Gemini models including
@@ -35,7 +44,7 @@ feature and control of LLM "reasoning" content.
  support for the DeepSeek API, including support for handling
  "reasoning" output.

- ** Notable new features and UI changes
+ ** New features and UI changes

- ~gptel-rewrite~ now supports iterating on responses.

@@ -121,9 +130,10 @@ feature and control of LLM "reasoning" content.
- (Org mode) Org property drawers are now stripped from the prompt
  text before sending queries. You can control this behavior or
  specify additional Org elements to ignore via
-   ~gptel-org-ignore-elements~.
+   ~gptel-org-ignore-elements~. (For more complex pre-processing you
+   can use ~gptel-prompt-filter-hook~.)

- ** Bug fixes
+ ** Notable bug fixes

- Fix response mix-up when running concurrent requests in Org mode
  buffers.
@@ -145,58 +155,54 @@ model/backend configuration.

- Add support for Anthropic's Claude 3.5 Haiku.

- - Add support for xAI (contributed by @WuuBoLin) .
+ - Add support for xAI.

- - Add support for Novita AI (contributed by @jasonhp) .
+ - Add support for Novita AI.

- ** Notable new features and UI changes
+ ** New features and UI changes

- gptel's directives (see ~gptel-directives~) can now be dynamic, and
  include more than the system message. You can "pre-fill" a
  conversation with canned user/LLM messages. Directives can now be
  functions that dynamically generate the system message and
  conversation history based on the current context. This paves the
  way for fully flexible task-specific templates, which the UI does
-   not yet support in full. This design was suggested by
-   @meain. (#375)
+   not yet support in full.

- gptel's rewrite interface has been reworked. If using a streaming
  endpoint, the rewritten text is streamed in as a preview placed over
  the original. In all cases, clicking on the preview brings up a
  dispatch you can use to easily diff, ediff, merge, accept or reject
  the changes (4ae9c1b2), and you can configure gptel to run one of
-   these actions automatically. See the README for examples. This
-   design was suggested by @meain. (#375)
+   these actions automatically. See the README for examples.

- ~gptel-abort~, used to cancel requests in progress, now works across
-   the board, including when not using Curl or with ~gptel-rewrite~
-   (7277c00).
+   the board, including when not using Curl or with ~gptel-rewrite~.

- The ~gptel-request~ API now explicitly supports streaming responses
-   (7277c00) , making it easy to write your own helpers or features with
+   , making it easy to write your own helpers or features with
  streaming support. The API also supports ~gptel-abort~ to stop and
  clean up responses.

- You can now unset the system message -- different from setting it to
  an empty string. gptel will also automatically disable the system
-   message when using models that don't support it (0a2c07a) .
+   message when using models that don't support it.

- Support for including PDFs with requests to Anthropic models has
  been added. (These queries are cached, so you pay only 10% of the
  token cost of the PDF in follow-up queries.) Note that document
  support (PDFs etc) for Gemini models has been available since
-   v0.9.5. (0f173ba, #459)
+   v0.9.5.

- When defining a gptel model or backend, you can specify arbitrary
  parameters to be sent with each request. This includes the (many)
  API options across all APIs that gptel does not yet provide explicit
-   support for (bcbbe67e). This feature was suggested by @tillydray
-   (#471).
+   support for.

- New transient command option to easily remove all included context
-   chunks (a844612), suggested by @metachip and @gavinhughes .
+   chunks.

- ** Bug fixes
+ ** Notable bug fixes
- Pressing ~RET~ on included files in the context inspector buffer now
  pops up the file correctly.
- API keys are stripped of whitespace before sending.
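For context, the ~gptel-request~ streaming support mentioned in the diff above can be driven from your own Elisp. The following is a minimal sketch, not taken from gptel's manual: it assumes ~gptel-request~ accepts a ~:stream~ key and invokes ~:callback~ with each partial response string plus a plist of request info; verify both against the docstring of ~gptel-request~ in your installed version.

#+begin_src emacs-lisp
;; Sketch only: assumes `gptel-request' takes a :stream key and calls
;; :callback repeatedly with partial response strings.  Verify these
;; keys against the docstring of `gptel-request' before relying on them.
(gptel-request
 "Summarize this buffer in one sentence."
 :stream t
 :callback (lambda (response info)
             (if (stringp response)
                 (message "gptel chunk: %s" response) ; partial text
               ;; A non-string RESPONSE signals completion or an error;
               ;; INFO carries metadata about the request.
               (message "gptel request done: %S"
                        (plist-get info :status)))))
#+end_src

A request started this way should be cancellable mid-stream with ~gptel-abort~, per the notes above.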
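The arbitrary per-request parameters mentioned for model/backend definitions can be illustrated with a hypothetical backend. The ~:request-params~ key, its plist format, and the server details below are all assumptions made for illustration; consult the docstring of ~gptel-make-openai~ for the keys your version actually supports.

#+begin_src emacs-lisp
;; Hypothetical config fragment: register an OpenAI-compatible backend
;; and attach extra API options (:seed and :top_k are example options)
;; to every request sent through it.  The :request-params key is an
;; assumption -- check `gptel-make-openai's docstring in your version.
(gptel-make-openai "Local-LLM"            ; hypothetical backend name
  :host "localhost:8000"                  ; hypothetical local server
  :protocol "http"
  :stream t
  :models '(test-model)                   ; hypothetical model name
  :request-params '(:seed 1234 :top_k 40))
#+end_src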