Releases: evalstate/fast-agent
v0.2.36
What's Changed
- Support for streaming with OpenAI endpoints; migrated to the Async API.
- Added /tools command and MCP Server summary on startup/agent switch.
- Migrated to official A2A Types
- SDK bumps (including MCP 1.10.1)
Full Changelog: v0.2.35...v0.2.36
v0.2.35
Streaming Support for Anthropic Endpoint
NB: The default max_output_tokens for Opus 4 and Sonnet 4 is now 32,000 and 64,000 tokens respectively. For OpenAI and Anthropic models, unless you specifically want a lower limit than the maximum, I'd recommend using the fast-agent defaults from the model database.
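If you do want a lower cap for a particular agent, here is a minimal sketch of overriding it per-agent; the RequestParams/maxTokens naming reflects my understanding of the fast-agent API, so check the docs if your version differs:

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent
from mcp_agent.core.request_params import RequestParams

fast = FastAgent("token-limit-demo")

# Cap output below the model-database default for this one agent.
@fast.agent(
    name="summarizer",
    model="sonnet",
    request_params=RequestParams(maxTokens=8192),
)
async def main():
    async with fast.run() as agent:
        await agent.send("Summarise the release notes in three bullets.")

if __name__ == "__main__":
    asyncio.run(main())
```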
- Feat/streaming @evalstate (#251)
Full Changelog: v0.2.34...v0.2.35
v0.2.34
Usage and Context Window Support
Last Turn and Cumulative usage are now available via the UsageAccumulator on the Agent interface (usage_tracking). This also contains the provider-API-specific Usage information on a per-turn basis (see the sketch below the change list).

Added support for Context Window percentage usage for known models, including tokenizable content support in preparation for improved multi-modality support.
/usage command available in interactive mode to show current agent token usage.
- Feat/context windows @evalstate (#247 #248)
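A minimal sketch of reading these numbers programmatically. The usage_accumulator attribute and the field names are assumptions inferred from the notes above; verify them against the current docs:

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("usage-demo")

@fast.agent(name="helper", model="haiku")
async def main():
    async with fast.run() as agent:
        await agent.send("Hello!")
        # Attribute and field names assumed, not confirmed by the release notes.
        acc = agent.helper.usage_accumulator
        if acc:
            print("cumulative input tokens: ", acc.cumulative_input_tokens)
            print("cumulative output tokens:", acc.cumulative_output_tokens)
            print("context window used:     ", acc.context_usage_percentage, "%")

if __name__ == "__main__":
    asyncio.run(main())
```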
v0.2.33
What's Changed
- fix last message is assistant handling for structured by @evalstate in #238
- add env var support to the config yaml file by @hevangel in #239
New Contributors
- @hevangel made their first contribution in #239
Full Changelog: v0.2.31...v0.2.33
v0.2.31
Changes
- bump MCP SDK to 1.9.4
- fix: last message is assistant for gemini structured handling @evalstate (#238)
- Fix orchestrator doc (max_iterations => plan_iterations) @yeahdongcn (#237)
- Add OpenAI o3 @kahkeng (#236)
- Support explicitly marking an agent as default @yeahdongcn (#231)
- custom agent poc @wreed4 (#92)
- Fix deepseek-reasoner request with history @ufownl (#228)
- Feat/hf token mode @evalstate (#230)
v0.2.30
What's Changed
HF_TOKEN mode
The HF_TOKEN environment variable is used when accessing HuggingFace-hosted MCP Servers, either at hf.co/mcp or .hf.spaces. It can be overridden with Auth headers or via fastagent.config.yaml.
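As a rough illustration of that precedence (the function and its names are hypothetical, not the actual fast-agent internals):

```python
import os

def resolve_hf_auth(explicit_headers: dict | None, config_headers: dict | None) -> dict:
    """Hypothetical sketch: explicit Auth headers win, then config, then HF_TOKEN."""
    for headers in (explicit_headers, config_headers):
        if headers and "Authorization" in headers:
            return headers
    token = os.environ.get("HF_TOKEN")
    return {"Authorization": f"Bearer {token}"} if token else {}

# Example: no explicit or config headers, so HF_TOKEN (if set) is used.
print(resolve_hf_auth(None, None))
```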
Use external prompt editor with Ctrl+E
Press Ctrl+E to open an external editor for prompt editing. See #218 for more details.
Aliyun support
Added support for Aliyun Bailian, which provides APIs for the Qwen series of models, widely used across mainland China. @yeahdongcn informs me that "Qwen3 is gaining a lot of popularity, and Aliyun is currently offering free tokens for developers"!
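A minimal sketch of pointing an agent at a Bailian-hosted model; the aliyun. provider prefix and the model name are assumptions, so check the model database for the exact strings:

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("qwen-demo")

# Provider prefix and model name are assumed, not confirmed by these notes.
@fast.agent(name="qwen", model="aliyun.qwen-turbo")
async def main():
    async with fast.run() as agent:
        await agent.send("Hello! Please reply in English.")

if __name__ == "__main__":
    asyncio.run(main())
```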
Other fixes
- fix(#208): Fix wrong prompt description displayed in the [Available MCP Prompts] table by @codeboyzhou in #209
- Support Deepseek json_format by @Zorro30 in #182
- fix(#226): Error in tool list changed callback by @codeboyzhou in #227
New Contributors
- @codeboyzhou made their first contribution in #209
- @Zorro30 made their first contribution in #182
- @yeahdongcn made their first contribution in #224
Full Changelog: v0.2.29...v0.2.30
v0.2.29
Changes
- Add HF_TOKEN mode @evalstate (#223)
- Support Deepseek json_format @Zorro30 (#182)
- fix(#208): Fix wrong prompt description displayed in the [Available MCP Prompts] table @codeboyzhou (#209)
- Made sure that empty content.parts will not anymore cause bugs when u… @janspoerer (#220)
- adding deprecated to fix missing dependency error @jjwall (#216)
- added slow llm to test parallel sampling @wreed4 (#197)
- ensure tool names are < 64 characters
- fix opentelemetry (the MCP instrumentation isn't compatible with 1.9)
- Update MCP package to 1.9.3
New Contributors
- @jjwall made their first contribution in #216
- @janspoerer made their first contribution in #220
Full Changelog: v0.2.28...v0.2.29
v0.2.28
Release 0.2.28
Gemini Native Support
This release switches to the native Google API as the default for Google models. Thanks to @monotykamary and @janspoerer for this work 🍾. If you run into issues, the old provider is accessible as googleoai, but you will need to update your API key to match.
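For example (the google. prefix for the native provider and the model name are assumptions; googleoai is the fallback named above):

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("gemini-demo")

# Swap the prefix to googleoai. to fall back to the old OpenAI-compatible
# provider (remember to update the API key to match).
@fast.agent(name="gemini", model="google.gemini-2.0-flash")
async def main():
    async with fast.run() as agent:
        await agent.send("Hello via the native Google API")

if __name__ == "__main__":
    asyncio.run(main())
```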
Autosampling
Servers are now offered Sampling capability by default, provided by the Agent's model (or the system default if not specified). Set auto_sampling: false in fastagent.config.yaml to switch off this behaviour.
Other Changes
- added slow llm to test parallel sampling @wreed4 (#197)
- fix(177): Fix prompt listing across agents @wreed4 (#178)
New Contributors
- @monotykamary made their first contribution in #134
Full Changelog: v0.2.27...v0.2.28
v0.2.27
v0.2.25
What's Changed
- Fix: Remove parallel_tool_calls from OpenAI model provider for 'o' model compatibility by @kikiya in #164
- feat: Add Azure OpenAI Service Support to FastAgent by @pablotoledo in #160
New Contributors
- @kikiya made their first contribution in #164
- @pablotoledo made their first contribution in #160
Full Changelog: v0.2.24...v0.2.25
