
Releases: evalstate/fast-agent

v0.2.36

29 Jun 08:59


What's Changed

  • Support for streaming with OpenAI endpoints; migrated to the Async API.
  • Added /tools command and MCP Server summary on startup/agent switch.
  • Migrated to the official A2A types.
  • SDK bumps (including MCP 1.10.1).

Full Changelog: v0.2.35...v0.2.36

v0.2.35

26 Jun 18:12


Streaming Support for Anthropic Endpoint


NB: The default max_output_tokens for Opus 4 and Sonnet 4 is now 32,000 and 64,000 tokens respectively. For OpenAI and Anthropic models, unless you specifically want a limit lower than the maximum, I'd recommend using the fast-agent defaults from the model database.
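
If you do want a lower cap for a specific agent, one approach is to pass request parameters on the agent decorator. This is a minimal sketch only: the RequestParams import path and its maxTokens field are assumptions to verify against your installed version.

```python
# Minimal sketch: caps the response well below the Sonnet 4 default of 64,000 tokens.
# The RequestParams import path and maxTokens field are assumed, not confirmed.
import asyncio

from mcp_agent.core.fastagent import FastAgent
from mcp_agent.core.request_params import RequestParams

fast = FastAgent("max-tokens example")


@fast.agent(
    "writer",
    instruction="You are a helpful AI Agent",
    model="sonnet",
    request_params=RequestParams(maxTokens=4096),
)
async def main():
    async with fast.run() as agent:
        await agent.writer("Summarise these release notes in one paragraph.")


if __name__ == "__main__":
    asyncio.run(main())
```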

Closes #234, #186

Full Changelog: v0.2.34...v0.2.35

v0.2.34

22 Jun 11:39


Usage and Context Window Support

Last-turn and cumulative usage is now available via the UsageAccumulator exposed on the Agent interface (usage_tracking). This also contains the provider-API-specific usage information on a per-turn basis.
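
A rough sketch of reading the accumulator after a turn; the usage_accumulator attribute and what it prints are assumptions for illustration, so see the usage_tracking documentation for the exact interface.

```python
# Sketch only: the usage_accumulator attribute is assumed, not confirmed API.
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("usage example")


@fast.agent("helper", instruction="You are a helpful AI Agent")
async def main():
    async with fast.run() as agent:
        await agent.helper("Hello!")

        # Hypothetical accessor for the UsageAccumulator described above:
        # last-turn and cumulative totals, plus provider-specific raw usage.
        usage = agent.helper.usage_accumulator
        if usage:
            print(usage)


if __name__ == "__main__":
    asyncio.run(main())
```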

Added support for reporting context window usage as a percentage for known models, including tokenizable content support in preparation for improved multi-modality support.

The /usage command is available in interactive mode to show the current agent's token usage.

v0.2.33

19 Jun 19:46


What's Changed

  • fix last message is assistant handling for structured by @evalstate in #238
  • add env var support to the config yaml file by @hevangel in #239
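
For the env var support added in #239 above, a hypothetical fastagent.config.yaml fragment is shown below; the ${VAR} interpolation syntax is an assumption for illustration only, so check the PR for the exact form.

```yaml
# fastagent.config.yaml (hypothetical; the ${VAR} syntax is assumed, not confirmed)
openai:
  api_key: "${OPENAI_API_KEY}"

anthropic:
  api_key: "${ANTHROPIC_API_KEY}"
```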

New Contributors

Full Changelog: v0.2.31...v0.2.33

v0.2.31

12 Jun 09:41


Changes

v0.2.30

09 Jun 06:27


What's Changed

HF_TOKEN mode

The HF_TOKEN environment variable is used when accessing HuggingFace-hosted MCP Servers, either at hf.co/mcp or .hf.spaces. It can be overridden with Auth headers or fastagent.config.yaml.
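
As a rough illustration of the override path (the server name and token are placeholders, and the transport/headers keys are assumptions to verify against the configuration reference):

```yaml
# fastagent.config.yaml (illustrative fragment; keys assumed, not confirmed)
mcp:
  servers:
    hf:
      transport: "http"                 # assumed value for a remote HTTP server
      url: "https://hf.co/mcp"
      headers:
        Authorization: "Bearer hf_xxx"  # placeholder; overrides the HF_TOKEN default
```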

Use external prompt editor with Ctrl+E

Use Ctrl+E to open an external editor for prompt editing. See #218 for more details.

Aliyun support

Added support for Aliyun Bailian, which provides APIs for the Qwen series of models, widely used across mainland China. @yeahdongcn informs me that "Qwen3 is gaining a lot of popularity, and Aliyun is currently offering free tokens for developers"!
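
If you want to try it, the configuration might look roughly like this; the aliyun section name and the model identifier format are assumptions, so check the provider documentation for the exact keys.

```yaml
# fastagent.config.yaml (illustrative; provider key and model string are assumed)
aliyun:
  api_key: "your-dashscope-api-key"   # placeholder

default_model: "aliyun.qwen-plus"     # assumed model string format
```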

Other fixes

New Contributors

Full Changelog: v0.2.29...v0.2.30

v0.2.29

07 Jun 10:29


Changes

  • Add HF_TOKEN mode @evalstate (#223)
  • Support Deepseek json_format @Zorro30 (#182)
  • fix(#208): Fix wrong prompt description displayed in the [Available MCP Prompts] table @codeboyzhou (#209)
  • Made sure that empty content.parts will not anymore cause bugs when u… @janspoerer (#220)
  • adding deprecated to fix missing dependency error @jjwall (#216)
  • added slow llm to test parallel sampling @wreed4 (#197)
  • ensure tool names are < 64 characters
  • fix opentelemetry (the MCP instrumentation isn't compatible with 1.9)
  • Update MCP package to 1.9.3

New Contributors

Full Changelog: v0.2.28...v0.2.29

v0.2.28

30 May 17:41


Release 0.2.28

Gemini Native Support

This release switches to the native Google API as the default for Google models. Thanks to @monotykamary and @janspoerer for this work 🍾. If you run into issues, the old provider is accessible as googleoai, but you will need to update your API key to match.

Autosampling

Servers are now offered the Sampling capability by default, provided by the Agent's model (or the system default if not specified). Set auto_sampling: false in the configuration file (fastagent.config.yaml) to switch this behaviour off.
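
For example (assuming auto_sampling is a top-level key in fastagent.config.yaml):

```yaml
# fastagent.config.yaml (assumed top-level setting)
auto_sampling: false
```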

Other Changes

  • added slow llm to test parallel sampling @wreed4 (#197)
  • fix(177): Fix prompt listing across agents @wreed4 (#178)

New Contributors

Full Changelog: v0.2.27...v0.2.28

v0.2.27

27 May 21:16


What's Changed

  • Update to MCP 1.9.1 -> fixes #187
  • Implementation details on init are now correct

Full Changelog: v0.2.25...v0.2.27

v0.2.25

14 May 21:31


What's Changed

  • Fix: Remove parallel_tool_calls from OpenAI model provider for 'o' model compatibility by @kikiya in #164
  • feat: Add Azure OpenAI Service Support to FastAgent by @pablotoledo in #160

New Contributors

Full Changelog: v0.2.24...v0.2.25