Releases: u1and0/gpt-cli

Release v0.9.9

17 Aug 06:57

Release Notes: v0.9.9

Feature Updates

  • Introduced a new provider: OpenRouter.
  • Added a short help message via the -h option.
  • Added a long help message via the --help option.

Release v0.9.8

30 Jul 22:50

Release Notes: v0.9.7

Key Changes

  • Style and Formatting: Improved code style and formatting across the codebase.
  • Test and Documentation Updates:
    • Fixed timeout error messages.
    • Updated default model to gpt-4.1-mini and increased temperature and token limits.
  • Documentation: Enhanced README and command help sections for better user guidance.
  • Feature Updates:
    • Introduced gpt-4.1-mini-2025-04-14 as the default model with a maximum of 32k tokens.
    • Added support for a timeout option.

Release release/v0.9.9

17 Aug 06:48

Pre-release

Release notes identical to the v0.9.8 entry above.

Release main

30 Jul 22:45

Pre-release

Release notes identical to the v0.9.9 entry above.

Release v0.9.7

24 May 01:05

Release Notes: v0.9.7

Key Changes

  • Style and Formatting: Improved code style and formatting across the codebase.
  • Test and Documentation Updates:
    • Fixed timeout error messages.
    • Updated default model to gpt-4.1-mini and increased temperature and token limits.
  • Documentation: Enhanced README and command help sections for better user guidance.
  • Feature Updates:
    • Introduced gpt-4.1-mini-2025-04-14 as the default model with a maximum of 32k tokens.
    • Added support for a timeout option.

Release v0.9.6

05 May 03:49

v0.9.6 Release Notes

This update brings significant improvements to the project's integration with Hugging Face, along with several quality-of-life enhancements and bug fixes. Users are encouraged to review the detailed commit history for more insights into specific changes and updates.

Changes

  • [chore]: Added @ts-ignore annotations for type checks to keep CI/CD running smoothly.
  • [chore]: Updated GitHub Actions to include tsconfig.json and deno.json for better configuration management.
  • [chore]: Ignored the tsconfig option for deno check on GitHub Actions to avoid conflicts.
  • [style]: Removed unused commented-out code for cleanliness.
  • [doc]: Updated version information for transparency and tracking.
  • [style]: Set permissions on tools/*.sh scripts to mode 755 so they are executable.

Features and Fixes

  • [refactor]: Refactored the BaseMessage import to use LangChain messages, improving modularity.
  • [feat]: Introduced toRoleContent(BaseMessage) => MessageFieldWithRole for improved message handling.
  • [feat]: Integrated a Hugging Face stream generator.
  • [test]: Refactored and expanded tests for formatHuggingFacePrompt() for reliability.
  • [fix]: Fixed issues with Hugging Face streaming to improve performance and stability.
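
The toRoleContent conversion above can be sketched roughly as follows. The type names mirror LangChain's, but the simplified local definitions and the role mapping here are illustrative assumptions, not the project's actual code:

```typescript
// Minimal stand-ins for LangChain's message types (illustrative only;
// the real project imports these from LangChain).
interface BaseMessage {
  _getType(): string; // e.g. "human" | "ai" | "system"
  content: string;
}

interface MessageFieldWithRole {
  role: string;
  content: string;
}

// Sketch of toRoleContent: map a LangChain-style message to a plain
// { role, content } record usable by chat-completion APIs.
function toRoleContent(message: BaseMessage): MessageFieldWithRole {
  const roleMap: Record<string, string> = {
    human: "user",
    ai: "assistant",
    system: "system",
  };
  const role = roleMap[message._getType()] ?? "user";
  return { role, content: message.content };
}
```

A flat role/content record like this is the common denominator across provider APIs, which is why a single conversion helper simplifies multi-provider support.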

Hotfixes

  • [fix]: Resolved the issue of "zip in zip" to prevent file corruption or errors during packaging.
  • [chore]: Implemented measures to remove release-package directories to clean up the environment.

Release release/v0.9.7

24 May 00:35

Pre-release

Unreleased

  • [feat]: Added support for OLLAMA_URL environment variable as an alternative to the --url option for Ollama model connections.
  • [feat]: Set default Ollama URL to http://localhost:11434 when not specified via command line or environment variable.
  • [deprecation]: Marked the --url option as deprecated in favor of using the OLLAMA_URL environment variable.
  • [feat]: Added --timeout / -o option to customize the timeout duration in seconds for waiting for AI responses (default 30s).
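
A quick sketch of how the Ollama settings above fit together. The binary name gpt and the prompt text are assumptions here; the environment variable, default URL, and --timeout option come from the notes above:

```shell
# Point gpt-cli at an Ollama server via the environment
# (preferred over the deprecated --url option).
export OLLAMA_URL="http://localhost:11434"   # also the built-in default

# Example invocation (binary name assumed), waiting up to 60 seconds
# instead of the default 30:
#   gpt --timeout 60 "Summarize this repository"
```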

Release notes identical to the v0.9.6 entry above.

Release v0.9.5

26 Mar 23:02

v0.9.5 Release Notes

New Features:

  • [feat] New LLM Google Gemma: Added support for the Google Gemma LLM.
  • [feat] Code execution: Implemented code execution with the following features:
    • [feat] Code-block extraction: Extracts code blocks from responses using grep and sed.
    • [feat] tools/safe_execution.sh: Added safe_execution.sh to run extracted code blocks; the script prompts the user for permission before executing, making code execution safer.
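
A minimal sketch of the grep/sed-style extraction described above (illustrative only; the project's actual pipeline may differ): print the body of fenced code blocks from a model reply on stdin.

```shell
# Extract fenced code blocks: sed -n prints the lines between
# triple-backtick fences (inclusive), then a second sed drops the
# fence lines themselves, leaving only the code.
extract_code() {
  sed -n '/^```/,/^```/p' | sed '/^```/d'
}

# Demo input: a reply containing one fenced block.
fence='```'
reply="Here is a script:
$fence
echo hello
$fence
Run it carefully."

printf '%s\n' "$reply" | extract_code
```

Piping the extracted code into an interactive confirmation script (as safe_execution.sh does) keeps a human in the loop before anything runs.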

Refactor:

  • [refactor] Shortened code assuming piped input: Refactored and shortened the code on the assumption that input arrives via a pipe.

Documentation:

  • [doc] version info, new model 0day @README: Updated version information and added details about the new model to the README.
  • [docs] fix: Fixed documentation issues.
  • [docs] count s experiment: Experimented with counting the letter "s" in the documentation.
  • [docs] Add result to system prompt: Added guidance on including results in the system prompt and removed specific model names.
  • [docs] Code execution: Documented the code execution feature.

Release release/v0.9.6

05 May 03:27

Pre-release

Release notes identical to the v0.9.5 entry above.

Release hotfix/v0.9.6

04 Apr 14:15

Pre-release

Release notes identical to the v0.9.5 entry above.