kusl/GeminiClient

🤖 Gemini Client Console

A powerful, interactive command-line client for Google's Gemini AI API featuring multi-turn conversations, real-time streaming, dynamic model selection, XDG-compliant conversation logging, and detailed performance metrics.

🔑 Quick Start - API Key Required!

⚠️ IMPORTANT: You need a Google Gemini API key to use this application!

Getting Your API Key

  1. Get a FREE API key from Google AI Studio: https://aistudio.google.com/apikey
  2. Click "Get API Key" and follow the instructions.
  3. Copy your API key (starts with AIza...).

Setting Your API Key

The application supports multiple configuration methods (in priority order):

  1. User Secrets (Recommended for development):

    dotnet user-secrets set "GeminiSettings:ApiKey" "YOUR_API_KEY"
  2. Environment Variables:

    export GeminiSettings__ApiKey="YOUR_API_KEY"
  3. appsettings.json in the executable directory:

    {
      "GeminiSettings": {
        "ApiKey": "YOUR_API_KEY_HERE",
        "BaseUrl": "https://generativelanguage.googleapis.com/",
        "DefaultModel": "gemini-2.5-flash"
      }
    }
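
The priority order above can be illustrated with a small sketch (Python here for brevity; the actual client uses .NET's configuration providers, so this is an illustration, not the client's code). Note how the double underscore in the environment variable name `GeminiSettings__ApiKey` maps to the `GeminiSettings:ApiKey` setting:

```python
import json
import os


def resolve_api_key(user_secret=None, settings_path="appsettings.json"):
    """Resolve the API key in the same priority order as the client:
    user secrets, then environment variables, then appsettings.json."""
    if user_secret:  # 1. user secrets (highest priority)
        return user_secret
    env_key = os.environ.get("GeminiSettings__ApiKey")
    if env_key:  # 2. environment variable ('__' maps to ':' in .NET config)
        return env_key
    try:  # 3. appsettings.json in the executable directory
        with open(settings_path) as f:
            return json.load(f)["GeminiSettings"]["ApiKey"]
    except (OSError, KeyError, json.JSONDecodeError):
        return None
```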

📥 Installation

Download Pre-built Binaries

Download the latest release for your platform from the Releases page.

Platform   Download                         Architecture
Windows    gemini-client-win-x64.zip        64-bit Intel/AMD
Linux      gemini-client-linux-x64.tar.gz   64-bit Intel/AMD

Note: Self-contained binaries include the .NET 10 runtime, so no separate .NET installation is required.

Linux One-Liner Install

curl -fsSL https://raw.githubusercontent.com/kusl/GeminiClient/main/install-gemini-client.sh | bash

🚀 Features

💬 Multi-Turn Conversations

Engage in stateful, context-aware conversations. The client remembers your previous exchanges within a session, allowing for natural follow-up questions and iterative discussions. Use the reset command to start fresh.
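
Multi-turn context in the Gemini API is carried by resending the accumulated conversation history with each request. A minimal Python sketch of how such a history might be maintained (the request shape follows the public `generateContent` schema; this is an illustration, not the client's actual code):

```python
def add_turn(history, role, text):
    """Append one turn in the Gemini generateContent 'contents' shape."""
    history.append({"role": role, "parts": [{"text": text}]})
    return history


def reset(history):
    """The 'reset' command simply clears the accumulated context."""
    history.clear()
    return history


history = []
add_turn(history, "user", "What is SSE?")
add_turn(history, "model", "Server-Sent Events is a one-way streaming protocol.")
add_turn(history, "user", "How does it differ from WebSockets?")  # follow-up keeps context
```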

🌊 Real-time Streaming

  • SSE Support: True real-time communication with the Gemini API using Server-Sent Events.
  • Performance Optimizations: Configured with Server GC and Concurrent GC for high-throughput response handling.
  • Live Metrics: Monitor token speed (tokens/s) and first-response latency in real-time.
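
With SSE, the API emits `data: {...}` lines as the response is generated. A hedged sketch of parsing such a stream and deriving a rough tokens/s figure (the chunk fields follow the public API's `candidates`/`usageMetadata` shape; the helper names are illustrative):

```python
import json
import time


def parse_sse_lines(lines):
    """Yield decoded JSON payloads from 'data: ...' SSE lines."""
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])


def collect_stream(lines, clock=time.monotonic):
    """Accumulate streamed text and compute a rough tokens/s metric."""
    start = clock()
    text, tokens = "", 0
    for chunk in parse_sse_lines(lines):
        text += chunk["candidates"][0]["content"]["parts"][0]["text"]
        # usageMetadata arrives on later chunks; keep the last value seen
        tokens = chunk.get("usageMetadata", {}).get("candidatesTokenCount", tokens)
    elapsed = max(clock() - start, 1e-9)
    return text, tokens / elapsed
```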

🤖 Dynamic Model Selection

  • Live Discovery: Fetches available models directly from the Gemini API at startup.
  • Smart Fallbacks: Gracefully handles API errors with a curated fallback list.
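
The fallback behaviour can be sketched as: try the live models listing, and fall back to a curated list on any error or empty response (the model names and the `fetch` callable below are illustrative stand-ins, not necessarily the client's curated set):

```python
# Assumed fallback list -- illustrative, not necessarily the client's curated set.
FALLBACK_MODELS = ["gemini-2.5-flash", "gemini-2.5-pro"]


def available_models(fetch):
    """Return live models from the API, or the curated fallback on error."""
    try:
        models = fetch()  # e.g. a GET against the API's model-listing endpoint
        return models if models else FALLBACK_MODELS
    except Exception:
        return FALLBACK_MODELS
```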

📝 Conversation Logging

All prompts, responses, and session statistics are automatically logged to text files for review and debugging.

  • Linux: ~/.local/share/gemini-client/logs/ (XDG compliant)
  • macOS: ~/Library/Application Support/GeminiClient/logs/
  • Windows: %LOCALAPPDATA%\GeminiClient\logs\
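
On Linux, XDG compliance means honouring `$XDG_DATA_HOME` before falling back to `~/.local/share`. A sketch of the per-platform path logic (directory names taken from the list above; the function itself is illustrative):

```python
import os
from pathlib import Path


def log_dir(platform):
    """Return the conversation-log directory for a platform name."""
    home = Path.home()
    if platform == "linux":
        # XDG: honour XDG_DATA_HOME, default to ~/.local/share
        data = os.environ.get("XDG_DATA_HOME") or home / ".local" / "share"
        return Path(data) / "gemini-client" / "logs"
    if platform == "darwin":
        return home / "Library" / "Application Support" / "GeminiClient" / "logs"
    if platform == "windows":
        local = os.environ.get("LOCALAPPDATA") or home / "AppData" / "Local"
        return Path(local) / "GeminiClient" / "logs"
    raise ValueError(f"unknown platform: {platform}")
```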

💻 Usage

Available Commands

Command   Description
exit      Quit the application and display session stats
reset     Clear conversation context and start fresh
model     Change the selected AI model
stats     View detailed session statistics
log       Open the log folder in your file manager
stream    Toggle streaming mode ON/OFF
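
A REPL loop for these commands might dispatch on the trimmed input before treating the line as a prompt. A sketch (the handler names and return strings are hypothetical, not the client's actual behaviour):

```python
def dispatch(line, handlers, send_prompt):
    """Route a line of input: known command -> handler, else -> the model."""
    cmd = line.strip().lower()
    if cmd in handlers:
        return handlers[cmd]()
    return send_prompt(line)


# Hypothetical handlers standing in for the real command implementations.
handlers = {
    "reset": lambda: "context cleared",
    "stats": lambda: "session stats",
    "stream": lambda: "streaming toggled",
}
```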

Building from Source

Prerequisites: .NET 10.0 SDK

# Clone the repository
git clone https://github.com/kusl/GeminiClient.git
cd GeminiClient

# Build the project
dotnet build

# Run the console app
dotnet run --project GeminiClientConsole

🛠️ Project Structure

  • GeminiClient/: Core library with multi-turn API support and SSE streaming.
  • GeminiClientConsole/: Interactive CLI with conversation state management, animated model selection, and XDG-compliant logging.
  • Directory.Build.props: Centralized versioning and build optimizations.
  • Directory.Packages.props: Central Package Management for all NuGet dependencies.

📜 License

This project is licensed under the AGPL-3.0-or-later.


Made with ❤️ using .NET 10, Google Gemini AI, and Server-Sent Events

Star this repo if you find it useful!


🔄 Version History

  • v0.0.8 (Current) - Added multi-turn conversation support, reset command, log command, and XDG-compliant conversation logging.
  • v0.0.7 - Upgraded to .NET 10.0, implemented repository-wide performance optimizations for streaming, and centralized versioning.
  • v0.0.6 - Added real-time streaming support with SSE.
  • v0.0.5 - Improved terminal compatibility by removing destructive console clears.
  • v0.0.4 - Initial interactive console client with dynamic model discovery.

Notice: This project contains code generated by Large Language Models such as Claude and Gemini. All code is experimental whether explicitly stated or not. The streaming implementation uses Server-Sent Events (SSE) for real-time communication with the Gemini API.
