A powerful, interactive command-line client for Google's Gemini AI API featuring multi-turn conversations, real-time streaming, dynamic model selection, XDG-compliant conversation logging, and detailed performance metrics.
⚠️ IMPORTANT: You need a Google Gemini API key to use this application!
- Get a FREE API key from Google AI Studio: https://aistudio.google.com/apikey
- Click "Get API Key" and follow the instructions.
- Copy your API key (it starts with `AIza...`).
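As a quick sanity check before configuring the key, you can verify its prefix (the value below is an illustrative placeholder, not a real key):

```shell
# Gemini API keys begin with "AIza"; warn early if yours does not.
API_KEY="AIzaSyExamplePlaceholderKey"   # illustrative placeholder only
case "$API_KEY" in
  AIza*) echo "Key prefix looks correct" ;;
  *)     echo "Unexpected key prefix - double-check your copy/paste" ;;
esac
```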
The application supports multiple configuration methods (in priority order):

- **User Secrets** (recommended for development):

  ```bash
  dotnet user-secrets set "GeminiSettings:ApiKey" "YOUR_API_KEY"
  ```

- **Environment variables**:

  ```bash
  export GeminiSettings__ApiKey="YOUR_API_KEY"
  ```

- **`appsettings.json`** in the executable directory:

  ```json
  {
    "GeminiSettings": {
      "ApiKey": "YOUR_API_KEY_HERE",
      "BaseUrl": "https://generativelanguage.googleapis.com/",
      "DefaultModel": "gemini-2.5-flash"
    }
  }
  ```
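For the environment-variable method, the double underscore in `GeminiSettings__ApiKey` is how .NET maps a flat variable name onto the nested `GeminiSettings:ApiKey` configuration key. A minimal sketch (the key value is a placeholder):

```shell
# The "__" separator maps to ":" in .NET configuration, so this variable
# corresponds to the nested key GeminiSettings:ApiKey.
export GeminiSettings__ApiKey="YOUR_API_KEY"

# Confirm the variable is exported and visible to child processes:
sh -c 'echo "ApiKey configured: ${GeminiSettings__ApiKey:+yes}"'
```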
Download the latest release for your platform from the Releases page.
| Platform | Download | Architecture |
|---|---|---|
| Windows | `gemini-client-win-x64.zip` | 64-bit Intel/AMD |
| Linux | `gemini-client-linux-x64.tar.gz` | 64-bit Intel/AMD |
Note: Self-contained binaries include the .NET 10 runtime. No separate installation required.
```bash
curl -fsSL https://raw.githubusercontent.com/kusl/GeminiClient/main/install-gemini-client.sh | bash
```

Engage in stateful, context-aware conversations. The client remembers your previous exchanges within a session, allowing for natural follow-up questions and iterative discussions. Use the `reset` command to start fresh.
- SSE Support: True real-time communication with the Gemini API using Server-Sent Events.
- Performance Optimizations: Configured with Server GC and Concurrent GC for high-throughput response handling.
- Live Metrics: Monitor token speed (tokens/s) and first-response latency in real-time.
- Live Discovery: Fetches available models directly from the Gemini API at startup.
- Smart Fallbacks: Gracefully handles API errors with a curated fallback list.
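To illustrate the streaming mechanics: a streamed Gemini response arrives as Server-Sent Events, each a `data:` line carrying a JSON chunk. This sketch strips the SSE framing from one such line (the JSON payload follows the shape of the public Gemini API, but treat the exact fields here as illustrative):

```shell
# One SSE event line as a streaming endpoint emits it (illustrative payload).
line='data: {"candidates":[{"content":{"parts":[{"text":"Hello"}]}}]}'

# Strip the "data: " prefix to recover the JSON chunk for parsing.
payload="${line#data: }"
echo "$payload"
```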
All prompts, responses, and session statistics are automatically logged to text files for review and debugging.
- Linux: `~/.local/share/gemini-client/logs/` (XDG compliant)
- macOS: `~/Library/Application Support/GeminiClient/logs/`
- Windows: `%LOCALAPPDATA%\GeminiClient\logs\`
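A quick way to review the most recent session on Linux: the sketch below creates the directory and writes a sample log purely for illustration; real log filenames may differ.

```shell
# Resolve the XDG-compliant log directory used on Linux.
LOG_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/gemini-client/logs"
mkdir -p "$LOG_DIR"                                      # demo only
printf 'prompt: hello\n' > "$LOG_DIR/demo-session.txt"   # stand-in log file

# Print the last line of the newest log file.
newest="$(ls -t "$LOG_DIR"/* | head -n 1)"
tail -n 1 "$newest"
```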
| Command | Description |
|---|---|
| `exit` | Quit the application and display session stats |
| `reset` | Clear conversation context and start fresh |
| `model` | Change the selected AI model |
| `stats` | View detailed session statistics |
| `log` | Open the log folder in your file manager |
| `stream` | Toggle streaming mode ON/OFF |
Prerequisites: .NET 10.0 SDK
```bash
# Clone the repository
git clone https://github.com/kusl/GeminiClient.git
cd GeminiClient

# Build the project
dotnet build

# Run the console app
dotnet run --project GeminiClientConsole
```

- GeminiClient/: Core library with multi-turn API support and SSE streaming.
- GeminiClientConsole/: Interactive CLI with conversation state management, animated model selection, and XDG-compliant logging.
- Directory.Build.props: Centralized versioning and build optimizations.
- Directory.Packages.props: Central Package Management for all NuGet dependencies.
This project is licensed under the AGPL-3.0-or-later.
Made with ❤️ using .NET 10, Google Gemini AI, and Server-Sent Events
⭐ Star this repo if you find it useful!
- v0.0.8 (Current) - Added multi-turn conversation support, the `reset` and `log` commands, and XDG-compliant conversation logging.
- v0.0.7 - Upgraded to .NET 10.0, implemented repository-wide performance optimizations for streaming, and centralized versioning.
- v0.0.6 - Added real-time streaming support with SSE.
- v0.0.5 - Improved terminal compatibility by removing destructive console clears.
- v0.0.4 - Initial interactive console client with dynamic model discovery.
Notice: This project contains code generated by Large Language Models such as Claude and Gemini. All code is experimental whether explicitly stated or not. The streaming implementation uses Server-Sent Events (SSE) for real-time communication with the Gemini API.