Eidos is an AI-powered command-line interface that brings natural language processing to your Linux terminal. Built with Rust, Eidos leverages large language models to translate natural language into safe shell commands, provide intelligent chat assistance, and offer multi-language translation.
Project Status: Beta - core functionality is complete, with comprehensive testing and documentation. Performance optimizations (model caching, shared runtime) are implemented.
- Translate English descriptions into shell commands
- Support for ONNX models (T5, BART, GPT-2) via tract
- Support for quantized models (LLaMA, Mistral) via candle/GGUF
- Intelligent command validation that detects 60+ dangerous patterns
- Multi-provider support: OpenAI, Ollama, custom APIs
- Conversation history with auto-pruning
- Async/sync runtime support
- Configurable via environment variables
- Auto-detect 75+ languages with lingua
- Translate to/from any supported language
- LibreTranslate API integration
- Offline language detection
- Whitelist-based command validation
- Shell injection prevention
- Path traversal protection
- No automatic command execution
- Comprehensive security testing (7 dedicated tests)
- Docker support with multi-stage builds
- Interactive installation script
- Makefile for common tasks
- Pre-built binary support
- Systemd and Kubernetes deployment examples
```bash
# Quick install (script)
curl -sSf https://raw.githubusercontent.com/Ru1vly/eidos/main/install.sh | bash
```

```bash
# Or run via Docker
docker pull eidos:latest
docker run --rm eidos chat "Hello, world!"
```

```bash
# Or build from source
git clone https://github.com/Ru1vly/eidos
cd eidos
make build-release
make install
```

```bash
# Set up model paths
export EIDOS_MODEL_PATH=/path/to/model.onnx
export EIDOS_TOKENIZER_PATH=/path/to/tokenizer.json
# Generate commands from natural language
eidos core "list all files"
# Output: ls -la
eidos core "find Python files in current directory"
# Output: find . -name '*.py'
eidos core "show disk usage"
# Output: df -h
```

```bash
# Configure API
export OPENAI_API_KEY=sk-...
# or
export OLLAMA_HOST=http://localhost:11434
# Start chatting
eidos chat "Explain how grep works"
eidos chat "What is the difference between cat and less?"eidos translate "Bonjour le monde"
# Detected language: fr
# Translated (en): Hello world
eidos translate "Hola, ΒΏcΓ³mo estΓ‘s?"
# Detected language: es
# Translated (en): Hello, how are you?
```
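The detection step runs offline via the lingua crate. As a rough illustration of how that detection works on its own (the restricted language set below is only for brevity, and this is not Eidos's actual code):

```rust
use lingua::{Language, LanguageDetector, LanguageDetectorBuilder};
use lingua::Language::{English, French, German, Spanish};

fn main() {
    // A handful of languages keeps the example small; Eidos advertises
    // auto-detection across 75+ languages.
    let languages = vec![English, French, German, Spanish];
    let detector: LanguageDetector = LanguageDetectorBuilder::from_languages(&languages).build();

    let detected: Option<Language> = detector.detect_language_of("Bonjour le monde");
    println!("{:?}", detected); // Some(French)
}
```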
Eidos follows a modular design with clear separation of concerns:

```
┌─────────────────────────────────────┐
│            CLI Interface            │
│            (src/main.rs)            │
└──────────────────┬──────────────────┘
                   │
                   ▼
┌─────────────────────────────────────┐
│           Request Router            │
│            (lib_bridge)             │
└───┬───────────┬───────────┬─────────┘
    │           │           │
    ▼           ▼           ▼
┌─────────┐ ┌─────────┐ ┌───────────┐
│lib_core │ │lib_chat │ │lib_       │
│         │ │         │ │translate  │
│ONNX/    │ │OpenAI/  │ │Lingua/    │
│GGUF     │ │Ollama   │ │Libre      │
└─────────┘ └─────────┘ └───────────┘
```
- lib_core: Command generation with ONNX/GGUF model support
- lib_chat: Multi-provider LLM API integration
- lib_translate: Language detection and translation
- lib_bridge: Dynamic request routing system (see the sketch below)
- src/: CLI interface, configuration, error handling
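To make the routing idea concrete, here is a minimal, hypothetical sketch of how a router like lib_bridge could dispatch a parsed CLI subcommand to the right library; the actual types and function names in Eidos may differ, and the handlers below are stand-ins.

```rust
// Hypothetical request routing sketch; not Eidos's real lib_bridge API.
enum Request {
    Core { prompt: String },      // natural language -> shell command
    Chat { message: String },     // conversational LLM query
    Translate { text: String },   // language detection + translation
}

fn route(request: Request) -> Result<String, String> {
    match request {
        // Each arm would call into the corresponding crate
        // (lib_core, lib_chat, lib_translate).
        Request::Core { prompt } => generate_command(&prompt),
        Request::Chat { message } => chat_completion(&message),
        Request::Translate { text } => translate_text(&text),
    }
}

// Stand-in handlers so the sketch compiles; the real entry points differ.
fn generate_command(prompt: &str) -> Result<String, String> {
    Ok(format!("ls -la  # placeholder result for: {prompt}"))
}
fn chat_completion(message: &str) -> Result<String, String> {
    Ok(format!("(chat reply to: {message})"))
}
fn translate_text(text: &str) -> Result<String, String> {
    Ok(format!("(translation of: {text})"))
}

fn main() {
    let output = route(Request::Core { prompt: "list all files".into() });
    println!("{}", output.unwrap());
}
```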
See docs/ARCHITECTURE.md for detailed design documentation.
- Installation & Deployment - Complete deployment guide
- Architecture - System design and components
- Model Training - Train and deploy custom models
- API Reference - Programmatic usage
- Contributing - Development guidelines
Eidos reads its configuration from the following sources:

- Environment Variables

  ```bash
  export EIDOS_MODEL_PATH=/path/to/model.onnx
  export EIDOS_TOKENIZER_PATH=/path/to/tokenizer.json
  export OPENAI_API_KEY=sk-...
  ```

- Local Config (./eidos.toml)

  ```toml
  model_path = "model.onnx"
  tokenizer_path = "tokenizer.json"
  ```

- User Config (~/.config/eidos/eidos.toml)

- Built-in Defaults
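As an illustration of how a layered lookup like this can work, here is a minimal sketch with hypothetical helper names; it assumes the conventional precedence implied by the list order (environment variables over ./eidos.toml, over the user config, over built-in defaults), which may not match Eidos's loader exactly.

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

/// Return the model path from the first source that provides one.
/// Sketch only: Eidos's real config loader may behave differently.
fn resolve_model_path() -> String {
    // 1. Environment variable
    if let Ok(path) = env::var("EIDOS_MODEL_PATH") {
        return path;
    }
    // 2. Local config, then 3. user config
    let candidates = [
        PathBuf::from("./eidos.toml"),
        config_dir().join("eidos/eidos.toml"),
    ];
    for file in candidates {
        if let Ok(text) = fs::read_to_string(&file) {
            // Naive scan for a `model_path = "..."` line; a real loader
            // would use a proper TOML parser.
            if let Some(value) = text
                .lines()
                .find_map(|l| l.trim().strip_prefix("model_path"))
                .and_then(|rest| rest.split('"').nth(1))
            {
                return value.to_string();
            }
        }
    }
    // 4. Built-in default
    "model.onnx".to_string()
}

// Minimal stand-in for a config-directory lookup.
fn config_dir() -> PathBuf {
    env::var("XDG_CONFIG_HOME")
        .map(PathBuf::from)
        .unwrap_or_else(|_| {
            PathBuf::from(format!("{}/.config", env::var("HOME").unwrap_or_default()))
        })
}

fn main() {
    println!("model path: {}", resolve_model_path());
}
```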
Comprehensive test suite with 38 tests passing:
```bash
# Run all tests
cargo test --all

# Run specific test suite
cargo test -p lib_core
cargo test --test integration_tests

# Run with coverage
make test

# Run benchmarks
cargo bench
```

- Unit Tests (29): Core logic, routing, API integration
- Integration Tests (9): End-to-end CLI workflows
- Security Tests (7): Command validation and injection prevention
- Benchmark Suite: Performance testing for inference
Eidos implements defense-in-depth security:
- Length limits and encoding checks
- Empty input rejection
- 60+ dangerous pattern detection
- Shell metacharacter blocking
- Path traversal prevention
- Never executes commands automatically
- Display-only mode
- User reviews all output
Examples of blocked patterns:

```
rm -rf, dd if=, mkfs, chmod 777, curl | sh,
>, |, &, ;, $( ), ` `, ../,
~/.ssh/, /dev/, /proc/, fork bombs, etc.
```
See lib_core/tests/command_validation_tests.rs for the complete test suite.
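For a rough sense of how pattern-based validation like this can be implemented, here is a minimal, hypothetical sketch; the pattern list is a truncated subset and the function name is illustrative, not Eidos's actual API.

```rust
/// Reject a generated command if it matches a known dangerous pattern.
/// Sketch only: Eidos's real validator checks 60+ patterns plus length,
/// encoding, and whitelist rules.
fn validate_command(command: &str) -> Result<(), String> {
    if command.trim().is_empty() {
        return Err("empty command rejected".into());
    }
    // A tiny, illustrative subset of the blocked patterns.
    const DANGEROUS: &[&str] = &[
        "rm -rf", "dd if=", "mkfs", "chmod 777", "curl | sh",
        "$(", "`", "../", "~/.ssh/", "/dev/", ":(){",
    ];
    for pattern in DANGEROUS {
        if command.contains(pattern) {
            return Err(format!("blocked: contains dangerous pattern '{pattern}'"));
        }
    }
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_recursive_delete() {
        assert!(validate_command("rm -rf /").is_err());
    }

    #[test]
    fn accepts_simple_listing() {
        assert!(validate_command("ls -la").is_ok());
    }
}
```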
Eidos supports custom model training:
```bash
# 1. Prepare training data (JSONL format)
cat > training_data.jsonl <<EOF
{"prompt": "list all files", "command": "ls -la"}
{"prompt": "show current directory", "command": "pwd"}
EOF

# 2. Train model
./scripts/train_model.py training_data.jsonl -o ./my-model

# 3. Validate
./scripts/validate_model.py ./my-model/final_model test_data.jsonl

# 4. Convert to ONNX
./scripts/convert_to_onnx.py ./my-model/final_model -o model.onnx

# 5. Use with Eidos
eidos core "list files"
```

See docs/MODEL_GUIDE.md for the comprehensive training guide.
100+ example command pairs are provided in datasets/example_commands.jsonl:

```jsonl
{"prompt": "list all files", "command": "ls -la"}
{"prompt": "find Python files", "command": "find . -name '*.py'"}
{"prompt": "count lines in file.txt", "command": "wc -l file.txt"}# Build image
docker build -t eidos:latest .
# Run command
docker run --rm \
  -v $(pwd)/models:/home/eidos/models:ro \
  eidos core "list files"
```

```bash
# Start services
docker-compose up -d
# Run command
docker-compose run eidos chat "Hello"
# With Ollama
docker-compose --profile with-ollama up -d
```

See docs/DEPLOYMENT.md for production deployment guides.
```bash
# Clone and enter directory
git clone https://github.com/Ru1vly/eidos
cd eidos
# Install development tools
make dev-setup
# Build
make build
# Run tests
make test
# Format and lint
make check-all
```

```
eidos/
├── src/            # Main binary (CLI, config, errors)
├── lib_core/       # Command generation (ONNX/GGUF)
├── lib_chat/       # Chat API integration
├── lib_translate/  # Translation service
├── lib_bridge/     # Request routing
├── tests/          # Integration tests
├── benches/        # Performance benchmarks
├── docs/           # Documentation
├── scripts/        # Training/validation scripts
└── datasets/       # Example training data
```
We welcome contributions! See CONTRIBUTING.md for guidelines.
Good First Issues:
- Check issues labeled "good first issue"
- Add more training examples
- Improve documentation
- Add test coverage
- Core command generation (ONNX/GGUF)
- Multi-provider chat integration
- Language detection and translation
- Comprehensive security validation
- Full test suite (38 tests)
- Docker deployment
- Installation scripts
- Complete documentation
- Streaming responses
- Plugin system for custom handlers
- Conversation history persistence
- Web interface (optional GUI)
- Pre-trained model releases
- Multi-architecture binaries
Performance characteristics (T5-small on CPU):
- Inference: ~100-500ms per command
- Memory: ~200MB (model) + ~50MB (runtime)
- Startup: <100ms
- Language Detection: ~10-50ms
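Each benchmark in the suite can be structured with Criterion; the following is a minimal, hypothetical sketch in which the benchmarked function is a stand-in rather than Eidos's real inference entry point.

```rust
use std::hint::black_box;
use criterion::{criterion_group, criterion_main, Criterion};

// Stand-in for the real inference call in lib_core.
fn generate_command(prompt: &str) -> String {
    format!("ls -la  # placeholder result for: {prompt}")
}

fn bench_command_generation(c: &mut Criterion) {
    c.bench_function("generate_command", |b| {
        b.iter(|| generate_command(black_box("list all files")))
    });
}

criterion_group!(benches, bench_command_generation);
criterion_main!(benches);
```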
Run benchmarks:
```bash
cargo bench
```

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Contributions: See CONTRIBUTING.md
Eidos is licensed under the GNU General Public License v3.0. See LICENSE for details.
- tract - ONNX runtime
- candle - Rust ML framework
- lingua-rs - Language detection
- Hugging Face - Model ecosystem
- Documentation: docs/
- Bug Reports: Create an issue
- Feature Requests: Start a discussion
Built with ❤️ using Rust