Ask LLM v2.6.0

A modern command-line tool for calling multiple LLM APIs (DeepSeek, Qwen, etc.) with an elegant interface.

Requires Python 3.8+ · Code style: black

Features

  • ✨ Modern CLI - Built with Typer and Rich
  • 🔧 Type Safe - Full type hints and Pydantic validation
  • 📊 Progress Bars - Visual feedback for file operations
  • 📝 Rich Logging - Powered by Loguru
  • 💬 Interactive Chat - Multi-turn conversations with command support
  • 🔌 Multiple Providers - Support for OpenAI-compatible APIs
  • 📦 Batch Processing - Process multiple tasks concurrently with multi-threading

Quick Start

Installation

# Clone repository
git clone <repository-url>
cd ask_llm

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

Configuration

# Create default_config.yml template
ask-llm config init

# Edit default_config.yml with your API keys (use ${VAR} for environment variables)
# Config priority: CLI args > env vars (ASK_LLM_*) > user config > package default
# Config is searched in: --config > ./default_config.yml > ~/.config/ask_llm/ > /etc/ask_llm/
# Then verify
ask-llm config test
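
For orientation, here is a minimal sketch of what default_config.yml can look like. The layout is an assumption for illustration; the provider names, key names, and exact nesting come from the template that ask-llm config init generates. Only the ${VAR} expansion and the paper.* keys are described elsewhere in this README.

# Hypothetical default_config.yml sketch -- run `ask-llm config init` for the real template.
default_provider: deepseek                 # assumed key: provider used when none is given
providers:
  deepseek:
    base_url: https://api.deepseek.com     # OpenAI-compatible endpoint
    api_key: ${DEEPSEEK_API_KEY}           # ${VAR} is expanded from the environment
    model: deepseek-chat
paper:
  max_output_tokens: 8192                  # requested completion cap (see the paper section below)
  concurrency: 20                          # parallel section jobs (paper.concurrency)

Keeping secrets as ${VAR} references means the file can be shared or committed without leaking keys.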

Usage

# Process a file
ask-llm ask input.txt

# Direct text input
ask-llm "Translate to Chinese: Hello world"

# With system prompt for one-shot behavior control
ask-llm ask "What are you?" --system "You are a pirate. Respond in pirate dialect."

# Show reasoning from reasoner models
ask-llm ask "Solve: 12*15" --include-reasoning -m deepseek-reasoner

# Dry-run: preview prompt and token estimate (no API call)
ask-llm ask input.md --dry-run

# Interactive chat mode
ask-llm chat

# With initial context
ask-llm chat -i context.txt -s "You are a helpful assistant"

# Translation (file, directory, or glob)
ask-llm trans document.md
ask-llm trans /path/to/dir/ -o translated/
ask-llm trans *.md --max-parallel-files 5

# Translation with a glossary for consistent terminology (see the glossary sketch below)
ask-llm trans paper.md --glossary glossary.yml

# Batch processing
ask-llm batch batch-examples/prompt-contents.yml -o results.json

# Paper explanation (Markdown by headings, or arxiv2md-beta directory)
ask-llm paper -i paper.md --run all
ask-llm paper -i path/to/arxiv-paper-dir --run sections

# Paper dry-run: preview sections and token estimates
ask-llm paper -i paper.md --dry-run

# Resume interrupted paper processing
ask-llm paper -i paper.md --resume
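
The glossary file used by trans --glossary is a small YAML document. Below is a minimal sketch, assuming a flat source-term to target-term mapping; the real schema may add per-term options.

# Hypothetical glossary.yml sketch -- assumes a flat source-term: target-term mapping.
transformer: Transformer        # force the term to stay as-is in the target text
attention head: 注意力头          # always render "attention head" as this target term
fine-tuning: 微调                # keep one consistent rendering across all files

A shared glossary keeps terminology consistent when a directory or glob is translated across parallel files.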

Commands

Command                           Description
ask-llm ask [INPUT]               Process input with LLM
ask-llm ask --system              Add a system prompt for one-shot behavior control
ask-llm ask --include-reasoning   Show chain-of-thought from reasoner models
ask-llm ask --dry-run             Preview prompt and token estimate (no API call)
ask-llm chat                      Start interactive chat
ask-llm chat /search              Search message history
ask-llm chat /export              Export conversation to JSON/Markdown/TXT
ask-llm trans [FILES...]          Translate files (supports directories and globs)
ask-llm trans --glossary          Use a terminology glossary for consistent translations
ask-llm paper -i PATH             Explain a paper; outputs under ./explain/ next to the file or directory
ask-llm paper --dry-run           Preview sections and token estimates
ask-llm paper --resume            Skip completed sections when resuming
ask-llm batch [CONFIG]            Process batch tasks from a YAML config
ask-llm config show               Display configuration
ask-llm config test               Test API connections
ask-llm config init               Create an example config

Batch Processing

The batch command supports processing multiple tasks concurrently:

# Basic usage
ask-llm batch batch-examples/prompt-contents.yml

# With options
ask-llm batch config.yml -o results.json -f json --threads 10 --retries 5

See docs/BATCH_USAGE.md for detailed batch processing documentation.
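
As a rough illustration, a batch config might look like the sketch below. The prompt/contents keys are an assumption inferred from the example filename prompt-contents.yml; docs/BATCH_USAGE.md documents the real schema.

# Hypothetical prompt-contents.yml sketch -- see docs/BATCH_USAGE.md for the actual schema.
prompt: "Summarize the following text in one sentence:"  # assumed: prompt shared by all tasks
contents:                                                # assumed: one entry per task
  - "First document text ..."
  - "Second document text ..."

Each entry becomes one task; --threads controls how many run concurrently and --retries how many times a failed task is retried.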

Paper explanation (paper)

  • Input: a single .md whose level-2 headings (## …) delimit explain sections (subsections ###+ stay inside the same job), or a directory produced by tools like arxiv2md-beta (paper.yml, main *.md, optional *-References.md, *-Appendix.md).
  • Runs: --run sections (meta + each recognized section), --run full (whole document), --run all (both).
  • Output: <input_dir>/explain/ (or next to the .md file). Files are numbered in document order, e.g. 0-meta.explain.md, 1-abstract.explain.md, …, N-full.explain.md. Recognized CS/AI-oriented section titles (among others) map to dedicated prompts: e.g. Related Work → section-related-work.md, Model Architecture → section-model-architecture.md. Headings that do not match any canonical key use section-generic.md (extra:… keys, filenames like 3-model-architecture.explain.md). Large Appendix sidecars split by ## use d-appendices-<slug>.explain.md.
  • Preamble: each output file starts with a short "说明" (notes) block (source slice + prompt path + one-line summary of the analysis prompt).
  • Length & models: paper.max_output_tokens is the requested completion cap; the CLI sets the API max_tokens to min(requested, max_output.maximum) from providers.yml for that model. DeepSeek HTTP caps differ by model: deepseek-chat ≤8192, deepseek-reasoner ≤65536 (then min with the YAML value). The full-document job (full) uses paper.full_model (default deepseek-reasoner). When the API returns reasoning content, it is written under a "推理过程（思维链）" ("Reasoning Process / Chain of Thought") heading before the "正文解析" ("Main Text Analysis") body. On API errors, the log includes model= and max_tokens= (from llm_engine).
  • Concurrency: section jobs use GlobalBatchProcessor.process_global_tasks (same pipeline as ask-llm trans): each job gets its own provider/HTTP client, with Rich per-task progress. Default paper.concurrency (e.g. 20); override with ask-llm paper -i ... -j 8. Use 1 to force sequential calls.
  • Prompts: canonical tree is prompts/paper/ at the repository root (ask_llm/prompts/…). Templates default to computer science / AI papers (methodology, experiments, reproducibility, related-work positioning, multiview-style full-paper analysis). Under src/ask_llm/ the prompts entry is a symlink to that tree so setuptools package-data stays valid. Override directory via paper.prompt_dir in default_config.yml.
  • Pipeline mapping: paper.pipeline_config points to prompts/paper-explain-pipeline.yml (project overrides). It is merged on load with the bundled paper-explain-pipeline.defaults.yml (same prompts/ directory): any key omitted in the project file keeps the default. Edit the project file for small deltas; edit or fork paper-explain-pipeline.defaults.yml only when changing the canonical registry, heading aliases, or bundled defaults. The package ships src/ask_llm/prompts as a symlink to the repo prompts/ so defaults load both in dev and in wheels. Override for one run with ask-llm paper --pipeline /path/to/paper-explain-pipeline.yml. A sketch of such an override file follows this list.
  • Multiple full-paper prompts: full_prompts in that YAML lists several templates (e.g. section-full.md + outlines.md). Each gets a separate API call on the same concatenated body; outputs are N-full-<stem>.explain.md (e.g. N-full-outlines.explain.md). If only one full template is configured, the job key stays full and the file remains N-full.explain.md (backward compatible).
  • Heading match (configurable): heading_match is an ordered list of { key, aliases } entries. The splitter uses the same fuzzy rules as before, but you can edit aliases or order in YAML instead of Python.
  • Multiple prompts per section: section_prompts maps a canonical key to several { file, label_zh } entries. Job keys become abstract:<template-stem>, …; outputs look like N-abstract-section-abstract.explain.md.
  • Merged sections (combos): section_combos lists { id, keys, prompts, output_stem? }. Bodies of keys are concatenated in order; each prompt runs on that merged text. Job keys are combo:<id>:<template-stem>. With a single prompt and output_stem: Abstract-Introduction, the file is N-abstract-introduction.explain.md. Keys that appear only inside combos skip standalone jobs unless you pass that section name explicitly in --sections (so you can force a per-section run when experimenting).
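
Putting the pipeline knobs above together, here is a minimal sketch of a project-level paper-explain-pipeline.yml. The entry shapes ({ key, aliases }, { file, label_zh }, { id, keys, prompts, output_stem }) follow the bullets above; the concrete aliases, template names, and ids are illustrative, and the exact top-level nesting is an assumption.

# Sketch of a project paper-explain-pipeline.yml; omitted keys keep the bundled defaults.
full_prompts:                          # several full-paper templates, one API call each
  - section-full.md
  - outlines.md
heading_match:                         # ordered { key, aliases } entries used by the splitter
  - key: related-work
    aliases: ["related work", "background"]          # illustrative aliases
section_prompts:                       # several prompts per canonical section key
  abstract:
    - { file: section-abstract.md, label_zh: 摘要 }   # label_zh: Chinese label for the preamble
section_combos:                        # merged sections: bodies concatenated in key order
  - id: abstract-introduction
    keys: [abstract, introduction]
    prompts: [section-overview.md]     # illustrative template name
    output_stem: Abstract-Introduction # output file: N-abstract-introduction.explain.md

Job keys would then look like abstract:section-abstract and combo:abstract-introduction:section-overview, matching the naming rules above.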

Project Structure

ask_llm/
├── prompts/              # Prompt templates (paper/, tech-paper-trans.md, …)
├── src/ask_llm/          # Main package (prompts → symlink to ../prompts)
│   ├── cli/              # Typer CLI (app.py, commands/, common.py, errors.py)
│   ├── core/             # Core logic (batch, processor, paper_explain, …)
│   ├── config/           # Configuration
│   └── utils/            # Utilities
├── tests/                # Tests
├── docs/                 # Documentation
└── default_config.yml    # Unified configuration (run `ask-llm config init` to create)

Batch, translation, and paper internals

  • ask_llm.config.cli_session – shared config load (ConfigLoader + set_config), ConfigManager, CLI overrides, and the API key gate used by trans, paper, and batch.
  • ask_llm.core.global_batch_runner – run_global_batch_tasks creates GlobalBatchProcessor and runs process_global_tasks (optional worker clamp vs. task count for paper).
  • ask_llm.core.tasks.builders – factories such as build_paper_explain_task for typed BatchTask construction.
  • BatchTask.task_kind – translation_chunk or paper_explain; the legacy paper_mode=True still maps to paper_explain.

Development

CLI package layout (ask_llm/cli/)

The Typer entry point is ask_llm.cli:run_cli (see pyproject.toml scripts). The former monolithic cli.py is split as follows:

Module          Role
cli/app.py      Typer app; global --version / --debug / --quiet callback; registers subcommands
cli/commands/   One module per command: ask, chat, config, batch, trans, format_cmd (CLI name: format), paper
cli/common.py   Shared helpers (_config_init, _resolve_trans_input_paths, notebook translation helper)
cli/errors.py   raise_unexpected_cli_error and an optional cli_errors context manager for consistent exit codes and logging

Public imports from ask_llm.cli remain app, run_cli, and _resolve_trans_input_paths (for tests and tooling).

# Run tests
pytest

# Run with coverage
pytest --cov=src/ask_llm

# Type checking
mypy src/ask_llm

# Linting
ruff check src/ask_llm
ruff format src/ask_llm

Documentation

See docs/README_ask_llm.md for detailed documentation.

License

MIT License
