Translate entire books, subtitles, and large texts with AI - simply and efficiently.
TBL is an application that lets you translate large volumes of text using Language Models (LLMs). Whether you want to translate an ebook, movie subtitles, or long documents, TBL does it automatically while preserving formatting.
- **Easy to use:** Intuitive web interface, no technical skills required
- **Private & local:** Use Ollama to translate without sending your texts to the internet
- **Cost-effective:** Free with Ollama, controlled costs with cloud APIs
- **Preserves formatting:** EPUB files keep their structure, subtitles keep their timings
- **Batch translation:** Translate multiple files at once
- **Multi-language:** Translate between any languages
- Translate ebooks (EPUB)
- Translate movie subtitles (SRT)
- Translate long documents
Just 3 steps to get started!
Step 1: Install Required Software
- **Python 3.8+** - the programming language
  - Download: Python for Windows
  - ⚠️ IMPORTANT: check "Add Python to PATH" during installation!
- **Ollama** - runs AI models locally (free!)
  - Download: Ollama for Windows
  - Install it; it will start automatically
- **Git** - downloads TBL
  - Download: Git for Windows
  - Install with default settings
Step 2: Download TBL
Open Command Prompt or PowerShell and run:
```
# Navigate to your preferred location
cd %USERPROFILE%\Documents

# Download TBL
git clone https://github.com/hydropix/TranslateBookWithLLM.git
cd TranslateBookWithLLM
```

Step 3: Download an AI Model & Launch!
```
# Download a recommended model (choose based on your GPU)
ollama pull qwen3:30b

# Launch TBL - everything is automatic!
start.bat
```

🎉 That's it! The web interface will open automatically at http://localhost:5000
On launch, start.bat:

- ✅ Creates a Python virtual environment (first time only)
- ✅ Installs all dependencies
- ✅ Checks for updates from Git
- ✅ Updates dependencies if needed
- ✅ Creates configuration files
- ✅ Launches the web interface
Next time, just double-click start.bat and everything updates automatically!
Qwen3 Models by VRAM (GPU Memory):

| VRAM | Command | Model size | Quality |
|---|---|---|---|
| 6-10 GB | `ollama pull qwen3:8b` | 5.2 GB | basic translations |
| 10-16 GB | `ollama pull qwen3:14b` | 9.3 GB | good translations |
| 16-24 GB | `ollama pull qwen3:30b` | 19 GB | very good translations (recommended) |
| 48+ GB | `ollama pull qwen3:235b` | 142 GB | professional quality |
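If you are not sure which model fits your GPU, a helper script along these lines (not part of TBL; it assumes an NVIDIA GPU with `nvidia-smi` on the PATH) can read your VRAM and map it to the table above:

```python
# Standalone helper sketch: suggest a Qwen3 model based on detected VRAM.
# Assumes an NVIDIA GPU and the nvidia-smi tool; not part of TBL itself.
import subprocess

def detect_vram_gb() -> float:
    """Return the total VRAM of the first GPU in GiB, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.splitlines()[0]) / 1024  # nvidia-smi reports MiB

def suggest_model(vram_gb: float) -> str:
    """Map VRAM to the recommendations in the table above."""
    if vram_gb >= 48:
        return "qwen3:235b"
    if vram_gb >= 16:
        return "qwen3:30b"
    if vram_gb >= 10:
        return "qwen3:14b"
    return "qwen3:8b"

if __name__ == "__main__":
    vram = detect_vram_gb()
    print(f"~{vram:.0f} GB VRAM -> try: ollama pull {suggest_model(vram)}")
```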
```
# Check your installed models
ollama list
```

If you prefer using Anaconda or already have it installed:
Step 1: Install Required Software
- **Miniconda** - manages Python easily
  - Download: Miniconda Windows Installer
  - Install with default settings
- **Ollama** - runs AI models locally (free!)
  - Download: Ollama for Windows
  - Install it; it will start automatically
- **Git** - downloads TBL
  - Download: Git for Windows
  - Install with default settings
Step 2: Install TBL
Open Anaconda Prompt (search in Start Menu) and run:
```
# Create a Python environment for TBL
conda create -n translate_book_env python=3.9

# Activate it (do this every time)
conda activate translate_book_env

# Download TBL
cd %USERPROFILE%\Documents
git clone https://github.com/hydropix/TranslateBookWithLLM.git
cd TranslateBookWithLLM

# Install dependencies
pip install -r requirements.txt
```

Step 3: Download an AI Model & Launch
```
# Download a recommended model
ollama pull qwen3:30b

# Launch the web interface
python translation_api.py
```

Open your browser and go to: http://localhost:5000

🎉 Ready! You can now translate your files.
1. **Choose your LLM provider:**
   - Ollama (recommended): free, private, works offline
   - OpenAI: paid, requires an API key, high quality (GPT-4, etc.)
   - Google Gemini: paid, requires an API key, fast and efficient
2. **Select your model:**
   - The list fills automatically based on your provider
   - Click 🔄 to refresh the list
3. **Languages:**
   - Source Language: the language of your original text
   - Target Language: the language to translate into
   - Use "Other" to specify any other language
4. **Add your files:**
   - Drag and drop, or click to select
   - Accepted formats: .txt, .epub, .srt
   - You can add multiple files at once
5. **Start the translation:**
   - Click "Start Translation"
   - Follow the progress in real time
   - Download the translated files when complete
TBL offers two modes for translating EPUB files:
Standard Mode

- ✅ Preserves all original formatting (bold, italic, tables, etc.)
- ✅ Keeps images and complex structure
- ⚠️ Requires a capable model (>12 billion parameters)
- ⚠️ May have issues with strict EPUB readers

When to use: you have a good model and formatting is important.

Fast Mode

- ✅ Maximum compatibility with all EPUB readers
- ✅ Works with small models (7B, 8B parameters)
- ✅ No issues with tags or placeholders
- ✅ Creates standard EPUB 2.0 output
- ❌ Complex formatting is simplified (basic text only)
When to use:
- You're using a small model (qwen2:7b, llama3:8b, etc.)
- You're having problems with Standard Mode
- Your EPUB reader is strict (Aquile Reader, Adobe Digital Editions)
- Formatting is not critical
💡 Tip: TBL automatically detects small models and recommends Fast Mode!
How to enable Fast Mode:
- ✅ Check the "Fast Mode (Recommended for small models)" checkbox in the web interface
- Or pass the `--fast-mode` flag on the command line
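For intuition, the placeholder approach behind Standard Mode works roughly like the sketch below: formatting tags are swapped for numbered markers before translation and restored afterwards. This is a simplified illustration, not TBL's actual code, and the `[[TAGn]]` marker syntax is invented here; small models tend to corrupt such markers, which is why Fast Mode avoids them entirely.

```python
# Simplified illustration of tag placeholdering (not TBL's real implementation).
import re

TAG_RE = re.compile(r"</?[a-zA-Z][^>]*>")  # naive HTML tag matcher

def protect_tags(html: str):
    """Replace each tag with a numbered placeholder before translation."""
    tags = []
    def stash(match):
        tags.append(match.group(0))
        return f"[[TAG{len(tags) - 1}]]"
    return TAG_RE.sub(stash, html), tags

def restore_tags(text: str, tags) -> str:
    """Put the original tags back after the LLM has translated the text."""
    for i, tag in enumerate(tags):
        text = text.replace(f"[[TAG{i}]]", tag)
    return text

protected, tags = protect_tags("<p>Hello <b>world</b></p>")
# protected == "[[TAG0]]Hello [[TAG1]]world[[TAG2]][[TAG3]]"
# ...the LLM translates `protected`; here we fake it with a simple replace:
translated = protected.replace("Hello", "Bonjour")
print(restore_tags(translated, tags))  # <p>Bonjour <b>world</b></p>
```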
Subtitle Translation (SRT)

- ✅ Timings are preserved exactly
- ✅ Numbering remains intact
- ✅ Only the text is translated
- ✅ SRT format perfectly maintained
Simply drag your .srt file and start translation!
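Under the hood, preserving SRT structure can be as simple as the following sketch (a minimal illustration, not TBL's actual implementation): each block's index and timing lines pass through untouched, and only the dialogue text reaches the translator.

```python
# Minimal SRT round-trip sketch: translate only the dialogue text.
# `translate` is a stand-in for the LLM call; assumes well-formed SRT blocks.
def translate_srt(path_in: str, path_out: str, translate) -> None:
    with open(path_in, encoding="utf-8-sig") as f:
        blocks = f.read().strip().split("\n\n")
    out = []
    for block in blocks:
        lines = block.splitlines()
        index, timing, text = lines[0], lines[1], "\n".join(lines[2:])
        out.append("\n".join([index, timing, translate(text)]))
    with open(path_out, "w", encoding="utf-8") as f:
        f.write("\n\n".join(out) + "\n")

# Example with a dummy "translator" that just upper-cases the dialogue:
# translate_srt("movie.srt", "movie_test.srt", str.upper)
```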
Click "βΌ Advanced Settings" to access:
Chunk Size (5-200 lines)
- Controls how many lines are translated together
- Larger = better context, but slower (make sure you have enough VRAM)
- Recommended: 25 for most cases
Timeout (30-600 seconds)
- Maximum wait time per request
- Increase if you're experiencing timeouts
- Recommended: 180s for web, 900s for CLI
Context Window (1024-32768 tokens)
- The context adjusts automatically, so this setting is no longer very important.
- Recommended: 2048.
Max Retries (1-5)
- Number of retry attempts on failure
- Recommended: 2
Auto-Adjustment
- ✅ Enabled by default
- Automatically adapts parameters if needed
- Leave enabled unless you have specific needs
Output Filename Pattern
- Customize translated file names
- Example: `{originalName}_FR.{ext}`
- Placeholders: `{originalName}`, `{ext}` (expansion sketched below)
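As a rough illustration of how these placeholders might expand (the real substitution logic lives inside TBL and may differ):

```python
# Hedged sketch of {originalName}/{ext} expansion, not TBL's actual code.
from pathlib import Path

def build_output_name(input_path: str, pattern: str) -> str:
    p = Path(input_path)
    return pattern.format(originalName=p.stem, ext=p.suffix.lstrip("."))

print(build_output_name("book.epub", "{originalName}_FR.{ext}"))  # book_FR.epub
```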
You can translate multiple files at once:
- Add all your files ("Add Files" button)
- Each file appears in the list with its status
- Click "Start Batch" to translate all sequentially
- Follow the progress of each file individually
For advanced users or automation:
```
python translate.py -i input_file.txt -o output_file.txt
```

| Option | Description | Default |
|---|---|---|
| `-i, --input` | Input file (.txt, .epub, .srt) | Required |
| `-o, --output` | Output file | Auto-generated |
| `-sl, --source_lang` | Source language | English |
| `-tl, --target_lang` | Target language | Chinese |
| `-m, --model` | LLM model to use | mistral-small:24b |
| `-cs, --chunksize` | Lines per chunk | 25 |
| `--provider` | Provider (ollama/gemini/openai) | ollama |
| `--api_endpoint` | API URL | http://localhost:11434/api/generate |
| `--gemini_api_key` | Gemini API key | - |
| `--openai_api_key` | OpenAI API key | - |
| `--fast-mode` | Fast Mode for EPUB | Disabled |
| `--no-color` | Disable colors | Colors enabled |
Translate an EPUB book (Fast Mode)

```
python translate.py -i book.epub -o book_zh.epub -sl English -tl Chinese --fast-mode
```

Translate with OpenAI GPT-4

```
python translate.py -i text.txt -o text_es.txt \
    --provider openai \
    --openai_api_key sk-your-key-here \
    --api_endpoint https://api.openai.com/v1/chat/completions \
    -m gpt-4o \
    -sl English -tl Spanish
```

Translate with Google Gemini

```
python translate.py -i document.txt -o document_de.txt \
    --provider gemini \
    --gemini_api_key your-gemini-key \
    -m gemini-2.0-flash \
    -sl French -tl German
```

Translate subtitles

```
python translate.py -i movie.srt -o movie_zh.srt -sl English -tl Chinese
```

Translate with larger chunks for better context

```
python translate.py -i novel.txt -o novel_zh.txt -cs 50
```

TBL supports three types of providers:
Ollama (local)

Advantages:

- ✅ Totally free
- ✅ Works offline
- ✅ Your texts stay private (nothing is sent to the internet)
- ✅ No usage limits

Disadvantages:

- ⚠️ Requires a powerful computer (GPU recommended)
- ⚠️ Slower than cloud APIs
- ⚠️ Quality varies by model
OpenAI

Advantages:

- ✅ Excellent translation quality
- ✅ Fast
- ✅ No powerful hardware needed
- ✅ Very capable models (GPT-4, etc.)

Disadvantages:

- ⚠️ Paid (cost per token)
- ⚠️ Requires an internet connection
- ⚠️ Your texts are sent to OpenAI

Available models:

- `gpt-4o` - latest version, very capable
- `gpt-4o-mini` - more economical, still excellent
- `gpt-4-turbo` - turbo version of GPT-4
- `gpt-3.5-turbo` - most economical
Setup:

1. Get an API key at platform.openai.com
2. Web interface:
   - Select "OpenAI" in the dropdown
   - Enter your API key
   - The endpoint is configured automatically
3. Command line:

   ```
   python translate.py -i book.txt -o book_zh.txt \
       --provider openai \
       --openai_api_key sk-your-key \
       --api_endpoint https://api.openai.com/v1/chat/completions \
       -m gpt-4o
   ```
💰 Estimated cost: about $0.50 - $2.00 for a 300-page book with GPT-4o-mini.
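For the curious, a direct call to the Chat Completions endpoint looks roughly like this. It is a minimal sketch of the kind of request made on your behalf, not TBL's actual code; the prompt wording is invented here.

```python
# Minimal sketch: translate a snippet via OpenAI's Chat Completions API.
# Requires OPENAI_API_KEY in the environment; prompt wording is illustrative.
import os
import requests

def translate_openai(text: str, source: str, target: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{
                "role": "user",
                "content": f"Translate from {source} to {target}:\n\n{text}",
            }],
        },
        timeout=180,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# print(translate_openai("Hello, world!", "English", "Spanish"))
```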
Google Gemini

Advantages:

- ✅ Very fast
- ✅ Excellent quality/price ratio
- ✅ Generous free quota

Disadvantages:

- ⚠️ Requires an internet connection
- ⚠️ Quota limits

Available models:

- `gemini-2.0-flash` - fast and efficient (recommended)
- `gemini-1.5-pro` - more capable, slower
- `gemini-1.5-flash` - balanced
Setup:

1. Get an API key at Google AI Studio
2. Web interface:
   - Select "Google Gemini"
   - Enter your API key
   - Choose your model
3. Command line:

   ```
   python translate.py -i document.txt -o document_zh.txt \
       --provider gemini \
       --gemini_api_key your-key \
       -m gemini-2.0-flash
   ```
💡 Tip: Gemini offers a generous monthly free quota, perfect for testing!
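Likewise, a minimal sketch of a direct call to Gemini's public `generateContent` REST endpoint (TBL's internal request may differ; the prompt wording is invented here):

```python
# Minimal sketch: translate a snippet via the Gemini REST API.
# Requires GEMINI_API_KEY in the environment.
import os
import requests

def translate_gemini(text: str, source: str, target: str) -> str:
    url = ("https://generativelanguage.googleapis.com/v1beta/models/"
           "gemini-2.0-flash:generateContent")
    resp = requests.post(
        url,
        params={"key": os.environ["GEMINI_API_KEY"]},
        json={"contents": [{"parts": [{
            "text": f"Translate from {source} to {target}:\n\n{text}",
        }]}]},
        timeout=180,
    )
    resp.raise_for_status()
    return resp.json()["candidates"][0]["content"]["parts"][0]["text"]

# print(translate_gemini("Hello, world!", "English", "German"))
```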
For simplified installation with Docker:
```
# Build the image
docker build -t translatebook .

# Run the container
docker run -p 5000:5000 -v $(pwd)/translated_files:/app/translated_files translatebook
```

The web interface will be accessible at http://localhost:5000

To map a different host port:

```
docker run -p 8080:5000 -e PORT=5000 -v $(pwd)/translated_files:/app/translated_files translatebook
```

Then access it at http://localhost:8080
Create docker-compose.yml:

```yaml
version: '3'
services:
  translatebook:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - ./translated_files:/app/translated_files
    environment:
      - PORT=5000
      - API_ENDPOINT=http://localhost:11434/api/generate
      - DEFAULT_MODEL=mistral-small:24b
```

Then run:

```
docker-compose up
```

💡 Note: Translated files will be saved in ./translated_files on your machine.
You can create a .env file at the project root to set default values:
```
# Copy the example file
cp .env.example .env

# Edit it with your parameters
```

Important variables:
```
# Default LLM provider
LLM_PROVIDER=ollama  # or gemini, openai

# Ollama configuration
API_ENDPOINT=http://localhost:11434/api/generate
DEFAULT_MODEL=mistral-small:24b
OLLAMA_NUM_CTX=8192  # Context window size

# OpenAI configuration
OPENAI_API_KEY=sk-your-key
# Endpoint configured automatically

# Gemini configuration
GEMINI_API_KEY=your-key
GEMINI_MODEL=gemini-2.0-flash

# Default languages
DEFAULT_SOURCE_LANGUAGE=English
DEFAULT_TARGET_LANGUAGE=Chinese

# Translation parameters
MAIN_LINES_PER_CHUNK=25
REQUEST_TIMEOUT=900
MAX_TRANSLATION_ATTEMPTS=3
RETRY_DELAY_SECONDS=5

# Automatic adjustment (recommended)
AUTO_ADJUST_CONTEXT=true

# Web server
PORT=5000
HOST=127.0.0.1
OUTPUT_DIR=translated_files
```

Symptom: Error when launching `python translation_api.py`
Solutions:

1. Check that the port is free: `netstat -an | find "5000"`
2. Change the port in `.env`: `PORT=8080`
3. Check that the conda environment is activated: `conda activate translate_book_env`
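To check programmatically whether the port is already taken, a standalone helper sketch (not part of TBL) can simply try to connect to it:

```python
# Standalone helper: report whether something is already listening on a port.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0  # 0 means a server answered

print("Port 5000 is in use" if port_in_use(5000) else "Port 5000 is free")
```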
Symptom: "Connection refused" or "Cannot connect to Ollama"
Solutions:

1. Check that Ollama is running (icon in the system tray)
2. Test the connection: `curl http://localhost:11434/api/tags`
3. Restart Ollama from the Start Menu
4. Check your firewall (allow port 11434)
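The same check can be scripted; this standalone snippet (not part of TBL) queries Ollama's `/api/tags` endpoint and lists the installed models:

```python
# Standalone helper: verify Ollama is reachable and list installed models.
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is up. Models:", ", ".join(models) or "(none installed)")
except requests.RequestException as exc:
    print("Cannot reach Ollama:", exc)
```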
Symptom: "Model 'xxx' not found"
Solutions:

1. List your installed models: `ollama list`
2. Download the missing model: `ollama pull model-name`
3. Use an available model from the list
Symptom: Translation stops with "Request timeout"
Solutions:

1. Increase the timeout in the advanced options (web) or in `.env`: `REQUEST_TIMEOUT=1800`
2. Reduce the chunk size: `MAIN_LINES_PER_CHUNK=15`
3. Use a faster model (qwen2:7b instead of mistral-small:24b)
Symptom: Translation is incorrect, inconsistent, or weird
Solutions:

1. Use a better model:
   - Ollama: `mistral-small:24b` instead of `qwen2:7b`
   - Or switch to OpenAI `gpt-4o` or Gemini `gemini-1.5-pro`
2. For EPUB files with small models: use Fast Mode (`--fast-mode`)
Symptom: Translated EPUB file won't open or is broken
Solutions:

1. Use Fast Mode (the most reliable fix): `python translate.py -i book.epub -o book_zh.epub --fast-mode`
2. Check your EPUB reader: test with Calibre (more permissive)
3. If you are using a small model (qwen2:7b, llama3:8b): Fast Mode is required
4. If placeholder tags (such as TAG0) remain in the text: this is a Standard Mode bug; switch to Fast Mode
Symptom: "Invalid API key" or "Quota exceeded"
Solutions:

1. Check your API key: copy and paste it exactly
2. Check your quota/credit:
   - OpenAI: platform.openai.com/usage
   - Gemini: console.cloud.google.com
3. Check the endpoint (OpenAI): https://api.openai.com/v1/chat/completions
Symptom: "Out of memory" or crash with large files
Solutions:

1. Reduce the chunk size: `MAIN_LINES_PER_CHUNK=10`
2. Reduce the context window: `OLLAMA_NUM_CTX=4096`
3. Use a smaller model
4. Close other applications
| Message | Meaning | Solution |
|---|---|---|
| `Connection refused` | Ollama not running | Start Ollama |
| `Model not found` | Model not downloaded | `ollama pull model-name` |
| `Request timeout` | Request too long | Increase the timeout or reduce the chunk size |
| `Invalid API key` | Incorrect API key | Check your key |
| `Context length exceeded` | Prompt too large | Reduce the chunk size or increase the context window |
| `Quota exceeded` | API limit reached | Wait or add credits |
Q: Is it really free?
A: With Ollama, yes! You only pay if you use OpenAI or Gemini.

Q: Are my texts sent to the internet?
A: With Ollama, no. With OpenAI/Gemini, yes (they are sent to the respective servers).

Q: How long does it take?
A: It varies a lot with length, model, and your machine. A 300-page book takes between 30 minutes (cloud) and 3 hours (Ollama with a small model).

Q: What is the translation quality like?
A: It depends on the model. GPT-4o is excellent, mistral-small:24b is very good, and small models (7B) are decent for simple text.
Q: Fast Mode or Standard Mode for my EPUB?
A:
- Fast Mode if: small model (≤12B), strict reader, or you are running into problems
- Standard Mode if: large model (>12B) and complex formatting matters

Q: Does Fast Mode lose all formatting?
A: Basic structure is preserved (paragraphs, chapters), but advanced formatting (complex tables, CSS) is simplified.

Q: Why does TBL recommend Fast Mode with my model?
A: Your model has ≤12 billion parameters. Small models struggle with the placeholder system used in Standard Mode.
Q: How can I speed up translation?
A:
- Use a cloud model (OpenAI/Gemini)
- Reduce the chunk size (`-cs 15`)
- Use a smaller model (qwen2:7b)
- With Ollama: use a GPU
Q: How can I improve quality?
A:
- Use a better model (gpt-4o, mistral-small:24b)
- Increase the chunk size (`-cs 40`)
- Increase the context window (`OLLAMA_NUM_CTX=16384`)
Q: Is my computer powerful enough?
A: For Ollama:
- Minimum: 16 GB RAM, recent CPU (7B models)
- Recommended: 32 GB RAM, NVIDIA GPU (24B models)
- Alternative: use OpenAI/Gemini (cloud)
Q: Can I translate multiple files simultaneously?
A: In the web interface, yes, with batch mode. In the CLI, no; launch multiple separate commands (one sequential approach is sketched below).
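A minimal sketch of that sequential approach (the `to_translate/` folder and the options are illustrative):

```python
# Run translate.py once per file, one after another.
import subprocess
from pathlib import Path

for src in sorted(Path("to_translate").glob("*.epub")):
    out = src.with_name(f"{src.stem}_zh{src.suffix}")
    subprocess.run(
        ["python", "translate.py", "-i", str(src), "-o", str(out),
         "-sl", "English", "-tl", "Chinese", "--fast-mode"],
        check=True,  # stop on the first failure
    )
```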
Q: Where are translated files stored?
A: In the translated_files/ folder by default (configurable with OUTPUT_DIR).
Q: Can I customize translation prompts?
A: Yes, edit `prompts.py`, but this is more technical (see the illustrative sketch below).
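Purely for illustration (the actual contents and structure of `prompts.py` are not shown here and may differ), a translation prompt template typically looks something like this:

```python
# Hypothetical prompt template -- not the real contents of prompts.py.
TRANSLATION_PROMPT = (
    "You are a professional translator. Translate the following text "
    "from {source_language} to {target_language}. Preserve line breaks "
    "and do not add commentary.\n\n{text}"
)

print(TRANSLATION_PROMPT.format(
    source_language="English", target_language="French", text="Good morning."
))
```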
Q: Are my files stored on your servers?
A: No, TBL runs on YOUR machine. Nothing is sent elsewhere (unless you use OpenAI/Gemini).
Q: What happens to my files during translation?
A: TBL runs entirely on your local machine, and your files are processed locally by the web server running on your computer:
- With Ollama: 100% local; nothing leaves your machine
- With OpenAI/Gemini: only the text content is sent to their APIs for translation (consult their data policies)
- Source files are deleted after translation; translated files remain in `translated_files/` until you delete them
Q: Are there file size limits?
A: Yes, and they are configurable. The defaults are set to ensure smooth operation; change them in `.env` or in the code if needed.
- Check this FAQ and the Troubleshooting section
- Check logs: Detailed errors are in the console/terminal
- Test with a small file: Isolate the problem
- Check your configuration: Model downloaded? Valid API key?
If you find a bug, open an issue on GitHub with:
- Description of the problem
- Example file (if possible)
- Error logs
- Your configuration (model, OS, etc.)
This project is open-source. See the LICENSE file for details.
Happy translating! 🎉✨
