An open-source macOS/Windows application for LaTeX with integrated AI capabilities. Compile .tex files to PDF and generate LaTeX code from natural language prompts.
- Cross-Platform: Works on macOS and Windows
- GUI Editor: Electron-based graphical interface
- CLI Tool: Command-line interface for automated workflows
- AI Integration: Generate LaTeX code from text prompts
- Cloud: OpenAI GPT-4 for high-quality generation
- Local: Ollama support for privacy and offline usage
- PDF Compilation: Compile LaTeX documents to PDF
- Modern UI: Clean, user-friendly interface
- Node.js (v16 or higher)
- npm (v7 or higher)
- LaTeX Distribution:
  - macOS: MacTeX (e.g. `brew install --cask mactex-no-gui`)
  - Windows: MiKTeX
- AI Integration (choose one):
- OpenAI API Key: For cloud-based AI generation
- Ollama: For local, privacy-focused AI generation
- Install from ollama.ai
- Recommended models: `codellama`, `llama2`, `mistral`
- Clone the repository:

  ```bash
  git clone https://github.com/landsharkiest/macos-windows-latex-plus-cli-app.git
  cd macos-windows-latex-plus-cli-app
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Set up AI Integration (optional - choose one):

  Option A: OpenAI (Cloud)

  ```bash
  # macOS/Linux
  export AI_PROVIDER=openai
  export OPENAI_API_KEY="your-api-key-here"

  # Windows (Command Prompt)
  set AI_PROVIDER=openai
  set OPENAI_API_KEY=your-api-key-here

  # Windows (PowerShell)
  $env:AI_PROVIDER="openai"
  $env:OPENAI_API_KEY="your-api-key-here"
  ```

  Option B: Ollama (Local)

  ```bash
  # 1. Install Ollama from https://ollama.ai

  # 2. Pull a model (codellama recommended for LaTeX)
  ollama pull codellama

  # 3. Start the Ollama service (if not already running)
  ollama serve

  # 4. Configure the environment
  # macOS/Linux
  export AI_PROVIDER=ollama
  export OLLAMA_MODEL=codellama

  # Windows (Command Prompt)
  set AI_PROVIDER=ollama
  set OLLAMA_MODEL=codellama

  # Windows (PowerShell)
  $env:AI_PROVIDER="ollama"
  $env:OLLAMA_MODEL="codellama"
  ```
Start the Electron GUI:
```bash
npm start
```

Features:
- Open and edit `.tex` files
- Compile LaTeX to PDF
- Generate LaTeX code from AI prompts
- Save your work
Keyboard Shortcuts:
- `Ctrl/Cmd + S`: Save file
- `Ctrl/Cmd + B`: Compile to PDF
```bash
node cli/latex-cli.js init [filename]
```

Creates a sample LaTeX document.
Example:
```bash
node cli/latex-cli.js init my-document.tex
```

```bash
node cli/latex-cli.js compile <file> [options]
```

Options:
- `-o, --output <dir>`: Output directory (default: `./output`)
- `-c, --compiler <name>`: LaTeX compiler (default: `pdflatex`)
- `--clean`: Clean auxiliary files after compilation
Examples:

```bash
# Basic compilation
node cli/latex-cli.js compile document.tex

# Custom output directory
node cli/latex-cli.js compile document.tex -o ./pdf-output

# Clean auxiliary files
node cli/latex-cli.js compile document.tex --clean
```

```bash
node cli/latex-cli.js generate "<prompt>" [options]
```

Options:
- `-o, --output <file>`: Output file path (default: `./generated.tex`)
- `-p, --provider <name>`: AI provider (`openai` or `ollama`)
- `-m, --model <name>`: Model name to use
- `--compile`: Automatically compile the generated LaTeX
Examples:

```bash
# Generate with OpenAI
node cli/latex-cli.js generate "Create a research paper with abstract and introduction" -p openai

# Generate with Ollama (local)
node cli/latex-cli.js generate "Create a resume template" -p ollama -m codellama

# Generate and compile
node cli/latex-cli.js generate "Create a math worksheet" --compile -o worksheet.tex

# Use environment variable for provider
export AI_PROVIDER=ollama
node cli/latex-cli.js generate "Create a presentation"
```

Project Structure:

```
macos-windows-latex-plus-cli-app/
├── src/
│   ├── main.js            # Electron main process
│   ├── renderer.html      # GUI HTML
│   ├── renderer.js        # GUI JavaScript
│   ├── styles.css         # GUI styles
│   ├── compiler.js        # LaTeX compiler module
│   └── ai-generator.js    # AI integration module
├── cli/
│   └── latex-cli.js       # CLI application
├── config/
│   └── default.json       # Configuration file
├── output/                # Compiled PDFs (created automatically)
├── package.json           # Project dependencies
└── README.md              # This file
```
Edit `config/default.json` to customize settings:

```json
{
  "latex": {
    "compiler": "pdflatex",
    "outputDir": "./output",
    "options": [
      "-interaction=nonstopmode",
      "-halt-on-error"
    ]
  },
  "ai": {
    "provider": "openai",
    "enabled": true,
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "model": "llama2"
    },
    "openai": {
      "model": "gpt-4"
    }
  }
}
```

AI Provider Configuration:
- Set `provider` to `"openai"` or `"ollama"`
- For Ollama: customize `baseUrl` if running on a different host/port
- For Ollama: change `model` to any installed model (`codellama`, `mistral`, etc.)
- For OpenAI: adjust `model` for different GPT versions
Build platform-specific executables:
```bash
# Build for current platform
npm run build

# Build for macOS
npm run build:mac

# Build for Windows
npm run build:win
```

Executables will be created in the `dist/` directory.
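Build scripts like these are commonly backed by electron-builder, configured through a `build` section in `package.json`. The snippet below is a minimal illustration with placeholder values, not the repo's actual configuration:

```json
{
  "build": {
    "appId": "com.example.latex-plus",
    "files": ["src/**/*", "cli/**/*", "config/**/*"],
    "mac": { "target": "dmg" },
    "win": { "target": "nsis" }
  }
}
```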
The application includes comprehensive error handling:
- Missing LaTeX Compiler: Provides installation instructions
- Invalid Files: Validates file extensions and existence
- Compilation Errors: Displays detailed error messages
- AI Errors: Handles API key issues and quota limits
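As one concrete illustration of surfacing "detailed error messages": pdflatex prefixes error lines in its log with `!`, so a compiler wrapper can filter for them. The helper below is a sketch, not the module this repo actually ships:

```javascript
// Sketch: pull error lines out of a pdflatex log. pdflatex marks
// errors with a leading "!" (e.g. "! Undefined control sequence.").
function extractLatexErrors(log) {
  return log
    .split("\n")
    .filter((line) => line.startsWith("!"))
    .map((line) => line.replace(/^!\s*/, ""));
}

const sampleLog = [
  "This is pdfTeX, Version 3.14159265",
  "! Undefined control sequence.",
  "l.5 \\badmacro",
].join("\n");
console.log(extractLatexErrors(sampleLog)); // [ 'Undefined control sequence.' ]
```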
Error: LaTeX compiler 'pdflatex' not found
Solution: Install a LaTeX distribution:
- macOS: `brew install --cask mactex-no-gui`
- Windows: Download and install MiKTeX
Error: AI integration not available
Solution: Configure an AI provider:
For OpenAI:

```bash
export AI_PROVIDER=openai
export OPENAI_API_KEY="your-api-key"
```

For Ollama (Local):
- Install Ollama: https://ollama.ai
- Pull a model: `ollama pull codellama`
- Start Ollama: `ollama serve`
- Set environment: `export AI_PROVIDER=ollama` and `export OLLAMA_MODEL=codellama`
Error: Cannot connect to Ollama at http://localhost:11434
Solution:
- Make sure Ollama is installed and running: `ollama serve`
- Check if the service is accessible: `curl http://localhost:11434/api/tags`
- Verify the model is installed: `ollama list`
- If using a custom port, set the `OLLAMA_BASE_URL` environment variable
Error: Permission denied
Solution:
```bash
chmod +x cli/latex-cli.js
```

Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details
- Built with Electron
- CLI powered by Commander.js
- AI integration via OpenAI API
For issues and questions, please open an issue on GitHub.