AI Commit Message Generator for VS Code 🤖✨

Local AI Commit Message Generation - Transform code diffs into Conventional Commits messages using open-source models. A privacy-focused, offline-capable solution for developers.

Workflow demos: commit message generation (left) and extension settings (right)

Features 🌟

  • 🔒 Privacy First - No data leaves your machine
  • ⚡ Multi-Backend Support - Compatible with popular AI runners
  • 📜 Commit Standard Compliance - Conventional Commits 1.0.0
  • 🖥️ Hardware Aware - Optimized for various setups
  • 🌐 Model Agnostic - Use any compatible LLM

Quick Start 🚀

  1. Install extension:
    code --install-extension Its-Satyajit.ai-commit-message
  2. Set up AI backend:
    # For CPU-focused systems
    ollama pull phi-3
    
    # For GPU-equipped machines
    ollama pull deepseek-r1:8b
  3. Generate your first AI commit from the VS Code Source Control view (a quick backend check is sketched below)
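
Before opening the Source Control view, it helps to confirm the backend is actually serving and has the model you plan to use. A minimal check, assuming a default Ollama install listening on localhost:11434:

# List the models the local Ollama server currently has installed
curl -s http://localhost:11434/api/tags

# Pull the model first if it is missing from the list
ollama pull phi-3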

Hardware Requirements 🖥️

Tested Environment

OS: openSUSE Tumbleweed
CPU: Intel i7-8750H (6c/12t @4.1GHz)
GPU: NVIDIA GTX 1050 Ti Mobile 4GB
RAM: 16GB DDR4
Storage: NVMe SSD

Minimum Recommendations

  • CPU: 4-core (2015+)
  • RAM: 8GB
  • Storage: SSD
  • Node.js: ^22
  • VS Code: ^1.92.0

Model Compatibility 🧠

Performance Characteristics

Model Family      Example Models    Speed*    Quality*   Use When...
Lightweight       phi-3, mistral    22 t/s    ██▌        Quick iterations
Balanced          llama3, qwen      14 t/s    ███▎       Daily development
Quality-Focused   deepseek-r1        7 t/s    ████▋      Complex changes

* Speed in tokens per second (t/s); metrics from personal testing on a mobile GTX 1050 Ti (Q4_K_M quantization)

        Speed vs Quality Tradeoff

        ▲
        │
Quality │.....█████ (deepseek-r1)
        │...███     (llama3)
        │.██▌       (phi-3)
        └───────────────────▶ Time

Configuration ⚙️

Backend Setup

Option 1: Ollama (Simplest)

curl -fsSL https://ollama.com/install.sh | sh
ollama serve

* For more info, visit Ollama
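
To see roughly what a generation request looks like on this backend, you can send a staged diff to Ollama's /api/generate endpoint by hand. A minimal sketch, assuming a local Ollama server, the deepseek-r1:8b model, and jq for safe JSON quoting; the prompt wording here is illustrative only, not the extension's internal prompt:

# Build a one-off prompt from the currently staged diff (illustrative only)
DIFF=$(git diff --cached)
curl -s http://localhost:11434/api/generate -d "$(jq -n \
  --arg p "Write a Conventional Commits message for this diff: $DIFF" \
  '{model: "deepseek-r1:8b", prompt: $p, stream: false}')"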

Option 2: LM Studio (Advanced)

lmstudio serve --model ./models/deepseek-r1.gguf --gpulayers 20

* For more info, visit LM Studio
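
When using LM Studio, point the extension's apiUrl at its OpenAI-compatible local server. A minimal reachability check, assuming LM Studio's default port of 1234; adjust the port if you changed it:

# List the models exposed by LM Studio's OpenAI-compatible local server
curl -s http://localhost:1234/v1/models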

Extension Settings

{
  "commitMessageGenerator.provider": "ollama",
  "commitMessageGenerator.apiUrl": "http://localhost:11434",
  "commitMessageGenerator.model": "deepseek-r1:8b",
  "commitMessageGenerator.temperature": 0.7,
  "commitMessageGenerator.maxTokens": 5000,
  "commitMessageGenerator.apiKey": "your_api_key (if required by your OpenAI-compatible/ollama endpoint)",
  "commitMessageGenerator.types": [
    "feat: A new feature",
    "fix: A bug fix",
    "chore: Maintenance tasks",
    "docs: Documentation updates",
  ],
  "commitMessageGenerator.scopes": ["ui", "api", "config"]
}

Optimization Guide

GPU Acceleration

# NVIDIA Settings
export OLLAMA_GPUS=1
export GGML_CUDA_OFFLOAD=20

# Memory Allocation (4GB VRAM example)
┌───────────────────────┐
│ GPU Layers: 18/20     │
│ Batch Size: 128       │
│ Threads: 6            │
└───────────────────────┘
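
If you drive Ollama directly, the same knobs can be passed per request through the options field. A minimal sketch using Ollama's llama.cpp-style option names (num_gpu for offloaded layers, num_thread for CPU threads); treat the exact names as an assumption and verify them against your backend's documentation:

# Request 18 offloaded GPU layers and 6 CPU threads for a single generation
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Say hi",
  "stream": false,
  "options": { "num_gpu": 18, "num_thread": 6 }
}'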

Performance Tips

  • Start with phi-3 for quick feedback
  • Switch to deepseek-r1 for final commits
  • Use --no-mmap if experiencing slowdowns
  • Reduce GPU layers when memory constrained

Troubleshooting 🔧

Issue                 First Steps                                          Advanced Fixes
Slow generation       1. Check CPU usage  2. Verify quantization           Use the --no-mmap flag
Model loading fails   1. Confirm SHA256 checksum  2. Check disk space      Try a different quantization
GPU not detected      1. Verify drivers  2. Check CUDA version             Set CUDA_VISIBLE_DEVICES=0
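
For the "GPU not detected" case, confirm the driver actually sees the card before restarting the backend; nvidia-smi ships with the NVIDIA driver:

# Confirm the driver sees the GPU and note the reported CUDA version
nvidia-smi

# Pin the backend to the first GPU (helpful on hybrid-graphics laptops), then restart it
export CUDA_VISIBLE_DEVICES=0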

FAQ ❓

Why local AI instead of cloud services?
  • Privacy: Code never leaves your machine
  • Offline Use: Works without internet
  • Cost: No API fees
  • Customization: Use models tailored to your needs
How do I choose between models?

Quick Sessions → phi-3/mistral:

  • Prototyping
  • Personal projects
  • Low-resource machines

Important Commits → deepseek-r1:

  • Production code
  • Team projects
  • Complex refactors

Legal & Ethics

Neutrality Statement

Neither this project nor its author is affiliated with, endorsed by, or sponsored by:
- Ollama
- LM Studio
- Any model creators

The tools and models mentioned are personal preferences based on technical merit.

Contributing 🤝

  1. Fork repository
  2. Install dependencies:
    npm install
  3. Build extension:
    npm run package
  4. Submit a PR with your changes (a quick local test run is sketched below)
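
To try local changes before opening a PR, you can launch a VS Code Extension Development Host pointed at your working copy. A minimal sketch, assuming the repository root is the extension folder:

# Open an Extension Development Host running your local build
code --extensionDevelopmentPath="$PWD"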

Full contribution guidelines

License 📄

MIT License - View License


Built by Developers, for Developers - From quick fixes to production-grade commits 💻⚡

Report Issue