This project provides a user-friendly interface for fine-tuning Large Language Models (LLMs) using the Unsloth library. It includes features for dataset preparation, synthetic dataset creation, model training, testing, and GGUF conversion.
- Load and fine-tune various pre-trained models
- Prepare existing datasets or create synthetic datasets
- Fine-tune models with customizable hyperparameters
- Test fine-tuned models
- Convert models to GGUF format for deployment
Requirements:
- Python 3.8 or higher
- CUDA-capable GPU (for efficient training)
- Clone the repository:
git clone https://github.com/yourusername/llm-finetuner.git
cd llm-finetuner
- Create a virtual environment (optional but recommended):
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
- Install the required packages:
pip install -r requirements.txt
- Run the application:
python main.py
- Open the provided URL in your web browser to access the Gradio interface.
- Follow these steps in the interface:
  a. Settings: Enter your Hugging Face token and select a model.
  b. Dataset: Prepare an existing dataset or create a synthetic one.
  c. Training: Set hyperparameters and start the fine-tuning process (a code sketch of this step follows the notes below).
  d. Test: Test your fine-tuned model with custom inputs.
  e. GGUF Conversion: Convert your model to GGUF format if needed.
- Ensure you have the necessary API keys for OpenAI or Anthropic if you plan to use them for synthetic dataset creation.
- If using Ollama for local generation, make sure it's installed and running on your machine.
- Fine-tuning can be computationally intensive. Ensure you have adequate GPU resources available.
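For reference, the Dataset and Training steps correspond roughly to the standard Unsloth + TRL fine-tuning workflow sketched below. This is a minimal illustration rather than the project's exact code; the model name, dataset file, and hyperparameters are placeholders that the interface would normally set for you.

```python
# Minimal LoRA fine-tuning sketch with Unsloth + TRL (placeholder names throughout).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load a 4-bit quantized base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder model
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any dataset with a single text column works for plain supervised fine-tuning.
dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=1,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```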
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License.
This guide will walk you through setting up Python, creating a virtual environment, and running your LLM Finetuner project on a new system.
On Windows:
- Go to https://www.python.org/downloads/windows/
- Download the latest Python 3.x installer (64-bit version recommended)
- Run the installer
- Check "Add Python to PATH" during installation
- Click "Install Now"
On macOS:
- Install Homebrew if you haven't already:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- Install Python using Homebrew:
brew install python
On Linux (Debian/Ubuntu):
- Update the package list:
sudo apt update
- Install Python:
sudo apt install python3 python3-pip python3-venv
To verify the installation, open a terminal (Command Prompt on Windows) and run:
python --version
You should see the Python version number. If not, try `python3 --version`.
Next, install Git.

On Windows:
- Go to https://git-scm.com/download/win
- Download and run the installer
- Use the default settings during installation
On macOS, if you installed Homebrew earlier:
brew install git

On Linux (Debian/Ubuntu):
sudo apt install git
- Open a terminal
- Navigate to where you want to store the project
- Clone the repository:
git clone https://github.com/yourusername/llm-finetuner.git
cd llm-finetuner
Create and activate a virtual environment.

On Windows:
python -m venv venv
venv\Scripts\activate

On macOS/Linux:
python3 -m venv venv
source venv/bin/activate
Your prompt should change to indicate that the virtual environment is active.
With the virtual environment activated, install the required packages:
pip install -r requirements.txt
This may take a while as it installs all necessary dependencies.
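To confirm the install worked, you can import the main packages from Python (the package list here mirrors the dependencies used elsewhere in this guide):

```python
# Quick sanity check that the core dependencies are importable.
import torch
import transformers
import datasets
import gradio

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
print("gradio:", gradio.__version__)
```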
If you have an NVIDIA GPU and want to use it for training:
- Go to https://developer.nvidia.com/cuda-downloads
- Download and install the CUDA Toolkit appropriate for your system
- Install the cuDNN library:
  - Go to https://developer.nvidia.com/cudnn
  - Download cuDNN (you may need to create an NVIDIA account)
  - Follow the installation instructions for your system
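Once CUDA and cuDNN are installed, a quick way to confirm PyTorch can actually see the GPU is the check below (purely diagnostic, assuming a CUDA-enabled PyTorch build):

```python
# Verify that PyTorch detects the GPU after the CUDA toolkit install.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA version PyTorch was built with:", torch.version.cuda)
```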
With the virtual environment still activated:
python main.py
This will start the Gradio interface. Open the provided URL in your web browser.
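For context only: main.py serves the app with Gradio, and the generic snippet below illustrates how a Gradio interface prints its local URL when launched (the handler and layout are placeholders, not this project's actual code). Passing share=True to launch() also creates a temporary public link if you need to open the UI from another machine.

```python
# Generic Gradio example: launch() prints a local URL such as http://127.0.0.1:7860.
import gradio as gr

def respond(prompt: str) -> str:
    # Placeholder handler; the real app routes input to the fine-tuning backend.
    return f"Echo: {prompt}"

demo = gr.Interface(fn=respond, inputs="text", outputs="text")
demo.launch()  # add share=True for a temporary public link
```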
- In the "Settings" tab:
  - Enter your Hugging Face token
  - Select a model
- In the "Dataset" tab:
  - Prepare an existing dataset or create a synthetic one
- In the "Training" tab:
  - Set hyperparameters and start training
- In the "Test" tab:
  - Test your fine-tuned model
- In the "GGUF Conversion" tab:
  - Convert your model to GGUF format if needed (see the sketch after this list)
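The Test and GGUF Conversion tabs map onto Unsloth's inference and export helpers. A rough sketch, assuming Unsloth's save_pretrained_gguf helper and placeholder paths, prompt, and quantization method:

```python
# Sketch: test a fine-tuned model, then export it to GGUF with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs",   # directory produced by training (placeholder)
    max_seq_length=2048,
    load_in_4bit=True,
)

# Quick generation test.
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
inputs = tokenizer("Write a haiku about fine-tuning.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Export to GGUF for use with llama.cpp or Ollama.
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```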
Troubleshooting tips:
- If `python` doesn't work, try `python3`
- Ensure your GPU drivers are up to date for CUDA support
- If you encounter "command not found" errors, ensure the relevant programs are in your system's PATH
- Always activate the virtual environment before running the project
- To deactivate the virtual environment, type `deactivate` in the terminal
- Keep your Python packages updated with `pip install --upgrade -r requirements.txt`
Remember to keep your API keys and tokens secure. Happy fine-tuning!
If you ever need to install the core dependencies manually instead of using requirements.txt, install a CUDA-enabled PyTorch build first, then the remaining packages:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install triton
pip install unsloth gradio transformers datasets tqdm