This repository demonstrates supervised fine-tuning of the Llama 3.2 3B model using the BAAI/Infinity-Instruct dataset and the Unsloth library. The fine-tuned model, MateoRov/Llama3.2-3b-SFF-Infinity-MateoRovere, is available on Hugging Face and can be used for conversational AI tasks.
- Fine-tunes the Llama3.2:3B model for supervised instruction-following tasks.
- Utilizes Unsloth for efficient and scalable training.
- Leverages the BAAI/Infinity-Instruct dataset for high-quality supervised fine-tuning.
- Provides a terminal-based chat interface using the fine-tuned model.
git clone https://github.com/Mateorovere/FineTuning-LLM-Llama3.2-3b.git
cd FineTuning-LLM-Llama3.2-3b
Install the required Python packages:
pip install -r requirements.txt
Then install PyTorch for your platform; see pytorch.org for the command that matches your CUDA version.
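Optionally, a quick sanity check confirms that PyTorch is installed and can see a GPU. This snippet is a generic check, not part of the repository:

```python
# Optional sanity check: confirm PyTorch is installed and a GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```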
If you want to perform fine-tuning yourself, open the Llama_3_2_3B_Finetuning.ipynb notebook and follow the steps to train the model using the Unsloth library and the BAAI/Infinity-Instruct dataset.
Both Llama_3_2_3B_Finetuning.ipynb and inference.ipynb are meant to run on Google Colab.
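For orientation, below is a minimal sketch of the Unsloth + TRL training flow the notebook follows. The base checkpoint, dataset config ("0625"), conversation formatting, LoRA settings, and hyperparameters shown here are assumptions; follow the notebook for the actual values, and note that newer `trl` versions move some `SFTTrainer` arguments into `SFTConfig`.

```python
# Minimal sketch of the fine-tuning flow (assumed settings; see the notebook for the real ones).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit so it fits on a Colab GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (hyperparameters here are placeholders).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Load BAAI/Infinity-Instruct; "0625" is one of its configs (assumed choice).
dataset = load_dataset("BAAI/Infinity-Instruct", "0625", split="train")

def to_text(example):
    # Infinity-Instruct stores ShareGPT-style turns; map them to the model's chat template.
    messages = [
        {"role": "user" if turn["from"] == "human" else "assistant", "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```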
Run the main.py script to interact with the fine-tuned model in the terminal:
python main.py
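The snippet below is a hypothetical, simplified version of such a terminal chat loop built on the Hugging Face transformers API; it illustrates the idea but is not the actual code in main.py.

```python
# Hypothetical, simplified terminal chat loop (not the actual main.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MateoRov/Llama3.2-3b-SFF-Infinity-MateoRovere"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

history = []
while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user_input})

    # Format the running conversation with the model's chat template.
    inputs = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

    print("Assistant:", reply)
    history.append({"role": "assistant", "content": reply})
```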
The fine-tuned model is hosted on Hugging Face:
- Model: MateoRov/Llama3.2-3b-SFF-Infinity-MateoRovere
- Dataset: BAAI/Infinity-Instruct
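The model can also be loaded directly from the Hub outside the provided scripts. The sketch below uses Unsloth's inference mode; it is an illustrative assumption about usage, not code taken from inference.ipynb, and assumes the Hub repository contains a full (merged) checkpoint.

```python
# Load the fine-tuned model from the Hub with Unsloth for fast inference (illustrative).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MateoRov/Llama3.2-3b-SFF-Infinity-MateoRovere",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```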
Contributions are welcome! Please fork the repository and create a pull request with your proposed changes.
Making it compatible with Ollama
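One possible route to Ollama compatibility (an assumption, not something the repository implements yet) is to export the fine-tuned model to GGUF with Unsloth and register it with a Modelfile; the exported filename below is illustrative.

```python
# Hypothetical export path for Ollama compatibility (not yet part of this repository).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MateoRov/Llama3.2-3b-SFF-Infinity-MateoRovere",
    max_seq_length=2048,
)

# Unsloth can export a quantized GGUF file that llama.cpp / Ollama can load.
model.save_pretrained_gguf("llama3.2-3b-infinity", tokenizer, quantization_method="q4_k_m")

# Minimal Modelfile pointing Ollama at the exported GGUF (the filename is illustrative).
with open("Modelfile", "w") as f:
    f.write("FROM ./llama3.2-3b-infinity/unsloth.Q4_K_M.gguf\n")

# Then, from the shell: `ollama create llama3.2-3b-infinity -f Modelfile`
```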