# LoRA Without Regret

Recent research from the team at [Thinking Machines Lab](https://thinkingmachines.ai/blog/lora/) (Schulman et al., 2025) shows that **LoRA can match full fine-tuning performance** when configured correctly, while using only ~67% of the compute. These findings are exciting for TRL users because they are straightforward to implement and can improve model performance on smaller budgets.

This guide provides simple instructions for reproducing the results of the blog post in TRL.

> [!TIP]
> We recommend reading the blog post before following this guide, or consulting both resources in parallel for best results.

## Benefits of LoRA over full fine-tuning

First, let's remind ourselves of the benefits of [LoRA over full fine-tuning](https://huggingface.co/docs/trl/en/peft_integration).

LoRA adds adapter layers on top of the base model, and these adapters contain significantly fewer parameters than the base model itself. This design reduces GPU memory requirements and enables more efficient training. As described in the [blog](https://thinkingmachines.ai/blog/lora/), this approach was originally thought to involve a performance trade-off, but careful configuration can overcome it and match full fine-tuning performance.

## Examples with TRL

Let's implement and train LoRA adapters in TRL scripts based on the core findings of the blog post. Afterwards, we'll revisit each finding in light of the TRL results.

### Supervised Fine-Tuning (SFT)

The blog post performs SFT on a range of models and datasets from the Hub, which we can reproduce in TRL.

| Model | Dataset |
|-------|---------|
| [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B) | [allenai/tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) |
| [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B) | [open-thoughts/OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) |
| [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B) | [allenai/tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) |
| [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B) | [open-thoughts/OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) |

<hfoptions id="sft">
<hfoption id="jobs">

```bash
hf jobs uv run \
    --flavor a100-large \
    --timeout 8h \
    --secrets HF_TOKEN \
    "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py" \
    --model_name_or_path Qwen/Qwen2.5-3B-Instruct \
    --dataset_name open-thoughts/OpenThoughts-114k \
    --learning_rate 2.0e-5 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 16 \
    --gradient_checkpointing \
    --eval_strategy no \
    --use_peft \
    --lora_r 256 \
    --lora_alpha 16 \
    --lora_target_modules all-linear \
    --output_dir Qwen2.5-3B-OpenThoughts-LoRA \
    --report_to trackio \
    --push_to_hub
```

To use Hugging Face Jobs, you will need to be logged in to the Hugging Face Hub (`hf auth login`) and have a [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan. Check out the [Jobs documentation](https://huggingface.co/docs/huggingface_hub/en/guides/jobs) for more details.

</hfoption>
<hfoption id="local">

```bash
uv run "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py" \
    --model_name_or_path Qwen/Qwen2.5-3B-Instruct \
    --dataset_name open-thoughts/OpenThoughts-114k \
    --learning_rate 2.0e-5 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 16 \
    --gradient_checkpointing \
    --eval_strategy no \
    --use_peft \
    --lora_r 256 \
    --lora_alpha 16 \
    --lora_target_modules all-linear \
    --output_dir Qwen2.5-3B-OpenThoughts-LoRA \
    --report_to trackio \
    --push_to_hub
```

To run the script locally, you will need to have `uv` installed. Check out the [uv documentation](https://docs.astral.sh/uv/) for more details.

</hfoption>
</hfoptions>

Once training starts, you can monitor the progress in [Trackio](https://huggingface.co/trackio), which will log the URL.
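
If you prefer to configure the run in Python rather than via the CLI, the same setup can be expressed with TRL's `SFTTrainer` and a PEFT `LoraConfig`. The snippet below is a minimal sketch that mirrors the CLI example above (model, dataset, and hyperparameters are taken from that command); depending on the dataset's column format, you may need to map it to a `messages` or `text` column first, which the `sft.py` script handles for you.

```python
# Minimal sketch: SFT with a LoRA adapter, mirroring the CLI example above.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

peft_config = LoraConfig(
    r=256,                        # rank recommended for SFT-scale datasets
    lora_alpha=16,
    target_modules="all-linear",  # apply LoRA to all weight matrices
)

training_args = SFTConfig(
    output_dir="Qwen2.5-3B-OpenThoughts-LoRA",
    learning_rate=2.0e-5,
    num_train_epochs=1,
    packing=True,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    gradient_checkpointing=True,
    report_to="trackio",
    push_to_hub=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```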

### Reinforcement Learning (GRPO)

The blog post performs GRPO on a range of models and datasets from the Hub, and once again we can reproduce the results in TRL.

| Model | Dataset |
|-------|---------|
| [Llama-3.1-8B-Base](https://huggingface.co/meta-llama/Llama-3.1-8B) | [GSM8k](https://huggingface.co/datasets/openai/gsm8k) |
| [Llama-3.1-8B-Base](https://huggingface.co/meta-llama/Llama-3.1-8B) | [DeepMath-103K](https://huggingface.co/datasets/zwhe99/DeepMath-103K) |
| [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) | [DeepMath-103K](https://huggingface.co/datasets/zwhe99/DeepMath-103K) |

For reinforcement learning, the blog post uses a math reasoning task that we can reproduce as a Python reward function.

<details>
<summary>Reward function</summary>

```python
from typing import Optional

# These helpers come from the `latex2sympy2_extended` and `math-verify` packages,
# as used in Open R1-style reward functions.
from latex2sympy2_extended import NormalizationConfig
from math_verify import LatexExtractionConfig, parse, verify


def strip_reasoning_accuracy_reward(
    completions: list[list[dict[str, str]]], solution: list[str], **kwargs
) -> list[Optional[float]]:
    """Reward function that strips reasoning tags and checks mathematical accuracy.

    This function:
    1. Extracts the content from completions
    2. Removes <think></think> tags (for reasoning that shouldn't be evaluated)
    3. Parses both the gold solution and the predicted answer
    4. Uses math_verify to check if they are mathematically equivalent

    Args:
        completions: List of model completions, each containing a list of messages
        solution: List of ground truth solutions
        **kwargs: Additional arguments (ignored but required for trainer compatibility)

    Returns:
        List of rewards where:
            - 1.0 if the answer is correct
            - 0.0 if the answer is incorrect
            - None if the solution is not parseable (skips this example)
    """
    contents = [completion[0]["content"] for completion in completions]
    rewards = []

    for content, sol in zip(contents, solution):
        # Strip reasoning tags from the completion
        while "<think>" in content and "</think>" in content:
            start = content.find("<think>")
            end = content.find("</think>", start)
            if start != -1 and end != -1:
                content = content[:start] + content[end + len("</think>") :]
            else:
                break

        # Parse the gold solution
        gold_parsed = parse(
            f"${sol}$",
            extraction_config=[
                LatexExtractionConfig(
                    boxed_match_priority=0, try_extract_without_anchor=True
                )
            ],
        )

        if len(gold_parsed) != 0:
            # We require the answer to be provided in correct LaTeX (no malformed operators)
            answer_parsed = parse(
                content,
                extraction_config=[
                    LatexExtractionConfig(
                        boxed_match_priority=0,
                        normalization_config=NormalizationConfig(
                            basic_latex=True,
                            units=True,
                            malformed_operators=False,
                            nits=False,
                            boxed=True,
                        ),
                        try_extract_without_anchor=False,
                    )
                ],
                extraction_mode="first_match",
            )

            # Compute binary rewards if verifiable, `None` otherwise to skip this example
            try:
                reward = float(verify(gold_parsed, answer_parsed))
            except Exception as e:
                print(f"verify failed: {e}, answer: {answer_parsed}, gold: {gold_parsed}")
                reward = None
        else:
            # If the gold solution is not parseable, we assign `None` to skip this example
            reward = None

        rewards.append(reward)

    return rewards
```

</details>
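
You can sanity-check the reward function by calling it directly on a toy example. The completions and gold answers below are hypothetical values chosen for illustration:

```python
# Quick sanity check of the reward function on a toy example.
completions = [
    [{"role": "assistant", "content": "<think>Work it out...</think> The answer is \\boxed{4}."}],
    [{"role": "assistant", "content": "The answer is \\boxed{5}."}],
]
solution = ["4", "4"]

print(strip_reasoning_accuracy_reward(completions, solution))
# Should give [1.0, 0.0]: the first completion matches the gold answer, the second does not.
```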

<hfoptions id="grpo">
<hfoption id="jobs">

```bash
hf jobs uv run \
    --flavor a100-large \
    --timeout 4h \
    --secrets HF_TOKEN \
    --env PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
    "https://huggingface.co/datasets/burtenshaw/lora-without-regrets/resolve/main/grpo.py" \
    --model_name_or_path Qwen/Qwen3-0.6B \
    --dataset_name HuggingFaceH4/OpenR1-Math-220k-default-verified \
    --output_dir grpo-full-qwen3-0.6b \
    --learning_rate 1.0e-6 \
    --lr_scheduler_type cosine \
    --warmup_ratio 0.0 \
    --max_grad_norm 1.0 \
    --beta 0.0 \
    --max_prompt_length 1024 \
    --max_completion_length 4096 \
    --num_generations 16 \
    --generation_batch_size 16 \
    --gradient_accumulation_steps 8 \
    --per_device_train_batch_size 1 \
    --num_train_epochs 1 \
    --lora_r 1 \
    --lora_alpha 32 \
    --lora_dropout 0.0 \
    --lora_target_modules all-linear \
    --vllm_mode colocate \
    --save_strategy steps \
    --save_steps 50 \
    --save_total_limit 1 \
    --logging_steps 1 \
    --max_steps 200 \
    --report_to trackio
```

To use Hugging Face Jobs, you will need to be logged in to the Hugging Face Hub (`hf auth login`) and have a [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan. Check out the [Jobs documentation](https://huggingface.co/docs/huggingface_hub/en/guides/jobs) for more details.

</hfoption>
<hfoption id="local">

```bash
uv run "https://huggingface.co/datasets/burtenshaw/lora-without-regrets/resolve/main/grpo.py" \
    --model_name_or_path Qwen/Qwen3-0.6B \
    --dataset_name HuggingFaceH4/OpenR1-Math-220k-default-verified \
    --output_dir grpo-full-qwen3-0.6b \
    --learning_rate 1.0e-6 \
    --lr_scheduler_type cosine \
    --warmup_ratio 0.0 \
    --max_grad_norm 1.0 \
    --beta 0.0 \
    --max_prompt_length 1024 \
    --max_completion_length 4096 \
    --num_generations 16 \
    --generation_batch_size 16 \
    --gradient_accumulation_steps 8 \
    --per_device_train_batch_size 1 \
    --num_train_epochs 1 \
    --lora_r 1 \
    --lora_alpha 32 \
    --lora_dropout 0.0 \
    --lora_target_modules all-linear \
    --vllm_mode colocate \
    --save_strategy steps \
    --save_steps 50 \
    --save_total_limit 1 \
    --logging_steps 1 \
    --max_steps 200 \
    --report_to trackio
```

To run the script locally, you will need to have `uv` installed. Check out the [uv documentation](https://docs.astral.sh/uv/) for more details.

</hfoption>
</hfoptions>

The GRPO run uses a custom TRL script that wires the reward function above into the trainer. You can review it at [`grpo.py`](https://huggingface.co/datasets/burtenshaw/lora-without-regrets/blob/main/grpo.py), which applies the LoRA best practices for reinforcement learning.
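
For reference, the core of such a script looks roughly like the sketch below, which pairs TRL's `GRPOTrainer` with the reward function above and a small-rank LoRA adapter. Treat it as an illustrative sketch rather than the exact contents of `grpo.py`; the hosted script also takes care of prompt formatting for the dataset.

```python
# Minimal sketch: GRPO with a rank-1 LoRA adapter and the accuracy reward above.
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("HuggingFaceH4/OpenR1-Math-220k-default-verified", split="train")

peft_config = LoraConfig(
    r=1,                          # RL needs little adapter capacity
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules="all-linear",
)

training_args = GRPOConfig(
    output_dir="grpo-qwen3-0.6b-lora",
    learning_rate=1.0e-6,
    beta=0.0,                     # no KL penalty, as in the example above
    max_prompt_length=1024,
    max_completion_length=4096,
    num_generations=16,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    max_steps=200,
    report_to="trackio",
)

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=training_args,
    train_dataset=dataset,
    reward_funcs=strip_reasoning_accuracy_reward,  # defined above
    peft_config=peft_config,
)
trainer.train()
```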

## Key findings in optimizing LoRA

The central recommendation of the blog post is to apply LoRA to all weight matrices rather than limiting it to the attention layers, as increasing the rank does not compensate for this restriction. In TRL, this is configured with `--lora_target_modules all-linear`.

We were able to reproduce the results of the blog post using TRL and the SmolLM3 model. We trained the model for 500 steps on the [Math 220k dataset](https://huggingface.co/datasets/HuggingFaceH4/OpenR1-Math-220k-default-verified) with the reward function and configuration above. As you can see in the figure below, the LoRA model's average train reward curve matches the full fine-tuning curve.



And most importantly, the LoRA run uses significantly less memory than the full fine-tuning run, as shown in the figure below.



Here are the parameters we used to train the models above:

| Parameter | LoRA | Full FT |
|---|---|---|
| `--model_name_or_path` | HuggingFaceTB/SmolLM3-3B | HuggingFaceTB/SmolLM3-3B |
| `--dataset_name` | HuggingFaceH4/OpenR1-Math-220k-default-verified | HuggingFaceH4/OpenR1-Math-220k-default-verified |
| `--learning_rate` | 1.0e-6 | 1.0e-5 |
| `--max_prompt_length` | 1024 | 1024 |
| `--max_completion_length` | 4096 | 4096 |
| `--lora_r` | 1 | - |
| `--lora_alpha` | 32 | - |
| `--lora_dropout` | 0.0 | - |
| `--lora_target_modules` | all-linear | - |

Let's break down the key findings of the blog post and how we reproduced them.

### 1. *LoRA performs better when applied to all weight matrices*

The authors recommend applying LoRA to all weight matrices rather than limiting it to attention layers, as increasing the rank does not compensate for this restriction.



Attention-only LoRA underperforms even when a higher rank is used to match the parameter count. In a TRL script, this is configured with `--lora_target_modules all-linear`. In Python, we can do the same with a PEFT `LoraConfig`:

```python
from peft import LoraConfig

peft_config = LoraConfig(target_modules="all-linear")
```

### 2. *The adapter needs sufficient capacity to learn from the dataset*

The blog post recommends using a LoRA rank that gives the adapter enough capacity to learn from the dataset. The rank determines the number of trainable parameters in the LoRA adapter; as the authors put it, "For datasets that exceed LoRA capacity, LoRA underperforms FullFT".



In a TRL script, we can use `--lora_r` to set the rank and adapt it to the task and dataset we're training on.

Reinforcement learning tasks typically require lower capacity, so smaller LoRA ranks can be used. This is because policy gradient algorithms extract roughly ~1 bit of information per episode, demanding minimal parameter capacity.

The blog post defines the ideal dataset size for LoRA to match full fine-tuning as "post-training scale", which leads to the following recommended ranks for SFT and RL LoRAs:

| Task Type | Dataset Size | Recommended Rank |
|-----------|--------------|------------------|
| **SFT** | Post-training scale | 256 |
| **RL** | Any size | 1-32 |
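
To get a feel for how much capacity a given rank adds, you can print the number of trainable parameters for a few ranks. This is a quick sketch using PEFT's `get_peft_model`; the model checkpoint is just an example, and we reload the base model for each rank so adapters don't stack:

```python
# Rough capacity check: trainable parameters for different LoRA ranks.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model_id = "Qwen/Qwen3-0.6B"  # example model; any causal LM from the Hub works

for rank in (1, 32, 256):
    base = AutoModelForCausalLM.from_pretrained(model_id)  # fresh base per rank
    lora_model = get_peft_model(
        base, LoraConfig(r=rank, lora_alpha=32, target_modules="all-linear")
    )
    trainable, total = lora_model.get_nb_trainable_parameters()
    print(f"rank={rank}: {trainable:,} trainable / {total:,} total parameters")
```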

### 3. *"FullFT and high-rank LoRAs have similar learning curves"*

Counter-intuitively, the blog post recommends using a learning rate in the same regime as full fine-tuning rather than re-tuning it for each rank. In a TRL script, we can use `--learning_rate` to set it. The \\( \frac{1}{r} \\) scaling in LoRA makes the optimal learning rate approximately rank-independent.
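
As a reminder, LoRA parameterizes the weight update as a low-rank product scaled by \\( \frac{\alpha}{r} \\):

\\( W' = W_0 + \frac{\alpha}{r} B A \\), where \\( B \in \mathbb{R}^{d \times r} \\) and \\( A \in \mathbb{R}^{r \times k} \\).

Because the update is scaled by \\( \frac{1}{r} \\), the effective magnitude of the adapter's contribution stays roughly constant as the rank changes, which is why the learning rate does not need to be re-tuned for each rank.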



### 4. *"In some scenarios, LoRA is less tolerant of large batch sizes than full fine-tuning."*

The blog post recommends keeping the effective batch size below 32, because the authors found LoRA to be less tolerant of large batch sizes, and this could not be mitigated by increasing the LoRA rank. In a TRL script, use `--per_device_train_batch_size` and `--gradient_accumulation_steps` to control the effective batch size, which is `per_device_train_batch_size × gradient_accumulation_steps × number of devices`.


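
For example, the GRPO configuration above lands well below that limit on a single GPU. A quick check:

```python
# Effective batch size for the GRPO example above (single GPU).
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(effective_batch_size)  # 8, comfortably below the recommended limit of 32
```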

## Takeaways

Using TRL, you can efficiently train LoRA adapters that match full fine-tuning performance by applying the core insights of the blog post (targeting all weight matrices, choosing the right rank, and managing batch size and learning rate) without the heavy compute cost of full fine-tuning.

## Citation

```bibtex
@article{schulman2025lora,
  title   = {{LoRA Without Regret}},
  author  = {John Schulman and Thinking Machines Lab},
  year    = 2025,
  journal = {Thinking Machines Lab: Connectionism},
  doi     = {10.64434/tml.20250929},
  note    = {https://thinkingmachines.ai/blog/lora/}
}
```