diff --git a/notebooks/en/fine_tuning_code_llm_on_single_gpu.ipynb b/notebooks/en/fine_tuning_code_llm_on_single_gpu.ipynb
index 153522d..49ffc06 100644
--- a/notebooks/en/fine_tuning_code_llm_on_single_gpu.ipynb
+++ b/notebooks/en/fine_tuning_code_llm_on_single_gpu.ipynb
@@ -625,7 +625,7 @@
 "\n",
-"To train a model using LoRA technique, we need to wrap the base model as a `PeftModel`. This involves definign LoRA configuration with `LoraConfig`, and wrapping the original model with `get_peft_model()` using the `LoraConfig`.\n",
+"To train a model using the LoRA technique, we need to wrap the base model as a `PeftModel`. This involves defining a LoRA configuration with `LoraConfig` and wrapping the original model with `get_peft_model()` using that `LoraConfig`.\n",
 "\n",
-"To learn more about LoRA and its parameters, refer to [PEFT documentation](https://huggingface.co/docs/peft/conceptual_guides/lora)."
+"To learn more about LoRA and its parameters, refer to [PEFT documentation](https://huggingface.co/docs/peft/main/en/conceptual_guides/lora)."
 ],
 "metadata": {
 "id": "lmnLjPZpDVtg"
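
For context, here is a minimal sketch of the wrapping step the updated cell describes. The checkpoint name and all LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative assumptions, not values taken from this patch:

```python
# Minimal sketch: wrap a base model as a PeftModel with LoRA.
# The checkpoint name and hyperparameter values below are illustrative
# assumptions, not values taken from this patch.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")

lora_config = LoraConfig(
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=32,          # scaling factor applied to the LoRA update
    lora_dropout=0.05,      # dropout on the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_attn", "c_proj", "q_attn"],  # attention projections; model-specific
)

# Wrap the original model; only the LoRA adapter weights remain trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```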