
Commit

Merge pull request #130 from sergiopaniego/restore-broken-link
Restored broken link in Fine-tuning a Code LLM on Custom Code on a single GPU notebook
stevhliu committed Jun 28, 2024
2 parents d3f63c4 + 063914c commit d09e871
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion notebooks/en/fine_tuning_code_llm_on_single_gpu.ipynb
@@ -625,7 +625,7 @@
 "\n",
 "To train a model using the LoRA technique, we need to wrap the base model as a `PeftModel`. This involves defining a LoRA configuration with `LoraConfig` and wrapping the original model with `get_peft_model()` using the `LoraConfig`.\n",
 "\n",
-"To learn more about LoRA and its parameters, refer to [PEFT documentation](https://huggingface.co/docs/peft/conceptual_guides/lora)."
+"To learn more about LoRA and its parameters, refer to [PEFT documentation](https://huggingface.co/docs/peft/main/en/conceptual_guides/lora)."
 ],
 "metadata": {
 "id": "lmnLjPZpDVtg"
