diff --git a/docs/source/training_tutorials/finetune_llm.mdx b/docs/source/training_tutorials/finetune_llm.mdx
index f64a144df..4928ca37c 100644
--- a/docs/source/training_tutorials/finetune_llm.mdx
+++ b/docs/source/training_tutorials/finetune_llm.mdx
@@ -16,7 +16,7 @@ limitations under the License.
 
 # Fine-tune and Test Llama-3 8B on AWS Trainium
 
-_Note: The complete script for this tutorial can be downloaded [here](https://github.com/huggingface/optimum-neuron/docs/source/training_tutorials/finetune_llm.py)._
+_Note: The complete script for this tutorial can be downloaded [here](https://github.com/huggingface/optimum-neuron/blob/main/docs/source/training_tutorials/finetune_llm.py)._
 
 This tutorial will teach you how to fine-tune open source LLMs like [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on AWS Trainium. In our example, we are going to leverage the [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index), [Transformers](https://huggingface.co/docs/transformers/index) and [Datasets](https://huggingface.co/docs/datasets/index) libraries.