From c63980b0b04ebe8f82ce3dfe22073fdb74a3ccc7 Mon Sep 17 00:00:00 2001
From: pagezyhf <165770107+pagezyhf@users.noreply.github.com>
Date: Thu, 25 Jul 2024 16:17:10 +0200
Subject: [PATCH] broken link (#669)

---
 docs/source/training_tutorials/finetune_llm.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/training_tutorials/finetune_llm.mdx b/docs/source/training_tutorials/finetune_llm.mdx
index f64a144df..4928ca37c 100644
--- a/docs/source/training_tutorials/finetune_llm.mdx
+++ b/docs/source/training_tutorials/finetune_llm.mdx
@@ -16,7 +16,7 @@ limitations under the License.
 
 # Fine-tune and Test Llama-3 8B on AWS Trainium
 
-_Note: The complete script for this tutorial can be downloaded [here](https://github.com/huggingface/optimum-neuron/docs/source/training_tutorials/finetune_llm.py)._
+_Note: The complete script for this tutorial can be downloaded [here](https://github.com/huggingface/optimum-neuron/blob/main/docs/source/training_tutorials/finetune_llm.py)._
 
 This tutorial will teach you how to fine-tune open source LLMs like [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on AWS Trainium. In our example, we are going to leverage the [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index), [Transformers](https://huggingface.co/docs/transformers/index) and [Datasets](https://huggingface.co/docs/datasets/index) libraries.
 
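For context on what the re-linked `finetune_llm.py` covers, the flow it describes looks roughly like the sketch below. This is a minimal illustration, not the tutorial's exact script: the dataset (`databricks/databricks-dolly-15k`), prompt formatting, hyperparameters, and `tensor_parallel_size` value are assumptions made here for the example; only the `NeuronTrainer`/`NeuronTrainingArguments` classes from Optimum Neuron and the Transformers/Datasets APIs are taken as given.

```python
# Minimal sketch of fine-tuning Llama 3 8B on AWS Trainium with Optimum Neuron.
# Dataset choice, prompt format, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
)

from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

model_id = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

# Load an instruction dataset and tokenize it (dataset name is an assumption).
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def tokenize(sample):
    text = f"{sample['instruction']}\n{sample['response']}"
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

# Collator copies input_ids into labels for causal-LM training.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

model = AutoModelForCausalLM.from_pretrained(model_id)

# NeuronTrainingArguments/NeuronTrainer mirror the Transformers Trainer API
# while compiling and sharding the model for Trainium NeuronCores.
training_args = NeuronTrainingArguments(
    output_dir="llama3-8b-finetuned",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,
    tensor_parallel_size=8,  # assumption: shard the model across 8 NeuronCores
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=collator,
    tokenizer=tokenizer,
)
trainer.train()
```

On a Trainium instance this kind of script is typically launched with `torchrun` so one worker runs per NeuronCore, e.g. `torchrun --nproc_per_node=32 finetune.py` on a trn1.32xlarge; the linked tutorial script remains the authoritative version of the full workflow.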