Phi-2-MATH


This is a Colab notebook for fine-tuning Microsoft's Phi-2 (2.7B) LLM to solve mathematical word problems using QLoRA, uploading the LoRA adapters to the 🤗 Hub, merging the adapters into the base model, and uploading the merged model to a 🤗 repo. The notebook also contains code for running inference on the merged model directly from my repo.
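The QLoRA part of the pipeline amounts to loading the base model in 4-bit precision before attaching the LoRA adapters. A minimal sketch, assuming a standard bitsandbytes NF4 setup (the exact arguments in the notebook may differ; treat this as an illustration, not the notebook's code):

```python
# Sketch of a typical QLoRA 4-bit loading config (assumption: the
# notebook uses something similar; values here are common defaults).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype suited to a T4
    bnb_4bit_use_double_quant=True,        # also quantize the quantization scales
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Only the small LoRA matrices are trained on top of this frozen 4-bit base, which is what makes the 16 GB T4 workable.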

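Inference on the merged model can be done straight from the Hub. A hedged sketch (the repo id below is a placeholder, not the author's actual 🤗 repo; it requires a GPU and a weights download to actually run):

```python
# Sketch of loading the merged model for inference.
# NOTE: "your-username/phi-2-math-merged" is a hypothetical repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/phi-2-math-merged"  # placeholder Hub repo id
tok = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\nA:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```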

  • The model was trained for 500 steps on a Colab Pro T4 GPU (16 GB VRAM) for about 2.5 hours on a subset (20%) of the original dataset using TRL's SFTTrainer.
  • The training loss at the final step was 0.5567.
  • The following is the PEFT config used in this notebook:
    ```python
    from peft import LoraConfig

    peft_config = LoraConfig(
        lora_alpha=16,
        lora_dropout=0.05,
        r=16,
        bias="none",
        task_type="CAUSAL_LM",
        target_modules=["Wqkv", "out_proj"],
    )
    ```
  • The following are the training metrics:

    (training-metrics plot)
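To see why a rank-16 adapter on just `Wqkv` and `out_proj` is cheap to train, a back-of-the-envelope count helps: a rank-r LoRA adapter on a `d_in -> d_out` Linear layer adds two small matrices A (r × d_in) and B (d_out × r), i.e. r·(d_in + d_out) weights. The dimensions below assume Phi-2's hidden size of 2560 and 32 layers (stated here as an assumption, not taken from the notebook):

```python
# Back-of-the-envelope count of trainable LoRA parameters.
# A rank-r adapter on a (d_in -> d_out) Linear layer adds
# r * (d_in + d_out) weights (matrices A and B).

def lora_params(d_in: int, d_out: int, r: int = 16) -> int:
    return r * (d_in + d_out)

# Assumed Phi-2 dimensions: hidden size 2560; Wqkv projects to 3 * 2560.
hidden = 2560
per_layer = lora_params(hidden, 3 * hidden, r=16)  # Wqkv adapter
per_layer += lora_params(hidden, hidden, r=16)     # out_proj adapter

print(per_layer)       # adapter weights added per transformer block: 245760
print(per_layer * 32)  # across 32 layers: 7864320 (~7.9M trainable params)
```

Under these assumed dimensions, only a few million parameters are updated, versus the 2.7B frozen base weights.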
