Fine-tuning LLMs on a conversational medical dataset.
Updated Jul 1, 2024 · Jupyter Notebook
Unify Efficient Fine-Tuning of 100+ LLMs
ms-swift: Use PEFT or full-parameter training to fine-tune 250+ LLMs or 35+ MLLMs. (Qwen2, GLM4, Internlm2, Yi, Llama3, Llava, MiniCPM-V, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
A hands-on Huggingface Transformers course; lecture videos are updated in sync on Bilibili and YouTube.
SORSA: Singular Values and Orthogonal Regularized Singular Vectors Adaptation of Large Language Models
This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" in ICML 2024.
Speech, Language, Audio, Music Processing with Large Language Model
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Implementations of various ML tasks on the Kaggle platform with GPUs.
Fine-tuning of low-bit quantized large language models.
Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting
High-quality image generation model, hosted under Zero Spaces @prithivmlmods.
This repository is dedicated to small projects and some theoretical material that I used to get into NLP and LLM in a practical and efficient way.
MindSpore online courses: Step into LLM
Discrete Bayesian optimization with LLMs, PEFT finetuning methods, and the Laplace approximation.
a bro who codes with you
Low Tensor Rank adaptation of large language models
PEFT is a tool that enables training very large models in low-resource environments. Combined with quantization, PEFT can enable widespread adoption of LLMs.
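To make the idea concrete, here is a minimal, self-contained sketch of the low-rank adaptation (LoRA) technique that underlies most PEFT fine-tuning: the pretrained weight matrix is frozen and only a small low-rank update `(alpha/r) * B @ A` is trained. The layer sizes, rank, and scaling below are illustrative assumptions, not values from any of the listed repositories.

```python
# Minimal LoRA sketch (illustrative, not the official PEFT implementation):
# freeze a large linear layer and train only two small low-rank factors.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A is small-random, B is zero, so the update starts at zero
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path plus scaled low-rank update
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # only the low-rank factors train
```

With a 4096x4096 base layer and rank 8, only 65,536 of roughly 16.8 million parameters are trainable (about 0.4%), which is why PEFT methods fit in low-resource environments; libraries such as Hugging Face PEFT wrap existing model layers this way via `LoraConfig` and `get_peft_model`.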
Firefly: a training tool for large models, supporting training of Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models.