
Commit

Update README.md
merlintang committed Aug 25, 2023
1 parent 2c64c2a commit 1d726cf
Showing 1 changed file with 5 additions and 11 deletions.
16 changes: 5 additions & 11 deletions README.md
@@ -1,29 +1,23 @@
# ASPEN: Efficient Multi-Lora Fine Tuning with Shared-Based Model
# ASPEN: Efficient Multi-LoRA Fine-Tuning with a Shared Base Model

This repository provides tools for fine-tuning large language models (LLMs) using the LoRA or QLoRA methods more efficiently. It provides the framework to support multiple lora/qlora models fine tunning at the same time. By reusing the shared frozen-based model, multiple fine models can reduce GPU Memory usage greatly.
This repository provides tools for fine-tuning large language models (LLMs) more efficiently using the LoRA or QLoRA methods. It provides a framework that supports fine-tuning multiple LoRA/QLoRA models at the same time. By reusing the shared frozen base model, the framework greatly reduces GPU memory usage across multiple fine-tuning jobs.
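The shared-base idea can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the `LoRAAdapter` class, layer sizes, and task names below are all hypothetical, but the structure mirrors the claim in the paragraph above — one frozen base model is computed once and reused, while each fine-tuning task owns only its small adapter weights.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """A rank-r LoRA delta: effective output = base_out + (x @ A^T @ B^T) * scale."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # A is initialized small, B at zero, so training starts from the base model exactly.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor, base_out: torch.Tensor) -> torch.Tensor:
        return base_out + (x @ self.A.T @ self.B.T) * self.scale

# One frozen base layer, shared by every adapter (stored in GPU memory only once).
base = nn.Linear(512, 512)
for p in base.parameters():
    p.requires_grad = False

# Several independent fine-tuning tasks, each with its own tiny adapter.
adapters = {name: LoRAAdapter(512, 512) for name in ["task_a", "task_b"]}

x = torch.randn(4, 512)
base_out = base(x)  # computed once, reused by all adapters
outputs = {name: adapter(x, base_out) for name, adapter in adapters.items()}
```

Because only the adapter parameters (`A` and `B`, a few thousand values here) are trainable per task, adding another concurrent fine-tuning job costs far less memory than duplicating the frozen base weights.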

## Table of Contents

- [Updates](#updates)
- [Overview](#overview)
- [Installation](#installation)
- [Getting Started](#started)
- [Getting Started](#quickstart)
- [Contributing](#contributing)
- [License](#license)

## Updates
Support

## Overview

## Installation

1. **Clone the Repository**:
```bash
git clone
cd llm-lora-finetune
```
## Quickstart

## Started
The `mlora.py` script is the starting point for fine-tuning and inference on various datasets.
Basic command for fine-tuning a baseline model on the Alpaca dataset:
```bash
```
