PLoP: Precise LoRA Placement for Efficient Finetuning of Large Models

This project provides a simple script to compute alignment metrics for transformer models on various datasets. It is the official code for "PLoP: Precise LoRA Placement for Efficient Finetuning of Large Models" (https://arxiv.org/abs/2506.20629).

Usage

Install dependencies:

pip install -r requirements.txt

Run the main script:

python main.py --model <huggingface-model-handle> --dataset <math|code|history|logic> --batchsize <BATCHSIZE> --nbsamples <N> --seqlen <SEQ_LEN> --aggregation <type|layer|None> --output_dir <RESULTS_DIR>

Example:

python main.py --model meta-llama/Llama-3.2-1B-Instruct --dataset math --batchsize 8 --nbsamples 100 --seqlen 256 --aggregation type --output_dir results/

Arguments

  • --model: HuggingFace model handle (e.g., google/gemma-2b)
  • --dataset: Dataset name (math, code, history, logic)
  • --batchsize: Batch size (not used in this simple version; all samples are processed at once)
  • --nbsamples: Number of samples to use from the dataset
  • --seqlen: Sequence length for tokenization
  • --aggregation: How to aggregate results (type, layer, or None)
  • --output_dir: Directory to save results

Output

  • Raw and aggregated metrics are saved as JSON files in the specified output directory.
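
To inspect the saved metrics, the JSON files can be loaded with the Python standard library. The sketch below is only a minimal example; the filename metrics_type.json is a placeholder, since the actual names written by main.py depend on the run configuration.

import json
from pathlib import Path

# Placeholder filename: adjust to whatever main.py wrote into --output_dir
# for your model, dataset, and aggregation settings.
results_path = Path("results") / "metrics_type.json"

with results_path.open() as f:
    metrics = json.load(f)

# For aggregated runs, each entry is expected to map a module type
# (or layer) to its alignment metric.
for key, value in metrics.items():
    print(f"{key}: {value}")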
