Lighter makes PyTorch Lightning experiments reproducible and composable through YAML configuration. Stop hardcoding hyperparameters: configure everything from the command line.
You're already using PyTorch Lightning. But every experiment requires editing Python code to change hyperparameters:
```python
# Want to try a different learning rate? Edit the code.
optimizer = Adam(params, lr=0.001)  # Change this line

# Want to use a different batch size? Edit the code.
train_loader = DataLoader(dataset, batch_size=32)  # And this one

# Want to train longer? Edit the code again.
trainer = Trainer(max_epochs=10)  # And this one too
```

With Lighter, configure everything in YAML and override from the CLI:

```bash
# Try different learning rates without touching code
lighter fit config.yaml model::optimizer::lr=0.001
lighter fit config.yaml model::optimizer::lr=0.01
lighter fit config.yaml model::optimizer::lr=0.1
# Every experiment is reproducible - just version control your configs
```

The `::` separator walks nested config keys, so `model::optimizer::lr` targets model → optimizer → lr in the YAML.

Install with pip:

```bash
pip install lighter
```

Use your existing PyTorch Lightning code:

```python
# model.py
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    def __init__(self, network, learning_rate=0.001):
        super().__init__()
        self.network = network
        self.lr = learning_rate

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.network(x), y)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```

Configure it in YAML instead of hardcoding:

```yaml
# config.yaml
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 10

model:
  _target_: model.MyModel
  network:
    _target_: torchvision.models.resnet18
    num_classes: 10
  learning_rate: 0.001

data:
  _target_: lighter.LighterDataModule
  train_dataloader:
    _target_: torch.utils.data.DataLoader
    batch_size: 32
    dataset:
      _target_: torchvision.datasets.CIFAR10
      root: ./data
      train: true
      download: true
```
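Each `_target_` names a class to instantiate, and the nested keys become its constructor arguments. For intuition, the config above corresponds roughly to the Python below; this is an illustrative sketch of what Lighter builds for you, not code you write (the `LighterDataModule` call signature is inferred from the config keys):

```python
# Rough Python equivalent of config.yaml - Lighter instantiates these
# objects from the YAML; you never write this yourself.
import pytorch_lightning as pl
import torch
import torchvision
from lighter import LighterDataModule  # kwargs inferred from the config
from model import MyModel

trainer = pl.Trainer(max_epochs=10)
model = MyModel(
    network=torchvision.models.resnet18(num_classes=10),
    learning_rate=0.001,
)
data = LighterDataModule(
    train_dataloader=torch.utils.data.DataLoader(
        dataset=torchvision.datasets.CIFAR10(root="./data", train=True, download=True),
        batch_size=32,
    )
)
```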
Run and iterate fast:

```bash
# Run your experiment
lighter fit config.yaml
# Try different hyperparameters - no code editing needed
lighter fit config.yaml model::learning_rate=0.01
lighter fit config.yaml trainer::max_epochs=50
lighter fit config.yaml data::train_dataloader::batch_size=64
# Use multiple GPUs
lighter fit config.yaml trainer::devices=4
# Every run creates timestamped outputs with saved configs
# outputs/2025-11-21/14-30-45/config.yaml  # Fully reproducible
```

- Reproducible: Every experiment = one YAML file. Version control configs like code.
- Fast iteration: Override any parameter from CLI without editing code.
- Zero lock-in: Works with any PyTorch Lightning module. Your code, your logic.
- Composable: Merge configs, create recipes, share experiments as files (see the sketch after this list).
- Organized: Automatic timestamped output directories with saved configs.
- Simple: ~500 lines of code. Read the framework in 30 minutes.
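A common composition pattern is a stable base config plus a small experiment recipe layered on top. A hedged sketch: the scalar `::` overrides are confirmed above, but the comma-merged multi-config form and overriding `_target_` itself are assumptions to verify in the Configuration Guide:

```bash
# Assumed merge syntax - check the Configuration Guide before relying on it.
# base.yaml defines trainer/model/data; resnet50.yaml overrides a few keys.
lighter fit base.yaml,resnet50.yaml

# Scalar overrides compose at the CLI (confirmed above); swapping the
# network's _target_ this way is an assumption.
lighter fit config.yaml trainer::max_epochs=50 model::network::_target_=torchvision.models.resnet50
```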
If you want automatic optimizer configuration and dual logging (step + epoch), use LighterModule:
```python
from lighter import LighterModule


class MyModel(LighterModule):
    def training_step(self, batch, batch_idx):
        x, y = batch
        pred = self(x)
        loss = self.criterion(pred, y)
        if self.train_metrics:
            self.train_metrics(pred, y)
        return {"loss": loss}  # Framework logs automatically

    # validation_step, test_step, predict_step...
```

Then wire up the criterion, optimizer, and metrics in the config:

```yaml
model:
  _target_: model.MyModel
  network:
    _target_: torchvision.models.resnet18
    num_classes: 10
  criterion:
    _target_: torch.nn.CrossEntropyLoss
  optimizer:
    _target_: torch.optim.Adam
    # "$" evaluates a Python expression and "@model::network" references the
    # network instance defined above, so the optimizer receives its parameters.
    params: "$@model::network.parameters()"
    lr: 0.001
  train_metrics:
    - _target_: torchmetrics.Accuracy
      task: multiclass
      num_classes: 10
```

LighterModule gives you:
- Automatic `configure_optimizers()` handling
- Automatic dual logging (step + epoch)
- Config-driven criterion and metrics
But you still control:
- All step implementations (see the sketch below)
- Loss computation logic
- When to call metrics
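For instance, a `validation_step` has the same shape as `training_step`. A minimal sketch, assuming a `val_metrics` attribute that mirrors `train_metrics` (check the LighterModule docs for the exact attribute names):

```python
# Sketch only: `val_metrics` is an assumed counterpart to `train_metrics`.
def validation_step(self, batch, batch_idx):
    x, y = batch
    pred = self(x)
    loss = self.criterion(pred, y)
    if self.val_metrics:
        self.val_metrics(pred, y)
    return {"loss": loss}  # Framework logs step and epoch values automatically
```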
```bash
# Run grid search without editing code
for lr in 0.001 0.01 0.1; do
for bs in 32 64 128; do
lighter fit config.yaml \
model::optimizer::lr=$lr \
data::train_dataloader::batch_size=$bs
done
done
# Each run saved in outputs/YYYY-MM-DD/HH-MM-SS/ with config.yaml
# Compare experiments by diffing configs
```
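Because every run snapshots its config, comparing two experiments is a plain diff (the timestamped paths below are illustrative):

```bash
# Illustrative paths - your timestamps will differ
diff outputs/2025-11-21/14-30-45/config.yaml \
     outputs/2025-11-21/15-02-10/config.yaml
```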
Learn more:

- 📚 Get Started Tutorial - 15 min walkthrough
- ⚙️ Configuration Guide - Master the syntax
- 🎯 LighterModule Design - Understand the internals
- 🏗️ Architecture Overview - How it all works
- 💬 Discord - Chat with users
- 🐛 GitHub Issues - Report bugs
- 📺 YouTube - Video tutorials
- 🤝 Contributing - Help improve Lighter
If Lighter helps your research, please cite our JOSS paper:
```bibtex
@article{lighter,
  doi = {10.21105/joss.08101},
  year = {2025},
  publisher = {The Open Journal},
  volume = {10},
  number = {111},
  pages = {8101},
  author = {Hadzic, Ibrahim and Pai, Suraj and Bressem, Keno and Foldyna, Borek and Aerts, Hugo JWL},
  title = {Lighter: Configuration-Driven Deep Learning},
  journal = {Journal of Open Source Software}
}
```