Releases: aleximmer/Laplace
0.2.2
What's Changed
- Incorporate `uv` by @wiseodd in #227
- Parallelize `pytest` CI with and without the old `asdfghjkl` by @wiseodd in #228
- Fix pytest badge in README by @wiseodd in #229
- Fix typo in the default value of `dict_key_x` by @wiseodd in #231
- Add instruction for `pip` in the README by @wiseodd in #230
- Delete .travis.yml by @aleximmer in #237
- Fix `SubnetLaplace` backend check by @wiseodd in #239
- Transform `y` into 2D tensor when `y.ndim == 1` and `likelihood == REGRESSION` by @wiseodd in #240
- Documentation facelift by @wiseodd in #242
- Remove `torchaudio` from dependencies by @o-laurent in #255
- Make `*Laplace` respect the `dtype` of the base model by @wiseodd in #247
- Functional samples by @wiseodd in #243
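The target-reshaping change in #240 can be illustrated with a small sketch (illustrative only, not the library's actual code): for regression, a 1-D target vector is promoted to a 2-D column vector, while other shapes and likelihoods are left alone.

```python
# Illustrative sketch of the rule from #240 (not the library's actual code):
# for regression, a 1-D target vector y is promoted to a 2-D column vector.
def target_shape_after_reshape(y_shape: tuple, likelihood: str) -> tuple:
    """Shape the targets would have after the reshaping step."""
    if likelihood == "regression" and len(y_shape) == 1:
        return (y_shape[0], 1)  # y.unsqueeze(-1) in PyTorch terms
    return y_shape
```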
New Contributors
- @o-laurent made their first contribution in #255
Full Changelog: 0.2.1...0.2.2
0.2.1
What's Changed
- Add `py.typed` to the package to enable type annotations by @kourbou in #211
- Functional Laplace updated by @Ludvins in #192
- Fix `sigma_noise` gradient propagation in `BaseLaplace.log_likelihood` by @kourbou in #212
- Add pytest-mock for FunctionalLaplace tests by @Ludvins in #215
- Fix `FullSubnetLaplace.sample()` by @elcorto in #216
- Document pros and cons of subnet selection methods by @elcorto in #218
- Update README by @wiseodd in #219
- Bump version number for new release by @runame in #220
- Update README re. ASDL by @wiseodd in #223
- Make importing `asdfghjkl` conditional on the installed optional dependency by @wiseodd in #225
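The conditional-import change in #225 follows the common optional-dependency pattern; a generic sketch of that pattern (the helper name is hypothetical, not the library's API):

```python
import importlib

# Generic optional-dependency pattern (helper name is hypothetical):
# return the module if it is installed, otherwise None, so callers can
# degrade gracefully instead of crashing at import time.
def optional_import(name: str):
    try:
        return importlib.import_module(name)
    except ImportError:
        return None
```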
New Contributors
Full Changelog: 0.2...0.2.1
0.2
TL;DR
- This release is all about laying the groundwork for Laplace on foundation models. The main target for this version is LLMs. Support for other foundation models will be added in versions >= 0.3.
- Additionally, this release adds support for Laplace for sequential decision making, e.g., Bayesian optimization.
- A new backend, `curvlinops`, is added.
- Added native serialization support.
- Updated the code base to conform to modern Python development standards (type hinting, enums, `ruff`, etc.).
Thanks to all contributors!
What's Changed
LLM-related
- Bringing laplace-torch to foundation-model era by @wiseodd in #144
- Add support for cross entropy loss inputs with multiple leading dimensions by @runame in #132
- Add an option to reduce LLM features in `LLLaplace` by @wiseodd in #172
- Update huggingface_example.md by @wiseodd in #204
- Refactor reward-modeling likelihood by @runame in #200
- Doc & example for reward modeling by @wiseodd in #167
- Make the dict keys for models with dict-like inputs general by @wiseodd in #168
- Feature caching mechanism in LLLA by @wiseodd in #170
- Added more test coverage to the subset of params functionality by @wiseodd in #185
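The feature-caching mechanism in #170 exploits the fact that last-layer Laplace only needs the penultimate-layer features, which are expensive to recompute for an LLM. A minimal memoization sketch (class and method names are hypothetical, not the library's API):

```python
# Minimal memoization sketch (names hypothetical, not the library's API):
# compute expensive penultimate-layer features once, then reuse them.
class FeatureCache:
    def __init__(self):
        self._store = {}

    def get(self, key, compute):
        """Return cached features for `key`, computing them on first access."""
        if key not in self._store:
            self._store[key] = compute()
        return self._store[key]
```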
Bayesian optimization
Curvlinops
- Add Curvlinops backend & add default `functorch` implementations of many curvature quantities by @wiseodd in #146
- Point to curvlinops master branch in setup.cfg by @wiseodd in #151
Serialization
- Add native serialization support by @wiseodd in #148
- Enable `torch.save()` + `torch.load()` for different `map_location`s by @elcorto in #159
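Native serialization (#148) follows PyTorch's `state_dict` convention; a pure-Python sketch of the pattern (class and field names are illustrative, not the library's actual state):

```python
# Pure-Python sketch of the state_dict save/load pattern (names illustrative):
class TinyPosterior:
    def __init__(self, prior_precision=1.0, sigma_noise=1.0):
        self.prior_precision = prior_precision
        self.sigma_noise = sigma_noise

    def state_dict(self):
        # everything needed to restore the fitted object
        return {"prior_precision": self.prior_precision,
                "sigma_noise": self.sigma_noise}

    def load_state_dict(self, sd):
        self.prior_precision = sd["prior_precision"]
        self.sigma_noise = sd["sigma_noise"]
```

With the real library, such a dict is what `torch.save()` writes and `torch.load(..., map_location=...)` restores, which is what #159 enables across devices.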
Devs
- Typehinting by @wiseodd in #180
- Add "Contributing" to the README by @wiseodd in #198
- Add `ruff check`, `ruff format --check`, and `pytest` GitHub actions by @wiseodd in #176
- Remove deprecated `loss_average` argument of `KFACLinearOperator` and add makefile for ruff by @runame in #197
Etc
- Fixes and features for SubnetLaplace by @edaxberger in #87
- Improve predictive samples by @aleximmer in #95
- Add experiments repo reference by @runame in #99
- Use ASDL as the default for classification by @edaxberger in #96
- Update Laplace bridge predictive by @runame in #101
- Make last-layer Jacobians agnostic to NN output shape by @runame in #112
- Fix device and dtype of `block_diag` used for `Kron.to_matrix()` by @runame in #117
- Add `KronDecomposed.diag()` feature by @aleximmer in #121
- Replacing `torch.einsum()` with `opt_einsum` by @Heatdh in #125
- Computing the classification BMA, i.e. average of softmaxes, online by @wiseodd in #133
- Add `asdl_fisher_kwargs` argument by @runame in #134
- Running metrics by @wiseodd in #135
- Add support for diagonal Kronecker factors in `Kron` matrix class by @runame in #136
- Add `prior_structure` argument to `optimize_prior_precision` by @runame in #123
by @runame in #123 - Move back to ASDL's main repo as dependency by @aleximmer in #183
- Fix typo in Jacobian dimension by @ruili-pml in #190
- Prevent computing posterior precision in KronLaplace when it's not fitted by @wiseodd in #173
- Use backend-native Jacobians if available by @wiseodd in #187
- Update docs by @wiseodd in #184
- Remove `try-except` from gridsearch by @wiseodd in #199
- Add fast computation of `functional_variance` for `DiagLLLaplace` and `KronLLLaplace` by @wiseodd in #145
- Caveats by @wiseodd in #202
- Add some checks in `optimize_prior_precision` by @wiseodd in #205
- Add backend 'flowchart' by @wiseodd in #207
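The online BMA of #133 (averaging softmax outputs across posterior samples without storing them all) reduces to a running mean; a sketch with a hypothetical helper name:

```python
# Running-mean update (hypothetical helper, not the library's API):
# after seeing n values, mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n,
# so softmax outputs can be averaged online without storing every sample.
def running_mean(prev_mean: float, new_value: float, n: int) -> float:
    return prev_mean + (new_value - prev_mean) / n
```

Applied per output dimension, this lets the "Running metrics" of #135 consume posterior samples one at a time with constant memory.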
New Contributors
- @Heatdh made their first contribution in #125
- @elcorto made their first contribution in #159
- @ruili-pml made their first contribution in #190
Full Changelog: 0.1a2...0.2
laplace 0.1a2
What's Changed
- Remove abc by @metodj in #41
- Calibration example by @wiseodd in #54
- Enable cross validation support by @wiseodd in #38
- Minor improvements by @runame in #60
- Update symeig method based on new torch interface by @aleximmer in #61
- Online marglik as a method by @aleximmer in #64
- Support for refitting LA with option to override or update by @aleximmer in #62
- Low rank LA by @aleximmer in #65
- Fix device of eye in symeig by @aleximmer in #66
- Keep initialization of H for all-weights and last-layer separate by @runame in #72
- Fix device bug in eig_lowrank by @runame in #74
- Subnetwork Laplace by @edaxberger in #58
laplace 0.1a1
Initial release of the laplace package.