Introduce the LTMADS solver #433

Open · wants to merge 31 commits into master from kellertuer/LTMADS
Commits (31)
8baa2f0
Initial sketch of (LT)MADS
kellertuer Sep 27, 2024
d0ee13f
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 2, 2025
4c6d740
Design concrete search and poll structs further and document them.
kellertuer Jan 2, 2025
fb945c2
Add remaining todos.
kellertuer Jan 2, 2025
fa6004d
continue docs.
kellertuer Jan 3, 2025
435aea1
Implement most of the logic, just not yet the updates(vector transpor…
kellertuer Jan 3, 2025
88b9683
forgot to store poll_size.
kellertuer Jan 3, 2025
9d40813
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 5, 2025
3c2b537
first MADS variant that includes all necessary functions.
kellertuer Jan 5, 2025
354c8ff
extend docs.
kellertuer Jan 5, 2025
7b8ad8d
Fix a few typos.
kellertuer Jan 6, 2025
939d7b0
Fix two typos.
kellertuer Jan 7, 2025
3a4e2ac
Fix typos add a first running, but failing test.
kellertuer Jan 8, 2025
9167d93
Stabilize I
kellertuer Jan 9, 2025
2b29824
Finally found the bug in scaling the mesh to be the culprit
kellertuer Jan 9, 2025
57ee145
Fix state print a bit.
kellertuer Jan 9, 2025
a7e9f8c
change poll and mesh size to be internal parameters.
kellertuer Jan 9, 2025
e430d73
unify naming and add docstrings to all new (small) functions
kellertuer Jan 9, 2025
5a59142
Fix docs.
kellertuer Jan 9, 2025
aff7900
work on code coverage.
kellertuer Jan 26, 2025
df0f042
Cover a final line.
kellertuer Jan 26, 2025
a8d47e4
improve typing and performance a little
mateuszbaran Jan 26, 2025
1d6454e
formatting
mateuszbaran Jan 26, 2025
83da62e
fix some typos, add some types
mateuszbaran Jan 27, 2025
0c6322b
A bit of work on typos.
kellertuer Feb 4, 2025
5e7f232
Update metadata.
kellertuer Feb 4, 2025
577dfb5
Rearrange the order of names.
kellertuer Feb 4, 2025
6ba7b9a
Update docs/src/references.bib
kellertuer Feb 4, 2025
58f3b1a
fix 2 more typos.
kellertuer Feb 4, 2025
20901fc
Bring vale to zero errors.
kellertuer Feb 4, 2025
8cd4880
Fix a few more typos.
kellertuer Feb 5, 2025
17 changes: 9 additions & 8 deletions .vale.ini
@@ -7,14 +7,12 @@ Packages = Google
[formats]
# code blocks with Julia in Markdown do not yet work well
qmd = md
jl = md

[docs/src/*.md]
BasedOnStyles = Vale, Google

[docs/src/contributing.md]
BasedOnStyles =

[Changelog.md, CONTRIBUTING.md]
[{docs/src/contributing.md, Changelog.md, CONTRIBUTING.md}]
BasedOnStyles = Vale, Google
Google.Will = false ; given the format, a _will_ really is intended here
Google.Headings = false ; some might really have [] in their headers
@@ -39,12 +37,15 @@ TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n
Google.Units = false # to ignore formats= for now.
TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n

[tutorials/*.md] ; actually .qmd for the first, second autogenerated
[tutorials/*.qmd] ; actually .qmd for the first, second autogenerated
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.

[docs/src/tutorials/*.md]
; ignore since they are derived files
BasedOnStyles =
[docs/src/tutorials/*.md] ; Can I somehow just deactivate these?
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.
Google.Spacing = false # one reference uses this
5 changes: 5 additions & 0 deletions .zenodo.json
@@ -25,6 +25,11 @@
"name": "Riemer, Tom-Christian",
"type": "ProjectMember"
},
{
"affiliation": "NTNU Trondheim",
"name": "Oddsen, Sander Engen",
"type": "ProjectMember"
},
{
"name": "Schilly, Harald",
"type": "Other"
55 changes: 31 additions & 24 deletions Changelog.md
@@ -1,11 +1,18 @@
# Changelog

All notable Changes to the Julia package `Manopt.jl` will be documented in this file. The file was started with Version `0.4`.
All notable Changes to the Julia package `Manopt.jl` are documented in this file.
The file was started with Version `0.4`.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.5] Januaey 4, 2025
## [0.5.6] February 10, 2025

### Added

* A mesh adaptive direct search algorithm (MADS), for now with the LTMADS variant using a lower triangular random matrix in the poll step.
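
To give a first impression of the new solver, here is a minimal usage sketch (assuming the high-level interface follows the usual Manopt.jl pattern `solver(M, f, p)`; the manifold and cost function below are purely illustrative):

```julia
# Minimal sketch of calling the new MADS solver; the cost is illustrative
# and the call follows the usual Manopt.jl solver interface.
using Manopt, Manifolds, Random

Random.seed!(42)
M = Sphere(2)                        # unit sphere in ℝ³
f(M, p) = (p[1] - 1)^2 + abs(p[2])   # a simple nonsmooth cost
p0 = rand(M)

q = mesh_adaptive_direct_search(M, f, p0)
```

Since MADS is a derivative-free direct search, only the cost `f` is needed; no gradient has to be supplied.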

## [0.5.5] January 4, 2025

### Added

@@ -23,16 +30,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* The geodesic regression example, first because it is not correct, second because it should become part of ManoptExamples.jl once it is correct.

## [0.5.4] - December 11, 2024
## [0.5.4] December 11, 2024

### Added

* An automated detection whether the tutorials are present;
  if not, no Quarto run is done and an automated `--exclude-tutorials` option is added.
* Support for ManifoldDiff 0.4
* icons upfront external links when they link to another package or wikipedia.
* icons upfront external links when they link to another package or Wikipedia.

## [0.5.3] October 18, 2024
## [0.5.3] October 18, 2024

### Added

@@ -42,9 +49,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* stabilize `max_stepsize` to also work when `injectivity_radius` does not exist.
  It does, however, warn new users that activate tutorial mode.
* Start a `ManoptTestSuite` subpackage to store dummy types and common test helpers in.
* Start a `ManoptTestSuite` sub package to store dummy types and common test helpers in.

## [0.5.2] October 5, 2024
## [0.5.2] October 5, 2024

### Added

@@ -55,7 +62,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* fix a few typos in the documentation
* improved the documentation for the initial guess of [`ArmijoLinesearchStepsize`](https://manoptjl.org/stable/plans/stepsize/#Manopt.ArmijoLinesearch).

## [0.5.1] September 4, 2024
## [0.5.1] September 4, 2024

### Changed

@@ -65,17 +72,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* the `proximal_point` method.

## [0.5.0] August 29, 2024
## [0.5.0] August 29, 2024

This breaking update is mainly concerned with improving a unified experience through all solvers
and some usability improvements, such that for example the different gradient update rules are easier to specify.

In general we introduce a few factories, that avoid having to pass the manifold to keyword arguments
In general this introduces a few factories that avoid having to pass the manifold to keyword arguments.

### Added

* A `ManifoldDefaultsFactory` that postpones the creation/allocation of manifold-specific fields in for example direction updates, step sizes and stopping criteria. As a rule of thumb, internal structures, like a solver state should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal `_produce_type(factory, M)`, for example before passing something to the state.
* a `documentation_glossary.jl` file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, tex, and math notation that is often used within the doc-strings.
* a `documentation_glossary.jl` file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, text, and math notation that is often used within the doc-strings.
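
A conceptual sketch of this factory pattern follows (the rule type and its field are hypothetical; only `ManifoldDefaultsFactory` and `_produce_type` are taken from this entry):

```julia
# Hypothetical illustration of the factory pattern described above; the
# concrete rule type and its field are invented for this sketch.
using Manopt

struct ExampleCoefficientRule{T}
    transported_gradient::T   # manifold-specific storage, needs M to allocate
end

# The user-facing constructor returns a factory instead of the rule itself,
# so no manifold is required at construction time.
ExampleCoefficient(; kwargs...) =
    Manopt.ManifoldDefaultsFactory(ExampleCoefficientRule; kwargs...)

# A high-level solver later resolves the factory once the manifold M is known,
# for example (commented out, since `_produce_type` is internal):
# rule = Manopt._produce_type(ExampleCoefficient(), M)
```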

### Changed

@@ -100,12 +107,12 @@ In general we introduce a few factories, that avoid having to pass the manifold
* `HestenesStiefelCoefficient` is now called `HestenesStiefelCoefficientRule`. For the `HestenesStiefelCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* `LiuStoreyCoefficient` is now called `LiuStoreyCoefficientRule`. For the `LiuStoreyCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* `PolakRibiereCoefficient` is now called `PolakRibiereCoefficientRule`. For the `PolakRibiereCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* the `SteepestDirectionUpdateRule` is now called `SteepestDescentCoefficientRule`. The `SteepestDescentCoefficient` is equivalent, but creates the new factory interims wise.
* the `SteepestDirectionUpdateRule` is now called `SteepestDescentCoefficientRule`. The `SteepestDescentCoefficient` is equivalent, but creates the new factory temporarily.
* `AbstractGradientGroupProcessor` is now called `AbstractGradientGroupDirectionRule`
* the `StochasticGradient` is now called `StochasticGradientRule`. The `StochasticGradient` is equivalent, but creates the new factory interims wise, so that the manifold is not longer necessary.
* the `StochasticGradient` is now called `StochasticGradientRule`. The `StochasticGradient` is equivalent, but creates the new factory temporarily, so that the manifold is no longer necessary.
* the `AlternatingGradient` is now called `AlternatingGradientRule`.
The `AlternatingGradient` is equivalent, but creates the new factory interims wise, so that the manifold is not longer necessary.
* `quasi_Newton` had a keyword `scale_initial_operator=` that was inconsistently declared (sometimes bool, sometimes real) and was unused.
The `AlternatingGradient` is equivalent, but creates the new factory temporarily, so that the manifold is no longer necessary.
* `quasi_Newton` had a keyword `scale_initial_operator=` that was inconsistently declared (sometimes boolean, sometimes real) and was unused.
It is now called `initial_scale=1.0` and scales the initial (diagonal, unit) matrix within the approximation of the Hessian additionally to the $\frac{1}{\lVert g_k\rVert}$ scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.
* Unify doc strings and presentation of keyword arguments
* general indexing, for example in a vector, uses `i`
@@ -122,7 +129,7 @@ In general we introduce a few factories, that avoid having to pass the manifold
* the previous `stabilize=true` is now set with `(project!)=embed_project!` in general,
and if the manifold is represented by points in the embedding, like the sphere, `(project!)=project!` suffices
* the new default is `(project!)=copyto!`, so by default no projection/stabilization is performed.
* the positional argument `p` (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument `p=` in all State constructors
* the positional argument `p` (usually the last or the third to last if sub solvers existed) has been moved to a keyword argument `p=` in all State constructors
* in `NelderMeadState` the `population` moved from positional to keyword argument as well,
* the way to initialise sub solvers in the solver states has been unified. In the new variant
* the `sub_problem` is always a positional argument; namely the last one
@@ -138,14 +145,14 @@ In general we introduce a few factories, that avoid having to pass the manifold
* `AdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...)` replaces
the (anyways unused) variant to only provide the objective; both `X` and `p` moved to keyword arguments.
* `AugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...)` was added
* ``AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `ExactPenaltyMethodState(M, sub_problem; evaluation=...)` was added and `ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `DifferenceOfConvexState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `DifferenceOfConvexProximalState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* bumped `Manifolds.jl` to version 0.10; this mainly means that any algorithm working on a product manifold and requiring `ArrayPartition` now has to explicitly do `using RecursiveArrayTools`.

### Fixed

* the `AverageGradientRule` filled its internal vector of gradients wrongly or mixed it up in parallel transport. This is now fixed.
* the `AverageGradientRule` filled its internal vector of gradients wrongly or mixed it up in parallel transport. This is now fixed.

### Removed

@@ -165,31 +172,31 @@ In general we introduce a few factories, that avoid having to pass the manifold
* to update a stopping criterion in a solver state, replace the old `update_stopping_criterion!(state, :Val, v)` that passed down to the stopping criterion by the explicit pass down with `set_parameter!(state, :StoppingCriterion, :Val, v)`


## [0.4.69] August 3, 2024
## [0.4.69] August 3, 2024

### Changed

* Improved performance of Interior Point Newton Method.

## [0.4.68] August 2, 2024
## [0.4.68] August 2, 2024

### Added

* an Interior Point Newton Method, the `interior_point_newton`
* a `conjugate_residual` Algorithm to solve a linear system on a tangent space.
* `ArmijoLinesearch` now allows for additional `additional_decrease_condition` and `additional_increase_condition` keywords to add further conditions on when to accept a decrease or increase of the stepsize.
* add a `DebugFeasibility` to have a debug print about feasibility of points in constrained optimisation employing the new `is_feasible` function
* add a `InteriorPointCentralityCondition` check that can be added for step candidates within the line search of `interior_point_newton`
* add a `InteriorPointCentralityCondition` that can be added for step candidates within the line search of `interior_point_newton`
* Add several new functors
* the `LagrangianCost`, `LagrangianGradient`, `LagrangianHessian`, that based on a constrained objective allow to construct the hessian objective of its Lagrangian
* the `LagrangianCost`, `LagrangianGradient`, `LagrangianHessian`, that based on a constrained objective allow to construct the Hessian objective of its Lagrangian
* the `CondensedKKTVectorField` and its `CondensedKKTVectorFieldJacobian`, that are being used to solve a linear system within `interior_point_newton`
* the `KKTVectorField` as well as its `KKTVectorFieldJacobian` and `KKTVectorFieldAdjointJacobian`
* the `KKTVectorFieldNormSq` and its `KKTVectorFieldNormSqGradient` used within the Armijo line search of `interior_point_newton`
* New stopping criteria
* A `StopWhenRelativeResidualLess` for the `conjugate_residual`
* A `StopWhenKKTResidualLess` for the `interior_point_newton`

## [0.4.67] July 25, 2024
## [0.4.67] July 25, 2024

### Added

@@ -235,7 +242,7 @@ In general we introduce a few factories, that avoid having to pass the manifold
* Remodel `ConstrainedManifoldObjective` to store an `AbstractManifoldObjective`
internally instead of directly `f` and `grad_f`, allowing also Hessian objectives
therein and implementing access to this Hessian
* Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.
* Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since the algorithm initially divides by the gradient norm.

### Deprecated

2 changes: 1 addition & 1 deletion Readme.md
@@ -36,7 +36,7 @@ In Julia you can get started by just typing
using Pkg; Pkg.add("Manopt");
```

and then checkout the [Get started: optimize!](https://manoptjl.org/stable/tutorials/Optimize/) tutorial.
and then checkout the [🏔️ Get started with Manopt.jl](https://manoptjl.org/stable/tutorials/Optimize/) tutorial.

## Related packages

3 changes: 2 additions & 1 deletion docs/make.jl
@@ -35,7 +35,7 @@ tutorials_in_menu = !("--exclude-tutorials" ∈ ARGS)
# (a) setup the tutorials menu – check whether all files exist
tutorials_menu =
"How to..." => [
"🏔️ Get started: optimize." => "tutorials/Optimize.md",
"🏔️ Get started with Manopt.jl." => "tutorials/Optimize.md",
"Speedup using in-place computations" => "tutorials/InplaceGradient.md",
"Use automatic differentiation" => "tutorials/AutomaticDifferentiation.md",
"Define objectives in the embedding" => "tutorials/EmbeddingObjectives.md",
@@ -200,6 +200,7 @@ makedocs(;
"Gradient Descent" => "solvers/gradient_descent.md",
"Interior Point Newton" => "solvers/interior_point_Newton.md",
"Levenberg–Marquardt" => "solvers/LevenbergMarquardt.md",
"MADS" => "solvers/mesh_adaptive_direct_search.md",
"Nelder–Mead" => "solvers/NelderMead.md",
"Particle Swarm Optimization" => "solvers/particle_swarm.md",
"Primal-dual Riemannian semismooth Newton" => "solvers/primal_dual_semismooth_Newton.md",
5 changes: 3 additions & 2 deletions docs/src/about.md
@@ -13,6 +13,7 @@ Thanks to the following contributors to `Manopt.jl`:
* [Hajg Jasa](https://www.ntnu.edu/employees/hajg.jasa) implemented the [convex bundle method](solvers/convex_bundle_method.md) and the [proximal bundle method](solvers/proximal_bundle_method.md) and a default subsolver each of them.
* Even Stephansen Kjemsås contributed to the implementation of the [Frank Wolfe Method](solvers/FrankWolfe.md) solver.
* Mathias Ravn Munkvold contributed most of the implementation of the [Adaptive Regularization with Cubics](solvers/adaptive-regularization-with-cubics.md) solver as well as its [Lanczos](@ref arc-Lanczos) subsolver
* [Sander Engen Oddsen](https://github.com/oddsen) contributed to the implementation of the [LTMADS](solvers/mesh_adaptive_direct_search.md) solver.
* [Tom-Christian Riemer](https://www.tu-chemnitz.de/mathematik/wire/mitarbeiter.php) implemented the [trust regions](solvers/trust_regions.md) and [quasi Newton](solvers/quasi_Newton.md) solvers as well as the [truncated conjugate gradient descent](solvers/truncated_conjugate_gradient_descent.md) subsolver.
* [Markus A. Stokkenes](https://www.linkedin.com/in/markus-a-stokkenes-b41bba17b/) contributed most of the implementation of the [Interior Point Newton Method](solvers/interior_point_Newton.md) as well as its default [Conjugate Residual](solvers/conjugate_residual.md) subsolver
* [Manuel Weiss](https://scoop.iwr.uni-heidelberg.de/author/manuel-weiß/) implemented most of the [conjugate gradient update rules](@ref cg-coeffs)
@@ -28,8 +29,8 @@ to clone/fork the repository or open an issue.
* [ExponentialFamilyProjection.jl](https://github.com/ReactiveBayes/ExponentialFamilyProjection.jl) package uses `Manopt.jl` to project arbitrary functions onto the closest exponential family distributions. The package also integrates with [`RxInfer.jl`](https://github.com/ReactiveBayes/RxInfer.jl) to enable Bayesian inference in a larger set of probabilistic models.
* [Caesar.jl](https://github.com/JuliaRobotics/Caesar.jl) within non-Gaussian factor graph inference algorithms

Is a package missing? [Open an issue](https://github.com/JuliaManifolds/Manopt.jl/issues/new)!
It would be great to collect anything and anyone using Manopt.jl
If you are missing a package, that uses `Manopt.jl`, please [open an issue](https://github.com/JuliaManifolds/Manopt.jl/issues/new).
It would be great to collect anything and anyone using Manopt.jl in this list.

## Further packages

4 changes: 2 additions & 2 deletions docs/src/index.md
@@ -20,7 +20,7 @@ or in other words: find the point ``p`` on the manifold, where ``f`` reaches its
It belongs to the “Manopt family”, which includes [Manopt](https://manopt.org) (Matlab) and [pymanopt.org](https://www.pymanopt.org/) (Python).

If you want to delve right into `Manopt.jl` read the
[🏔️ Get started: optimize.](tutorials/Optimize.md) tutorial.
[🏔️ Get started with Manopt.jl.](tutorials/Optimize.md) tutorial.

`Manopt.jl` makes it easy to use an algorithm for your favourite
manifold as well as a manifold for your favourite algorithm. It already provides
@@ -94,7 +94,7 @@ The notation in the documentation aims to follow the same [notation](https://jul
### Visualization

To visualize and interpret results, `Manopt.jl` aims to provide both easy plot functions as well as [exports](helpers/exports.md). Furthermore a system to get [debug](plans/debug.md) during the iterations of an algorithms as well as [record](plans/record.md) capabilities, for example to record a specified tuple of values per iteration, most prominently [`RecordCost`](@ref) and
[`RecordIterate`](@ref). Take a look at the [🏔️ Get started: optimize.](tutorials/Optimize.md) tutorial on how to easily activate this.
[`RecordIterate`](@ref). Take a look at the [🏔️ Get started with Manopt.jl.](tutorials/Optimize.md) tutorial on how to easily activate this.

## Literature

7 changes: 7 additions & 0 deletions docs/src/references.bib
@@ -325,6 +325,13 @@ @article{DiepeveenLellmann:2021
VOLUME = {14},
YEAR = {2021},
}
@techreport{Dreisigmeyer:2007,
AUTHOR = {Dreisigmeyer, David W.},
INSTITUTION = {Optimization Online},
TITLE = {Direct Search Algorithms over Riemannian Manifolds},
URL = {https://optimization-online.org/?p=9134},
YEAR = {2007}
}
@article{DuranMoelleSbertCremers:2016,
AUTHOR = {Duran, J. and Moeller, M. and Sbert, C. and Cremers, D.},
TITLE = {Collaborative Total Variation: A General Framework for Vectorial TV Models},