diff --git a/docs/.vale.ini b/docs/.vale.ini
index f4db122493..20f8d75fc3 100644
--- a/docs/.vale.ini
+++ b/docs/.vale.ini
@@ -1,9 +1,12 @@
 StylesPath = styles
-MinAlertLevel = error
+MinAlertLevel = warning
 
 Vocab = Manopt
 Packages = Google
 
+[formats]
+qmd = md
+
 [*.md]
 BasedOnStyles = Vale, Google
 TokenIgnores = \
diff --git a/docs/src/notation.md b/docs/src/notation.md
index 910ff44c06..f95e5d2d81 100644
--- a/docs/src/notation.md
+++ b/docs/src/notation.md
@@ -1,8 +1,7 @@
 # Notation
 
-In this package, we follow the notation introduced in [Manifolds.jl Notation](https://juliamanifolds.github.io/Manifolds.jl/latest/misc/notation.html)
-
-with the following additional notation
+In this package, the notation introduced in [Manifolds.jl Notation](https://juliamanifolds.github.io/Manifolds.jl/latest/misc/notation.html) is used,
+with the following additions.
 
 | Symbol | Description | Also used | Comment |
 |:--:|:--------------- |:--:|:-- |
diff --git a/docs/src/plans/index.md b/docs/src/plans/index.md
index 33c5489193..54bf3f4c76 100644
--- a/docs/src/plans/index.md
+++ b/docs/src/plans/index.md
@@ -9,7 +9,7 @@
 information is required about both the optimisation task or “problem” at hand
 This together is called a __plan__ in `Manopt.jl` and it consists of two data structures:
 
 * The [Manopt Problem](@ref ProblemSection) describes all _static_ data of a task, most prominently the manifold and the objective.
-* The [Solver State](@ref SolverStateSection) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.
+* The [Solver State](@ref sec-solver-state) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.
 
-By splitting these two parts, one problem can be define an then be solved using different solvers.
+By splitting these two parts, one problem can be defined and then be solved using different solvers.
diff --git a/docs/src/plans/state.md b/docs/src/plans/state.md
index aa661ea736..2a3ae0dba4 100644
--- a/docs/src/plans/state.md
+++ b/docs/src/plans/state.md
@@ -1,4 +1,4 @@
-# [The solver state](@id SolverStateSection)
+# [The solver state](@id sec-solver-state)
 
 ```@meta
 CurrentModule = Manopt
diff --git a/docs/src/solvers/adaptive-regularization-with-cubics.md b/docs/src/solvers/adaptive-regularization-with-cubics.md
index 473f51b6cb..fbd4168f1d 100644
--- a/docs/src/solvers/adaptive-regularization-with-cubics.md
+++ b/docs/src/solvers/adaptive-regularization-with-cubics.md
@@ -54,6 +54,18 @@
 StopWhenAllLanczosVectorsUsed
 StopWhenFirstOrderProgress
 ```
+
+## [Technical Details](@id sec-arc-technical-details)
+
+The [`adaptive_regularization_with_cubics`](@ref) solver requires the following functions of a manifold to be available:
+
+* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
+* If you do not provide an initial regularization parameter `σ`, a [`manifold_dimension`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.manifold_dimension-Tuple{AbstractManifold}) is required.
+* By default the tangent vector storing the gradient is initialized by calling [`zero_vector`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.zero_vector-Tuple{AbstractManifold,%20Any})`(M, p)`.
+* [`inner`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.inner-Tuple{AbstractManifold,%20Any,%20Any,%20Any})`(M, p, X, Y)` is used within the algorithm steps.
+
+Furthermore, within the Lanczos subsolver, generating a random tangent vector (at `p`) using [`rand!`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#Base.rand-Tuple{AbstractManifold})`(M, X; vector_at=p)` in place of `X` is required.
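+
+For illustration, a minimal call might look like the following sketch. The `Sphere` manifold, the Rayleigh quotient cost, and its derivatives are assumptions made for this example only, and keyword defaults may differ:
+
+```julia
+using Manopt, Manifolds, LinearAlgebra
+
+M = Sphere(2)
+A = Diagonal([2.0, 1.0, 0.0])
+f(M, p) = p' * A * p                  # Rayleigh quotient on the sphere
+grad_f(M, p) = project(M, p, 2A * p)  # project the Euclidean gradient onto the tangent space
+Hess_f(M, p, X) = project(M, p, 2A * X) - 2(p' * A * p) * X
+
+p0 = [1.0, 1.0, 1.0] / sqrt(3)
+p_star = adaptive_regularization_with_cubics(
+    M, f, grad_f, Hess_f, p0;
+    σ=10.0,                                   # initial regularization parameter
+    retraction_method=ExponentialRetraction(),
+)
+```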
 ## Literature
 
 ```@bibliography
diff --git a/docs/src/solvers/gradient_descent.md b/docs/src/solvers/gradient_descent.md
index bd44d851c2..e5ccf90cb9 100644
--- a/docs/src/solvers/gradient_descent.md
+++ b/docs/src/solvers/gradient_descent.md
@@ -43,9 +43,9 @@
 RecordGradientNorm
 RecordStepsize
 ```
 
-## [Technical Details](@id GradientDescent-Technical-Details)
+## [Technical Details](@id sec-gradient-descent-technical-details)
 
-The [`gradient_descent`](@ref) solver requires the following functions of your manifold to be available
+The [`gradient_descent`](@ref) solver requires the following functions of a manifold to be available:
 
-* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction, for this case it does not have to be specified.
+* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction; in this case it does not have to be specified.
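+
+For example, a minimal run could look like the following sketch; the `Sphere` manifold, cost, and gradient are assumptions made for illustration, not part of the solver interface:
+
+```julia
+using Manopt, Manifolds
+
+M = Sphere(2)
+q = [1.0, 0.0, 0.0]                  # fixed target point on the sphere
+f(M, p) = 0.5 * distance(M, p, q)^2  # half the squared Riemannian distance to q
+grad_f(M, p) = -log(M, p, q)         # its Riemannian gradient
+
+p0 = [0.0, 1.0, 0.0]
+p_star = gradient_descent(M, f, grad_f, p0;
+    retraction_method=ExponentialRetraction(),
+)
+# p_star is numerically close to q
+```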
diff --git a/docs/src/solvers/index.md b/docs/src/solvers/index.md
index 820fe15ad2..e6e3866ff2 100644
--- a/docs/src/solvers/index.md
+++ b/docs/src/solvers/index.md
@@ -31,7 +31,7 @@
 The following algorithms are currently available
 [Primal-dual Riemannian semismooth Newton Algorithm](@ref PDRSSNSolver) | [`primal_dual_semismooth_Newton`](@ref), [`PrimalDualSemismoothNewtonState`](@ref) (using [`TwoManifoldProblem`](@ref)) | ``f=F+G(Λ\cdot)``, ``\operatorname{prox}_{σ F}`` & diff., ``\operatorname{prox}_{τ G^*}`` & diff., ``Λ``
 [Quasi-Newton Method](@ref quasiNewton) | [`quasi_Newton`](@ref), [`QuasiNewtonState`](@ref) | ``f``, ``\operatorname{grad} f`` |
 [Steihaug-Toint Truncated Conjugate-Gradient Method](@ref tCG) | [`truncated_conjugate_gradient_descent`](@ref), [`TruncatedConjugateGradientState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
-[Subgradient Method](@ref SubgradientSolver) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
+[Subgradient Method](@ref sec-subgradient-method) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
 [Stochastic Gradient Descent](@ref StochasticGradientDescentSolver) | [`stochastic_gradient_descent`](@ref), [`StochasticGradientDescentState`](@ref) | ``f = \sum_i f_i``, ``\operatorname{grad} f_i`` |
 [The Riemannian Trust-Regions Solver](@ref trust_regions) | [`trust_regions`](@ref), [`TrustRegionsState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
diff --git a/docs/src/solvers/subgradient.md b/docs/src/solvers/subgradient.md
index ca2215285a..9a6b6966f8 100644
--- a/docs/src/solvers/subgradient.md
+++ b/docs/src/solvers/subgradient.md
@@ -1,4 +1,4 @@
-# [Subgradient Method](@id SubgradientSolver)
+# [Subgradient method](@id sec-subgradient-method)
 
 ```@docs
 subgradient_method
diff --git a/docs/styles/Vocab/Manopt/accept.txt b/docs/styles/Vocab/Manopt/accept.txt
index daa3167ffb..3b417451cc 100644
--- a/docs/styles/Vocab/Manopt/accept.txt
+++ b/docs/styles/Vocab/Manopt/accept.txt
@@ -1,7 +1,6 @@
 Absil
 Adagrad
-Adjoint
-adjoint
+[Aa]djoint
 Armijo
 Bergmann
 Chambolle
@@ -26,14 +25,11 @@
 Lanczos
 LineSearches.jl
 Manifolds.jl
 ManifoldsBase.jl
-manopt
-manopt.org
-Manopt
-Manopt.jl
+[Mm]anopt(?:\.org|\.jl)?
 Munkvold
 Mead
 Nelder
-parametrising
+[Pp]arametrising
 Parametrising
 Pock
 preconditioner
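As a quick sanity check of the consolidated vocabulary patterns above: Vale reads `accept.txt` entries as regular expressions, and these simple patterns behave the same under Julia's PCRE engine, so the following sketch verifies that every spelling removed by the hunk is still accepted:

```julia
# Each spelling that the hunk removes should match the pattern replacing it.
for word in ("Adjoint", "adjoint")
    @assert occursin(r"^[Aa]djoint$", word)
end
for word in ("Manopt", "manopt", "Manopt.jl", "manopt.org")
    @assert occursin(r"^[Mm]anopt(?:\.org|\.jl)?$", word)
end
for word in ("parametrising", "Parametrising")
    @assert occursin(r"^[Pp]arametrising$", word)
end
```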