From f2dc3a431729984cd6e0a027c354a983e9bfd1af Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Tue, 5 Sep 2023 17:41:41 +0000 Subject: [PATCH] build based on ccd4ba9 --- dev/func_index/index.html | 2 +- dev/index.html | 2 +- dev/internals/predictive_control/index.html | 12 +- dev/internals/sim_model/index.html | 2 +- dev/internals/state_estim/index.html | 20 +- dev/manual/installation/index.html | 2 +- dev/manual/linmpc/index.html | 290 +++++----- dev/manual/nonlinmpc/index.html | 570 ++++++++++---------- dev/public/generic_func/index.html | 16 +- dev/public/predictive_control/index.html | 20 +- dev/public/sim_model/index.html | 12 +- dev/public/state_estim/index.html | 16 +- dev/search/index.html | 2 +- dev/search_index.js | 2 +- 14 files changed, 484 insertions(+), 484 deletions(-) diff --git a/dev/func_index/index.html b/dev/func_index/index.html index f37f56f4c..7885cd2eb 100644 --- a/dev/func_index/index.html +++ b/dev/func_index/index.html @@ -1,2 +1,2 @@ -Index · ModelPredictiveControl.jl

Index

+Index · ModelPredictiveControl.jl

Index

diff --git a/dev/index.html b/dev/index.html index 33885a03c..4b27231a4 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · ModelPredictiveControl.jl

ModelPredictiveControl.jl

A model predictive control package for Julia.

The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for solving the optimization problems.

The objective is to provide a simple and clear framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface makes it easy to test different solvers if the performance of the default settings is not satisfactory.

The documentation is divided into two parts:

  • Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.
  • Functions — Documentation of methods and types exported by the package. The "Internals" section provides implementation details of functions that are not exported.

Manual

Functions: Public

Functions: Internals

+Home · ModelPredictiveControl.jl

ModelPredictiveControl.jl

A model predictive control package for Julia.

The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for solving the optimization problems.

The objective is to provide a simple and clear framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface makes it easy to test different solvers if the performance of the default settings is not satisfactory.

The documentation is divided into two parts:

  • Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.
  • Functions — Documentation of methods and types exported by the package. The "Internals" section provides implementation details of functions that are not exported.

Manual

Functions: Public

Functions: Internals

diff --git a/dev/internals/predictive_control/index.html b/dev/internals/predictive_control/index.html index b76adc881..311f7c86d 100644 --- a/dev/internals/predictive_control/index.html +++ b/dev/internals/predictive_control/index.html @@ -38,32 +38,32 @@ \vdots \\ \mathbf{W}_{H_p-1} \end{bmatrix} -\end{aligned}\]

Note

Stochastic predictions $\mathbf{Ŷ_s}$ are calculated separately (see init_stochpred) and added to the $\mathbf{F}$ matrix to support internal model structure and reduce NonLinMPC computational costs. That is also why the prediction matrices are built on $\mathbf{A, B_u, C, B_d, D_d}$ instead of the augmented model $\mathbf{Â, B̂_u, Ĉ, B̂_d, D̂_d}$.

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.init_ΔUtoUFunction
init_ΔUtoU(nu, Hp, Hc, C, c_Umin, c_Umax) -> S_Hp, T_Hp, S_Hc, T_Hc

Init manipulated input increments to inputs conversion matrices.

The conversion from the input increments $\mathbf{ΔU}$ to manipulated inputs over $H_p$ and $H_c$ is calculated by:

\[\begin{aligned} +\end{aligned}\]

Note

Stochastic predictions $\mathbf{Ŷ_s}$ are calculated separately (see init_stochpred) and added to the $\mathbf{F}$ matrix to support internal model structure and reduce NonLinMPC computational costs. That is also why the prediction matrices are built on $\mathbf{A, B_u, C, B_d, D_d}$ instead of the augmented model $\mathbf{Â, B̂_u, Ĉ, B̂_d, D̂_d}$.

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.init_ΔUtoUFunction
init_ΔUtoU(nu, Hp, Hc, C, c_Umin, c_Umax) -> S_Hp, T_Hp, S_Hc, T_Hc

Init manipulated input increments to inputs conversion matrices.

The conversion from the input increments $\mathbf{ΔU}$ to manipulated inputs over $H_p$ and $H_c$ is calculated by:

\[\begin{aligned} \mathbf{U} = \mathbf{U}_{H_p} &= \mathbf{S}_{H_p} \mathbf{ΔU} + \mathbf{T}_{H_p} \mathbf{u}(k-1) \\ \mathbf{U}_{H_c} &= \mathbf{S}_{H_c} \mathbf{ΔU} + \mathbf{T}_{H_c} \mathbf{u}(k-1) -\end{aligned}\]

source
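Below is a minimal sketch of how such conversion matrices can be formed, for illustration only (it is not the package implementation and it ignores the slack-related arguments C, c_Umin and c_Umax):

using LinearAlgebra
function ΔUtoU_sketch(nu, Hp, Hc)
    I_nu = Matrix(1.0I, nu, nu)
    # u(k+j) = u(k-1) + sum of Δu(k+i) for i up to min(j, Hc-1):
    # a lower block-triangular pattern of identity blocks
    S_Hp = kron([float(j ≥ i) for j in 1:Hp, i in 1:Hc], I_nu)
    T_Hp = repeat(I_nu, Hp)                       # Hp stacked identity blocks
    S_Hc, T_Hc = S_Hp[1:Hc*nu, :], T_Hp[1:Hc*nu, :]
    return S_Hp, T_Hp, S_Hc, T_Hc
end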
ModelPredictiveControl.init_quadprogFunction
init_quadprog(model::LinModel, Ẽ, S_Hp, M_Hp, N_Hc, L_Hp) -> P̃, q̃, p

Init the quadratic programming optimization matrix P̃, vector q̃ and scalar p.

These terms appear in the quadratic general form:

\[ J = \min_{\mathbf{ΔŨ}} \frac{1}{2}\mathbf{(ΔŨ)'P̃(ΔŨ)} + \mathbf{q̃'(ΔŨ)} + p \]

$\mathbf{P̃}$ is constant if the model and weights are linear and time invariant (LTI). The vector $\mathbf{q̃}$ and scalar $p$ need to be recalculated at each control period $k$ (see the initpred! method). $p$ does not affect the position of the minimum; it is thus not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.

source

Return empty matrices if model is not a LinModel.

source
ModelPredictiveControl.init_stochpredFunction
init_stochpred(estim::StateEstimator, Hp) -> Ks, Ps

Init the stochastic prediction matrix Ks from estim estimator for predictive control.

$\mathbf{K_s}$ is the prediction matrix of the stochastic model (composed exclusively of integrators):

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k)\]

The stochastic predictions $\mathbf{Ŷ_s}$ are the integrator outputs from $k+1$ to $k+H_p$. $\mathbf{x̂_s}(k)$ is extracted from the current estimates $\mathbf{x̂}_{k-1}(k)$ with getestimates!. The method also returns an empty $\mathbf{P_s}$ matrix, since it is only used by InternalModel estimators.

source
init_stochpred(estim::InternalModel, Hp) -> Ks, Ps

Init the stochastic prediction matrices for InternalModel.

Ks and Ps matrices are defined as:

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]

The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].

source
ModelPredictiveControl.init_linconstraintFunction
init_linconstraint(model::LinModel, 
+\end{aligned}\]

source
ModelPredictiveControl.init_quadprogFunction
init_quadprog(model::LinModel, Ẽ, S_Hp, M_Hp, N_Hc, L_Hp) -> P̃, q̃, p

Init the quadratic programming optimization matrix P̃, vector q̃ and scalar p.

These terms appear in the quadratic general form:

\[ J = \min_{\mathbf{ΔŨ}} \frac{1}{2}\mathbf{(ΔŨ)'P̃(ΔŨ)} + \mathbf{q̃'(ΔŨ)} + p \]

$\mathbf{P̃}$ is constant if the model and weights are linear and time invariant (LTI). The vector $\mathbf{q̃}$ and scalar $p$ need to be recalculated at each control period $k$ (see the initpred! method). $p$ does not affect the position of the minimum; it is thus not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.

source

Return empty matrices if model is not a LinModel.

source
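As an illustration of the quadratic form above (not the package code), the objective and its unconstrained minimizer can be written as:

using LinearAlgebra
J(ΔŨ, P̃, q̃, p) = 0.5*dot(ΔŨ, P̃*ΔŨ) + dot(q̃, ΔŨ) + p
ΔŨ_unc(P̃, q̃) = -(P̃ \ q̃)    # stationary point of the unconstrained problem, from P̃*ΔŨ + q̃ = 0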
ModelPredictiveControl.init_stochpredFunction
init_stochpred(estim::StateEstimator, Hp) -> Ks, Ps

Init the stochastic prediction matrix Ks from estim estimator for predictive control.

$\mathbf{K_s}$ is the prediction matrix of the stochastic model (composed exclusively of integrators):

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k)\]

The stochastic predictions $\mathbf{Ŷ_s}$ are the integrator outputs from $k+1$ to $k+H_p$. $\mathbf{x̂_s}(k)$ is extracted from the current estimates $\mathbf{x̂}_{k-1}(k)$ with getestimates!. The method also returns an empty $\mathbf{P_s}$ matrix, since it is only used by InternalModel estimators.

source
init_stochpred(estim::InternalModel, Hp) -> Ks, Ps

Init the stochastic prediction matrices for InternalModel.

Ks and Ps matrices are defined as:

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]

The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].

source
ModelPredictiveControl.init_linconstraintFunction
init_linconstraint(model::LinModel, 
     A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ŷmin, A_Ŷmax,
     i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ŷmin, i_Ŷmax
-) -> A, i_b, b

Init A, b and i_b for the linear inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers.

source

Init values without predicted output constraints if model is not a LinModel.

source

Constraint Relaxation

ModelPredictiveControl.relaxUFunction
relaxU(C, c_Umin, c_Umax, S_Hp, S_Hc) -> A_Umin, A_Umax, S̃_Hp

Augment manipulated inputs constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}_{H_p}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +) -> A, i_b, b

Init A, b and i_b for the linear inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers.

source

Init values without predicted output constraints if model is not a LinModel.

source

Constraint Relaxation

ModelPredictiveControl.relaxUFunction
relaxU(C, c_Umin, c_Umax, S_Hp, S_Hc) -> A_Umin, A_Umax, S̃_Hp

Augment manipulated inputs constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}_{H_p}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{U_{min}} + \mathbf{T}_{H_c} \mathbf{u}(k-1) \\ + \mathbf{U_{max}} - \mathbf{T}_{H_c} \mathbf{u}(k-1) -\end{bmatrix}\]

source
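A hedged sketch of the inequality matrices above (names follow the docstring; this is not the package code, and c_Umin, c_Umax are assumed to be the softness vectors already repeated over the control horizon):

function relaxU_sketch(c_Umin, c_Umax, S_Hc)
    A_Umin = [-S_Hc -c_Umin]   # -S_Hc*ΔU - c_Umin*ϵ ≤ -U_min + T_Hc*u(k-1)
    A_Umax = [ S_Hc -c_Umax]   #  S_Hc*ΔU - c_Umax*ϵ ≤  U_max - T_Hc*u(k-1)
    return A_Umin, A_Umax
end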
ModelPredictiveControl.relaxΔUFunction
relaxΔU(C, c_ΔUmin, c_ΔUmax, ΔUmin, ΔUmax, N_Hc) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, Ñ_Hc

Augment input increments constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented input increment weights $\mathbf{Ñ}_{H_c}$ (that incorporate $C$). It also returns the augmented constraints $\mathbf{ΔŨ_{min}}$ and $\mathbf{ΔŨ_{max}}$ and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +\end{bmatrix}\]

source
ModelPredictiveControl.relaxΔUFunction
relaxΔU(C, c_ΔUmin, c_ΔUmax, ΔUmin, ΔUmax, N_Hc) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, Ñ_Hc

Augment input increments constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented input increment weights $\mathbf{Ñ}_{H_c}$ (that incorporate $C$). It also returns the augmented constraints $\mathbf{ΔŨ_{min}}$ and $\mathbf{ΔŨ_{max}}$ and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ \mathbf{A_{ΔŨ_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} -\end{bmatrix}\]

source
ModelPredictiveControl.relaxŶFunction
relaxŶ(::LinModel, C, c_Ŷmin, c_Ŷmax, E) -> A_Ŷmin, A_Ŷmax, Ẽ

Augment linear output prediction constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +\end{bmatrix}\]

source
ModelPredictiveControl.relaxŶFunction
relaxŶ(::LinModel, C, c_Ŷmin, c_Ŷmax, E) -> A_Ŷmin, A_Ŷmax, Ẽ

Augment linear output prediction constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{Ŷ_{min}}} \\ \mathbf{A_{Ŷ_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{Ŷ_{min}} + \mathbf{F} \\ + \mathbf{Ŷ_{max}} - \mathbf{F} -\end{bmatrix}\]

source

Return empty matrices if model is not a LinModel

source

Get Estimates

ModelPredictiveControl.getestimates!Function
getestimates!(mpc::PredictiveController, estim::StateEstimator) -> x̂d, x̂s, ŷ

Get estimator output and split into the deterministic x̂d and stochastic x̂s states.

source
getestimates!(mpc::PredictiveController, estim::InternalModel) -> x̂d, x̂s, ŷ

Get the internal model deterministic estim.x̂d and stochastic estim.x̂s states.

source

Predictions

ModelPredictiveControl.initpred!Function
initpred!(mpc, model::LinModel, d, D̂, R̂y)

Init the linear model prediction matrices F, q̃ and p.

See init_deterpred and init_quadprog for the definition of the matrices.

source
initpred!(mpc::PredictiveController, model::SimModel, d, D̂, R̂y)

Init d0 and D̂0 matrices when model is not a LinModel.

d0 and D̂0 are the measured disturbances and their predictions without the operating points $\mathbf{d_{op}}$.

source

Constraints

ModelPredictiveControl.linconstraint!Function
linconstraint!(mpc::PredictiveController, model::LinModel)

Set b vector for the linear model inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

source

Set b excluding predicted output constraints when model is not a LinModel.

source
+\end{bmatrix}\]

source

Return empty matrices if model is not a LinModel

source

Get Estimates

ModelPredictiveControl.getestimates!Function
getestimates!(mpc::PredictiveController, estim::StateEstimator) -> x̂d, x̂s, ŷ

Get estimator output and split into the deterministic x̂d and stochastic x̂s states.

source
getestimates!(mpc::PredictiveController, estim::InternalModel) -> x̂d, x̂s, ŷ

Get the internal model deterministic estim.x̂d and stochastic estim.x̂s states.

source

Predictions

ModelPredictiveControl.initpred!Function
initpred!(mpc, model::LinModel, d, D̂, R̂y)

Init the linear model prediction matrices F, q̃ and p.

See init_deterpred and init_quadprog for the definition of the matrices.

source
initpred!(mpc::PredictiveController, model::SimModel, d, D̂, R̂y)

Init d0 and D̂0 matrices when model is not a LinModel.

d0 and D̂0 are the measured disturbances and their predictions without the operating points $\mathbf{d_{op}}$.

source

Constraints

ModelPredictiveControl.linconstraint!Function
linconstraint!(mpc::PredictiveController, model::LinModel)

Set b vector for the linear model inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

source

Set b excluding predicted output constraints when model is not a LinModel.

source
diff --git a/dev/internals/sim_model/index.html b/dev/internals/sim_model/index.html index b799c5f4d..e1c1234f4 100644 --- a/dev/internals/sim_model/index.html +++ b/dev/internals/sim_model/index.html @@ -1,2 +1,2 @@ -Plant Models · ModelPredictiveControl.jl

Functions: SimModel Internals

ModelPredictiveControl.steadystateFunction
steadystate(model::LinModel, u, d=Float64[])

Evaluate the steady-state vector when model is a LinModel.

Following setop! notation, the method evaluates the equilibrium $\mathbf{x}(∞)$ from:

\[ \mathbf{x}(∞) = \mathbf{(I - A)^{-1}(B_u u_0 + B_d d_0)}\]

with the manipulated inputs held constant at $\mathbf{u_0}$ and the measured disturbances at $\mathbf{d_0}$. The Moore-Penrose pseudo-inverse computes $\mathbf{(I - A)^{-1}}$ to support integrating models (their integrator states will be 0).

source
+Plant Models · ModelPredictiveControl.jl

Functions: SimModel Internals

ModelPredictiveControl.steadystateFunction
steadystate(model::LinModel, u, d=Float64[])

Evaluate the steady-state vector when model is a LinModel.

Following setop! notation, the method evaluates the equilibrium $\mathbf{x}(∞)$ from:

\[ \mathbf{x}(∞) = \mathbf{(I - A)^{-1}(B_u u_0 + B_d d_0)}\]

with the manipulated inputs held constant at $\mathbf{u_0}$ and the measured disturbances at $\mathbf{d_0}$. The Moore-Penrose pseudo-inverse computes $\mathbf{(I - A)^{-1}}$ to support integrating models (their integrator states will be 0).

source
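A one-line sketch of the equation above (illustrative only, not the package code):

using LinearAlgebra
steadystate_sketch(A, Bu, Bd, u0, d0) = pinv(I - A) * (Bu*u0 + Bd*d0)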
diff --git a/dev/internals/state_estim/index.html b/dev/internals/state_estim/index.html index deb2ea7ca..57ee299f1 100644 --- a/dev/internals/state_estim/index.html +++ b/dev/internals/state_estim/index.html @@ -2,24 +2,24 @@ State Estimators · ModelPredictiveControl.jl

Functions: StateEstimator Internals

Estimator Initialization

ModelPredictiveControl.init_estimstochFunction
init_estimstoch(i_ym, nint_ym::Vector{Int}) -> Asm, Csm, nint_ym

Calc stochastic model matrices from output integrator specifications for state estimation.

For closed-loop state estimators. nint_ym is a vector specifying how many integrators should be added for each measured output $\mathbf{y^m}$. The argument generates the Asm and Csm matrices:

\[\begin{aligned} \mathbf{x_s}(k+1) &= \mathbf{A_s^m x_s}(k) + \mathbf{B_s^m e}(k) \\ \mathbf{y_s^m}(k) &= \mathbf{C_s^m x_s}(k) -\end{aligned}\]

where $\mathbf{e}(k)$ is a conceptual and unknown zero-mean white noise. $\mathbf{B_s^m}$ is not used by closed-loop state estimators and is thus ignored.

source
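For illustration, with one integrator per measured output (e.g. nint_ym = [1, 1]), the stochastic model reduces to a random walk on each output and the matrices are simply identities (hypothetical dimensions, not the package code):

using LinearAlgebra
nym = 2                        # assumed number of measured outputs
Asm = Matrix(1.0I, nym, nym)   # x_s(k+1) = x_s(k) + e(k): one discrete integrator per output
Csm = Matrix(1.0I, nym, nym)   # y_s^m(k) = x_s(k)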
ModelPredictiveControl.augment_modelFunction
augment_model(model::LinModel, As, Cs; verify_obsv=true) -> Â, B̂u, Ĉ, B̂d, D̂d

Augment LinModel state-space matrices with the stochastic ones As and Cs.

If $\mathbf{x}$ are the model.x states and $\mathbf{x_s}$ the states defined at init_estimstoch, we define an augmented state vector $\mathbf{x̂} = [ \begin{smallmatrix} \mathbf{x} \\ \mathbf{x_s} \end{smallmatrix} ]$. The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d:

\[\begin{aligned} +\end{aligned}\]

where $\mathbf{e}(k)$ is a conceptual and unknown zero-mean white noise. $\mathbf{B_s^m}$ is not used by closed-loop state estimators and is thus ignored.

source
ModelPredictiveControl.augment_modelFunction
augment_model(model::LinModel, As, Cs; verify_obsv=true) -> Â, B̂u, Ĉ, B̂d, D̂d

Augment LinModel state-space matrices with the stochastic ones As and Cs.

If $\mathbf{x}$ are the model.x states and $\mathbf{x_s}$ the states defined at init_estimstoch, we define an augmented state vector $\mathbf{x̂} = [ \begin{smallmatrix} \mathbf{x} \\ \mathbf{x_s} \end{smallmatrix} ]$. The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d:

\[\begin{aligned} \mathbf{x̂}(k+1) &= \mathbf{Â x̂}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \\ \mathbf{ŷ}(k) &= \mathbf{Ĉ x̂}(k) + \mathbf{D̂_d d}(k) -\end{aligned}\]

An error is thrown if the augmented model is not observable and verify_obsv == true.

source

No need to augment the model if model is not a LinModel.

source
ModelPredictiveControl.init_ukfFunction
init_ukf(nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ

Compute the UnscentedKalmanFilter constants from $α, β$ and $κ$.

With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied on standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:

\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]

The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:

\[\begin{aligned} +\end{aligned}\]

An error is thrown if the augmented model is not observable and verify_obsv == true.

source

No need to augment the model if model is not a LinModel.

source
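A block-matrix sketch of this augmentation (illustrative only; it assumes Cs maps the stochastic states to all the outputs and omits the disturbance matrices):

function augment_sketch(A, Bu, C, As, Cs)
    nx, nxs = size(A, 1), size(As, 1)
    Â  = [A zeros(nx, nxs); zeros(nxs, nx) As]
    B̂u = [Bu; zeros(nxs, size(Bu, 2))]
    Ĉ  = [C Cs]
    return Â, B̂u, Ĉ
end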
ModelPredictiveControl.init_ukfFunction
init_ukf(nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ

Compute the UnscentedKalmanFilter constants from $α, β$ and $κ$.

With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied on standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:

\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]

The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:

\[\begin{aligned} \mathbf{m̂} &= \begin{bmatrix} 1 - \tfrac{n_\mathbf{x̂}}{γ^2} & \tfrac{1}{2γ^2} & \tfrac{1}{2γ^2} & \cdots & \tfrac{1}{2γ^2} \end{bmatrix}' \\ \mathbf{Ŝ} &= \mathrm{diag}\big( 2 - α^2 + β - \tfrac{n_\mathbf{x̂}}{γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \cdots \:,\; \tfrac{1}{2γ^2} \big) -\end{aligned}\]

See update_estimate!(::UnscentedKalmanFilter) for other details.

source
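The constants above translate almost directly into code; the following is an illustrative transcription (not the package implementation):

using LinearAlgebra
function init_ukf_sketch(nx̂, α, β, κ)
    nσ = 2nx̂ + 1
    γ  = α*sqrt(nx̂ + κ)
    m̂  = [1 - nx̂/γ^2; fill(1/(2γ^2), nσ - 1)]
    Ŝ  = Diagonal([2 - α^2 + β - nx̂/γ^2; fill(1/(2γ^2), nσ - 1)])
    return nσ, γ, m̂, Ŝ
end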
ModelPredictiveControl.init_internalmodelFunction
init_internalmodel(As, Bs, Cs, Ds) -> Âs, B̂s

Calc stochastic model update matrices Âs and B̂s for InternalModel estimator.

As, Bs, Cs and Ds are the stochastic model matrices:

\[\begin{aligned} \mathbf{x_s}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\ \mathbf{y_s}(k) &= \mathbf{C_s x_s}(k) + \mathbf{D_s e}(k) \end{aligned}\]

where $\mathbf{e}(k)$ is a conceptual and unknown zero-mean white noise. Its optimal estimation is $\mathbf{ê=0}$, the expected value. Thus, the Âs and B̂s matrices that optimally update the stochastic estimate $\mathbf{x̂_s}$ are:

\[\begin{aligned} \mathbf{x̂_s}(k+1) &= \mathbf{(A_s - B_s D_s^{-1} C_s) x̂_s}(k) + \mathbf{(B_s D_s^{-1}) ŷ_s}(k) \\ &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) -\end{aligned}\]

with the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].

source

Augmented Model

ModelPredictiveControl.f̂Function
f̂(estim::StateEstimator, x̂, u, d)

State function $\mathbf{f̂}$ of the augmented model.

By introducing an augmented state vector $\mathbf{x̂}$ like in augment_model, the function returns the next state of the augmented model, defined as:

\[\begin{aligned} +\end{aligned}\]

with the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].

source
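A one-line sketch of these update matrices (illustrative only):

init_internalmodel_sketch(As, Bs, Cs, Ds) = (As - Bs*(Ds\Cs), Bs/Ds)   # (Âs, B̂s)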

Augmented Model

ModelPredictiveControl.f̂Function
f̂(estim::StateEstimator, x̂, u, d)

State function $\mathbf{f̂}$ of the augmented model.

By introducing an augmented state vector $\mathbf{x̂}$ like in augment_model, the function returns the next state of the augmented model, defined as:

\[\begin{aligned} \mathbf{x̂}(k+1) &= \mathbf{f̂}\Big(\mathbf{x̂}(k), \mathbf{u}(k), \mathbf{d}(k)\Big) \\ \mathbf{ŷ}(k) &= \mathbf{ĥ}\Big(\mathbf{x̂}(k), \mathbf{d}(k)\Big) -\end{aligned}\]

source
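For a LinModel, the augmented functions above reduce to the linear relations of augment_model; an illustrative sketch (not the package code):

f̂_lin(Â, B̂u, B̂d, x̂, u, d) = Â*x̂ + B̂u*u + B̂d*d
ĥ_lin(Ĉ, D̂d, x̂, d) = Ĉ*x̂ + D̂d*d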

Remove Operating Points

ModelPredictiveControl.remove_op!Function
remove_op!(estim::StateEstimator, u, d, ym) -> u0, d0, ym0

Remove operating points on inputs u, measured outputs ym and disturbances d.

Also store current inputs without operating points u0 in estim.lastu0. This field is used for PredictiveController computations.

source

Update Estimate

Info

All these methods assume that the operating points are already removed from the u, ym and d arguments. Strictly speaking, the arguments should be called u0, ym0 and d0, following setop! notation. The 0 is dropped to simplify the notation.

ModelPredictiveControl.update_estimate!Function
update_estimate!(estim::SteadyKalmanFilter, u, ym, d)

Update estim.x̂ estimate with current inputs u, measured outputs ym and dist. d.

The SteadyKalmanFilter updates it with the precomputed Kalman gain $\mathbf{K}$:

\[\mathbf{x̂}_{k}(k+1) = \mathbf{Â x̂}_{k-1}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) - + \mathbf{K}[\mathbf{y^m}(k) - \mathbf{Ĉ^m x̂}_{k-1}(k) - \mathbf{D̂_d^m d}(k)]\]

source
update_estimate!(estim::KalmanFilter, u, ym, d)

Update KalmanFilter state estim.x̂ and estimation error covariance estim.P̂.

It implements the time-varying Kalman Filter in its predictor (observer) form :

\[\begin{aligned} +\end{aligned}\]

source

Remove Operating Points

ModelPredictiveControl.remove_op!Function
remove_op!(estim::StateEstimator, u, d, ym) -> u0, d0, ym0

Remove operating points on inputs u, measured outputs ym and disturbances d.

Also store current inputs without operating points u0 in estim.lastu0. This field is used for PredictiveController computations.

source

Update Estimate

Info

All these methods assume that the operating points are already removed from the u, ym and d arguments. Strictly speaking, the arguments should be called u0, ym0 and d0, following setop! notation. The 0 is dropped to simplify the notation.

ModelPredictiveControl.update_estimate!Function
update_estimate!(estim::SteadyKalmanFilter, u, ym, d)

Update estim.x̂ estimate with current inputs u, measured outputs ym and dist. d.

The SteadyKalmanFilter updates it with the precomputed Kalman gain $\mathbf{K}$:

\[\mathbf{x̂}_{k}(k+1) = \mathbf{Â x̂}_{k-1}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) + + \mathbf{K}[\mathbf{y^m}(k) - \mathbf{Ĉ^m x̂}_{k-1}(k) - \mathbf{D̂_d^m d}(k)]\]

source
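An illustrative transcription of this predictor update (not the package code):

skf_update_sketch(x̂, u, ym, d, Â, B̂u, B̂d, Ĉm, D̂dm, K) =
    Â*x̂ + B̂u*u + B̂d*d + K*(ym - Ĉm*x̂ - D̂dm*d)   # returns x̂_k(k+1)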
update_estimate!(estim::KalmanFilter, u, ym, d)

Update KalmanFilter state estim.x̂ and estimation error covariance estim.P̂.

It implements the time-varying Kalman Filter in its predictor (observer) form :

\[\begin{aligned} \mathbf{M}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĉ^m}' [\mathbf{Ĉ^m P̂}_{k-1}(k)\mathbf{Ĉ^m}' + \mathbf{R̂}]^{-1} \\ \mathbf{K}(k) &= \mathbf{Â M(k)} \\ @@ -28,7 +28,7 @@ + \mathbf{K}(k)[\mathbf{y^m}(k) - \mathbf{ŷ^m}(k)] \\ \mathbf{P̂}_{k}(k+1) &= \mathbf{Â}[\mathbf{P̂}_{k-1}(k) - \mathbf{M}(k)\mathbf{Ĉ^m P̂}_{k-1}(k)]\mathbf{Â}' + \mathbf{Q̂} -\end{aligned}\]

based on the process model described in SteadyKalmanFilter. The notation $\mathbf{x̂}_{k-1}(k)$ refers to the state for the current time $k$ estimated at the last control period $k-1$. See [2] for details.

source
update_estimate!(estim::UnscentedKalmanFilter, u, ym, d)

Update UnscentedKalmanFilter state estim.x̂ and covariance estimate estim.P̂.

It implements the unscented Kalman Filter in its predictor (observer) form, based on the generalized unscented transform[3]. See init_ukf for the definition of the constants $\mathbf{m̂, Ŝ}$ and $γ$.

Denoting $\mathbf{x̂}_{k-1}(k)$ as the state for the current time $k$ estimated at the last period $k-1$, $\mathbf{0}$, a null vector, $n_σ = 2 n_\mathbf{x̂} + 1$, the number of sigma points, and $\mathbf{X̂}_{k-1}^j(k)$, the vector at the $j$th column of $\mathbf{X̂}_{k-1}(k)$, the estimator updates the states with:

\[\begin{aligned} +\end{aligned}\]

based on the process model described in SteadyKalmanFilter. The notation $\mathbf{x̂}_{k-1}(k)$ refers to the state for the current time $k$ estimated at the last control period $k-1$. See [2] for details.

source
update_estimate!(estim::UnscentedKalmanFilter, u, ym, d)

Update UnscentedKalmanFilter state estim.x̂ and covariance estimate estim.P̂.

It implements the unscented Kalman Filter in its predictor (observer) form, based on the generalized unscented transform[3]. See init_ukf for the definition of the constants $\mathbf{m̂, Ŝ}$ and $γ$.

Denoting $\mathbf{x̂}_{k-1}(k)$ as the state for the current time $k$ estimated at the last period $k-1$, $\mathbf{0}$, a null vector, $n_σ = 2 n_\mathbf{x̂} + 1$, the number of sigma points, and $\mathbf{X̂}_{k-1}^j(k)$, the vector at the $j$th column of $\mathbf{X̂}_{k-1}(k)$, the estimator updates the states with:

\[\begin{aligned} \mathbf{X̂}_{k-1}(k) &= \bigg[\begin{matrix} \mathbf{x̂}_{k-1}(k) & \mathbf{x̂}_{k-1}(k) & \cdots & \mathbf{x̂}_{k-1}(k) \end{matrix}\bigg] + \bigg[\begin{matrix} \mathbf{0} & γ \sqrt{\mathbf{P̂}_{k-1}(k)} & -γ \sqrt{\mathbf{P̂}_{k-1}(k)} \end{matrix}\bigg] \\ \mathbf{Ŷ^m}(k) &= \bigg[\begin{matrix} \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{1}(k) \Big) & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{2}(k) \Big) & \cdots & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{n_σ}(k) \Big) \end{matrix}\bigg] \\ \mathbf{ŷ^m}(k) &= \mathbf{Ŷ^m}(k) \mathbf{m̂} \\ @@ -43,7 +43,7 @@ \mathbf{x̂}_{k}(k+1) &= \mathbf{X̂}_{k}(k+1)\mathbf{m̂} \\ \mathbf{X̄}_k(k+1) &= \begin{bmatrix} \mathbf{X̂}_{k}^{1}(k+1) - \mathbf{x̂}_{k}(k+1) & \mathbf{X̂}_{k}^{2}(k+1) - \mathbf{x̂}_{k}(k+1) & \cdots &\, \mathbf{X̂}_{k}^{n_σ}(k+1) - \mathbf{x̂}_{k}(k+1) \end{bmatrix} \\ \mathbf{P̂}_k(k+1) &= \mathbf{X̄}_k(k+1) \mathbf{Ŝ} \mathbf{X̄}_k'(k+1) + \mathbf{Q̂} -\end{aligned} \]

by using the lower triangular factor of cholesky to compute $\sqrt{\mathbf{P̂}_{k-1}(k)}$ and $\sqrt{\mathbf{P̂}_{k}(k)}$. The matrices $\mathbf{P̂, Q̂, R̂}$ are the covariance of the estimation error, process noise and sensor noise, respectively.

source
update_estimate!(estim::ExtendedKalmanFilter, u, ym, d=Float64[])

Update ExtendedKalmanFilter state estim.x̂ and covariance estim.P̂.

The equations are similar to update_estimate!(::KalmanFilter) but with the substitutions $\mathbf{Â = F̂}(k)$ and $\mathbf{Ĉ^m = Ĥ^m}(k)$:

\[\begin{aligned} +\end{aligned} \]

by using the lower triangular factor of cholesky to compute $\sqrt{\mathbf{P̂}_{k-1}(k)}$ and $\sqrt{\mathbf{P̂}_{k}(k)}$. The matrices $\mathbf{P̂, Q̂, R̂}$ are the covariance of the estimation error, process noise and sensor noise, respectively.

source
update_estimate!(estim::ExtendedKalmanFilter, u, ym, d=Float64[])

Update ExtendedKalmanFilter state estim.x̂ and covariance estim.P̂.

The equations are similar to update_estimate!(::KalmanFilter) but with the substitutions $\mathbf{Â = F̂}(k)$ and $\mathbf{Ĉ^m = Ĥ^m}(k)$:

\[\begin{aligned} \mathbf{M}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k) [\mathbf{Ĥ^m}(k)\mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k) + \mathbf{R̂}]^{-1} \\ \mathbf{K}(k) &= \mathbf{F̂}(k) \mathbf{M}(k) \\ @@ -56,7 +56,7 @@ \end{aligned}\]

ForwardDiff.jacobian automatically computes the Jacobians:

\[\begin{aligned} \mathbf{F̂}(k) &= \left. \frac{∂\mathbf{f̂}(\mathbf{x̂}, \mathbf{u}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x̂ = x̂}_{k-1}(k),\, \mathbf{u = u}(k),\, \mathbf{d = d}(k)} \\ \mathbf{Ĥ}(k) &= \left. \frac{∂\mathbf{ĥ}(\mathbf{x̂}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x = x̂}_{k-1}(k),\, \mathbf{d = d}(k)} -\end{aligned}\]

The matrix $\mathbf{Ĥ^m}$ comprises the rows of $\mathbf{Ĥ}$ that correspond to measured outputs.

source
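For illustration, the two Jacobians can be evaluated as below; this fragment assumes estim, x̂, u and d are already defined, and that ĥ has a signature analogous to f̂(estim, x̂, u, d):

using ForwardDiff
F̂ = ForwardDiff.jacobian(x̂ -> f̂(estim, x̂, u, d), x̂)   # ∂f̂/∂x̂ at the current estimate
Ĥ = ForwardDiff.jacobian(x̂ -> ĥ(estim, x̂, d), x̂)      # ∂ĥ/∂x̂ at the current estimate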
update_estimate!(estim::Luenberger, u, ym, d=Float64[])

Same as update_estimate!(::SteadyKalmanFilter) but using Luenberger.

source
update_estimate!(estim::InternalModel, u, ym, d=Float64[])

Update estim.x̂ / x̂d / x̂s with current inputs u, measured outputs ym and dist. d.

The InternalModel updates the deterministic x̂d and stochastic x̂s estimates with:

\[\begin{aligned} +\end{aligned}\]

The matrix $\mathbf{Ĥ^m}$ comprises the rows of $\mathbf{Ĥ}$ that correspond to measured outputs.

source
update_estimate!(estim::Luenberger, u, ym, d=Float64[])

Same as update_estimate!(::SteadyKalmanFilter) but using Luenberger.

source
update_estimate!(estim::InternalModel, u, ym, d=Float64[])

Update estim.x̂ / x̂d / x̂s with current inputs u, measured outputs ym and dist. d.

The InternalModel updates the deterministic x̂d and stochastic x̂s estimates with:

\[\begin{aligned} \mathbf{x̂_d}(k+1) &= \mathbf{f}\Big( \mathbf{x̂_d}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{x̂_s}(k+1) &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) -\end{aligned}\]

This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See init_internalmodel for details.

source
  • 1Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 465–475, ISSN 1350-2379.
  • 2Boyd S., "Lecture 8 : The Kalman Filter" (Winter 2008-09) [course slides], EE363: Linear Dynamical Systems, https://web.stanford.edu/class/ee363/lectures/kf.pdf.
  • 3Simon, D. 2006, "Chapter 14: The unscented Kalman filter" in "Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN9780470045343.
+\end{aligned}\]

This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See init_internalmodel for details.

source
diff --git a/dev/manual/installation/index.html b/dev/manual/installation/index.html index 5b401137a..8883b6742 100644 --- a/dev/manual/installation/index.html +++ b/dev/manual/installation/index.html @@ -1,2 +1,2 @@ -Installation · ModelPredictiveControl.jl

Manual: Installation

To install the ModelPredictiveControl package, run this command in the Julia REPL:

using Pkg; Pkg.add("ModelPredictiveControl")

Note that the construction of linear models typically requires the ss or tf functions; it is thus recommended to load the package with:

using ModelPredictiveControl, ControlSystemsBase
+Installation · ModelPredictiveControl.jl

Manual: Installation

To install the ModelPredictiveControl package, run this command in the Julia REPL:

using Pkg; Pkg.add("ModelPredictiveControl")

Note that the construction of linear models typically requires the ss or tf functions; it is thus recommended to load the package with:

using ModelPredictiveControl, ControlSystemsBase
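As a quick check that both packages load correctly, a linear model can be constructed from a transfer function (the plant and sample time below are purely illustrative):

using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(2, [10, 1]), 5)   # first-order SISO plant, 5 s sample time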
diff --git a/dev/manual/linmpc/index.html b/dev/manual/linmpc/index.html index c26464062..41008f5bb 100644 --- a/dev/manual/linmpc/index.html +++ b/dev/manual/linmpc/index.html @@ -65,6 +65,134 @@ end plot_data(t_data, u_data, y_data, ry_data)

In some situations, when the LinMPC matrices are small or medium-sized and dense, the DAQP optimizer may be more efficient. To install it, run:

using Pkg; Pkg.add("DAQP")

Constructing a LinMPC with DAQP:

using JuMP, DAQP
+daqp = Model(DAQP.Optimizer)
+mpc2 = setconstraint!(LinMPC(model, Hp=15, Hc=2, optim=daqp), ŷmin=[45, -Inf])
LinMPC controller with a sample time Ts = 4.0 s, DAQP optimizer, SteadyKalmanFilter estimator and:
+ 15 prediction steps Hp
+  2 control steps Hc
+  2 manipulated inputs u
+  4 states x̂
+  2 measured outputs ym
+  0 unmeasured outputs yu
+  0 measured disturbances d

leads to identical results here:

setstate!(model, zeros(model.nx))
+initstate!(mpc2, model.uop, model())
+u_data2, y_data2, ry_data2 = test_mpc(mpc2, model)
+plot_data(t_data, u_data2, y_data2, ry_data2)
+ @@ -93,20 +221,20 @@ - - - - - + + + + + - - - - - - - - + + + + + + + + @@ -138,7 +266,7 @@ - + @@ -172,139 +300,11 @@ - - + + -

In some situations, when the LinMPC matrices are small or medium-sized and dense, the DAQP optimizer may be more efficient. To install it, run:

using Pkg; Pkg.add("DAQP")

Constructing a LinMPC with DAQP:

using JuMP, DAQP
-daqp = Model(DAQP.Optimizer)
-mpc2 = setconstraint!(LinMPC(model, Hp=15, Hc=2, optim=daqp), ŷmin=[45, -Inf])
LinMPC controller with a sample time Ts = 4.0 s, DAQP optimizer, SteadyKalmanFilter estimator and:
- 15 prediction steps Hp
-  2 control steps Hc
-  2 manipulated inputs u
-  4 states x̂
-  2 measured outputs ym
-  0 unmeasured outputs yu
-  0 measured disturbances d

leads to identical results here:

setstate!(model, zeros(model.nx))
-initstate!(mpc2, model.uop, model())
-u_data2, y_data2, ry_data2 = test_mpc(mpc2, model)
-plot_data(t_data, u_data2, y_data2, ry_data2)
diff --git a/dev/manual/nonlinmpc/index.html b/dev/manual/nonlinmpc/index.html index bf450a7e4..d4d3b87e4 100644 --- a/dev/manual/nonlinmpc/index.html +++ b/dev/manual/nonlinmpc/index.html @@ -7,7 +7,7 @@ \dot{ω}(t) &= -\frac{g}{L}\sin\big( θ(t) \big) - \frac{K}{m} ω(t) + \frac{1}{m L^2} τ(t) \end{aligned}\]

in which $g$ is the gravitational acceleration, $L$, the pendulum length, $K$, the friction coefficient at the pivot point, and $m$, the mass attached at the end of the pendulum. Here, the explicit Euler method discretizes the system to construct a NonLinModel:

using ModelPredictiveControl
 function pendulum(par, x, u)
-    g, L, K, m = par        # [m/s], [m], [kg/s], [kg]
+    g, L, K, m = par        # [m/s²], [m], [kg/s], [kg]
     θ, ω = x[1], x[2]       # [rad], [rad/s]
     τ  = u[1]               # [N m]
     dθ = ω
@@ -28,49 +28,49 @@
 plot(sim!(model, 60, u), plotu=false)

Nonlinear Predictive Controller

An UnscentedKalmanFilter estimates the plant state :

estim = UnscentedKalmanFilter(model, σQ=[0.1, 0.5], σR=[0.5], nint_ym=[1], σQ_int=[5.0])
UnscentedKalmanFilter estimator with a sample time Ts = 0.1 s, NonLinModel and:
  1 manipulated inputs u
@@ -84,165 +84,165 @@ 


The estimate $x̂_3$ is the integrator state that compensates for static errors (nint_ym and σQ_int parameters of UnscentedKalmanFilter). The Kalman filter performance seems sufficient for control. As the motor torque is limited to -1.5 to 1.5 N m, we incorporate the input constraints in a NonLinMPC:

mpc = NonLinMPC(estim, Hp=20, Hc=4, Mwt=[0.5], Nwt=[2.5], Cwt=Inf)
 mpc = setconstraint!(mpc, umin=[-1.5], umax=[+1.5])
NonLinMPC controller with a sample time Ts = 0.1 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:
@@ -255,6 +255,101 @@ 

res = sim!(mpc, 60, [180.0], plant=plant, x0=zeros(plant.nx), x̂0=zeros(mpc.estim.nx̂)) plot(res)


The controller seems robust enough to variations of the $K$ coefficient. Starting from this inverted position, the closed-loop response to a step disturbance of 10° is also satisfactory:

res = sim!(mpc, 60, [180.0], plant=plant, x0=[π, 0], x̂0=[π, 0, 0], y_step=[10])
+plot(res)

The controller seems robust enough to variations of the $K$ coefficient. Starting from this inverted position, the closed-loop response to a step disturbance of 10° is also satisfactory:

res = sim!(mpc, 60, [180.0], plant=plant, x0=[π, 0], x̂0=[π, 0, 0], y_step=[10])
-plot(res)
+ diff --git a/dev/public/generic_func/index.html b/dev/public/generic_func/index.html index 7dd3cabd4..34ab26603 100644 --- a/dev/public/generic_func/index.html +++ b/dev/public/generic_func/index.html @@ -3,31 +3,31 @@ julia> y = evaloutput(model) 1-element Vector{Float64}: - 20.0

source
evaloutput(estim::StateEstimator, d=Float64[])

Evaluate StateEstimator outputs from estim.x̂ states and disturbances d.

Calling a StateEstimator object calls this evaloutput method.

Examples

julia> kf = SteadyKalmanFilter(setop!(LinModel(tf(2, [10, 1]), 5), yop=[20]));
+ 20.0
source
evaloutput(estim::StateEstimator, d=Float64[])

Evaluate StateEstimator outputs from estim.x̂ states and disturbances d.

Calling a StateEstimator object calls this evaloutput method.

Examples

julia> kf = SteadyKalmanFilter(setop!(LinModel(tf(2, [10, 1]), 5), yop=[20]));
 
 julia> ŷ = evaloutput(kf)
 1-element Vector{Float64}:
- 20.0
source
evaloutput(estim::InternalModel, ym, d=Float64[])

Evaluate InternalModel outputs from estim.x̂d states and measured outputs ym.

InternalModel estimator needs current measured outputs $\mathbf{y^m}(k)$ to estimate its outputs $\mathbf{ŷ}(k)$, since the strategy imposes that $\mathbf{ŷ^m}(k) = \mathbf{y^m}(k)$ is always true.

source

Update State x

ModelPredictiveControl.updatestate!Function
updatestate!(model::SimModel, u, d=Float64[])

Update model.x states with current inputs u and measured disturbances d.

Examples

julia> model = LinModel(ss(1, 1, 1, 0, 1.0));
+ 20.0
source
evaloutput(estim::InternalModel, ym, d=Float64[])

Evaluate InternalModel outputs from estim.x̂d states and measured outputs ym.

InternalModel estimator needs current measured outputs $\mathbf{y^m}(k)$ to estimate its outputs $\mathbf{ŷ}(k)$, since the strategy imposes that $\mathbf{ŷ^m}(k) = \mathbf{y^m}(k)$ is always true.

source

Update State x

ModelPredictiveControl.updatestate!Function
updatestate!(model::SimModel, u, d=Float64[])

Update model.x states with current inputs u and measured disturbances d.

Examples

julia> model = LinModel(ss(1, 1, 1, 0, 1.0));
 
 julia> x = updatestate!(model, [1])
 1-element Vector{Float64}:
- 1.0
source
updatestate!(estim::StateEstimator, u, ym, d=Float64[])

Update estim.x̂ estimate with current inputs u, measured outputs ym and dist. d.

The method removes the operating points with remove_op! and calls update_estimate!.

Examples

julia> kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)));
+ 1.0
source
updatestate!(estim::StateEstimator, u, ym, d=Float64[])

Update estim.x̂ estimate with current inputs u, measured outputs ym and dist. d.

The method removes the operating points with remove_op! and calls update_estimate!.

Examples

julia> kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)));
 
 julia> x̂ = updatestate!(kf, [1], [0]) # x̂[2] is the integrator state (nint_ym argument)
 2-element Vector{Float64}:
  0.5
- 0.0
source
updatestate!(mpc::PredictiveController, u, ym, d=Float64[])

Call updatestate! on mpc.estim StateEstimator.

source

Init State x

ModelPredictiveControl.initstate!Function
initstate!(estim::StateEstimator, u, ym, d=Float64[])

Init estim.x̂ states from current inputs u, measured outputs ym and disturbances d.

The method tries to find a good steady-state at which to initialize the estim.x̂ estimate:

  • If estim.model is a LinModel, it evaluates estim.model steady-state (using steadystate) with current inputs u and measured disturbances d, and saves the result to estim.x̂[1:nx].
  • Else, the current deterministic states estim.x̂[1:nx] are left unchanged (use setstate! to manually modify them).

It then estimates the measured outputs ŷm from these states, and the residual offset with current measured outputs (ym - ŷm) initializes the integrators of the stochastic model. This approach ensures that $\mathbf{ŷ^m}(0) = \mathbf{y^m}(0)$. For LinModel, it also ensures that the estimator starts at steady-state, resulting in a bumpless manual to automatic transfer for control applications.

Examples

julia> estim = SteadyKalmanFilter(LinModel(tf(3, [10, 1]), 0.5), nint_ym=[2]);
+ 0.0
source
updatestate!(mpc::PredictiveController, u, ym, d=Float64[])

Call updatestate! on mpc.estim StateEstimator.

source

Init State x

ModelPredictiveControl.initstate!Function
initstate!(estim::StateEstimator, u, ym, d=Float64[])

Init estim.x̂ states from current inputs u, measured outputs ym and disturbances d.

The method tries to find a good steady-state at which to initialize the estim.x̂ estimate:

  • If estim.model is a LinModel, it evaluates estim.model steady-state (using steadystate) with current inputs u and measured disturbances d, and saves the result to estim.x̂[1:nx].
  • Else, the current deterministic states estim.x̂[1:nx] are left unchanged (use setstate! to manually modify them).

It then estimates the measured outputs ŷm from these states, and the residual offset with current measured outputs (ym - ŷm) initializes the integrators of the stochastic model. This approach ensures that $\mathbf{ŷ^m}(0) = \mathbf{y^m}(0)$. For LinModel, it also ensures that the estimator starts at steady-state, resulting in a bumpless manual to automatic transfer for control applications.

Examples

julia> estim = SteadyKalmanFilter(LinModel(tf(3, [10, 1]), 0.5), nint_ym=[2]);
 
 julia> x̂ = initstate!(estim, [1], [3 - 0.1])
 3-element Vector{Float64}:
   5.0000000000000115
   0.0
- -0.10000000000000675
source
initstate!(estim::InternalModel, u, ym, d=Float64[])

Init estim.x̂d / x̂s states from current inputs u, meas. outputs ym and disturb. d.

The deterministic state estim.x̂d initialization method is identical to initstate!(::StateEstimator). The stochastic states estim.x̂s are initialized at 0.

source
initstate!(mpc::PredictiveController, u, ym, d=Float64[])

Init mpc.ΔŨ for warm-starting and the states of mpc.estim StateEstimator.

Before calling initstate!(::StateEstimator,_,_), it warm-starts $\mathbf{ΔŨ}$:

  • If model is a LinModel, the vector is filled with the analytical minimum $J$ of the unconstrained problem.
  • Else, the vector is filled with zeros.
source

Set State x

ModelPredictiveControl.setstate!Function
setstate!(model::SimModel, x)

Set model.x states to values specified by x.

source
setstate!(estim::StateEstimator, x̂)

Set estim.x̂ states to values specified by x̂.

source
setstate!(mpc::PredictiveController, x̂)

Set the estimate at mpc.estim.x̂.

source

Quick Simulation

ModelPredictiveControl.sim!Function
sim!(plant::SimModel, N::Int, u=plant.uop.+1, d=plant.dop; x0=zeros(plant.nx))

Open-loop simulation of plant for N time steps, defaulting to a unit bump test on all inputs.

The manipulated inputs $\mathbf{u}$ and measured disturbances $\mathbf{d}$ are held constant at the u and d values, respectively. The plant initial state $\mathbf{x}(0)$ is specified by the x0 keyword argument. The function returns SimResult instances that can be visualized by calling plot from Plots.jl on them (see Examples below). Note that the method mutates plant internal states.

Examples

julia> plant = NonLinModel((x,u,d)->0.1x+u+d, (x,_)->2x, 10.0, 1, 1, 1, 1);
+ -0.10000000000000675
source
initstate!(estim::InternalModel, u, ym, d=Float64[])

Init estim.x̂d / x̂s states from current inputs u, meas. outputs ym and disturb. d.

The deterministic state estim.x̂d initialization method is identical to initstate!(::StateEstimator). The stochastic states estim.x̂s are initialized at 0.

source
initstate!(mpc::PredictiveController, u, ym, d=Float64[])

Init mpc.ΔŨ for warm-starting and the states of mpc.estim StateEstimator.

Before calling initstate!(::StateEstimator,_,_), it warm-starts $\mathbf{ΔŨ}$:

  • If model is a LinModel, the vector is filled with the analytical minimum $J$ of the unconstrained problem.
  • Else, the vector is filled with zeros.
source

Set State x

ModelPredictiveControl.setstate!Function
setstate!(model::SimModel, x)

Set model.x states to values specified by x.

source
setstate!(estim::StateEstimator, x̂)

Set estim.x̂ states to values specified by x̂.

source
setstate!(mpc::PredictiveController, x̂)

Set the estimate at mpc.estim.x̂.

source

Quick Simulation

ModelPredictiveControl.sim!Function
sim!(plant::SimModel, N::Int, u=plant.uop.+1, d=plant.dop; x0=zeros(plant.nx))

Open-loop simulation of plant for N time steps, defaulting to a unit bump test on all inputs.

The manipulated inputs $\mathbf{u}$ and measured disturbances $\mathbf{d}$ are held constant at the u and d values, respectively. The plant initial state $\mathbf{x}(0)$ is specified by the x0 keyword argument. The function returns SimResult instances that can be visualized by calling plot from Plots.jl on them (see Examples below). Note that the method mutates plant internal states.

Examples

julia> plant = NonLinModel((x,u,d)->0.1x+u+d, (x,_)->2x, 10.0, 1, 1, 1, 1);
 
 julia> res = sim!(plant, 15, [0], [0], x0=[1]);
 
 julia> using Plots; plot(res, plotu=false, plotd=false, plotx=true)
-
source
sim!(
+
source
sim!(
     estim::StateEstimator,
     N::Int,
     u = estim.model.uop .+ 1,
@@ -40,7 +40,7 @@
 julia> res = sim!(estim, 50, [0], y_noise=[0.5], x_noise=[0.25], x0=[-10], x̂0=[0, 0]);
 
 julia> using Plots; plot(res, plotŷ=true, plotu=false, plotxwithx̂=true)
-
source
sim!(
+
source
sim!(
     mpc::PredictiveController, 
     N::Int,
     ry = mpc.estim.model.yop .+ 1, 
@@ -53,4 +53,4 @@
 julia> res = sim!(mpc, 25, [0, 0], y_noise=[0.1], y_step=[-10, 0]);
 
 julia> using Plots; plot(res, plotry=true, plotŷ=true, plotŷmin=true, plotu=true)
-
source
+source diff --git a/dev/public/predictive_control/index.html b/dev/public/predictive_control/index.html index a503b0130..ce0b88d67 100644 --- a/dev/public/predictive_control/index.html +++ b/dev/public/predictive_control/index.html @@ -19,7 +19,7 @@ julia> u = mpc([5]); round.(u, digits=3) 1-element Vector{Float64}: - 1.0source

LinMPC

ModelPredictiveControl.LinMPCType
LinMPC(model::LinModel; <keyword arguments>)

Construct a linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}, ϵ} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + 1.0

source

LinMPC

ModelPredictiveControl.LinMPCType
LinMPC(model::LinModel; <keyword arguments>)

Construct a linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}, ϵ} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2\]

in which the weight matrices are repeated $H_p$ or $H_c$ times:

\[\begin{aligned} @@ -36,7 +36,7 @@ 4 states x̂ 2 measured outputs ym 0 unmeasured outputs yu - 0 measured disturbances d

Extended Help

Manipulated input setpoints $\mathbf{r_u}$ are not common, but they can be useful for over-actuated systems, when nu > ny (e.g. to prioritize solutions with lower economic costs). The default Lwt value implies that this feature is disabled by default.

source
LinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct LinMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
+  0 measured disturbances d

Extended Help

Manipulated input setpoints $\mathbf{r_u}$ are not common, but they can be useful for over-actuated systems, when nu > ny (e.g. to prioritize solutions with lower economic costs). The default Lwt value implies that this feature is disabled by default.

source
LinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct LinMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
 
 julia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
 LinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, KalmanFilter estimator and:
@@ -46,7 +46,7 @@
   3 states x̂
   1 measured outputs ym
   1 unmeasured outputs yu
-  0 measured disturbances d
source

ExplicitMPC

ModelPredictiveControl.ExplicitMPCType
ExplicitMPC(model::LinModel; <keyword arguments>)

Construct an explicit linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + 0 measured disturbances d

source

ExplicitMPC

ModelPredictiveControl.ExplicitMPCType
ExplicitMPC(model::LinModel; <keyword arguments>)

Construct an explicit linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} \]

See LinMPC for the variable definitions. This controller does not support constraints, but the computational costs are extremely low (a simple array division), making it suitable for applications that require small sample times. This method uses the default state estimator, a SteadyKalmanFilter with default arguments.

Arguments

  • model::LinModel : model used for controller predictions and state estimations.
  • Hp=10+nk: prediction horizon $H_p$, nk is the number of delays in model.
  • Hc=2 : control horizon $H_c$.
  • Mwt=fill(1.0,model.ny) : main diagonal of $\mathbf{M}$ weight matrix (vector).
  • Nwt=fill(0.1,model.nu) : main diagonal of $\mathbf{N}$ weight matrix (vector).
  • Lwt=fill(0.0,model.nu) : main diagonal of $\mathbf{L}$ weight matrix (vector).
  • ru=model.uop : manipulated input setpoints $\mathbf{r_u}$ (vector).

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);
 
@@ -58,7 +58,7 @@
   4 states x̂
   2 measured outputs ym
   0 unmeasured outputs yu
-  0 measured disturbances d
source
ExplicitMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct ExplicitMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
+  0 measured disturbances d
source
ExplicitMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct ExplicitMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
 
 julia> mpc = ExplicitMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
 ExplicitMPC controller with a sample time Ts = 4.0 s, KalmanFilter estimator and:
@@ -68,7 +68,7 @@
   3 states x̂
   1 measured outputs ym
   1 unmeasured outputs yu
-  0 measured disturbances d
source

NonLinMPC

ModelPredictiveControl.NonLinMPCType
NonLinMPC(model::SimModel; <keyword arguments>)

Construct a nonlinear predictive controller based on SimModel model.

Both NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}, ϵ} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + 0 measured disturbances d

source

NonLinMPC

ModelPredictiveControl.NonLinMPCType
NonLinMPC(model::SimModel; <keyword arguments>)

Construct a nonlinear predictive controller based on SimModel model.

Both NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{ΔU}, ϵ} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 + E J_E(\mathbf{U}_E, \mathbf{Ŷ}_E, \mathbf{D̂}_E)\]

See LinMPC for the variable definitions. The custom economic function $J_E$ can penalize solutions with high economic costs. Setting all the weights to 0 except $E$ creates a pure economic model predictive controller (EMPC). The arguments of $J_E$ are the manipulated inputs, the predicted outputs and measured disturbances from $k$ to $k+H_p$ inclusively:

\[ \mathbf{U}_E = \begin{bmatrix} \mathbf{U} \\ \mathbf{u}(k+H_p-1) \end{bmatrix} \text{,} \qquad @@ -83,7 +83,7 @@ 2 states x̂ 1 measured outputs ym 0 unmeasured outputs yu - 0 measured disturbances d

Extended Help

NonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization and is not available in any other package, to my knowledge.

The optimizations rely on JuMP.jl automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel f and h functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.
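To make the economic term above concrete, here is a minimal sketch of a controller that combines setpoint tracking with a small economic penalty on total input usage. It assumes the weight and function are passed through the Ewt and JE keyword arguments (names taken from this constructor's arguments list; adjust them if they differ in your version):

using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(3, [30, 1]), 4.0)
# J_E receives U_E, Ŷ_E and D̂_E (from k to k+Hp); here it sums the squared inputs:
JE = (UE, ŶE, D̂E) -> sum(UE.^2)
empc = NonLinMPC(model, Hp=10, Hc=2, Mwt=[1.0], Nwt=[0.1], Ewt=0.5, JE=JE)

Setting the other weights to 0 instead would turn this into a pure EMPC, as noted above.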

source
NonLinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct NonLinMPC.

Examples

julia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);
+  0 measured disturbances d

Extended Help

NonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization and is not available in any other package, to my knowledge.

The optimizations rely on JuMP.jl automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel f and h functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.
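A common way to break this AD requirement is to allocate Float64 buffers inside f or h, which prevents ForwardDiff.jl Dual numbers from propagating. The following sketch (a hypothetical one-state model) contrasts an AD-friendly definition with the pattern to avoid:

using ModelPredictiveControl
# AD-friendly: build a new vector whose element type follows the inputs
f_ok(x, u, _) = [0.8*x[1] + 0.5*u[1]]
h_ok(x, _)    = [2*x[1]]
# AD-hostile (avoid): xnext = zeros(1); xnext[1] = 0.8*x[1] + 0.5*u[1]
# a hard-coded Float64 buffer like this raises the Dual-number MethodError
model = NonLinModel(f_ok, h_ok, 5.0, 1, 1, 1)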

source
NonLinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct NonLinMPC.

Examples

julia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);
 
 julia> estim = UnscentedKalmanFilter(model, σQ_int=[0.05]);
 
@@ -95,7 +95,7 @@
   2 states x̂
   1 measured outputs ym
   0 unmeasured outputs yu
-  0 measured disturbances d
source

Set Constraint

ModelPredictiveControl.setconstraint!Function
setconstraint!(mpc::PredictiveController; <keyword arguments>)

Set the constraint parameters of mpc predictive controller.

The predictive controllers support both soft and hard constraints, defined by:

\[\begin{alignat*}{3} + 0 measured disturbances d

source

Set Constraint

ModelPredictiveControl.setconstraint!Function
setconstraint!(mpc::PredictiveController; <keyword arguments>)

Set the constraint parameters of mpc predictive controller.

The predictive controllers support both soft and hard constraints, defined by:

\[\begin{alignat*}{3} \mathbf{u_{min} - c_{u_{min}}} ϵ &≤ \mathbf{u}(k+j) &&≤ \mathbf{u_{max} + c_{u_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_c - 1 \\ \mathbf{Δu_{min} - c_{Δu_{min}}} ϵ &≤ \mathbf{Δu}(k+j) &&≤ \mathbf{Δu_{max} + c_{Δu_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_c - 1 \\ \mathbf{ŷ_{min} - c_{ŷ_{min}}} ϵ &≤ \mathbf{ŷ}(k+j) &&≤ \mathbf{ŷ_{max} + c_{ŷ_{max}}} ϵ &&\qquad j = 1, 2 ,..., H_p \\ @@ -111,7 +111,7 @@ 2 states x̂ 1 measured outputs ym 0 unmeasured outputs yu - 0 measured disturbances d

source

Move Manipulated Input u

ModelPredictiveControl.moveinput!Function
moveinput!(
+  0 measured disturbances d
source
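For instance, hard input bounds can be combined with a softened output bound. The umin/umax keywords appear in the Manual; the output and softness keyword names used below (ŷmax, c_ŷmax) are assumed from the symbols in the equations above, so confirm them against the arguments list:

using ModelPredictiveControl, ControlSystemsBase
mpc = LinMPC(LinModel(tf(3, [30, 1]), 4.0), Hp=10, Hc=2)
mpc = setconstraint!(mpc, umin=[-5], umax=[+5])        # hard manipulated input bounds
mpc = setconstraint!(mpc, ŷmax=[10], c_ŷmax=[1.0])     # softened output bound (assumed keywords)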

Move Manipulated Input u

ModelPredictiveControl.moveinput!Function
moveinput!(
     mpc::PredictiveController, 
     ry = mpc.estim.model.yop, 
     d  = Float64[];
@@ -122,10 +122,10 @@
 
 julia> ry = [5]; u = moveinput!(mpc, ry); round.(u, digits=3)
 1-element Vector{Float64}:
- 1.0
source
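In closed loop, moveinput! is usually paired with explicit state updates, since the controller does not call updatestate! by itself. The sketch below simulates a few control periods on a LinModel standing in for the plant; it assumes the updatestate! signatures (mpc, u, ym) and (model, u) documented with the generic functions:

using ModelPredictiveControl, ControlSystemsBase
G     = tf(5, [2, 1])
mpc   = LinMPC(LinModel(G, 3.0), Hp=10, Hc=2)   # controller with its internal model
plant = LinModel(G, 3.0)                        # separate instance used as the plant
ry    = [10]
for k = 1:10
    y = plant()                  # measure the plant output (functor alias of evaloutput)
    u = moveinput!(mpc, ry)      # compute the optimal manipulated input
    updatestate!(mpc, u, y)      # update the controller's state estimate
    updatestate!(plant, u)       # simulate the plant one step ahead
end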

Get Additional Information

ModelPredictiveControl.getinfoFunction
getinfo(mpc::PredictiveController) -> sol_summary, info

Get additional information about the mpc controller optimum to ease troubleshooting.

Return the optimizer solution summary that can be printed, sol_summary, and the dictionary info with the following fields:

  • :ΔU : optimal manipulated input increments over Hc $(\mathbf{ΔU})$
  • :ϵ : optimal slack variable $(ϵ)$
  • :J : objective value optimum $(J)$
  • :U : optimal manipulated inputs over Hp $(\mathbf{U})$
  • :u : current optimal manipulated input $(\mathbf{u})$
  • :d : current measured disturbance $(\mathbf{d})$
  • :D̂ : predicted measured disturbances over Hp $(\mathbf{D̂})$
  • :ŷ : current estimated output $(\mathbf{ŷ})$
  • :Ŷ : optimal predicted outputs over Hp $(\mathbf{Ŷ = Ŷ_d + Ŷ_s})$
  • :Ŷd : optimal predicted deterministic output over Hp $(\mathbf{Ŷ_d})$
  • :Ŷs : predicted stochastic output over Hp $(\mathbf{Ŷ_s})$
  • :R̂y : predicted output setpoint over Hp $(\mathbf{R̂_y})$
  • :R̂u : predicted manipulated input setpoint over Hp $(\mathbf{R̂_u})$

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1);
+ 1.0
source

Get Additional Information

ModelPredictiveControl.getinfoFunction
getinfo(mpc::PredictiveController) -> sol_summary, info

Get additional information about the mpc controller optimum to ease troubleshooting.

Return the optimizer solution summary that can be printed, sol_summary, and the dictionary info with the following fields:

  • :ΔU : optimal manipulated input increments over Hc $(\mathbf{ΔU})$
  • :ϵ : optimal slack variable $(ϵ)$
  • :J : objective value optimum $(J)$
  • :U : optimal manipulated inputs over Hp $(\mathbf{U})$
  • :u : current optimal manipulated input $(\mathbf{u})$
  • :d : current measured disturbance $(\mathbf{d})$
  • :D̂ : predicted measured disturbances over Hp $(\mathbf{D̂})$
  • :ŷ : current estimated output $(\mathbf{ŷ})$
  • :Ŷ : optimal predicted outputs over Hp $(\mathbf{Ŷ = Ŷ_d + Ŷ_s})$
  • :Ŷd : optimal predicted deterministic output over Hp $(\mathbf{Ŷ_d})$
  • :Ŷs : predicted stochastic output over Hp $(\mathbf{Ŷ_s})$
  • :R̂y : predicted output setpoint over Hp $(\mathbf{R̂_y})$
  • :R̂u : predicted manipulated input setpoint over Hp $(\mathbf{R̂_u})$

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1);
 
 julia> u = moveinput!(mpc, [10]);
 
 julia> sol_summary, info = getinfo(mpc); round.(info[:Ŷ], digits=3)
 1-element Vector{Float64}:
- 10.0
source
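Beyond the example above, the returned objects are handy for quick diagnostics, for example printing the solver summary and comparing the predicted trajectory with its setpoint (a short sketch mirroring the example):

using ModelPredictiveControl, ControlSystemsBase
mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1)
u = moveinput!(mpc, [10])
sol_summary, info = getinfo(mpc)
println(sol_summary)                    # solver and termination details
tracking_error = info[:R̂y] - info[:Ŷ]   # predicted setpoint minus predicted outputs over Hp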
getinfo(mpc::NonLinMPC) -> sol_summary, info

Invoke getinfo(::PredictiveController) and add :JE, the economic optimum $J_E$.

source
+ 10.0source
getinfo(mpc::NonLinMPC) -> sol_summary, info

Invoke getinfo(::PredictiveController) and add :JE, the economic optimum $J_E$.

source
diff --git a/dev/public/sim_model/index.html b/dev/public/sim_model/index.html index cacc5ea91..29f6c8020 100644 --- a/dev/public/sim_model/index.html +++ b/dev/public/sim_model/index.html @@ -3,7 +3,7 @@ julia> y = model() 1-element Vector{Float64}: - 20.0source

LinModel

ModelPredictiveControl.LinModelType
LinModel(sys::StateSpace[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Construct a linear model from state-space model sys with sampling time Ts in seconds.

Ts can be omitted when sys is discrete-time. Its state-space matrices are:

\[\begin{aligned} + 20.0

source

LinModel

ModelPredictiveControl.LinModelType
LinModel(sys::StateSpace[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Construct a linear model from state-space model sys with sampling time Ts in seconds.

Ts can be omitted when sys is discrete-time. Its state-space matrices are:

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{A x}(k) + \mathbf{B z}(k) \\ \mathbf{y}(k) &= \mathbf{C x}(k) + \mathbf{D z}(k) \end{aligned}\]

with the state $\mathbf{x}$ and output $\mathbf{y}$ vectors. The $\mathbf{z}$ vector comprises the manipulated inputs $\mathbf{u}$ and measured disturbances $\mathbf{d}$, in any order. i_u provides the indices of $\mathbf{z}$ that are manipulated, and i_d, the measured disturbances. See Extended Help if sys is continuous-time, or discrete-time with Ts ≠ sys.Ts.

See also ss, tf.

Examples

julia> model = LinModel(ss(0.4, 0.2, 0.3, 0, 0.1))
@@ -14,17 +14,17 @@
  0 measured disturbances d

Extended Help

State-space matrices are similar if sys is continuous (replace $\mathbf{x}(k+1)$ with $\mathbf{ẋ}(t)$ and $k$ with $t$ on the LHS). In such a case, it's discretized with c2d and :zoh for manipulated inputs, and :tustin, for measured disturbances. Lastly, if sys is discrete and the provided argument Ts ≠ sys.Ts, the system is resampled by using the aforementioned discretization methods.

The constructor transforms the system to a more practical form ($\mathbf{D_u=0}$ because of the zero-order hold):

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{A x}(k) + \mathbf{B_u u}(k) + \mathbf{B_d d}(k) \\ \mathbf{y}(k) &= \mathbf{C x}(k) + \mathbf{D_d d}(k) -\end{aligned}\]

source
LinModel(sys::TransferFunction[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Convert to a minimal realization state-space model when sys is a transfer function.

sys is equal to $\frac{\mathbf{y}(s)}{\mathbf{z}(s)}$ for continuous-time, and $\frac{\mathbf{y}(z)}{\mathbf{z}(z)}$, for discrete-time.

Examples

julia> model = LinModel([tf(3, [30, 1]) tf(-2, [5, 1])], 0.5, i_d=[2])
+\end{aligned}\]

source
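To illustrate the z partitioning and the discretization rules above, here is a sketch with a continuous-time state-space model whose second input is treated as a measured disturbance, so it is discretized with :tustin while the manipulated input uses :zoh (the numerical values are arbitrary):

using ModelPredictiveControl, ControlSystemsBase
sys   = ss([-0.1 0; 0 -0.5], [1 0.5; 0 1], [1 0], 0)   # 2 states, 2 inputs in z, 1 output
model = LinModel(sys, 2.0, i_u=[1], i_d=[2])            # input 1 manipulated, input 2 measured disturbance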
LinModel(sys::TransferFunction[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Convert to a minimal realization state-space model when sys is a transfer function.

sys is equal to $\frac{\mathbf{y}(s)}{\mathbf{z}(s)}$ for continuous-time, and $\frac{\mathbf{y}(z)}{\mathbf{z}(z)}$, for discrete-time.

Examples

julia> model = LinModel([tf(3, [30, 1]) tf(-2, [5, 1])], 0.5, i_d=[2])
 Discrete-time linear model with a sample time Ts = 0.5 s and:
  1 manipulated inputs u
  2 states x
  1 outputs y
- 1 measured disturbances d
source
LinModel(sys::DelayLtiSystem, Ts; i_u=1:size(sys,2), i_d=Int[])

Discretize with zero-order hold when sys is a continuous system with delays.

The delays must be multiples of the sample time Ts.

Examples

julia> model = LinModel(tf(4, [10, 1])*delay(2), 0.5)
+ 1 measured disturbances d
source
LinModel(sys::DelayLtiSystem, Ts; i_u=1:size(sys,2), i_d=Int[])

Discretize with zero-order hold when sys is a continuous system with delays.

The delays must be multiples of the sample time Ts.

Examples

julia> model = LinModel(tf(4, [10, 1])*delay(2), 0.5)
 Discrete-time linear model with a sample time Ts = 0.5 s and:
  1 manipulated inputs u
  5 states x
  1 outputs y
- 0 measured disturbances d
source

NonLinModel

ModelPredictiveControl.NonLinModelType
NonLinModel(f::Function, h::Function, Ts, nu, nx, ny, nd=0)

Construct a nonlinear model from discrete-time state-space functions f and h.

The state update $\mathbf{f}$ and output $\mathbf{h}$ functions are defined as :

\[ \begin{aligned} + 0 measured disturbances d

source

NonLinModel

ModelPredictiveControl.NonLinModelType
NonLinModel(f::Function, h::Function, Ts, nu, nx, ny, nd=0)

Construct a nonlinear model from discrete-time state-space functions f and h.

The state update $\mathbf{f}$ and output $\mathbf{h}$ functions are defined as :

\[ \begin{aligned} \mathbf{x}(k+1) &= \mathbf{f}\Big( \mathbf{x}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{y}(k) &= \mathbf{h}\Big( \mathbf{x}(k), \mathbf{d}(k) \Big) \end{aligned}\]

Ts is the sampling time in seconds. nu, nx, ny and nd are the respective numbers of manipulated inputs, states, outputs and measured disturbances.

Tip

Replace the d argument with _ if nd = 0 (see Examples below).

Nonlinear continuous-time state-space functions are not supported for now. In such a case, manually call a differential equation solver in f (see Manual).

Warning

f and h must be pure Julia functions to use the model in NonLinMPC, ExtendedKalmanFilter and MovingHorizonEstimator.

See also LinModel.

Examples

julia> model = NonLinModel((x,u,_)->0.1x+u, (x,_)->2x, 10.0, 1, 1, 1)
@@ -32,7 +32,7 @@
  1 manipulated inputs u
  1 states x
  1 outputs y
- 0 measured disturbances d
source
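Complementing the example above (which has no measured disturbance), here is a sketch with nd = 1 where the disturbance enters the state update (values are arbitrary):

using ModelPredictiveControl
fd(x, u, d) = 0.6x + u + 0.1d            # state update with a measured disturbance
hd(x, d)    = 2x                         # output function (d unused here)
model = NonLinModel(fd, hd, 10.0, 1, 1, 1, 1)   # nu=1, nx=1, ny=1, nd=1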

Set Operating Points

ModelPredictiveControl.setop!Function
setop!(model::SimModel; uop=nothing, yop=nothing, dop=nothing)

Set model inputs uop, outputs yop and measured disturbances dop operating points.

The state-space model with operating points (a.k.a. nominal values) is:

\[\begin{aligned} + 0 measured disturbances d

source

Set Operating Points

ModelPredictiveControl.setop!Function
setop!(model::SimModel; uop=nothing, yop=nothing, dop=nothing)

Set model inputs uop, outputs yop and measured disturbances dop operating points.

The state-space model with operating points (a.k.a. nominal values) is:

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{A x}(k) + \mathbf{B_u u_0}(k) + \mathbf{B_d d_0}(k) \\ \mathbf{y_0}(k) &= \mathbf{C x}(k) + \mathbf{D_d d_0}(k) \end{aligned}\]

in which the uop, yop and dop vectors evaluate:

\[\begin{aligned} @@ -47,4 +47,4 @@ 1 manipulated inputs u 1 states x 1 outputs y - 0 measured disturbances d

source
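For example, nominal values around a typical steady state can be set right after construction (a small sketch, reusing the constructors shown above):

using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(3, [10, 1]), 2.0)
model = setop!(model, uop=[50], yop=[20])   # operate around u = 50 and y = 20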
+ 0 measured disturbances dsource diff --git a/dev/public/state_estim/index.html b/dev/public/state_estim/index.html index dabc62236..8308e1e03 100644 --- a/dev/public/state_estim/index.html +++ b/dev/public/state_estim/index.html @@ -3,7 +3,7 @@ julia> ŷ = kf() 1-element Vector{Float64}: - 20.0source

SteadyKalmanFilter

ModelPredictiveControl.SteadyKalmanFilterType
SteadyKalmanFilter(model::LinModel; <keyword arguments>)

Construct a steady-state Kalman Filter with the LinModel model.

The steady-state (or asymptotic) Kalman filter is based on the process model :

\[\begin{aligned} + 20.0

source

SteadyKalmanFilter

ModelPredictiveControl.SteadyKalmanFilterType
SteadyKalmanFilter(model::LinModel; <keyword arguments>)

Construct a steady-state Kalman Filter with the LinModel model.

The steady-state (or asymptotic) Kalman filter is based on the process model :

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{Â x}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) + \mathbf{w}(k) \\ \mathbf{y^m}(k) &= \mathbf{Ĉ^m x}(k) + \mathbf{D̂_d^m d}(k) + \mathbf{v}(k) \\ @@ -16,7 +16,7 @@ 3 states x̂ 1 measured outputs ym 1 unmeasured outputs yu - 0 measured disturbances d

Extended Help

The model augmentation with nint_ym vector adds integrators at model measured outputs $\mathbf{y^m}$. It creates the integral action when the estimator is used in a controller as state feedback. The method default_nint computes the default value of nint_ym. It can also be tweaked by following these rules on each measured output:

  • Use 0 integrator if the model output is already integrating (else it will be unobservable)
  • Use 1 integrator if the disturbances on the output are typically "step-like"
  • Use 2 integrators if the disturbances on the output are typically "ramp-like"

The function init_estimstoch builds the stochastic model from nint_ym.

Tip

Increasing σQ_int values increases the integral action "gain".

The constructor pre-computes the steady-state Kalman gain K with the kalman function. It can sometimes fail, for example when model matrices are ill-conditioned. In such a case, you can try the alternative time-varying KalmanFilter.

source
SteadyKalmanFilter(model, i_ym, nint_ym, Q̂, R̂)

Construct the estimator from the augmented covariance matrices Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{Q̂, R̂}$.

source

KalmanFilter

ModelPredictiveControl.KalmanFilterType
KalmanFilter(model::LinModel; <keyword arguments>)

Construct a time-varying Kalman Filter with the LinModel model.

The process model is identical to SteadyKalmanFilter. The matrix $\mathbf{P̂}_k(k+1)$ is the estimation error covariance of model states augmented with the stochastic ones (specified by nint_ym). Two keyword arguments modify its initial value with $\mathbf{P̂}_{-1}(0) = \mathrm{diag}\{ \mathbf{P}(0), \mathbf{P_{int}}(0) \}$.

Arguments

  • model::LinModel : (deterministic) model for the estimations.
  • σP0=fill(1/model.nx,model.nx) : main diagonal of the initial estimate covariance $\mathbf{P}(0)$, specified as a standard deviation vector.
  • σP0_int=fill(1,sum(nint_ym)) : same as σP0 but for the stochastic model covariance $\mathbf{P_{int}}(0)$ (composed of output integrators).
  • <keyword arguments> of SteadyKalmanFilter constructor.

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);
+ 0 measured disturbances d

Extended Help

The model augmentation with nint_ym vector adds integrators at model measured outputs $\mathbf{y^m}$. It creates the integral action when the estimator is used in a controller as state feedback. The method default_nint computes the default value of nint_ym. It can also be tweaked by following these rules on each measured output:

  • Use 0 integrator if the model output is already integrating (else it will be unobservable)
  • Use 1 integrator if the disturbances on the output are typically "step-like"
  • Use 2 integrators if the disturbances on the output are typically "ramp-like"

The function init_estimstoch builds the stochastic model from nint_ym.

Tip

Increasing σQ_int values increases the integral action "gain".

The constructor pre-computes the steady-state Kalman gain K with the kalman function. It can sometimes fail, for example when model matrices are ill-conditioned. In such a case, you can try the alternative time-varying KalmanFilter.

source
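Applying the integrator rules above, an output subject to ramp-like disturbances would get two integrators, with their standard deviations set through σQ_int (a sketch; σQ_int must then hold sum(nint_ym) elements):

using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(3, [30, 1]), 4.0)
estim = SteadyKalmanFilter(model, nint_ym=[2], σQ_int=[0.1, 0.1])   # 2 integrators on the single measured output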
SteadyKalmanFilter(model, i_ym, nint_ym, Q̂, R̂)

Construct the estimator from the augmented covariance matrices Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{Q̂, R̂}$.

source

KalmanFilter

ModelPredictiveControl.KalmanFilterType
KalmanFilter(model::LinModel; <keyword arguments>)

Construct a time-varying Kalman Filter with the LinModel model.

The process model is identical to SteadyKalmanFilter. The matrix $\mathbf{P̂}_k(k+1)$ is the estimation error covariance of model states augmented with the stochastic ones (specified by nint_ym). Two keyword arguments modify its initial value with $\mathbf{P̂}_{-1}(0) = \mathrm{diag}\{ \mathbf{P}(0), \mathbf{P_{int}}(0) \}$.

Arguments

  • model::LinModel : (deterministic) model for the estimations.
  • σP0=fill(1/model.nx,model.nx) : main diagonal of the initial estimate covariance $\mathbf{P}(0)$, specified as a standard deviation vector.
  • σP0_int=fill(1,sum(nint_ym)) : same as σP0 but for the stochastic model covariance $\mathbf{P_{int}}(0)$ (composed of output integrators).
  • <keyword arguments> of SteadyKalmanFilter constructor.

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);
 
 julia> estim = KalmanFilter(model, i_ym=[2], σR=[1], σP0=[100, 100], σQ_int=[0.01])
 KalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:
@@ -24,7 +24,7 @@
  3 states x̂
  1 measured outputs ym
  1 unmeasured outputs yu
- 0 measured disturbances d
source
KalmanFilter(model, i_ym, nint_ym, P̂0, Q̂, R̂)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

Luenberger

ModelPredictiveControl.LuenbergerType
Luenberger(
+ 0 measured disturbances d
source
KalmanFilter(model, i_ym, nint_ym, P̂0, Q̂, R̂)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source
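The positional syntax above is useful when correlated noise terms are needed. A sketch with full matrices follows; the dimensions track the augmented state (here 1 model state plus 1 output integrator, hence 2×2 covariances and a 1×1 measurement covariance), and the numerical values are purely illustrative:

using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(3, [30, 1]), 4.0)          # 1 state, 1 output
i_ym, nint_ym = [1], [1]                       # one integrator on the measured output
P̂0 = [1.0  0.1; 0.1  1.0]                      # nonzero off-diagonal terms are allowed here
Q̂  = [0.1  0.01; 0.01 0.5]
R̂  = fill(1.0, 1, 1)
estim = KalmanFilter(model, i_ym, nint_ym, P̂0, Q̂, R̂)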

Luenberger

ModelPredictiveControl.LuenbergerType
Luenberger(
     model::LinModel; 
     i_ym = 1:model.ny, 
     nint_ym = default_nint(model, i_ym),
@@ -37,7 +37,7 @@
  4 states x̂
  2 measured outputs ym
  0 unmeasured outputs yu
- 0 measured disturbances d
source

UnscentedKalmanFilter

ModelPredictiveControl.UnscentedKalmanFilterType
UnscentedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an unscented Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The unscented Kalman filter is based on the process model :

\[\begin{aligned} + 0 measured disturbances d

source

UnscentedKalmanFilter

ModelPredictiveControl.UnscentedKalmanFilterType
UnscentedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an unscented Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The unscented Kalman filter is based on the process model :

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{f̂}\Big(\mathbf{x}(k), \mathbf{u}(k), \mathbf{d}(k)\Big) + \mathbf{w}(k) \\ \mathbf{y^m}(k) &= \mathbf{ĥ^m}\Big(\mathbf{x}(k), \mathbf{d}(k)\Big) + \mathbf{v}(k) \\ @@ -50,7 +50,7 @@ 3 states x̂ 1 measured outputs ym 0 unmeasured outputs yu - 0 measured disturbances d

source
UnscentedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂, α, β, κ)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

ExtendedKalmanFilter

ModelPredictiveControl.ExtendedKalmanFilterType
ExtendedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an extended Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The process model is identical to UnscentedKalmanFilter. The Jacobians of the augmented model $\mathbf{f̂, ĥ}$ are computed with ForwardDiff.jl automatic differentiation.

Warning

See Extended Help if you get an error like: MethodError: no method matching (::var"##")(::Vector{ForwardDiff.Dual}).

Arguments

  • model::SimModel : (deterministic) model for the estimations.
  • <keyword arguments> of SteadyKalmanFilter constructor.
  • <keyword arguments> of KalmanFilter constructor.

Examples

julia> model = NonLinModel((x,u,_)->0.2x+u, (x,_)->-3x, 5.0, 1, 1, 1);
+ 0 measured disturbances d
source
UnscentedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂, α, β, κ)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

ExtendedKalmanFilter

ModelPredictiveControl.ExtendedKalmanFilterType
ExtendedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an extended Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The process model is identical to UnscentedKalmanFilter. The Jacobians of the augmented model $\mathbf{f̂, ĥ}$ are computed with ForwardDiff.jl automatic differentiation.

Warning

See Extended Help if you get an error like: MethodError: no method matching (::var"##")(::Vector{ForwardDiff.Dual}).

Arguments

  • model::SimModel : (deterministic) model for the estimations.
  • <keyword arguments> of SteadyKalmanFilter constructor.
  • <keyword arguments> of KalmanFilter constructor.

Examples

julia> model = NonLinModel((x,u,_)->0.2x+u, (x,_)->-3x, 5.0, 1, 1, 1);
 
 julia> estim = ExtendedKalmanFilter(model, σQ=[2], σQ_int=[2], σP0=[0.1], σP0_int=[0.1])
 ExtendedKalmanFilter estimator with a sample time Ts = 5.0 s, NonLinModel and:
@@ -58,16 +58,16 @@
  2 states x̂
  1 measured outputs ym
  0 unmeasured outputs yu
- 0 measured disturbances d

Extended Help

Automatic differentiation (AD) allows exact Jacobians. The NonLinModel f and h functions must be compatible with this feature though. See Automatic differentiation for common mistakes when writing these functions.

source
ExtendedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

InternalModel

ModelPredictiveControl.InternalModelType
InternalModel(model::SimModel; i_ym=1:model.ny, stoch_ym=ss(1,1,1,1,model.Ts).*I)

Construct an internal model estimator based on model (LinModel or NonLinModel).

i_ym provides the model output indices that are measured $\mathbf{y^m}$, the rest are unmeasured $\mathbf{y^u}$. model evaluates the deterministic predictions $\mathbf{ŷ_d}$, and stoch_ym, the stochastic predictions of the measured outputs $\mathbf{ŷ_s^m}$ (the unmeasured ones being $\mathbf{ŷ_s^u=0}$). The predicted outputs sum both values : $\mathbf{ŷ = ŷ_d + ŷ_s}$.

Warning

InternalModel estimator does not work if model is integrating or unstable. The constructor verifies these aspects for LinModel but not for NonLinModel. Use any other state estimator in such cases.

Examples

julia> estim = InternalModel(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5), i_ym=[2])
+ 0 measured disturbances d

Extended Help

Automatic differentiation (AD) allows exact Jacobians. The NonLinModel f and h functions must be compatible with this feature though. See Automatic differentiation for common mistakes when writing these functions.

source
ExtendedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂)

Construct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

InternalModel

ModelPredictiveControl.InternalModelType
InternalModel(model::SimModel; i_ym=1:model.ny, stoch_ym=ss(1,1,1,1,model.Ts).*I)

Construct an internal model estimator based on model (LinModel or NonLinModel).

i_ym provides the model output indices that are measured $\mathbf{y^m}$, the rest are unmeasured $\mathbf{y^u}$. model evaluates the deterministic predictions $\mathbf{ŷ_d}$, and stoch_ym, the stochastic predictions of the measured outputs $\mathbf{ŷ_s^m}$ (the unmeasured ones being $\mathbf{ŷ_s^u=0}$). The predicted outputs sum both values : $\mathbf{ŷ = ŷ_d + ŷ_s}$.

Warning

InternalModel estimator does not work if model is integrating or unstable. The constructor verifies these aspects for LinModel but not for NonLinModel. Use any other state estimator in such cases.

Examples

julia> estim = InternalModel(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5), i_ym=[2])
 InternalModel estimator with a sample time Ts = 0.5 s, LinModel and:
  1 manipulated inputs u
  2 states x̂
  1 measured outputs ym
  1 unmeasured outputs yu
- 0 measured disturbances d

Extended Help

stoch_ym is a TransferFunction or StateSpace object that models disturbances on $\mathbf{y^m}$. Its input is a hypothetical zero mean white noise vector. stoch_ym supposes 1 integrator per measured output by default, assuming that the current stochastic estimate $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ is constant in the future. This is the dynamic matrix control (DMC) strategy, which is simple but sometimes too aggressive. Additional poles and zeros in stoch_ym can mitigate this.

source

Default model augmentation

ModelPredictiveControl.default_nintFunction
default_nint(model::LinModel, i_ym=1:model.ny)

Get the default number of integrators per measured output, nint_ym, for LinModel.

The measured output $\mathbf{y^m}$ indices are specified by the i_ym argument. By default, one integrator is added on each measured output. If the $\mathbf{Â, Ĉ}$ matrices of the augmented model become unobservable, the integrator is removed. This approach works well for stable, integrating and unstable models (see Examples).

Examples

julia> model = LinModel(append(tf(3, [10, 1]), tf(2, [1, 0]), tf(4,[-5, 1])), 1.0);
+ 0 measured disturbances d

Extended Help

stoch_ym is a TransferFunction or StateSpace object that models disturbances on $\mathbf{y^m}$. Its input is a hypothetical zero mean white noise vector. stoch_ym supposes 1 integrator per measured output by default, assuming that the current stochastic estimate $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ is constant in the future. This is the dynamic matrix control (DMC) strategy, which is simple but sometimes too aggressive. Additional poles and zeros in stoch_ym can mitigate this.

source
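For instance, a less aggressive stochastic model than the default pure integrator can be obtained by adding a zero that filters the correction; the transfer function below is purely illustrative (it keeps a direct feedthrough, like the default ss(1,1,1,1,model.Ts) model):

using ModelPredictiveControl, ControlSystemsBase
model    = LinModel(tf(3, [30, 1]), 0.5)
stoch_ym = tf([1, -0.6], [1, -1], 0.5)          # (z - 0.6)/(z - 1): integrator with a zero
estim    = InternalModel(model, stoch_ym=stoch_ym)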

Default model augmentation

ModelPredictiveControl.default_nintFunction
default_nint(model::LinModel, i_ym=1:model.ny)

Get the default number of integrators per measured output, nint_ym, for LinModel.

The measured output $\mathbf{y^m}$ indices are specified by the i_ym argument. By default, one integrator is added on each measured output. If the $\mathbf{Â, Ĉ}$ matrices of the augmented model become unobservable, the integrator is removed. This approach works well for stable, integrating and unstable models (see Examples).

Examples

julia> model = LinModel(append(tf(3, [10, 1]), tf(2, [1, 0]), tf(4,[-5, 1])), 1.0);
 
 julia> nint_ym = default_nint(model)
 3-element Vector{Int64}:
  1
  0
- 1
source
default_nint(model::SimModel, i_ym=1:model.ny)

One integrator on each measured output by default if model is not a LinModel.

source
+ 1source
default_nint(model::SimModel, i_ym=1:model.ny)

One integrator on each measured output by default if model is not a LinModel.

source
diff --git a/dev/search/index.html b/dev/search/index.html index 9972b246f..ea68837a5 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · ModelPredictiveControl.jl

Loading search...

    +Search · ModelPredictiveControl.jl

    Loading search...

      diff --git a/dev/search_index.js b/dev/search_index.js index 44559a802..ce591badc 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"manual/installation/#Manual:-Installation","page":"Installation","title":"Manual: Installation","text":"","category":"section"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"To install the ModelPredictiveControl package, run this command in the Julia REPL:","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"using Pkg; Pkg.add(\"ModelPredictiveControl\")","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"Note that that the construction of linear models typically requires ss or tf functions, it is thus recommended to load the package with:","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"using ModelPredictiveControl, ControlSystemsBase","category":"page"},{"location":"internals/predictive_control/#Functions:-PredictiveController-Internals","page":"Predictive Controllers","title":"Functions: PredictiveController Internals","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"The prediction methodology of this module is mainly based on Maciejowski textbook [1].","category":"page"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"[1]: Maciejowski, J. 2000, \"Predictive control : with constraints\", 1st ed., Prentice Hall, ISBN 978-0201398236.","category":"page"},{"location":"internals/predictive_control/#Controller-Initialization","page":"Predictive Controllers","title":"Controller Initialization","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.init_deterpred\nModelPredictiveControl.init_ΔUtoU\nModelPredictiveControl.init_quadprog\nModelPredictiveControl.init_stochpred\nModelPredictiveControl.init_linconstraint","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_deterpred","page":"Predictive Controllers","title":"ModelPredictiveControl.init_deterpred","text":"init_deterpred(model::LinModel, Hp, Hc) -> E, G, J, Kd, Q\n\nConstruct deterministic prediction matrices for LinModel model.\n\nThe linear model predictions are evaluated by :\n\nbeginaligned\n mathbfY = mathbfE ΔU + mathbfG d(k) + mathbfJ D + mathbfK_d x_d(k) \n + mathbfQ u(k-1) + mathbfY_s \n = mathbfE ΔU + mathbfF\nendaligned\n\nwhere predicted outputs mathbfY, stochastic outputs mathbfY_s, and measured disturbances mathbfD are from k + 1 to k + H_p. Input increments mathbfΔU are from k to k + H_c - 1. Deterministic state estimates mathbfx_d(k) are extracted from current estimates mathbfx_k-1(k) with getestimates!. 
Operating points on mathbfu, mathbfd and mathbfy are omitted in above equations.\n\nExtended Help\n\nUsing the mathbfA B_u C B_d D_d matrices in model and the equation mathbfW_j = mathbfC ( _i=0^j mathbfA^i ) mathbfB_u, the prediction matrices are computed by :\n\nbeginaligned\nmathbfE = beginbmatrix\nmathbfW_0 mathbf0 cdots mathbf0 \nmathbfW_1 mathbfW_0 cdots mathbf0 \nvdots vdots ddots vdots \nmathbfW_H_p-1 mathbfW_H_p-2 cdots mathbfW_H_p-H_c+1\nendbmatrix\n\nmathbfG = beginbmatrix\nmathbfCmathbfA^0 mathbfB_d \nmathbfCmathbfA^1 mathbfB_d \nvdots \nmathbfCmathbfA^H_p-1 mathbfB_d\nendbmatrix\n\nmathbfJ = beginbmatrix\nmathbfD_d mathbf0 cdots mathbf0 \nmathbfCmathbfA^0 mathbfB_d mathbfD_d cdots mathbf0 \nvdots vdots ddots vdots \nmathbfCmathbfA^H_p-2 mathbfB_d mathbfCmathbfA^H_p-3 mathbfB_d cdots mathbfD_d\nendbmatrix\n\nmathbfK_d = beginbmatrix\nmathbfCmathbfA^0 \nmathbfCmathbfA^1 \nvdots \nmathbfCmathbfA^H_p-1\nendbmatrix\n\nmathbfQ = beginbmatrix\nmathbfW_0 \nmathbfW_1 \nvdots \nmathbfW_H_p-1\nendbmatrix\nendaligned\n\nnote: Note\nStochastic predictions mathbfY_s are calculated separately (see init_stochpred) and added to the mathbfF matrix to support internal model structure and reduce NonLinMPC computational costs. That is also why the prediction matrices are built on mathbfA B_u C B_d D_d instead of the augmented model mathbfA B_u C B_d D_d.\n\n\n\n\n\nReturn empty matrices if model is not a LinModel\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_ΔUtoU","page":"Predictive Controllers","title":"ModelPredictiveControl.init_ΔUtoU","text":"init_ΔUtoU(nu, Hp, Hc, C, c_Umin, c_Umax) -> S_Hp, T_Hp, S_Hc, T_Hc\n\nInit manipulated input increments to inputs conversion matrices.\n\nThe conversion from the input increments mathbfΔU to manipulated inputs over H_p and H_c are calculated by:\n\nbeginaligned\nmathbfU = \n mathbfU_H_p = mathbfS_H_p mathbfΔU + mathbfT_H_p mathbfu(k-1) \n mathbfU_H_c = mathbfS_H_c mathbfΔU + mathbfT_H_c mathbfu(k-1)\nendaligned\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_quadprog","page":"Predictive Controllers","title":"ModelPredictiveControl.init_quadprog","text":"init_quadprog(model::LinModel, Ẽ, S_Hp, M_Hp, N_Hc, L_Hp) -> P̃, q̃, p\n\nInit the quadratic programming optimization matrix P̃ and q̃.\n\nThe matrices appear in the quadratic general form :\n\n J = min_mathbfΔU frac12mathbf(ΔU)P(ΔU) + mathbfq(ΔU) + p \n\nmathbfP is constant if the model and weights are linear and time invariant (LTI). The vector mathbfq and scalar p need recalculation each control period k (see initpred! method). p does not impact the minima position. It is thus useless at optimization but required to evaluate the minimal J value.\n\n\n\n\n\nReturn empty matrices if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_stochpred","page":"Predictive Controllers","title":"ModelPredictiveControl.init_stochpred","text":"init_stochpred(estim::StateEstimator, Hp) -> Ks, Ps\n\nInit the stochastic prediction matrix Ks from estim estimator for predictive control.\n\nmathbfK_s is the prediction matrix of the stochastic model (composed exclusively of integrators):\n\n mathbfY_s = mathbfK_s x_s(k)\n\nThe stochastic predictions mathbfY_s are the integrator outputs from k+1 to k+H_p. mathbfx_s(k) is extracted from current estimates mathbfx_k-1(k) with getestimates!. 
The method also returns an empty mathbfP_s matrix, since it is useless except for InternalModel estimators.\n\n\n\n\n\ninit_stochpred(estim::InternalModel, Hp) -> Ks, Ps\n\nInit the stochastic prediction matrices for InternalModel.\n\nKs and Ps matrices are defined as:\n\n mathbfY_s = mathbfK_s x_s(k) + mathbfP_s y_s(k)\n\nCurrent stochastic outputs mathbfy_s(k) comprises the measured outputs mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) and unmeasured mathbfy_s^u(k) = mathbf0. See [2].\n\n[2]: Desbiens, A., D. Hodouin & É. Plamondon. 2000, \"Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics\", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 465–475, ISSN 1350-2379.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_linconstraint","page":"Predictive Controllers","title":"ModelPredictiveControl.init_linconstraint","text":"init_linconstraint(model::LinModel, \n A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ŷmin, A_Ŷmax,\n i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ŷmin, i_Ŷmax\n) -> A, i_b, b\n\nInit A, b and i_b for the linear inequality constraints (mathbfA ΔU b).\n\ni_b is a BitVector including the indices of mathbfb that are finite numbers.\n\n\n\n\n\nInit values without predicted output constraints if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Constraint-Relaxation","page":"Predictive Controllers","title":"Constraint Relaxation","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.relaxU\nModelPredictiveControl.relaxΔU\nModelPredictiveControl.relaxŶ","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxU","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxU","text":"relaxU(C, c_Umin, c_Umax, S_Hp, S_Hc) -> A_Umin, A_Umax, S̃_Hp\n\nAugment manipulated inputs constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the augmented conversion matrix mathbfS_H_p, similar to the one described at init_ΔUtoU. It also returns the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_U_min \n mathbfA_U_max \nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfU_min + mathbfT_H_c mathbfu(k-1) \n + mathbfU_max - mathbfT_H_c mathbfu(k-1)\nendbmatrix\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxΔU","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxΔU","text":"relaxΔU(C, c_ΔUmin, c_ΔUmax, ΔUmin, ΔUmax, N_Hc) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, Ñ_Hc\n\nAugment input increments constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the augmented input increment weights mathbfN_H_c (that incorporate C). 
It also returns the augmented constraints mathbfΔU_min and mathbfΔU_max and the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_ΔU_min \n mathbfA_ΔU_max\nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfΔU_min \n + mathbfΔU_max\nendbmatrix\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxŶ","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxŶ","text":"relaxŶ(::LinModel, C, c_Ŷmin, c_Ŷmax, E) -> A_Ŷmin, A_Ŷmax, Ẽ\n\nAugment linear output prediction constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the mathbfE matrix that appears in the linear model prediction equation mathbfY = E ΔU + F, and the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_Y_min \n mathbfA_Y_max\nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfY_min + mathbfF \n + mathbfY_max - mathbfF \nendbmatrix\n\n\n\n\n\nReturn empty matrices if model is not a LinModel\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Get-Estimates","page":"Predictive Controllers","title":"Get Estimates","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.getestimates!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.getestimates!","page":"Predictive Controllers","title":"ModelPredictiveControl.getestimates!","text":"getestimates!(mpc::PredictiveController, estim::StateEstimator) -> x̂d, x̂s, ŷ\n\nGet estimator output and split x̂ into the deterministic x̂d and stochastic x̂s states.\n\n\n\n\n\ngetestimates!(mpc::PredictiveController, estim::InternalModel) -> x̂d, x̂s, ŷ\n\nGet the internal model deterministic estim.x̂d and stochastic estim.x̂s states.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Predictions","page":"Predictive Controllers","title":"Predictions","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.initpred!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.initpred!","page":"Predictive Controllers","title":"ModelPredictiveControl.initpred!","text":"initpred!(mpc, model::LinModel, d, D̂, R̂y)\n\nInit linear model prediction matrices F, q̃ and p.\n\nSee init_deterpred and init_quadprog for the definition of the matrices.\n\n\n\n\n\ninitpred!(mpc::PredictiveController, model::SimModel, d, D̂, R̂y)\n\nInit d0 and D̂0 matrices when model is not a LinModel.\n\nd0 and D̂0 are the measured disturbances and its predictions without the operating points mathbfd_op.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Constraints","page":"Predictive Controllers","title":"Constraints","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.linconstraint!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.linconstraint!","page":"Predictive Controllers","title":"ModelPredictiveControl.linconstraint!","text":"linconstraint!(mpc::PredictiveController, model::LinModel)\n\nSet b vector for the linear model inequality constraints (mathbfA ΔU 
b).\n\n\n\n\n\nSet b excluding predicted output constraints when model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Functions:-StateEstimator-Internals","page":"State Estimators","title":"Functions: StateEstimator Internals","text":"","category":"section"},{"location":"internals/state_estim/#Estimator-Initialization","page":"State Estimators","title":"Estimator Initialization","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.init_estimstoch\nModelPredictiveControl.augment_model\nModelPredictiveControl.init_ukf\nModelPredictiveControl.init_internalmodel","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.init_estimstoch","page":"State Estimators","title":"ModelPredictiveControl.init_estimstoch","text":"init_estimstoch(i_ym, nint_ym::Vector{Int}) -> Asm, Csm, nint_ym\n\nCalc stochastic model matrices from output integrators specifications for state estimation.\n\nFor closed-loop state estimators. nint_ym is a vector providing how many integrator should be added for each measured output mathbfy^m. The argument generates the Asm and Csm matrices:\n\nbeginaligned\nmathbfx_s(k+1) = mathbfA_s^m x_s(k) + mathbfB_s^m e(k) \nmathbfy_s^m(k) = mathbfC_s^m x_s(k)\nendaligned\n\nwhere mathbfe(k) is a conceptual and unknown zero mean white noise. mathbfB_s^m is not used for closed-loop state estimators thus ignored.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.augment_model","page":"State Estimators","title":"ModelPredictiveControl.augment_model","text":"augment_model(model::LinModel, As, Cs; verify_obsv=true) -> Â, B̂u, Ĉ, B̂d, D̂d\n\nAugment LinModel state-space matrices with the stochastic ones As and Cs.\n\nIf mathbfx are model.x states, and mathbfx_s, the states defined at init_estimstoch, we define an augmented state vector mathbfx = beginsmallmatrix mathbfx mathbfx_s endsmallmatrix . 
The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) \n mathbfy(k) = mathbfC x(k) + mathbfD_d d(k)\nendaligned\n\nAn error is thrown if the augmented model is not observable and verify_obsv == true.\n\n\n\n\n\nNo need to augment the model if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.init_ukf","page":"State Estimators","title":"ModelPredictiveControl.init_ukf","text":"init_ukf(nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ\n\nCompute the UnscentedKalmanFilter constants from α β and κ.\n\nWith n_mathbfx elements in the state vector mathbfx and n_σ = 2 n_mathbfx + 1 sigma points, the scaling factor applied on standard deviation matrices sqrtmathbfP is:\n\n γ = α sqrt n_mathbfx + κ \n\nThe weight vector (n_σ 1) for the mean and the weight matrix (n_σ n_σ) for the covariance are respectively:\n\nbeginaligned\n mathbfm = beginbmatrix 1 - tfracn_mathbfxγ^2 tfrac12γ^2 tfrac12γ^2 cdots tfrac12γ^2 endbmatrix \n mathbfS = mathrmdiagbig( 2 - α^2 + β - tfracn_mathbfxγ^2 tfrac12γ^2 tfrac12γ^2 cdots tfrac12γ^2 big)\nendaligned\n\nSee update_estimate!(::UnscentedKalmanFilter) for other details.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.init_internalmodel","page":"State Estimators","title":"ModelPredictiveControl.init_internalmodel","text":"init_internalmodel(As, Bs, Cs, Ds) -> Âs, B̂s\n\nCalc stochastic model update matrices Âs and B̂s for InternalModel estimator.\n\nAs, Bs, Cs and Ds are the stochastic model matrices :\n\nbeginaligned\n mathbfx_s(k+1) = mathbfA_s x_s(k) + mathbfB_s e(k) \n mathbfy_s(k) = mathbfC_s x_s(k) + mathbfD_s e(k)\nendaligned\n\nwhere mathbfe(k) is conceptual and unknown zero mean white noise. Its optimal estimation is mathbfe=0, the expected value. Thus, the Âs and B̂s matrices that optimally update the stochastic estimate mathbfx_s are:\n\nbeginaligned\n mathbfx_s(k+1) \n = mathbf(A_s - B_s D_s^-1 C_s) x_s(k) + mathbf(B_s D_s^-1) y_s(k) \n = mathbfA_s x_s(k) + mathbfB_s y_s(k)\nendaligned\n\nwith current stochastic outputs estimation mathbfy_s(k), composed of the measured mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) and unmeasured mathbfy_s^u = 0 outputs. See [1].\n\n[1]: Desbiens, A., D. Hodouin & É. Plamondon. 2000, \"Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics\", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 
465–475, ISSN 1350-2379.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Augmented-Model","page":"State Estimators","title":"Augmented Model","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.f̂\nModelPredictiveControl.ĥ","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.f̂","page":"State Estimators","title":"ModelPredictiveControl.f̂","text":"f̂(estim::StateEstimator, x̂, u, d)\n\nState function mathbff of the augmented model.\n\nBy introducing an augmented state vector mathbfx like in augment_model, the function returns the next state of the augmented model, defined as:\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu(k) mathbfd(k)Big) \n mathbfy(k) = mathbfhBig(mathbfx(k) mathbfd(k)Big) \nendaligned\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.ĥ","page":"State Estimators","title":"ModelPredictiveControl.ĥ","text":"ĥ(estim::StateEstimator, x̂, d)\n\nOutput function mathbfh of the augmented model, see f̂ for details.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Remove-Operating-Points","page":"State Estimators","title":"Remove Operating Points","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.remove_op!","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.remove_op!","page":"State Estimators","title":"ModelPredictiveControl.remove_op!","text":"remove_op!(estim::StateEstimator, u, d, ym) -> u0, d0, ym0\n\nRemove operating points on inputs u, measured outputs ym and disturbances d.\n\nAlso store current inputs without operating points u0 in estim.lastu0. This field is used for PredictiveController computations.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Update-Estimate","page":"State Estimators","title":"Update Estimate","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nAll these methods assume that the operating points are already removed in u, ym and d arguments. Strickly speaking, the arguments should be called u0, ym0 and d0, following setop! notation. The 0 is dropped to simplify the notation.","category":"page"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.update_estimate!","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.update_estimate!","page":"State Estimators","title":"ModelPredictiveControl.update_estimate!","text":"update_estimate!(estim::SteadyKalmanFilter, u, ym, d)\n\nUpdate estim.x̂ estimate with current inputs u, measured outputs ym and dist. 
d.\n\nThe SteadyKalmanFilter updates it with the precomputed Kalman gain mathbfK:\n\nmathbfx_k(k+1) = mathbfA x_k-1(k) + mathbfB_u u(k) + mathbfB_d d(k) \n + mathbfKmathbfy^m(k) - mathbfC^m x_k-1(k) - mathbfD_d^m d(k)\n\n\n\n\n\nupdate_estimate!(estim::KalmanFilter, u, ym, d)\n\nUpdate KalmanFilter state estim.x̂ and estimation error covariance estim.P̂.\n\nIt implements the time-varying Kalman Filter in its predictor (observer) form :\n\nbeginaligned\n mathbfM(k) = mathbfP_k-1(k)mathbfC^m\n mathbfC^m P_k-1(k)mathbfC^m + mathbfR^-1 \n mathbfK(k) = mathbfA M(k) \n mathbfy^m(k) = mathbfC^m x_k-1(k) + mathbfD_d^m d(k) \n mathbfx_k(k+1) = mathbfA x_k-1(k) + mathbfB_u u(k) + mathbfB_d d(k)\n + mathbfK(k)mathbfy^m(k) - mathbfy^m(k) \n mathbfP_k(k+1) = mathbfAmathbfP_k-1(k)\n - mathbfM(k)mathbfC^m P_k-1(k)mathbfA + mathbfQ\nendaligned\n\nbased on the process model described in SteadyKalmanFilter. The notation mathbfx_k-1(k) refers to the state for the current time k estimated at the last control period k-1. See [2] for details.\n\n[2]: Boyd S., \"Lecture 8 : The Kalman Filter\" (Winter 2008-09) [course slides], EE363: Linear Dynamical Systems, https://web.stanford.edu/class/ee363/lectures/kf.pdf.\n\n\n\n\n\nupdate_estimate!(estim::UnscentedKalmanFilter, u, ym, d)\n\nUpdate UnscentedKalmanFilter state estim.x̂ and covariance estimate estim.P̂.\n\nIt implements the unscented Kalman Filter in its predictor (observer) form, based on the generalized unscented transform[3]. See init_ukf for the definition of the constants mathbfm S and γ. \n\nDenoting mathbfx_k-1(k) as the state for the current time k estimated at the last period k-1, mathbf0, a null vector, n_σ = 2 n_mathbfx + 1, the number of sigma points, and mathbfX_k-1^j(k), the vector at the jth column of mathbfX_k-1(k), the estimator updates the states with:\n\nbeginaligned\n mathbfX_k-1(k) = biggbeginmatrix mathbfx_k-1(k) mathbfx_k-1(k) cdots mathbfx_k-1(k) endmatrixbigg + biggbeginmatrix mathbf0 γ sqrtmathbfP_k-1(k) -γ sqrtmathbfP_k-1(k) endmatrixbigg \n mathbfY^m(k) = biggbeginmatrix mathbfh^mBig( mathbfX_k-1^1(k) Big) mathbfh^mBig( mathbfX_k-1^2(k) Big) cdots mathbfh^mBig( mathbfX_k-1^n_σ(k) Big) endmatrixbigg \n mathbfy^m(k) = mathbfY^m(k) mathbfm \n mathbfX_k-1(k) = beginbmatrix mathbfX_k-1^1(k) - mathbfx_k-1(k) mathbfX_k-1^2(k) - mathbfx_k-1(k) cdots mathbfX_k-1^n_σ(k) - mathbfx_k-1(k) endbmatrix \n mathbfY^m(k) = beginbmatrix mathbfY^m^1(k) - mathbfy^m(k) mathbfY^m^2(k) - mathbfy^m(k) cdots mathbfY^m^n_σ(k) - mathbfy^m(k) endbmatrix \n mathbfM(k) = mathbfY^m(k) mathbfS mathbfY^m(k) + mathbfR \n mathbfK(k) = mathbfX_k-1(k) mathbfS mathbfY^m(k) mathbfM(k)^-1 \n mathbfx_k(k) = mathbfx_k-1(k) + mathbfK(k) big mathbfy^m(k) - mathbfy^m(k) big \n mathbfP_k(k) = mathbfP_k-1(k) - mathbfK(k) mathbfM(k) mathbfK(k) \n mathbfX_k(k) = biggbeginmatrix mathbfx_k(k) mathbfx_k(k) cdots mathbfx_k(k) endmatrixbigg + biggbeginmatrix mathbf0 gamma sqrtmathbfP_k(k) - gamma sqrtmathbfP_k(k) endmatrixbigg \n mathbfX_k(k+1) = biggbeginmatrix mathbffBig( mathbfX_k^1(k) mathbfu(k) mathbfd(k) Big) mathbffBig( mathbfX_k^2(k) mathbfu(k) mathbfd(k) Big) cdots mathbffBig( mathbfX_k^n_σ(k) mathbfu(k) mathbfd(k) Big) endmatrixbigg \n mathbfx_k(k+1) = mathbfX_k(k+1)mathbfm \n mathbfX_k(k+1) = beginbmatrix mathbfX_k^1(k+1) - mathbfx_k(k+1) mathbfX_k^2(k+1) - mathbfx_k(k+1) cdots mathbfX_k^n_σ(k+1) - mathbfx_k(k+1) endbmatrix \n mathbfP_k(k+1) = mathbfX_k(k+1) mathbfS mathbfX_k(k+1) + mathbfQ\nendaligned \n\nby using the lower triangular factor of cholesky to compute 
sqrtmathbfP_k-1(k) and sqrtmathbfP_k(k). The matrices mathbfP Q R are the covariance of the estimation error, process noise and sensor noise, respectively.\n\n[3]: Simon, D. 2006, \"Chapter 14: The unscented Kalman filter\" in \"Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches\", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN9780470045343.\n\n\n\n\n\nupdate_estimate!(estim::ExtendedKalmanFilter, u, ym, d=Float64[])\n\nUpdate ExtendedKalmanFilter state estim.x̂ and covariance estim.P̂.\n\nThe equations are similar to update_estimate!(::KalmanFilter) but with the substitutions mathbfA = F(k) and mathbfC^m = H^m(k):\n\nbeginaligned\n mathbfM(k) = mathbfP_k-1(k)mathbfH^m(k)\n mathbfH^m(k)mathbfP_k-1(k)mathbfH^m(k) + mathbfR^-1 \n mathbfK(k) = mathbfF(k) mathbfM(k) \n mathbfy^m(k) = mathbfh^mBig( mathbfx_k-1(k) mathbfd(k) Big) \n mathbfx_k(k+1) = mathbffBig( mathbfx_k-1(k) mathbfu(k) mathbfd(k) Big)\n + mathbfK(k)mathbfy^m(k) - mathbfy^m(k) \n mathbfP_k(k+1) = mathbfF(k)mathbfP_k-1(k)\n - mathbfM(k)mathbfH^m(k)mathbfP_k-1(k)mathbfF(k) \n + mathbfQ\nendaligned\n\nForwardDiff.jacobian automatically computes the Jacobians:\n\nbeginaligned\n mathbfF(k) = left fracmathbff(mathbfx mathbfu mathbfd)mathbfx right_mathbfx = x_k-1(k) mathbfu = u(k) mathbfd = d(k) \n mathbfH(k) = left fracmathbfh(mathbfx mathbfd)mathbfx right_mathbfx = x_k-1(k) mathbfd = d(k)\nendaligned\n\nThe matrix mathbfH^m is the rows of mathbfH that are measured outputs.\n\n\n\n\n\nupdate_estimate!(estim::Luenberger, u, ym, d=Float64[])\n\nSame than update_estimate!(::SteadyKalmanFilter) but using Luenberger.\n\n\n\n\n\nupdate_estimate!(estim::InternalModel, u, ym, d=Float64[])\n\nUpdate estim.x̂ \\ x̂d \\ x̂s with current inputs u, measured outputs ym and dist. d.\n\nThe InternalModel updates the deterministic x̂d and stochastic x̂s estimates with:\n\nbeginaligned\n mathbfx_d(k+1) = mathbffBig( mathbfx_d(k) mathbfu(k) mathbfd(k) Big) \n mathbfx_s(k+1) = mathbfA_s x_s(k) + mathbfB_s y_s(k)\nendaligned\n\nThis estimator does not augment the state vector, thus mathbfx = x_d. See init_internalmodel for details. \n\n\n\n\n\n","category":"function"},{"location":"manual/nonlinmpc/#man_nonlin","page":"Nonlinear Design","title":"Manual: Nonlinear Design","text":"","category":"section"},{"location":"manual/nonlinmpc/#Nonlinear-Model","page":"Nonlinear Design","title":"Nonlinear Model","text":"","category":"section"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"In this example, the goal is to control the angular position θ of a pendulum attached to a motor. 
Knowing that the manipulated input is the motor torque τ, the I/O vectors are:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"beginaligned\n mathbfu = beginbmatrix τ endbmatrix \n mathbfy = beginbmatrix θ endbmatrix\nendaligned","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The plant model is nonlinear:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"beginaligned\n dotθ(t) = ω(t) \n dotω(t) = -fracgLsinbig( θ(t) big) - fracKm ω(t) + frac1m L^2 τ(t)\nendaligned","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"in which g is the gravitational acceleration, L, the pendulum length, K, the friction coefficient at the pivot point, and m, the mass attached at the end of the pendulum. Here, the explicit Euler method discretizes the system to construct a NonLinModel:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"using ModelPredictiveControl\nfunction pendulum(par, x, u)\n g, L, K, m = par # [m/s], [m], [kg/s], [kg]\n θ, ω = x[1], x[2] # [rad], [rad/s]\n τ = u[1] # [N m]\n dθ = ω\n dω = -g/L*sin(θ) - K/m*ω + τ/m/L^2\n return [dθ, dω]\nend\nTs = 0.1 # [s]\npar = (9.8, 0.4, 1.2, 0.3)\nf(x, u, _ ) = x + Ts*pendulum(par, x, u) # Euler method\nh(x, _ ) = [180/π*x[1]] # [°]\nnu, nx, ny = 1, 2, 1\nmodel = NonLinModel(f, h, Ts, nu, nx, ny)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The output function mathbfh converts the θ angle to degrees. It is good practice to first simulate model using sim! as a quick sanity check:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"using Plots\nu = [0.5]\nplot(sim!(model, 60, u), plotu=false)","category":"page"},{"location":"manual/nonlinmpc/#Nonlinear-Predictive-Controller","page":"Nonlinear Design","title":"Nonlinear Predictive Controller","text":"","category":"section"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"An UnscentedKalmanFilter estimates the plant state :","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"estim = UnscentedKalmanFilter(model, σQ=[0.1, 0.5], σR=[0.5], nint_ym=[1], σQ_int=[5.0])","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The standard deviation of the angular velocity ω is higher here (σQ second value) since dotω(t) equation includes an uncertain parameter: the friction coefficient K. The estimator tuning is tested on a plant simulated with a different K:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"par_plant = (par[1], par[2], par[3] + 0.25, par[4])\nf_plant(x, u, _) = x + Ts*pendulum(par_plant, x, u)\nplant = NonLinModel(f_plant, h, Ts, nu, nx, ny)\nres = sim!(estim, 60, [0.5], plant=plant, y_noise=[0.5])\nplot(res, plotu=false, plotxwithx̂=true)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The estimate x_3 is the integrator state that compensates for static errors (nint_ym and σQ_int parameters of UnscentedKalmanFilter). The Kalman filter performance seems sufficient for control. 
As the motor torque is limited to -1.5 to 1.5 N m, we incorporate the input constraints in a NonLinMPC:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"mpc = NonLinMPC(estim, Hp=20, Hc=4, Mwt=[0.5], Nwt=[2.5], Cwt=Inf)\nmpc = setconstraint!(mpc, umin=[-1.5], umax=[+1.5])","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"We test mpc performance on plant by imposing an angular setpoint of 180° (inverted position):","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"res = sim!(mpc, 60, [180.0], plant=plant, x0=zeros(plant.nx), x̂0=zeros(mpc.estim.nx̂))\nplot(res)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The controller seems robust enough to variations of the K coefficient. Starting from this inverted position, the closed-loop response to a step disturbance of 10° is also satisfactory:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"res = sim!(mpc, 60, [180.0], plant=plant, x0=[π, 0], x̂0=[π, 0, 0], y_step=[10])\nplot(res)","category":"page"},{"location":"public/state_estim/#Functions:-State-Estimators","page":"State Estimators","title":"Functions: State Estimators","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"Pages = [\"state_estim.md\"]","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"This module includes many state estimators (or state observers), both for deterministic and stochastic systems. The implementations focus on control applications, that is, relying on the estimates to compute a full state feedback (predictive controllers, in this package). They all incorporate some kind of integral action by default, since it is generally desired to eliminate the steady-state error with closed-loop control (offset-free tracking).","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nIf you plan to use the estimators for contexts other than this specific package (e.g. filtering, parameter estimation, etc.), care must be taken at construction since the integral action is not necessarily desired. The option nint_ym=0 disables it.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"The estimators are all implemented in the predictor form (a.k.a. observer form), that is, they all estimate, at each discrete time k, the states of the next period mathbfx_k(k+1)[1]. In contrast, the filter form that estimates mathbfx_k(k) is sometimes slightly more accurate.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"[1]: also denoted mathbfx_k+1k elsewhere.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"The predictor form comes in handy for control applications since the estimations come after the controller computations, without introducing any additional delays. Moreover, the moveinput! method of the predictive controllers does not automatically update the estimates with updatestate!. 
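Concretely, the resulting call order in a closed loop looks like the following minimal sketch (the first-order model, operating point, setpoint and horizons are purely illustrative, and the same LinModel doubles as the simulated plant):

```julia
using ModelPredictiveControl, ControlSystemsBase
plant = setop!(LinModel(tf(2, [10, 1]), 2.0), yop=[20])  # illustrative model only
mpc   = LinMPC(plant, Hp=10, Hc=2)
initstate!(mpc, plant.uop, plant())       # bumpless start from the current I/O
for k = 0:24
    y = plant()                           # 1. measure the plant output
    u = moveinput!(mpc, [25])             # 2. solve the MPC problem (estimate not updated yet)
    # 3. u could be applied to the real plant right here, before the possibly
    #    expensive estimator update
    updatestate!(mpc, u, y)               # 4. update the estimate for the next period
    updatestate!(plant, u)                # the LinModel stands in for the real process
end
```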
This allows applying the calculated inputs on the real plant before starting the potentially expensive estimator computations (see Manual for examples).","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nAll the estimators support measured mathbfy^m and unmeasured mathbfy^u model outputs, where mathbfy refers to all of them.","category":"page"},{"location":"public/state_estim/#StateEstimator","page":"State Estimators","title":"StateEstimator","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"StateEstimator","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.StateEstimator","page":"State Estimators","title":"ModelPredictiveControl.StateEstimator","text":"Abstract supertype of all state estimators.\n\n\n\n(estim::StateEstimator)(d=Float64[])\n\nFunctor allowing callable StateEstimator object as an alias for evaloutput.\n\nExamples\n\njulia> kf = KalmanFilter(setop!(LinModel(tf(3, [10, 1]), 2), yop=[20]));\n\njulia> ŷ = kf() \n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#SteadyKalmanFilter","page":"State Estimators","title":"SteadyKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"SteadyKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.SteadyKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.SteadyKalmanFilter","text":"SteadyKalmanFilter(model::LinModel; )\n\nConstruct a steady-state Kalman Filter with the LinModel model.\n\nThe steady-state (or asymptotic) Kalman filter is based on the process model :\n\nbeginaligned\n mathbfx(k+1) = \n mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) + mathbfw(k) \n mathbfy^m(k) = mathbfC^m x(k) + mathbfD_d^m d(k) + mathbfv(k) \n mathbfy^u(k) = mathbfC^u x(k) + mathbfD_d^u d(k)\nendaligned\n\nwith sensor mathbfv(k) and process mathbfw(k) noises as uncorrelated zero mean white noise vectors, with a respective covariance of mathbfR and mathbfQ. The arguments are in standard deviations σ, i.e. same units than outputs and states. The matrices mathbfA B_u B_d C D_d are model matrices augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see Extended Help). Likewise, the covariance matrices are augmented with mathbfQ = textdiag(Q Q_int) and mathbfR = R. 
The matrices mathbfC^m D_d^m are the rows of mathbfC D_d that correspond to measured outputs mathbfy^m (and unmeasured ones, for mathbfC^u D_d^u).\n\nArguments\n\nmodel::LinModel : (deterministic) model for the estimations.\ni_ym=1:model.ny : model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u.\nσQ=fill(1/model.nx,model.nx) : main diagonal of the process noise covariance mathbfQ of model, specified as a standard deviation vector.\nσR=fill(1,length(i_ym)) : main diagonal of the sensor noise covariance mathbfR of model measured outputs, specified as a standard deviation vector.\nnint_ym=default_nint(model,i_ym) : integrator quantity per measured outputs (vector) for the stochastic model, use nint_ym=0 for no integrator (see Extended Help).\nσQ_int=fill(1,sum(nint_ym)) : same than σQ but for the stochastic model covariance mathbfQ_int (composed of output integrators).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = SteadyKalmanFilter(model, i_ym=[2], σR=[1], σQ_int=[0.01])\nSteadyKalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nThe model augmentation with nint_ym vector adds integrators at model measured outputs mathbfy^m. It creates the integral action when the estimator is used in a controller as state feedback. The method default_nint computes the default value of nint_ym. It can also be tweaked by following these rules on each measured output:\n\nUse 0 integrator if the model output is already integrating (else it will be unobservable)\nUse 1 integrator if the disturbances on the output are typically \"step-like\"\nUse 2 integrators if the disturbances on the output are typically \"ramp-like\" \n\nThe function init_estimstoch builds the stochastic model from nint_ym.\n\ntip: Tip\nIncreasing σQ_int values increases the integral action \"gain\".\n\nThe constructor pre-compute the steady-state Kalman gain K with the kalman function. It can sometimes fail, for example when model matrices are ill-conditioned. In such a case, you can try the alternative time-varying KalmanFilter.\n\n\n\n\n\nSteadyKalmanFilter(model, i_ym, nint_ym, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#KalmanFilter","page":"State Estimators","title":"KalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"KalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.KalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.KalmanFilter","text":"KalmanFilter(model::LinModel; )\n\nConstruct a time-varying Kalman Filter with the LinModel model.\n\nThe process model is identical to SteadyKalmanFilter. The matrix mathbfP_k(k+1) is the estimation error covariance of model states augmented with the stochastic ones (specified by nint_ym). 
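As a quick illustration of the nint_ym and σQ_int options discussed in the SteadyKalmanFilter extended help above (the model and standard deviations below are arbitrary, chosen only for this sketch):

```julia
using ModelPredictiveControl, ControlSystemsBase
model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5)
kf_default = SteadyKalmanFilter(model)                # one integrator per measured output (default)
kf_nointeg = SteadyKalmanFilter(model, nint_ym=0)     # no integral action, e.g. for pure filtering
kf_strong  = SteadyKalmanFilter(model, nint_ym=[1, 1], σQ_int=[5.0, 5.0])  # stronger integral action
```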
Two keyword arguments modify its initial value with mathbfP_-1(0) = mathrmdiag mathbfP(0) mathbfP_int(0) .\n\nArguments\n\nmodel::LinModel : (deterministic) model for the estimations.\nσP0=fill(1/model.nx,model.nx) : main diagonal of the initial estimate covariance mathbfP(0), specified as a standard deviation vector.\nσP0_int=fill(1,sum(nint_ym)) : same than σP0 but for the stochastic model covariance mathbfP_int(0) (composed of output integrators).\n of SteadyKalmanFilter constructor.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = KalmanFilter(model, i_ym=[2], σR=[1], σP0=[100, 100], σQ_int=[0.01])\nKalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nKalmanFilter(model, i_ym, nint_ym, P̂0, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#Luenberger","page":"State Estimators","title":"Luenberger","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"Luenberger","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.Luenberger","page":"State Estimators","title":"ModelPredictiveControl.Luenberger","text":"Luenberger(\n model::LinModel; \n i_ym = 1:model.ny, \n nint_ym = default_nint(model, i_ym),\n p̂ = 1e-3*(0:(model.nx + sum(nint_ym)-1)) .+ 0.5)\n)\n\nConstruct a Luenberger observer with the LinModel model.\n\ni_ym provides the model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u. model matrices are augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see SteadyKalmanFilter Extended Help). The argument p̂ is a vector of model.nx + sum(nint_ym) elements specifying the observer poles/eigenvalues (near z=05 by default). The method computes the observer gain mathbfK with place.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = Luenberger(model, nint_ym=[1, 1], p̂=[0.61, 0.62, 0.63, 0.64])\nLuenberger estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#UnscentedKalmanFilter","page":"State Estimators","title":"UnscentedKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"UnscentedKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.UnscentedKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.UnscentedKalmanFilter","text":"UnscentedKalmanFilter(model::SimModel; )\n\nConstruct an unscented Kalman Filter with the SimModel model.\n\nBoth LinModel and NonLinModel are supported. The unscented Kalman filter is based on the process model :\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu(k) mathbfd(k)Big) \n + mathbfw(k) \n mathbfy^m(k) = mathbfh^mBig(mathbfx(k) mathbfd(k)Big) + mathbfv(k) \n mathbfy^u(k) = mathbfh^uBig(mathbfx(k) mathbfd(k)Big) \nendaligned\n\nSee SteadyKalmanFilter for details on mathbfv(k) mathbfw(k) noises and mathbfR mathbfQ covariances. 
The functions mathbff h are model state-space functions augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see SteadyKalmanFilter for details). The mathbfh^m function represents the measured outputs of mathbfh function (and unmeasured ones, for mathbfh^u).\n\nArguments\n\nmodel::SimModel : (deterministic) model for the estimations.\nα=1e-3 : alpha parameter, spread of the state distribution (0 α 1).\nβ=2 : beta parameter, skewness and kurtosis of the states distribution (β 0).\nκ=0 : kappa parameter, another spread parameter (0 κ 3).\n of SteadyKalmanFilter constructor.\n of KalmanFilter constructor.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.1x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> estim = UnscentedKalmanFilter(model, σR=[1], nint_ym=[2], σP0_int=[1, 1])\nUnscentedKalmanFilter estimator with a sample time Ts = 10.0 s, NonLinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nUnscentedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂, α, β, κ)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#ExtendedKalmanFilter","page":"State Estimators","title":"ExtendedKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"ExtendedKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.ExtendedKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.ExtendedKalmanFilter","text":"ExtendedKalmanFilter(model::SimModel; )\n\nConstruct an extended Kalman Filter with the SimModel model.\n\nBoth LinModel and NonLinModel are supported. The process model is identical to UnscentedKalmanFilter. The Jacobians of the augmented model mathbff h are computed with ForwardDiff.jl automatic differentiation.\n\nwarning: Warning\nSee Extended Help if you get an error like: MethodError: no method matching (::var\"##\")(::Vector{ForwardDiff.Dual}).\n\nArguments\n\nmodel::SimModel : (deterministic) model for the estimations.\n of SteadyKalmanFilter constructor.\n of KalmanFilter constructor.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.2x+u, (x,_)->-3x, 5.0, 1, 1, 1);\n\njulia> estim = ExtendedKalmanFilter(model, σQ=[2], σQ_int=[2], σP0=[0.1], σP0_int=[0.1])\nExtendedKalmanFilter estimator with a sample time Ts = 5.0 s, NonLinModel and:\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nAutomatic differentiation (AD) allows exact Jacobians. The NonLinModel f and h functions must be compatible with this feature though. 
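A minimal sketch of f and h functions that remain compatible with this feature: the pitfall avoided here, hard-coded Vector{Float64} buffers, is a general ForwardDiff.jl constraint assumed for illustration rather than something stated in this docstring. Keeping the functions generic in the element type lets the Dual numbers used for the Jacobians pass through:

```julia
using ModelPredictiveControl
# the element types of the returned arrays follow x and u, so ForwardDiff Duals propagate
f(x, u, _ ) = [0.2x[1] + u[1]]
h(x, _ )    = [-3x[1]]
model = NonLinModel(f, h, 5.0, 1, 1, 1)
estim = ExtendedKalmanFilter(model, σQ=[2], σQ_int=[2])
```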
See Automatic differentiation for common mistakes when writing these functions.\n\n\n\n\n\nExtendedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#InternalModel","page":"State Estimators","title":"InternalModel","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"InternalModel","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.InternalModel","page":"State Estimators","title":"ModelPredictiveControl.InternalModel","text":"InternalModel(model::SimModel; i_ym=1:model.ny, stoch_ym=ss(1,1,1,1,model.Ts).*I)\n\nConstruct an internal model estimator based on model (LinModel or NonLinModel).\n\ni_ym provides the model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u. model evaluates the deterministic predictions mathbfy_d, and stoch_ym, the stochastic predictions of the measured outputs mathbfy_s^m (the unmeasured ones being mathbfy_s^u=0). The predicted outputs sum both values : mathbfy = y_d + y_s.\n\nwarning: Warning\nInternalModel estimator does not work if model is integrating or unstable. The constructor verifies these aspects for LinModel but not for NonLinModel. Uses any other state estimator in such cases.\n\nExamples\n\njulia> estim = InternalModel(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5), i_ym=[2])\nInternalModel estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nstoch_ym is a TransferFunction or StateSpace object that models disturbances on mathbfy^m. Its input is a hypothetical zero mean white noise vector. stoch_ym supposes 1 integrator per measured outputs by default, assuming that the current stochastic estimate mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) is constant in the future. This is the dynamic matrix control (DMC) strategy, which is simple but sometimes too aggressive. Additional poles and zeros in stoch_ym can mitigate this.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#Default-model-augmentation","page":"State Estimators","title":"Default model augmentation","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"default_nint","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.default_nint","page":"State Estimators","title":"ModelPredictiveControl.default_nint","text":"default_nint(model::LinModel, i_ym=1:model.ny)\n\nGet default integrator quantity per measured outputs nint_ym for LinModel.\n\nThe measured output mathbfy^m indices are specified by i_ym argument. By default, one integrator is added on each measured outputs. If mathbfA C matrices of the augmented model becomes unobservable, the integrator is removed. 
This approach works well for stable, integrating and unstable model (see Examples).\n\nExamples\n\njulia> model = LinModel(append(tf(3, [10, 1]), tf(2, [1, 0]), tf(4,[-5, 1])), 1.0);\n\njulia> nint_ym = default_nint(model)\n3-element Vector{Int64}:\n 1\n 0\n 1\n\n\n\n\n\ndefault_nint(model::SimModel, i_ym=1:model.ny)\n\nOne integrator on each measured output by default if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"func_index/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"func_index/","page":"Index","title":"Index","text":"","category":"page"},{"location":"manual/linmpc/#man_lin","page":"Linear Design","title":"Manual: Linear Design","text":"","category":"section"},{"location":"manual/linmpc/#Linear-Model","page":"Linear Design","title":"Linear Model","text":"","category":"section"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The example considers a well-stirred tank with a cold and hot water inlet as a plant. The water flows out of an opening at the bottom of the tank. The manipulated inputs are the cold u_c and hot u_h water flow rate, and the measured outputs are the liquid level y_L and temperature y_T:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginaligned\n mathbfu = beginbmatrix u_c u_h endbmatrix \n mathbfy = beginbmatrix y_L y_T endbmatrix\nendaligned","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"At the steady-state operating points:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginaligned\n mathbfu_op = beginbmatrix 10 10 endbmatrix \n mathbfy_op = beginbmatrix 50 30 endbmatrix \nendaligned","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"the following linear model accurately describes the plant dynamics:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginbmatrix\n y_L(s) y_T(s)\nendbmatrix = \nbeginbmatrix\n frac19018s + 1 frac19018s + 1 3pt\n frac-0748s + 1 frac0748s + 1\nendbmatrix\nbeginbmatrix\n u_c(s) u_h(s)\nendbmatrix","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We first need to construct a LinModel objet with setop! to handle the operating points:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using ModelPredictiveControl, ControlSystemsBase\nsys = [ tf(1.90, [18, 1]) tf(1.90, [18, 1]);\n tf(-0.74,[8, 1]) tf(0.74, [8, 1]) ]\nTs = 4.0\nmodel = setop!(LinModel(sys, Ts), uop=[10, 10], yop=[50, 30])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The model object will be used for two purposes : to construct our controller, and as a plant simulator to test the design.","category":"page"},{"location":"manual/linmpc/#Linear-Predictive-Controller","page":"Linear Design","title":"Linear Predictive Controller","text":"","category":"section"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"A predictive feedback will control both the water level y_L and temperature y_T in the tank, at a sampling time of 4 s. 
The tank level should also never fall below 45:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"y_L 45","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We design our LinMPC controllers by including the level constraint with setconstraint!:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"mpc = setconstraint!(LinMPC(model, Hp=15, Hc=2), ŷmin=[45, -Inf])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"By default, LinMPC controllers use OSQP to solve the problem and a SteadyKalmanFilter to estimate the plant states. Before closing the loop, we call initstate! with the actual plant inputs and measurements to ensure a bumpless transfer. Since model simulates our plant here, its output will initialize the states. LinModel objects are callable for this purpose (an alias for evaloutput):","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"u = model.uop\ny = model() # or equivalently : y = evaloutput(model)\ninitstate!(mpc, u, y)\nnothing # hide","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We can then close the loop and test mpc performance on the simulator by imposing step changes on output setpoints mathbfr_y and on a load disturbance mathbfu_d:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"function test_mpc(mpc, model)\n N = 100\n ry, ud = [50, 30], [0, 0]\n u_data = zeros(model.nu, N)\n y_data = zeros(model.ny, N)\n ry_data = zeros(model.ny, N)\n for k = 0:N-1\n y = model() # simulated measurements\n k == 25 && (ry = [50, 35])\n k == 50 && (ry = [55, 30])\n k == 75 && (ud = [-15, 0])\n u = mpc(ry) # or equivalently : u = moveinput!(mpc, ry)\n u_data[:,k+1] = u\n y_data[:,k+1] = y\n ry_data[:,k+1] = ry \n updatestate!(mpc, u, y) # update mpc state estimate\n updatestate!(model, u + ud) # update simulator with disturbance\n end\n return u_data, y_data, ry_data\nend\nu_data, y_data, ry_data = test_mpc(mpc, model)\nt_data = Ts*(0:(size(y_data,2)-1))\nnothing # hide","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The LinMPC objects are also callable as an alternative syntax for moveinput!. Calling updatestate! on the mpc object updates its internal state for the NEXT control period (this is by design, see Functions: State Estimators for justifications). That is why the call is done at the end of the for loop. 
The same logic applies for model.","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"Lastly, we plot the closed-loop test with the Plots package:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using Plots\nfunction plot_data(t_data, u_data, y_data, ry_data)\n p1 = plot(t_data, y_data[1,:], label=\"level\"); ylabel!(\"level\")\n plot!(t_data, ry_data[1,:], label=\"setpoint\", linestyle=:dash, linetype=:steppost)\n plot!(t_data, fill(45,size(t_data)), label=\"min\", linestyle=:dot, linewidth=1.5)\n p2 = plot(t_data, y_data[2,:], label=\"temp.\"); ylabel!(\"temp.\")\n plot!(t_data, ry_data[2,:],label=\"setpoint\", linestyle=:dash, linetype=:steppost)\n p3 = plot(t_data,u_data[1,:],label=\"cold\", linetype=:steppost); ylabel!(\"flow rate\")\n plot!(t_data,u_data[2,:],label=\"hot\", linetype=:steppost); xlabel!(\"time (s)\")\n return plot(p1, p2, p3, layout=(3,1), fmt=:svg)\nend\nplot_data(t_data, u_data, y_data, ry_data)","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"For some situations, when LinMPC matrices are small/medium and dense, DAQP optimizer may be more efficient. To install it, run:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using Pkg; Pkg.add(\"DAQP\")","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"Constructing a LinMPC with DAQP:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using JuMP, DAQP\ndaqp = Model(DAQP.Optimizer)\nmpc2 = setconstraint!(LinMPC(model, Hp=15, Hc=2, optim=daqp), ŷmin=[45, -Inf])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"leads to identical results here:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"setstate!(model, zeros(model.nx))\ninitstate!(mpc2, model.uop, model())\nu_data2, y_data2, ry_data2 = test_mpc(mpc2, model)\nplot_data(t_data, u_data2, y_data2, ry_data2)","category":"page"},{"location":"public/sim_model/#func_sim_model","page":"Plant Models","title":"Functions: Plant Models","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"Pages = [\"sim_model.md\"]","category":"page"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"The SimModel types represents discrete state-space models that can be used to construct StateEstimators and PredictiveControllers, or as plant simulators by calling evaloutput and updatestate! methods on SimModel instances (to test estimator/controller designs). For time simulations, the states x are stored inside SimModel instances. Use setstate! 
method to manually modify them.","category":"page"},{"location":"public/sim_model/#SimModel","page":"Plant Models","title":"SimModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"SimModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.SimModel","page":"Plant Models","title":"ModelPredictiveControl.SimModel","text":"Abstract supertype of LinModel and NonLinModel types.\n\n\n\n(model::SimModel)(d=Float64[])\n\nFunctor allowing callable SimModel object as an alias for evaloutput.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->-x + u, (x,_)->x .+ 20, 10.0, 1, 1, 1);\n\njulia> y = model()\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#LinModel","page":"Plant Models","title":"LinModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"LinModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.LinModel","page":"Plant Models","title":"ModelPredictiveControl.LinModel","text":"LinModel(sys::StateSpace[, Ts]; i_u=1:size(sys,2), i_d=Int[])\n\nConstruct a linear model from state-space model sys with sampling time Ts in second.\n\nTs can be omitted when sys is discrete-time. Its state-space matrices are:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB z(k) \n mathbfy(k) = mathbfC x(k) + mathbfD z(k)\nendaligned\n\nwith the state mathbfx and output mathbfy vectors. The mathbfz vector comprises the manipulated inputs mathbfu and measured disturbances mathbfd, in any order. i_u provides the indices of mathbfz that are manipulated, and i_d, the measured disturbances. See Extended Help if sys is continuous-time, or discrete-time with Ts ≠ sys.Ts.\n\nSee also ss, tf.\n\nExamples\n\njulia> model = LinModel(ss(0.4, 0.2, 0.3, 0, 0.1))\nDiscrete-time linear model with a sample time Ts = 0.1 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\nExtended Help\n\nState-space matrices are similar if sys is continuous (replace mathbfx(k+1) with mathbfx(t) and k with t on the LHS). In such a case, it's discretized with c2d and :zoh for manipulated inputs, and :tustin, for measured disturbances. 
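For instance, a continuous-time system can be passed directly with the desired sample time, and the constructor performs the discretization (the system below is arbitrary):

```julia
using ModelPredictiveControl, ControlSystemsBase
G = ss(tf(4, [10, 1]))    # continuous-time state-space system
model = LinModel(G, 2.0)  # discretized internally (:zoh on the manipulated input) at Ts = 2.0 s
```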
Lastly, if sys is discrete and the provided argument Ts ≠ sys.Ts, the system is resampled by using the aforementioned discretization methods.\n\nThe constructor transforms the system to a more practical form (mathbfD_u=0 because of the zero-order hold):\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) \n mathbfy(k) = mathbfC x(k) + mathbfD_d d(k)\nendaligned\n\n\n\n\n\nLinModel(sys::TransferFunction[, Ts]; i_u=1:size(sys,2), i_d=Int[])\n\nConvert to minimal realization state-space when sys is a transfer function.\n\nsys is equal to fracmathbfy(s)mathbfz(s) for continuous-time, and fracmathbfy(z)mathbfz(z), for discrete-time.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]) tf(-2, [5, 1])], 0.5, i_d=[2])\nDiscrete-time linear model with a sample time Ts = 0.5 s and:\n 1 manipulated inputs u\n 2 states x\n 1 outputs y\n 1 measured disturbances d\n\n\n\n\n\nLinModel(sys::DelayLtiSystem, Ts; i_u=1:size(sys,2), i_d=Int[])\n\nDiscretize with zero-order hold when sys is a continuous system with delays.\n\nThe delays must be multiples of the sample time Ts.\n\nExamples\n\njulia> model = LinModel(tf(4, [10, 1])*delay(2), 0.5)\nDiscrete-time linear model with a sample time Ts = 0.5 s and:\n 1 manipulated inputs u\n 5 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#NonLinModel","page":"Plant Models","title":"NonLinModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"NonLinModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.NonLinModel","page":"Plant Models","title":"ModelPredictiveControl.NonLinModel","text":"NonLinModel(f::Function, h::Function, Ts, nu, nx, ny, nd=0)\n\nConstruct a nonlinear model from discrete-time state-space functions f and h.\n\nThe state update mathbff and output mathbfh functions are defined as :\n\n beginaligned\n mathbfx(k+1) = mathbffBig( mathbfx(k) mathbfu(k) mathbfd(k) Big) \n mathbfy(k) = mathbfhBig( mathbfx(k) mathbfd(k) Big)\n endaligned\n\nTs is the sampling time in second. nu, nx, ny and nd are the respective number of manipulated inputs, states, outputs and measured disturbances. \n\ntip: Tip\nReplace the d argument with _ if nd = 0 (see Examples below).\n\nNonlinear continuous-time state-space functions are not supported for now. In such a case, manually call a differential equation solver in f (see Manual).\n\nwarning: Warning\nf and h must be pure Julia functions to use the model in NonLinMPC, ExtendedKalmanFilter and MovingHorizonEstimator.\n\nSee also LinModel.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.1x+u, (x,_)->2x, 10.0, 1, 1, 1)\nDiscrete-time nonlinear model with a sample time Ts = 10.0 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#Set-Operating-Points","page":"Plant Models","title":"Set Operating Points","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"setop!","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.setop!","page":"Plant Models","title":"ModelPredictiveControl.setop!","text":"setop!(model::SimModel; uop=nothing, yop=nothing, dop=nothing)\n\nSet model inputs uop, outputs yop and measured disturbances dop operating points.\n\nThe state-space model with operating points (a.k.a. 
nominal values) is:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u_0(k) + mathbfB_d d_0(k) \n mathbfy_0(k) = mathbfC x(k) + mathbfD_d d_0(k)\nendaligned\n\nin which the uop, yop and dop vectors evaluate:\n\nbeginaligned\n mathbfu_0(k) = mathbfu(k) - mathbfu_op \n mathbfy_0(k) = mathbfy(k) - mathbfy_op \n mathbfd_0(k) = mathbfd(k) - mathbfd_op \nendaligned\n\nThe structure is similar if model is a NonLinModel:\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu_0(k) mathbfd_0(k)Big)\n mathbfy_0(k) = mathbfhBig(mathbfx(k) mathbfd_0(k)Big)\nendaligned\n\nExamples\n\njulia> model = setop!(LinModel(tf(3, [10, 1]), 2.0), uop=[50], yop=[20])\nDiscrete-time linear model with a sample time Ts = 2.0 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Functions:-Predictive-Controllers","page":"Predictive Controllers","title":"Functions: Predictive Controllers","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"Pages = [\"predictive_control.md\"]","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"All the predictive controllers in this module rely on a state estimator to compute the predictions. The default LinMPC estimator is a SteadyKalmanFilter, and NonLinMPC with nonlinear models, an UnscentedKalmanFilter. For simpler and more classical designs, an InternalModel structure is also available, that assumes by default that the current model mismatch estimation is constant in the future (same approach than dynamic matrix control, DMC).","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"info: Info\nThe nomenclature use capital letters for time series (and matrices) and hats for the predictions (and estimations, for state estimators).","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"To be precise, at the kth control period, the vectors that encompass the future measured disturbances mathbfd, model outputs mathbfy and setpoints mathbfr_y over the prediction horizon H_p are defined as:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfD = beginbmatrix\n mathbfd(k+1) mathbfd(k+2) vdots mathbfd(k+H_p)\n endbmatrix quad\n mathbfY = beginbmatrix\n mathbfy(k+1) mathbfy(k+2) vdots mathbfy(k+H_p)\n endbmatrix quad textand quad\n mathbfR_y = beginbmatrix\n mathbfr_y(k+1) mathbfr_y(k+2) vdots mathbfr_y(k+H_p)\n endbmatrix","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"The vectors for the manipulated input mathbfu are shifted by one time step:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfU = beginbmatrix\n mathbfu(k+0) mathbfu(k+1) vdots mathbfu(k+H_p-1)\n endbmatrix quad textand quad\n mathbfR_u = beginbmatrix\n mathbfr_u mathbfr_u vdots mathbfr_u\n endbmatrix","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"assuming constant input setpoints at mathbfr_u. 
Defining the manipulated input increment as mathbfΔu(k+j) = mathbfu(k+j) - mathbfu(k+j-1), the control horizon H_c enforces that mathbfΔu(k+j) = mathbf0 for j H_c. For this reason, the vector that collects them is truncated up to k+H_c-1:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfΔU =\n beginbmatrix\n mathbfΔu(k+0) mathbfΔu(k+1) vdots mathbfΔu(k+H_c-1)\n endbmatrix","category":"page"},{"location":"public/predictive_control/#PredictiveController","page":"Predictive Controllers","title":"PredictiveController","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"PredictiveController","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.PredictiveController","page":"Predictive Controllers","title":"ModelPredictiveControl.PredictiveController","text":"Abstract supertype of all predictive controllers.\n\n\n\n(mpc::PredictiveController)(ry, d=Float64[]; kwargs...)\n\nFunctor allowing callable PredictiveController object as an alias for moveinput!.\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);\n\njulia> u = mpc([5]); round.(u, digits=3)\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#LinMPC","page":"Predictive Controllers","title":"LinMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"LinMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.LinMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.LinMPC","text":"LinMPC(model::LinModel; )\n\nConstruct a linear predictive controller based on LinModel model.\n\nThe controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU ϵ mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n + C ϵ^2\n\nin which the weight matrices are repeated H_p or H_c times:\n\nbeginaligned\n mathbfM_H_p = textdiagmathbf(MMM) \n mathbfN_H_c = textdiagmathbf(NNN) \n mathbfL_H_p = textdiagmathbf(LLL) \nendaligned\n\nand with the following nomenclature:\n\nVARIABLE DESCRIPTION SIZE\nH_p prediction horizon (integer) ()\nH_c control horizon (integer) ()\nmathbfΔU manipulated input increments over H_c (nu*Hc,)\nmathbfY predicted outputs over H_p (ny*Hp,)\nmathbfU manipulated inputs over H_p (nu*Hp,)\nmathbfR_y predicted output setpoints over H_p (ny*Hp,)\nmathbfR_u predicted manipulated input setpoints over H_p (nu*Hp,)\nmathbfM output setpoint tracking weights (ny*Hp, ny*Hp)\nmathbfN manipulated input increment weights (nu*Hc, nu*Hc)\nmathbfL manipulated input setpoint tracking weights (nu*Hp, nu*Hp)\nC slack variable weight ()\nϵ slack variable for constraint softening ()\n\nThe mathbfΔU vector includes the manipulated input increments mathbfΔu(k+j) = mathbfu(k+j) - mathbfu(k+j-1) from j=0 to H_c-1, the mathbfY vector, the output predictions mathbfy(k+j) from j=1 to H_p, and the mathbfU vector, the manipulated inputs mathbfu(k+j) from j=0 to H_p-1. 
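The truncation of ΔU to the control horizon can be checked numerically with getinfo (documented further down this page); the model and horizons below are arbitrary:

```julia
using ModelPredictiveControl, ControlSystemsBase
mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Hp=10, Hc=2)
u = moveinput!(mpc, [5])
sol_summary, info = getinfo(mpc)
length(info[:ΔU])  # nu*Hc = 2 : increments are fixed at zero past the control horizon
length(info[:U])   # nu*Hp = 10: the inputs themselves are still predicted over Hp
```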
The manipulated input setpoint predictions mathbfR_u are constant at mathbfr_u.\n\nThis method uses the default state estimator, a SteadyKalmanFilter with default arguments.\n\nArguments\n\nmodel::LinModel : model used for controller predictions and state estimations.\nHp=10+nk: prediction horizon H_p, nk is the number of delays in model.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nCwt=1e5 : slack variable weight C (scalar), use Cwt=Inf for hard constraints only.\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\noptim=JuMP.Model(OSQP.MathOptInterfaceOSQP.Optimizer) : quadratic optimizer used in the predictive controller, provided as a JuMP.Model (default to OSQP.jl optimizer).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);\n\njulia> mpc = LinMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nManipulated inputs setpoints mathbfr_u are not common but they can be interesting for over-actuated systems, when nu > ny (e.g. prioritize solutions with lower economical costs). The default Lwt value implies that this feature is disabled by default.\n\n\n\n\n\nLinMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct LinMPC.\n\nestim.model must be a LinModel. Else, a NonLinMPC is required. \n\nExamples\n\njulia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);\n\njulia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, KalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#ExplicitMPC","page":"Predictive Controllers","title":"ExplicitMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ExplicitMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.ExplicitMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.ExplicitMPC","text":"ExplicitMPC(model::LinModel; )\n\nConstruct an explicit linear predictive controller based on LinModel model.\n\nThe controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n\nSee LinMPC for the variable definitions. This controller does not support constraints but the computational costs are extremely low (array division), therefore suitable for applications that require small sample times. 
This method uses the default state estimator, a SteadyKalmanFilter with default arguments.\n\nArguments\n\nmodel::LinModel : model used for controller predictions and state estimations.\nHp=10+nk: prediction horizon H_p, nk is the number of delays in model.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);\n\njulia> mpc = ExplicitMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nExplicitMPC controller with a sample time Ts = 4.0 s, SteadyKalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nExplicitMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct ExplicitMPC.\n\nestim.model must be a LinModel. Else, a NonLinMPC is required. \n\nExamples\n\njulia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);\n\njulia> mpc = ExplicitMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nExplicitMPC controller with a sample time Ts = 4.0 s, KalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#NonLinMPC","page":"Predictive Controllers","title":"NonLinMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"NonLinMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.NonLinMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.NonLinMPC","text":"NonLinMPC(model::SimModel; )\n\nConstruct a nonlinear predictive controller based on SimModel model.\n\nBoth NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU ϵ mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n + C ϵ^2 + E J_E(mathbfU_E mathbfY_E mathbfD_E)\n\nSee LinMPC for the variable definitions. The custom economic function J_E can penalizes solutions with high economic costs. Setting all the weights to 0 except E creates a pure economic model predictive controller (EMPC). The arguments of J_E are the manipulated inputs, the predicted outputs and measured disturbances from k to k+H_p inclusively:\n\n mathbfU_E = beginbmatrix mathbfU mathbfu(k+H_p-1) endbmatrix text qquad\n mathbfY_E = beginbmatrix mathbfy(k) mathbfY endbmatrix text qquad\n mathbfD_E = beginbmatrix mathbfd(k) mathbfD endbmatrix\n\nsince H_c H_p implies that mathbfu(k+H_p) = mathbfu(k+H_p-1). The vector mathbfD includes the predicted measured disturbance over H_p.\n\ntip: Tip\nReplace any of the 3 arguments with _ if not needed (see JE default value below).\n\nThis method uses the default state estimator :\n\nif model is a LinModel, a SteadyKalmanFilter with default arguments;\nelse, an UnscentedKalmanFilter with default arguments. 
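To make the economic term concrete, here is a hedged sketch with a hypothetical J_E that simply sums the manipulated inputs over the horizon (the model, horizons and Ewt value are arbitrary):

```julia
using ModelPredictiveControl
model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1)
JE(UE, _ , _ ) = sum(UE)   # hypothetical economic cost on total input usage
empc = NonLinMPC(model, Hp=20, Hc=2, Ewt=0.1, JE=JE)
```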
\n\nwarning: Warning\nSee Extended Help if you get an error like: MethodError: no method matching Float64(::ForwardDiff.Dual).\n\nArguments\n\nmodel::SimModel : model used for controller predictions and state estimations.\nHp=10: prediction horizon H_p.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nCwt=1e5 : slack variable weight C (scalar), use Cwt=Inf for hard constraints only.\nEwt=0.0 : economic costs weight E (scalar). \nJE=(_,_,_)->0.0 : economic function J_E(mathbfU_E mathbfY_E mathbfD_E).\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\noptim=JuMP.Model(Ipopt.Optimizer) : nonlinear optimizer used in the predictive controller, provided as a JuMP.Model (default to Ipopt.jl optimizer).\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> mpc = NonLinMPC(model, Hp=20, Hc=1, Cwt=1e6)\nNonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:\n 20 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nNonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization and is not available in any other package, to my knowledge.\n\nThe optimizations rely on JuMP.jl automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel f and h functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.\n\n\n\n\n\nNonLinMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct NonLinMPC.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> estim = UnscentedKalmanFilter(model, σQ_int=[0.05]);\n\njulia> mpc = NonLinMPC(estim, Hp=20, Hc=1, Cwt=1e6)\nNonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:\n 20 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#Set-Constraint","page":"Predictive Controllers","title":"Set Constraint","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"setconstraint!","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.setconstraint!","page":"Predictive Controllers","title":"ModelPredictiveControl.setconstraint!","text":"setconstraint!(mpc::PredictiveController; )\n\nSet the constraint parameters of mpc predictive controller.\n\nThe predictive controllers support both soft and hard constraints, defined by:\n\nbeginalignat*3\n mathbfu_min - c_u_min ϵ mathbfu(k+j) mathbfu_max + c_u_max ϵ qquad j = 0 1 H_c - 1 \n mathbfΔu_min - c_Δu_min ϵ mathbfΔu(k+j) mathbfΔu_max + c_Δu_max ϵ qquad j = 0 1 H_c - 1 \n mathbfy_min - c_y_min ϵ mathbfy(k+j) mathbfy_max + c_y_max ϵ qquad j = 1 2 H_p \nendalignat*\n\nand also ϵ 0. All the constraint parameters are vector. 
Use ±Inf values when there is no bound. The constraint softness parameters mathbfc, also called equal concern for relaxation, are non-negative values that specify the softness of the associated bound. Use 0.0 values for hard constraints. The predicted output constraints mathbfy_min and mathbfy_max are soft by default.\n\nArguments\n\ninfo: Info\nThe default constraints are mentioned here for clarity but omitting a keyword argument will not re-assign to its default value (defaults are set at construction only).\n\numin=fill(-Inf,nu) : manipulated input lower bounds mathbfu_min \numax=fill(+Inf,nu) : manipulated input upper bounds mathbfu_max \nΔumin=fill(-Inf,nu) : manipulated input increment lower bounds mathbfΔu_min \nΔumax=fill(+Inf,nu) : manipulated input increment upper bounds mathbfΔu_max \nŷmin=fill(-Inf,ny) : predicted output lower bounds mathbfy_min \nŷmax=fill(+Inf,ny) : predicted output upper bounds mathbfy_max \nc_umin=fill(0.0,nu) : umin softness weights mathbfc_u_min \nc_umax=fill(0.0,nu) : umax softness weights mathbfc_u_max \nc_Δumin=fill(0.0,nu) : Δumin softness weights mathbfc_Δu_min \nc_Δumax=fill(0.0,nu) : Δumax softness weights mathbfc_Δu_max \nc_ŷmin=fill(1.0,ny) : ŷmin softness weights mathbfc_y_min \nc_ŷmax=fill(1.0,ny) : ŷmax softness weights mathbfc_y_max\n\nExamples\n\njulia> mpc = LinMPC(setop!(LinModel(tf(3, [30, 1]), 4), uop=[50], yop=[25]));\n\njulia> mpc = setconstraint!(mpc, umin=[0], umax=[100], c_umin=[0.0], c_umax=[0.0]);\n\njulia> mpc = setconstraint!(mpc, Δumin=[-10], Δumax=[+10], c_Δumin=[1.0], c_Δumax=[1.0])\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFilter estimator and:\n 10 prediction steps Hp\n 2 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Move-Manipulated-Input-u","page":"Predictive Controllers","title":"Move Manipulated Input u","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"moveinput!","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.moveinput!","page":"Predictive Controllers","title":"ModelPredictiveControl.moveinput!","text":"moveinput!(\n mpc::PredictiveController, \n ry = mpc.estim.model.yop, \n d = Float64[];\n R̂y = repeat(ry, mpc.Hp), \n D̂ = repeat(d, mpc.Hp), \n ym = nothing\n)\n\nCompute the optimal manipulated input value u for the current control period.\n\nSolve the optimization problem of mpc PredictiveController and return the results mathbfu(k). Following the receding horizon principle, the algorithm discards the optimal future manipulated inputs mathbfu(k+1) mathbfu(k+2) The arguments ry and d are current output setpoints mathbfr_y(k) and measured disturbances mathbfd(k).\n\nThe keyword arguments R̂y and D̂ are the predicted output setpoints mathbfR_y and measured disturbances mathbfD. They are assumed constant in the future by default, that is mathbfr_y(k+j) = mathbfr_y(k) and mathbfd(k+j) = mathbfd(k) for j=1 to H_p. 
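When a future setpoint trajectory is known in advance, it can be passed through the R̂y keyword instead of relying on the constant default (the ramp below is a made-up example):

```julia
using ModelPredictiveControl, ControlSystemsBase
mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Hp=4, Hc=1)
R̂y = [6.0, 7.0, 8.0, 9.0]         # ny*Hp = 4 future setpoint values
u = moveinput!(mpc, [5], R̂y=R̂y)   # ry = [5] is the current setpoint
```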
Current measured output ym is only required if mpc.estim is a InternalModel.\n\nCalling a PredictiveController object calls this method.\n\nSee also LinMPC, NonLinMPC.\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);\n\njulia> ry = [5]; u = moveinput!(mpc, ry); round.(u, digits=3)\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Get-Additional-Information","page":"Predictive Controllers","title":"Get Additional Information","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"getinfo","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.getinfo","page":"Predictive Controllers","title":"ModelPredictiveControl.getinfo","text":"getinfo(mpc::PredictiveController) -> sol_summary, info\n\nGet additional information about mpc controller optimum to ease troubleshooting.\n\nReturn the optimizer solution summary that can be printed, sol_summary, and the dictionary info with the following fields:\n\n:ΔU : optimal manipulated input increments over Hc (mathbfΔU)\n:ϵ : optimal slack variable (ϵ)\n:J : objective value optimum (J)\n:U : optimal manipulated inputs over Hp (mathbfU)\n:u : current optimal manipulated input (mathbfu)\n:d : current measured disturbance (mathbfd)\n:D̂ : predicted measured disturbances over Hp (mathbfD)\n:ŷ : current estimated output (mathbfy)\n:Ŷ : optimal predicted outputs over Hp (mathbfY = Y_d + Y_s)\n:Ŷd : optimal predicted deterministic output over Hp (mathbfY_d)\n:Ŷs : predicted stochastic output over Hp (mathbfY_s)\n:R̂y : predicted output setpoint over Hp (mathbfR_y)\n:R̂u : predicted manipulated input setpoint over Hp (mathbfR_u)\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1);\n\njulia> u = moveinput!(mpc, [10]);\n\njulia> sol_summary, info = getinfo(mpc); round.(info[:Ŷ], digits=3)\n1-element Vector{Float64}:\n 10.0\n\n\n\n\n\ngetinfo(mpc::NonLinMPC) -> sol_summary, info\n\nInvoke getinfo(::PredictiveController) and add :JE the economic optimum J_E.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Functions:-Generic-Functions","page":"Generic Functions","title":"Functions: Generic Functions","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"Pages = [\"generic_func.md\"]","category":"page"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"This page contains the documentation of functions that are generic to SimModel, StateEstimator and PredictiveController types.","category":"page"},{"location":"public/generic_func/#Evaluate-Output-y","page":"Generic Functions","title":"Evaluate Output y","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"evaloutput","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.evaloutput","page":"Generic Functions","title":"ModelPredictiveControl.evaloutput","text":"evaloutput(model::SimModel, d=Float64[])\n\nEvaluate SimModel outputs y from model.x states and measured disturbances d.\n\nCalling a SimModel object calls this evaloutput method.\n\nExamples\n\njulia> model = setop!(LinModel(tf(2, [10, 1]), 5.0), yop=[20]);\n\njulia> y = evaloutput(model)\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\nevaloutput(estim::StateEstimator, 
d=Float64[])\n\nEvaluate StateEstimator outputs ŷ from estim.x̂ states and disturbances d.\n\nCalling a StateEstimator object calls this evaloutput method.\n\nExamples\n\njulia> kf = SteadyKalmanFilter(setop!(LinModel(tf(2, [10, 1]), 5), yop=[20]));\n\njulia> ŷ = evaloutput(kf)\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\nevaloutput(estim::InternalModel, ym, d=Float64[])\n\nEvaluate InternalModel outputs ŷ from estim.x̂d states and measured outputs ym.\n\nInternalModel estimator needs current measured outputs mathbfy^m(k) to estimate its outputs mathbfy(k), since the strategy imposes that mathbfy^m(k) = mathbfy^m(k) is always true.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Update-State-x","page":"Generic Functions","title":"Update State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"updatestate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.updatestate!","page":"Generic Functions","title":"ModelPredictiveControl.updatestate!","text":"updatestate!(model::SimModel, u, d=Float64[])\n\nUpdate model.x states with current inputs u and measured disturbances d.\n\nExamples\n\njulia> model = LinModel(ss(1, 1, 1, 0, 1.0));\n\njulia> x = updatestate!(model, [1])\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\nupdatestate!(estim::StateEstimator, u, ym, d=Float64[])\n\nUpdate estim.x̂ estimate with current inputs u, measured outputs ym and dist. d. \n\nThe method removes the operating points with remove_op! and call update_estimate!.\n\nExamples\n\njulia> kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)));\n\njulia> x̂ = updatestate!(kf, [1], [0]) # x̂[2] is the integrator state (nint_ym argument)\n2-element Vector{Float64}:\n 0.5\n 0.0\n\n\n\n\n\nupdatestate!(mpc::PredictiveController, u, ym, d=Float64[])\n\nCall updatestate! on mpc.estim StateEstimator.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Init-State-x","page":"Generic Functions","title":"Init State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"initstate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.initstate!","page":"Generic Functions","title":"ModelPredictiveControl.initstate!","text":"initstate!(estim::StateEstimator, u, ym, d=Float64[])\n\nInit estim.x̂ states from current inputs u, measured outputs ym and disturbances d.\n\nThe method tries to find a good steady-state to initialize estim.x̂ estimate :\n\nIf estim.model is a LinModel, it evaluates estim.model steady-state (using steadystate) with current inputs u and measured disturbances d, and saves the result to estim.x̂[1:nx].\nElse, the current deterministic states estim.x̂[1:nx] are left unchanged (use setstate! to manually modify them). \n\nIt then estimates the measured outputs ŷm from these states, and the residual offset with current measured outputs (ym - ŷm) initializes the integrators of the stochastic model. This approach ensures that mathbfy^m(0) = mathbfy^m(0). 
For LinModel, it also ensures that the estimator starts at steady-state, resulting in a bumpless manual to automatic transfer for control applications.\n\nExamples\n\njulia> estim = SteadyKalmanFilter(LinModel(tf(3, [10, 1]), 0.5), nint_ym=[2]);\n\njulia> x̂ = initstate!(estim, [1], [3 - 0.1])\n3-element Vector{Float64}:\n 5.0000000000000115\n 0.0\n -0.10000000000000675\n\n\n\n\n\ninitstate!(estim::InternalModel, u, ym, d=Float64[])\n\nInit estim.x̂d / x̂s states from current inputs u, meas. outputs ym and disturb. d.\n\nThe deterministic state estim.x̂d initialization method is identical to initstate!(::StateEstimator). The stochastic states estim.x̂s are init at 0. \n\n\n\n\n\ninitstate!(mpc::PredictiveController, u, ym, d=Float64[])\n\nInit mpc.ΔŨ for warm-starting and the states of mpc.estim StateEstimator.\n\nBefore calling initstate!(::StateEstimator,_,_), it warm-starts mathbfΔU:\n\nIf model is a LinModel, the vector is filled with the analytical minimum J of the unconstrained problem.\nElse, the vector is filled with zeros.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Set-State-x","page":"Generic Functions","title":"Set State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"setstate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.setstate!","page":"Generic Functions","title":"ModelPredictiveControl.setstate!","text":"setstate!(model::SimModel, x)\n\nSet model.x states to values specified by x. \n\n\n\n\n\nsetstate!(estim::StateEstimator, x̂)\n\nSet estim.x̂ states to values specified by x̂. \n\n\n\n\n\nsetstate!(mpc::PredictiveController, x̂)\n\nSet the estimate at mpc.estim.x̂.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Quick-Simulation","page":"Generic Functions","title":"Quick Simulation","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"sim!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.sim!","page":"Generic Functions","title":"ModelPredictiveControl.sim!","text":"sim!(plant::SimModel, N::Int, u=plant.uop.+1, d=plant.dop; x0=zeros(plant.nx))\n\nOpen-loop simulation of plant for N time steps, default to unit bump test on all inputs.\n\nThe manipulated inputs mathbfu and measured disturbances mathbfd are held constant at u and d values, respectively. The plant initial state mathbfx(0) is specified by x0 keyword arguments. The function returns SimResult instances that can be visualized by calling plot from Plots.jl on them (see Examples below). Note that the method mutates plant internal states.\n\nExamples\n\njulia> plant = NonLinModel((x,u,d)->0.1x+u+d, (x,_)->2x, 10.0, 1, 1, 1, 1);\n\njulia> res = sim!(plant, 15, [0], [0], x0=[1]);\n\njulia> using Plots; plot(res, plotu=false, plotd=false, plotx=true)\n\n\n\n\n\n\nsim!(\n estim::StateEstimator,\n N::Int,\n u = estim.model.uop .+ 1,\n d = estim.model.dop;\n \n)\n\nClosed-loop simulation of estim estimator for N time steps, default to input bumps.\n\nSee Arguments for the available options. The noises are provided as standard deviations σ vectors. 
The simulated sensor and process noises of plant are specified by y_noise and x_noise arguments, respectively.\n\nArguments\n\nestim::StateEstimator : state estimator to simulate.\nN::Int : simulation length in time steps.\nu = estim.model.uop .+ 1 : manipulated input mathbfu value.\nd = estim.model.dop : plant measured disturbance mathbfd value.\nplant::SimModel = estim.model : simulated plant model.\nu_step = zeros(plant.nu) : step load disturbance on plant inputs mathbfu.\nu_noise = zeros(plant.nu) : gaussian load disturbance on plant inputs mathbfu.\ny_step = zeros(plant.ny) : step disturbance on plant outputs mathbfy.\ny_noise = zeros(plant.ny) : additive gaussian noise on plant outputs mathbfy.\nd_step = zeros(plant.nd) : step on measured disturbances mathbfd.\nd_noise = zeros(plant.nd) : additive gaussian noise on measured dist. mathbfd.\nx_noise = zeros(plant.nx) : additive gaussian noise on plant states mathbfx.\nx0 = zeros(plant.nx) : plant initial state mathbfx(0).\nx̂0 = nothing : initial estimate mathbfx(0), initstate! is used if nothing.\nlastu = plant.uop : last plant input mathbfu for mathbfx initialization.\n\nExamples\n\njulia> model = LinModel(tf(3, [30, 1]), 0.5);\n\njulia> estim = KalmanFilter(model, σR=[0.5], σQ=[0.25], σQ_int=[0.01], σP0_int=[0.1]);\n\njulia> res = sim!(estim, 50, [0], y_noise=[0.5], x_noise=[0.25], x0=[-10], x̂0=[0, 0]);\n\njulia> using Plots; plot(res, plotŷ=true, plotu=false, plotxwithx̂=true)\n\n\n\n\n\n\nsim!(\n mpc::PredictiveController, \n N::Int,\n ry = mpc.estim.model.yop .+ 1, \n d = mpc.estim.model.dop; \n \n)\n\nClosed-loop simulation of mpc controller for N time steps, default to setpoint bumps.\n\nThe output setpoint mathbfr_y is held constant at ry. The keyword arguments are identical to sim!(::StateEstimator, ::Int).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(2, [5, 1])], 4);\n\njulia> mpc = setconstraint!(LinMPC(model, Mwt=[0, 1], Nwt=[0.01], Hp=30), ŷmin=[0, -Inf]);\n\njulia> res = sim!(mpc, 25, [0, 0], y_noise=[0.1], y_step=[-10, 0]);\n\njulia> using Plots; plot(res, plotry=true, plotŷ=true, plotŷmin=true, plotu=true)\n\n\n\n\n\n\n","category":"function"},{"location":"#ModelPredictiveControl.jl","page":"Home","title":"ModelPredictiveControl.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"A model predictive control package for Julia.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for the solving.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The objective is to provide a simple and clear framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface allows to easily test different solvers if the performance of the default settings is not satisfactory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The documentation is divided in two parts:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.\nFunctions — Documentation of methods and types exported by the package. 
The \"Internals\" section provides implementation details of functions that are not exported.","category":"page"},{"location":"#Manual","page":"Home","title":"Manual","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 2\nPages = [\n \"manual/installation.md\",\n \"manual/linmpc.md\",\n \"manual/nonlinmpc.md\",\n]","category":"page"},{"location":"#Functions:-Public","page":"Home","title":"Functions: Public","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 2\nPages = [\n \"public/sim_model.md\",\n \"public/state_estim.md\",\n \"public/predictive_control.md\",\n \"public/generic_func.md\",\n]","category":"page"},{"location":"#Functions:-Internals","page":"Home","title":"Functions: Internals","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 1\nPages = [\n \"internals/sim_model.md\",\n \"internals/state_estim.md\",\n \"internals/predictive_control.md\",\n]","category":"page"},{"location":"internals/sim_model/#Functions:-SimModel-Internals","page":"Plant Models","title":"Functions: SimModel Internals","text":"","category":"section"},{"location":"internals/sim_model/","page":"Plant Models","title":"Plant Models","text":"ModelPredictiveControl.steadystate\nModelPredictiveControl.f\nModelPredictiveControl.h","category":"page"},{"location":"internals/sim_model/#ModelPredictiveControl.steadystate","page":"Plant Models","title":"ModelPredictiveControl.steadystate","text":"steadystate(model::LinModel, u, d=Float64[])\n\nEvaluate the steady-state vector when model is a LinModel.\n\nFollowing setop! notation, the method evaluates the equilibrium mathbfx() from:\n\n mathbfx() = mathbf(I - A)^-1(B_u u_0 + B_d d_0)\n\nwith the manipulated inputs held constant at mathbfu_0 and, the measured disturbances, at mathbfd_0. 
The Moore-Penrose pseudo-inverse computes mathbf(I - A)^-1 to support integrating models (integrator states will be 0).\n\n\n\n\n\n","category":"function"},{"location":"internals/sim_model/#ModelPredictiveControl.f","page":"Plant Models","title":"ModelPredictiveControl.f","text":"f(model::LinModel, x, u, d)\n\nEvaluate mathbfA x + B_u u + B_d d when model is a LinModel.\n\n\n\n\n\nCall mathbff(x u d) with model.f function for NonLinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/sim_model/#ModelPredictiveControl.h","page":"Plant Models","title":"ModelPredictiveControl.h","text":"h(model::LinModel, x, u, d)\n\nEvaluate mathbfC x + D_d d when model is a LinModel.\n\n\n\n\n\nCall mathbfh(x d) with model.h function for NonLinModel.\n\n\n\n\n\n","category":"function"}] +[{"location":"manual/installation/#Manual:-Installation","page":"Installation","title":"Manual: Installation","text":"","category":"section"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"To install the ModelPredictiveControl package, run this command in the Julia REPL:","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"using Pkg; Pkg.add(\"ModelPredictiveControl\")","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"Note that the construction of linear models typically requires the ss or tf functions; it is thus recommended to load the package with:","category":"page"},{"location":"manual/installation/","page":"Installation","title":"Installation","text":"using ModelPredictiveControl, ControlSystemsBase","category":"page"},{"location":"internals/predictive_control/#Functions:-PredictiveController-Internals","page":"Predictive Controllers","title":"Functions: PredictiveController Internals","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"The prediction methodology of this module is mainly based on the Maciejowski textbook [1].","category":"page"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"[1]: Maciejowski, J. 2000, \"Predictive control : with constraints\", 1st ed., Prentice Hall, ISBN 978-0201398236.","category":"page"},{"location":"internals/predictive_control/#Controller-Initialization","page":"Predictive Controllers","title":"Controller Initialization","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.init_deterpred\nModelPredictiveControl.init_ΔUtoU\nModelPredictiveControl.init_quadprog\nModelPredictiveControl.init_stochpred\nModelPredictiveControl.init_linconstraint","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_deterpred","page":"Predictive Controllers","title":"ModelPredictiveControl.init_deterpred","text":"init_deterpred(model::LinModel, Hp, Hc) -> E, G, J, Kd, Q\n\nConstruct deterministic prediction matrices for LinModel model.\n\nThe linear model predictions are evaluated by :\n\nbeginaligned\n mathbfY = mathbfE ΔU + mathbfG d(k) + mathbfJ D + mathbfK_d x_d(k) \n + mathbfQ u(k-1) + mathbfY_s \n = mathbfE ΔU + mathbfF\nendaligned\n\nwhere predicted outputs mathbfY, stochastic outputs mathbfY_s, and measured disturbances mathbfD are from k + 1 to k + H_p. Input increments mathbfΔU are from k to k + H_c - 1.
Deterministic state estimates mathbfx_d(k) are extracted from current estimates mathbfx_k-1(k) with getestimates!. Operating points on mathbfu, mathbfd and mathbfy are omitted in above equations.\n\nExtended Help\n\nUsing the mathbfA B_u C B_d D_d matrices in model and the equation mathbfW_j = mathbfC ( _i=0^j mathbfA^i ) mathbfB_u, the prediction matrices are computed by :\n\nbeginaligned\nmathbfE = beginbmatrix\nmathbfW_0 mathbf0 cdots mathbf0 \nmathbfW_1 mathbfW_0 cdots mathbf0 \nvdots vdots ddots vdots \nmathbfW_H_p-1 mathbfW_H_p-2 cdots mathbfW_H_p-H_c+1\nendbmatrix\n\nmathbfG = beginbmatrix\nmathbfCmathbfA^0 mathbfB_d \nmathbfCmathbfA^1 mathbfB_d \nvdots \nmathbfCmathbfA^H_p-1 mathbfB_d\nendbmatrix\n\nmathbfJ = beginbmatrix\nmathbfD_d mathbf0 cdots mathbf0 \nmathbfCmathbfA^0 mathbfB_d mathbfD_d cdots mathbf0 \nvdots vdots ddots vdots \nmathbfCmathbfA^H_p-2 mathbfB_d mathbfCmathbfA^H_p-3 mathbfB_d cdots mathbfD_d\nendbmatrix\n\nmathbfK_d = beginbmatrix\nmathbfCmathbfA^0 \nmathbfCmathbfA^1 \nvdots \nmathbfCmathbfA^H_p-1\nendbmatrix\n\nmathbfQ = beginbmatrix\nmathbfW_0 \nmathbfW_1 \nvdots \nmathbfW_H_p-1\nendbmatrix\nendaligned\n\nnote: Note\nStochastic predictions mathbfY_s are calculated separately (see init_stochpred) and added to the mathbfF matrix to support internal model structure and reduce NonLinMPC computational costs. That is also why the prediction matrices are built on mathbfA B_u C B_d D_d instead of the augmented model mathbfA B_u C B_d D_d.\n\n\n\n\n\nReturn empty matrices if model is not a LinModel\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_ΔUtoU","page":"Predictive Controllers","title":"ModelPredictiveControl.init_ΔUtoU","text":"init_ΔUtoU(nu, Hp, Hc, C, c_Umin, c_Umax) -> S_Hp, T_Hp, S_Hc, T_Hc\n\nInit manipulated input increments to inputs conversion matrices.\n\nThe conversion from the input increments mathbfΔU to manipulated inputs over H_p and H_c are calculated by:\n\nbeginaligned\nmathbfU = \n mathbfU_H_p = mathbfS_H_p mathbfΔU + mathbfT_H_p mathbfu(k-1) \n mathbfU_H_c = mathbfS_H_c mathbfΔU + mathbfT_H_c mathbfu(k-1)\nendaligned\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_quadprog","page":"Predictive Controllers","title":"ModelPredictiveControl.init_quadprog","text":"init_quadprog(model::LinModel, Ẽ, S_Hp, M_Hp, N_Hc, L_Hp) -> P̃, q̃, p\n\nInit the quadratic programming optimization matrix P̃ and q̃.\n\nThe matrices appear in the quadratic general form :\n\n J = min_mathbfΔU frac12mathbf(ΔU)P(ΔU) + mathbfq(ΔU) + p \n\nmathbfP is constant if the model and weights are linear and time invariant (LTI). The vector mathbfq and scalar p need recalculation each control period k (see initpred! method). p does not impact the minima position. It is thus useless at optimization but required to evaluate the minimal J value.\n\n\n\n\n\nReturn empty matrices if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_stochpred","page":"Predictive Controllers","title":"ModelPredictiveControl.init_stochpred","text":"init_stochpred(estim::StateEstimator, Hp) -> Ks, Ps\n\nInit the stochastic prediction matrix Ks from estim estimator for predictive control.\n\nmathbfK_s is the prediction matrix of the stochastic model (composed exclusively of integrators):\n\n mathbfY_s = mathbfK_s x_s(k)\n\nThe stochastic predictions mathbfY_s are the integrator outputs from k+1 to k+H_p. 
mathbfx_s(k) is extracted from current estimates mathbfx_k-1(k) with getestimates!. The method also returns an empty mathbfP_s matrix, since it is useless except for InternalModel estimators.\n\n\n\n\n\ninit_stochpred(estim::InternalModel, Hp) -> Ks, Ps\n\nInit the stochastic prediction matrices for InternalModel.\n\nKs and Ps matrices are defined as:\n\n mathbfY_s = mathbfK_s x_s(k) + mathbfP_s y_s(k)\n\nCurrent stochastic outputs mathbfy_s(k) comprises the measured outputs mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) and unmeasured mathbfy_s^u(k) = mathbf0. See [2].\n\n[2]: Desbiens, A., D. Hodouin & É. Plamondon. 2000, \"Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics\", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 465–475, ISSN 1350-2379.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.init_linconstraint","page":"Predictive Controllers","title":"ModelPredictiveControl.init_linconstraint","text":"init_linconstraint(model::LinModel, \n A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ŷmin, A_Ŷmax,\n i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ŷmin, i_Ŷmax\n) -> A, i_b, b\n\nInit A, b and i_b for the linear inequality constraints (mathbfA ΔU b).\n\ni_b is a BitVector including the indices of mathbfb that are finite numbers.\n\n\n\n\n\nInit values without predicted output constraints if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Constraint-Relaxation","page":"Predictive Controllers","title":"Constraint Relaxation","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.relaxU\nModelPredictiveControl.relaxΔU\nModelPredictiveControl.relaxŶ","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxU","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxU","text":"relaxU(C, c_Umin, c_Umax, S_Hp, S_Hc) -> A_Umin, A_Umax, S̃_Hp\n\nAugment manipulated inputs constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the augmented conversion matrix mathbfS_H_p, similar to the one described at init_ΔUtoU. It also returns the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_U_min \n mathbfA_U_max \nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfU_min + mathbfT_H_c mathbfu(k-1) \n + mathbfU_max - mathbfT_H_c mathbfu(k-1)\nendbmatrix\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxΔU","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxΔU","text":"relaxΔU(C, c_ΔUmin, c_ΔUmax, ΔUmin, ΔUmax, N_Hc) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, Ñ_Hc\n\nAugment input increments constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the augmented input increment weights mathbfN_H_c (that incorporate C). 
It also returns the augmented constraints mathbfΔU_min and mathbfΔU_max and the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_ΔU_min \n mathbfA_ΔU_max\nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfΔU_min \n + mathbfΔU_max\nendbmatrix\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#ModelPredictiveControl.relaxŶ","page":"Predictive Controllers","title":"ModelPredictiveControl.relaxŶ","text":"relaxŶ(::LinModel, C, c_Ŷmin, c_Ŷmax, E) -> A_Ŷmin, A_Ŷmax, Ẽ\n\nAugment linear output prediction constraints with slack variable ϵ for softening.\n\nDenoting the input increments augmented with the slack variable mathbfΔU = beginsmallmatrix mathbfΔU ϵ endsmallmatrix, it returns the mathbfE matrix that appears in the linear model prediction equation mathbfY = E ΔU + F, and the mathbfA matrices for the inequality constraints:\n\nbeginbmatrix \n mathbfA_Y_min \n mathbfA_Y_max\nendbmatrix mathbfΔU \nbeginbmatrix\n - mathbfY_min + mathbfF \n + mathbfY_max - mathbfF \nendbmatrix\n\n\n\n\n\nReturn empty matrices if model is not a LinModel\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Get-Estimates","page":"Predictive Controllers","title":"Get Estimates","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.getestimates!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.getestimates!","page":"Predictive Controllers","title":"ModelPredictiveControl.getestimates!","text":"getestimates!(mpc::PredictiveController, estim::StateEstimator) -> x̂d, x̂s, ŷ\n\nGet estimator output and split x̂ into the deterministic x̂d and stochastic x̂s states.\n\n\n\n\n\ngetestimates!(mpc::PredictiveController, estim::InternalModel) -> x̂d, x̂s, ŷ\n\nGet the internal model deterministic estim.x̂d and stochastic estim.x̂s states.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Predictions","page":"Predictive Controllers","title":"Predictions","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.initpred!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.initpred!","page":"Predictive Controllers","title":"ModelPredictiveControl.initpred!","text":"initpred!(mpc, model::LinModel, d, D̂, R̂y)\n\nInit linear model prediction matrices F, q̃ and p.\n\nSee init_deterpred and init_quadprog for the definition of the matrices.\n\n\n\n\n\ninitpred!(mpc::PredictiveController, model::SimModel, d, D̂, R̂y)\n\nInit d0 and D̂0 matrices when model is not a LinModel.\n\nd0 and D̂0 are the measured disturbances and its predictions without the operating points mathbfd_op.\n\n\n\n\n\n","category":"function"},{"location":"internals/predictive_control/#Constraints","page":"Predictive Controllers","title":"Constraints","text":"","category":"section"},{"location":"internals/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ModelPredictiveControl.linconstraint!","category":"page"},{"location":"internals/predictive_control/#ModelPredictiveControl.linconstraint!","page":"Predictive Controllers","title":"ModelPredictiveControl.linconstraint!","text":"linconstraint!(mpc::PredictiveController, model::LinModel)\n\nSet b vector for the linear model inequality constraints (mathbfA ΔU 
b).\n\n\n\n\n\nSet b excluding predicted output constraints when model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Functions:-StateEstimator-Internals","page":"State Estimators","title":"Functions: StateEstimator Internals","text":"","category":"section"},{"location":"internals/state_estim/#Estimator-Initialization","page":"State Estimators","title":"Estimator Initialization","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.init_estimstoch\nModelPredictiveControl.augment_model\nModelPredictiveControl.init_ukf\nModelPredictiveControl.init_internalmodel","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.init_estimstoch","page":"State Estimators","title":"ModelPredictiveControl.init_estimstoch","text":"init_estimstoch(i_ym, nint_ym::Vector{Int}) -> Asm, Csm, nint_ym\n\nCalc stochastic model matrices from output integrators specifications for state estimation.\n\nFor closed-loop state estimators. nint_ym is a vector providing how many integrator should be added for each measured output mathbfy^m. The argument generates the Asm and Csm matrices:\n\nbeginaligned\nmathbfx_s(k+1) = mathbfA_s^m x_s(k) + mathbfB_s^m e(k) \nmathbfy_s^m(k) = mathbfC_s^m x_s(k)\nendaligned\n\nwhere mathbfe(k) is a conceptual and unknown zero mean white noise. mathbfB_s^m is not used for closed-loop state estimators thus ignored.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.augment_model","page":"State Estimators","title":"ModelPredictiveControl.augment_model","text":"augment_model(model::LinModel, As, Cs; verify_obsv=true) -> Â, B̂u, Ĉ, B̂d, D̂d\n\nAugment LinModel state-space matrices with the stochastic ones As and Cs.\n\nIf mathbfx are model.x states, and mathbfx_s, the states defined at init_estimstoch, we define an augmented state vector mathbfx = beginsmallmatrix mathbfx mathbfx_s endsmallmatrix . 
The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) \n mathbfy(k) = mathbfC x(k) + mathbfD_d d(k)\nendaligned\n\nAn error is thrown if the augmented model is not observable and verify_obsv == true.\n\n\n\n\n\nNo need to augment the model if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.init_ukf","page":"State Estimators","title":"ModelPredictiveControl.init_ukf","text":"init_ukf(nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ\n\nCompute the UnscentedKalmanFilter constants from α β and κ.\n\nWith n_mathbfx elements in the state vector mathbfx and n_σ = 2 n_mathbfx + 1 sigma points, the scaling factor applied on standard deviation matrices sqrtmathbfP is:\n\n γ = α sqrt n_mathbfx + κ \n\nThe weight vector (n_σ 1) for the mean and the weight matrix (n_σ n_σ) for the covariance are respectively:\n\nbeginaligned\n mathbfm = beginbmatrix 1 - tfracn_mathbfxγ^2 tfrac12γ^2 tfrac12γ^2 cdots tfrac12γ^2 endbmatrix \n mathbfS = mathrmdiagbig( 2 - α^2 + β - tfracn_mathbfxγ^2 tfrac12γ^2 tfrac12γ^2 cdots tfrac12γ^2 big)\nendaligned\n\nSee update_estimate!(::UnscentedKalmanFilter) for other details.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.init_internalmodel","page":"State Estimators","title":"ModelPredictiveControl.init_internalmodel","text":"init_internalmodel(As, Bs, Cs, Ds) -> Âs, B̂s\n\nCalc stochastic model update matrices Âs and B̂s for InternalModel estimator.\n\nAs, Bs, Cs and Ds are the stochastic model matrices :\n\nbeginaligned\n mathbfx_s(k+1) = mathbfA_s x_s(k) + mathbfB_s e(k) \n mathbfy_s(k) = mathbfC_s x_s(k) + mathbfD_s e(k)\nendaligned\n\nwhere mathbfe(k) is conceptual and unknown zero mean white noise. Its optimal estimation is mathbfe=0, the expected value. Thus, the Âs and B̂s matrices that optimally update the stochastic estimate mathbfx_s are:\n\nbeginaligned\n mathbfx_s(k+1) \n = mathbf(A_s - B_s D_s^-1 C_s) x_s(k) + mathbf(B_s D_s^-1) y_s(k) \n = mathbfA_s x_s(k) + mathbfB_s y_s(k)\nendaligned\n\nwith current stochastic outputs estimation mathbfy_s(k), composed of the measured mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) and unmeasured mathbfy_s^u = 0 outputs. See [1].\n\n[1]: Desbiens, A., D. Hodouin & É. Plamondon. 2000, \"Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics\", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 
465–475, ISSN 1350-2379.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Augmented-Model","page":"State Estimators","title":"Augmented Model","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.f̂\nModelPredictiveControl.ĥ","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.f̂","page":"State Estimators","title":"ModelPredictiveControl.f̂","text":"f̂(estim::StateEstimator, x̂, u, d)\n\nState function mathbff of the augmented model.\n\nBy introducing an augmented state vector mathbfx like in augment_model, the function returns the next state of the augmented model, defined as:\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu(k) mathbfd(k)Big) \n mathbfy(k) = mathbfhBig(mathbfx(k) mathbfd(k)Big) \nendaligned\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#ModelPredictiveControl.ĥ","page":"State Estimators","title":"ModelPredictiveControl.ĥ","text":"ĥ(estim::StateEstimator, x̂, d)\n\nOutput function mathbfh of the augmented model, see f̂ for details.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Remove-Operating-Points","page":"State Estimators","title":"Remove Operating Points","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.remove_op!","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.remove_op!","page":"State Estimators","title":"ModelPredictiveControl.remove_op!","text":"remove_op!(estim::StateEstimator, u, d, ym) -> u0, d0, ym0\n\nRemove operating points on inputs u, measured outputs ym and disturbances d.\n\nAlso store current inputs without operating points u0 in estim.lastu0. This field is used for PredictiveController computations.\n\n\n\n\n\n","category":"function"},{"location":"internals/state_estim/#Update-Estimate","page":"State Estimators","title":"Update Estimate","text":"","category":"section"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nAll these methods assume that the operating points are already removed in u, ym and d arguments. Strickly speaking, the arguments should be called u0, ym0 and d0, following setop! notation. The 0 is dropped to simplify the notation.","category":"page"},{"location":"internals/state_estim/","page":"State Estimators","title":"State Estimators","text":"ModelPredictiveControl.update_estimate!","category":"page"},{"location":"internals/state_estim/#ModelPredictiveControl.update_estimate!","page":"State Estimators","title":"ModelPredictiveControl.update_estimate!","text":"update_estimate!(estim::SteadyKalmanFilter, u, ym, d)\n\nUpdate estim.x̂ estimate with current inputs u, measured outputs ym and dist. 
d.\n\nThe SteadyKalmanFilter updates it with the precomputed Kalman gain mathbfK:\n\nmathbfx_k(k+1) = mathbfA x_k-1(k) + mathbfB_u u(k) + mathbfB_d d(k) \n + mathbfKmathbfy^m(k) - mathbfC^m x_k-1(k) - mathbfD_d^m d(k)\n\n\n\n\n\nupdate_estimate!(estim::KalmanFilter, u, ym, d)\n\nUpdate KalmanFilter state estim.x̂ and estimation error covariance estim.P̂.\n\nIt implements the time-varying Kalman Filter in its predictor (observer) form :\n\nbeginaligned\n mathbfM(k) = mathbfP_k-1(k)mathbfC^m\n mathbfC^m P_k-1(k)mathbfC^m + mathbfR^-1 \n mathbfK(k) = mathbfA M(k) \n mathbfy^m(k) = mathbfC^m x_k-1(k) + mathbfD_d^m d(k) \n mathbfx_k(k+1) = mathbfA x_k-1(k) + mathbfB_u u(k) + mathbfB_d d(k)\n + mathbfK(k)mathbfy^m(k) - mathbfy^m(k) \n mathbfP_k(k+1) = mathbfAmathbfP_k-1(k)\n - mathbfM(k)mathbfC^m P_k-1(k)mathbfA + mathbfQ\nendaligned\n\nbased on the process model described in SteadyKalmanFilter. The notation mathbfx_k-1(k) refers to the state for the current time k estimated at the last control period k-1. See [2] for details.\n\n[2]: Boyd S., \"Lecture 8 : The Kalman Filter\" (Winter 2008-09) [course slides], EE363: Linear Dynamical Systems, https://web.stanford.edu/class/ee363/lectures/kf.pdf.\n\n\n\n\n\nupdate_estimate!(estim::UnscentedKalmanFilter, u, ym, d)\n\nUpdate UnscentedKalmanFilter state estim.x̂ and covariance estimate estim.P̂.\n\nIt implements the unscented Kalman Filter in its predictor (observer) form, based on the generalized unscented transform[3]. See init_ukf for the definition of the constants mathbfm S and γ. \n\nDenoting mathbfx_k-1(k) as the state for the current time k estimated at the last period k-1, mathbf0, a null vector, n_σ = 2 n_mathbfx + 1, the number of sigma points, and mathbfX_k-1^j(k), the vector at the jth column of mathbfX_k-1(k), the estimator updates the states with:\n\nbeginaligned\n mathbfX_k-1(k) = biggbeginmatrix mathbfx_k-1(k) mathbfx_k-1(k) cdots mathbfx_k-1(k) endmatrixbigg + biggbeginmatrix mathbf0 γ sqrtmathbfP_k-1(k) -γ sqrtmathbfP_k-1(k) endmatrixbigg \n mathbfY^m(k) = biggbeginmatrix mathbfh^mBig( mathbfX_k-1^1(k) Big) mathbfh^mBig( mathbfX_k-1^2(k) Big) cdots mathbfh^mBig( mathbfX_k-1^n_σ(k) Big) endmatrixbigg \n mathbfy^m(k) = mathbfY^m(k) mathbfm \n mathbfX_k-1(k) = beginbmatrix mathbfX_k-1^1(k) - mathbfx_k-1(k) mathbfX_k-1^2(k) - mathbfx_k-1(k) cdots mathbfX_k-1^n_σ(k) - mathbfx_k-1(k) endbmatrix \n mathbfY^m(k) = beginbmatrix mathbfY^m^1(k) - mathbfy^m(k) mathbfY^m^2(k) - mathbfy^m(k) cdots mathbfY^m^n_σ(k) - mathbfy^m(k) endbmatrix \n mathbfM(k) = mathbfY^m(k) mathbfS mathbfY^m(k) + mathbfR \n mathbfK(k) = mathbfX_k-1(k) mathbfS mathbfY^m(k) mathbfM(k)^-1 \n mathbfx_k(k) = mathbfx_k-1(k) + mathbfK(k) big mathbfy^m(k) - mathbfy^m(k) big \n mathbfP_k(k) = mathbfP_k-1(k) - mathbfK(k) mathbfM(k) mathbfK(k) \n mathbfX_k(k) = biggbeginmatrix mathbfx_k(k) mathbfx_k(k) cdots mathbfx_k(k) endmatrixbigg + biggbeginmatrix mathbf0 gamma sqrtmathbfP_k(k) - gamma sqrtmathbfP_k(k) endmatrixbigg \n mathbfX_k(k+1) = biggbeginmatrix mathbffBig( mathbfX_k^1(k) mathbfu(k) mathbfd(k) Big) mathbffBig( mathbfX_k^2(k) mathbfu(k) mathbfd(k) Big) cdots mathbffBig( mathbfX_k^n_σ(k) mathbfu(k) mathbfd(k) Big) endmatrixbigg \n mathbfx_k(k+1) = mathbfX_k(k+1)mathbfm \n mathbfX_k(k+1) = beginbmatrix mathbfX_k^1(k+1) - mathbfx_k(k+1) mathbfX_k^2(k+1) - mathbfx_k(k+1) cdots mathbfX_k^n_σ(k+1) - mathbfx_k(k+1) endbmatrix \n mathbfP_k(k+1) = mathbfX_k(k+1) mathbfS mathbfX_k(k+1) + mathbfQ\nendaligned \n\nby using the lower triangular factor of cholesky to compute 
sqrtmathbfP_k-1(k) and sqrtmathbfP_k(k). The matrices mathbfP Q R are the covariance of the estimation error, process noise and sensor noise, respectively.\n\n[3]: Simon, D. 2006, \"Chapter 14: The unscented Kalman filter\" in \"Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches\", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN9780470045343.\n\n\n\n\n\nupdate_estimate!(estim::ExtendedKalmanFilter, u, ym, d=Float64[])\n\nUpdate ExtendedKalmanFilter state estim.x̂ and covariance estim.P̂.\n\nThe equations are similar to update_estimate!(::KalmanFilter) but with the substitutions mathbfA = F(k) and mathbfC^m = H^m(k):\n\nbeginaligned\n mathbfM(k) = mathbfP_k-1(k)mathbfH^m(k)\n mathbfH^m(k)mathbfP_k-1(k)mathbfH^m(k) + mathbfR^-1 \n mathbfK(k) = mathbfF(k) mathbfM(k) \n mathbfy^m(k) = mathbfh^mBig( mathbfx_k-1(k) mathbfd(k) Big) \n mathbfx_k(k+1) = mathbffBig( mathbfx_k-1(k) mathbfu(k) mathbfd(k) Big)\n + mathbfK(k)mathbfy^m(k) - mathbfy^m(k) \n mathbfP_k(k+1) = mathbfF(k)mathbfP_k-1(k)\n - mathbfM(k)mathbfH^m(k)mathbfP_k-1(k)mathbfF(k) \n + mathbfQ\nendaligned\n\nForwardDiff.jacobian automatically computes the Jacobians:\n\nbeginaligned\n mathbfF(k) = left fracmathbff(mathbfx mathbfu mathbfd)mathbfx right_mathbfx = x_k-1(k) mathbfu = u(k) mathbfd = d(k) \n mathbfH(k) = left fracmathbfh(mathbfx mathbfd)mathbfx right_mathbfx = x_k-1(k) mathbfd = d(k)\nendaligned\n\nThe matrix mathbfH^m is the rows of mathbfH that are measured outputs.\n\n\n\n\n\nupdate_estimate!(estim::Luenberger, u, ym, d=Float64[])\n\nSame than update_estimate!(::SteadyKalmanFilter) but using Luenberger.\n\n\n\n\n\nupdate_estimate!(estim::InternalModel, u, ym, d=Float64[])\n\nUpdate estim.x̂ \\ x̂d \\ x̂s with current inputs u, measured outputs ym and dist. d.\n\nThe InternalModel updates the deterministic x̂d and stochastic x̂s estimates with:\n\nbeginaligned\n mathbfx_d(k+1) = mathbffBig( mathbfx_d(k) mathbfu(k) mathbfd(k) Big) \n mathbfx_s(k+1) = mathbfA_s x_s(k) + mathbfB_s y_s(k)\nendaligned\n\nThis estimator does not augment the state vector, thus mathbfx = x_d. See init_internalmodel for details. \n\n\n\n\n\n","category":"function"},{"location":"manual/nonlinmpc/#man_nonlin","page":"Nonlinear Design","title":"Manual: Nonlinear Design","text":"","category":"section"},{"location":"manual/nonlinmpc/#Nonlinear-Model","page":"Nonlinear Design","title":"Nonlinear Model","text":"","category":"section"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"In this example, the goal is to control the angular position θ of a pendulum attached to a motor. 
Knowing that the manipulated input is the motor torque τ, the I/O vectors are:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"beginaligned\n mathbfu = beginbmatrix τ endbmatrix \n mathbfy = beginbmatrix θ endbmatrix\nendaligned","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The plant model is nonlinear:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"beginaligned\n dotθ(t) = ω(t) \n dotω(t) = -fracgLsinbig( θ(t) big) - fracKm ω(t) + frac1m L^2 τ(t)\nendaligned","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"in which g is the gravitational acceleration, L, the pendulum length, K, the friction coefficient at the pivot point, and m, the mass attached at the end of the pendulum. Here, the explicit Euler method discretizes the system to construct a NonLinModel:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"using ModelPredictiveControl\nfunction pendulum(par, x, u)\n g, L, K, m = par # [m/s²], [m], [kg/s], [kg]\n θ, ω = x[1], x[2] # [rad], [rad/s]\n τ = u[1] # [N m]\n dθ = ω\n dω = -g/L*sin(θ) - K/m*ω + τ/m/L^2\n return [dθ, dω]\nend\nTs = 0.1 # [s]\npar = (9.8, 0.4, 1.2, 0.3)\nf(x, u, _ ) = x + Ts*pendulum(par, x, u) # Euler method\nh(x, _ ) = [180/π*x[1]] # [°]\nnu, nx, ny = 1, 2, 1\nmodel = NonLinModel(f, h, Ts, nu, nx, ny)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The output function mathbfh converts the θ angle to degrees. It is good practice to first simulate model using sim! as a quick sanity check:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"using Plots\nu = [0.5]\nplot(sim!(model, 60, u), plotu=false)","category":"page"},{"location":"manual/nonlinmpc/#Nonlinear-Predictive-Controller","page":"Nonlinear Design","title":"Nonlinear Predictive Controller","text":"","category":"section"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"An UnscentedKalmanFilter estimates the plant state :","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"estim = UnscentedKalmanFilter(model, σQ=[0.1, 0.5], σR=[0.5], nint_ym=[1], σQ_int=[5.0])","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The standard deviation of the angular velocity ω is higher here (σQ second value) since dotω(t) equation includes an uncertain parameter: the friction coefficient K. The estimator tuning is tested on a plant simulated with a different K:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"par_plant = (par[1], par[2], par[3] + 0.25, par[4])\nf_plant(x, u, _) = x + Ts*pendulum(par_plant, x, u)\nplant = NonLinModel(f_plant, h, Ts, nu, nx, ny)\nres = sim!(estim, 60, [0.5], plant=plant, y_noise=[0.5])\nplot(res, plotu=false, plotxwithx̂=true)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The estimate x_3 is the integrator state that compensates for static errors (nint_ym and σQ_int parameters of UnscentedKalmanFilter). 
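To gauge the effect of this integrator covariance, the same test can be repeated with a larger σQ_int value (an optional check reusing the objects defined above; the value 10.0 is arbitrary):

estim2 = UnscentedKalmanFilter(model, σQ=[0.1, 0.5], σR=[0.5], nint_ym=[1], σQ_int=[10.0])
res2 = sim!(estim2, 60, [0.5], plant=plant, y_noise=[0.5])
plot(res2, plotu=false, plotxwithx̂=true)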
The Kalman filter performance seems sufficient for control. As the motor torque is limited to -1.5 to 1.5 N m, we incorporate the input constraints in a NonLinMPC:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"mpc = NonLinMPC(estim, Hp=20, Hc=4, Mwt=[0.5], Nwt=[2.5], Cwt=Inf)\nmpc = setconstraint!(mpc, umin=[-1.5], umax=[+1.5])","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"We test mpc performance on plant by imposing an angular setpoint of 180° (inverted position):","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"res = sim!(mpc, 60, [180.0], plant=plant, x0=zeros(plant.nx), x̂0=zeros(mpc.estim.nx̂))\nplot(res)","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"The controller seems robust enough to variations of the K coefficient. Starting from this inverted position, the closed-loop response to a step disturbance of 10° is also satisfactory:","category":"page"},{"location":"manual/nonlinmpc/","page":"Nonlinear Design","title":"Nonlinear Design","text":"res = sim!(mpc, 60, [180.0], plant=plant, x0=[π, 0], x̂0=[π, 0, 0], y_step=[10])\nplot(res)","category":"page"},{"location":"public/state_estim/#Functions:-State-Estimators","page":"State Estimators","title":"Functions: State Estimators","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"Pages = [\"state_estim.md\"]","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"This module includes many state estimators (or state observers), both for deterministic and stochastic systems. The implementations focus on control applications, that is, relying on the estimates to compute a full state feedback (predictive controllers, in this package). They all incorporate some kind of integral action by default, since it is generally desired to eliminate the steady-state error with closed-loop control (offset-free tracking).","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nIf you plan to use the estimators for contexts other than this specific package (e.g.: filtering, parameter estimation, etc.), care must be taken at construction since the integral action is not necessarily desired. The option nint_ym=0 disables it.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"The estimators are all implemented in the predictor form (a.k.a. observer form), that is, they all estimate, at each discrete time k, the states of the next period mathbfx_k(k+1)[1]. In contrast, the filter form that estimates mathbfx_k(k) is sometimes slightly more accurate.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"[1]: also denoted mathbfx_k+1k elsewhere.","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"The predictor form comes in handy for control applications since the estimations come after the controller computations, without introducing any additional delays. Moreover, the moveinput! method of the predictive controllers does not automatically update the estimates with updatestate!.
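A minimal receding-horizon loop illustrating this call order (an illustrative sketch in which the model itself plays the role of the plant; the setpoint and loop length are arbitrary):

model = LinModel(tf(3, [30, 1]), 0.5)
mpc   = LinMPC(model)
for k in 1:25
    ym = model()               # measure the (simulated) plant output
    u  = moveinput!(mpc, [5])  # compute u(k) first, so it can be applied right away
    updatestate!(mpc, u, ym)   # then update the estimator for the next period
    updatestate!(model, u)     # plant simulator
end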
This allows applying the calculated inputs on the real plant before starting the potentially expensive estimator computations (see Manual for examples).","category":"page"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"info: Info\nAll the estimators support measured mathbfy^m and unmeasured mathbfy^u model outputs, where mathbfy refers to all of them.","category":"page"},{"location":"public/state_estim/#StateEstimator","page":"State Estimators","title":"StateEstimator","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"StateEstimator","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.StateEstimator","page":"State Estimators","title":"ModelPredictiveControl.StateEstimator","text":"Abstract supertype of all state estimators.\n\n\n\n(estim::StateEstimator)(d=Float64[])\n\nFunctor allowing callable StateEstimator object as an alias for evaloutput.\n\nExamples\n\njulia> kf = KalmanFilter(setop!(LinModel(tf(3, [10, 1]), 2), yop=[20]));\n\njulia> ŷ = kf() \n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#SteadyKalmanFilter","page":"State Estimators","title":"SteadyKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"SteadyKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.SteadyKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.SteadyKalmanFilter","text":"SteadyKalmanFilter(model::LinModel; )\n\nConstruct a steady-state Kalman Filter with the LinModel model.\n\nThe steady-state (or asymptotic) Kalman filter is based on the process model :\n\nbeginaligned\n mathbfx(k+1) = \n mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) + mathbfw(k) \n mathbfy^m(k) = mathbfC^m x(k) + mathbfD_d^m d(k) + mathbfv(k) \n mathbfy^u(k) = mathbfC^u x(k) + mathbfD_d^u d(k)\nendaligned\n\nwith sensor mathbfv(k) and process mathbfw(k) noises as uncorrelated zero mean white noise vectors, with a respective covariance of mathbfR and mathbfQ. The arguments are in standard deviations σ, i.e. same units than outputs and states. The matrices mathbfA B_u B_d C D_d are model matrices augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see Extended Help). Likewise, the covariance matrices are augmented with mathbfQ = textdiag(Q Q_int) and mathbfR = R. 
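In other words, the σ keyword arguments are squared and stacked on the main diagonals (an illustrative construction of the augmented process noise covariance; the numerical values are arbitrary):

using LinearAlgebra
σQ, σQ_int = [0.1, 0.5], [1.0]
Q̂ = Diagonal([σQ; σQ_int].^2)   # Q̂ = diag(Q, Q_int), built from the standard deviations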
The matrices mathbfC^m D_d^m are the rows of mathbfC D_d that correspond to measured outputs mathbfy^m (and unmeasured ones, for mathbfC^u D_d^u).\n\nArguments\n\nmodel::LinModel : (deterministic) model for the estimations.\ni_ym=1:model.ny : model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u.\nσQ=fill(1/model.nx,model.nx) : main diagonal of the process noise covariance mathbfQ of model, specified as a standard deviation vector.\nσR=fill(1,length(i_ym)) : main diagonal of the sensor noise covariance mathbfR of model measured outputs, specified as a standard deviation vector.\nnint_ym=default_nint(model,i_ym) : integrator quantity per measured outputs (vector) for the stochastic model, use nint_ym=0 for no integrator (see Extended Help).\nσQ_int=fill(1,sum(nint_ym)) : same than σQ but for the stochastic model covariance mathbfQ_int (composed of output integrators).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = SteadyKalmanFilter(model, i_ym=[2], σR=[1], σQ_int=[0.01])\nSteadyKalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nThe model augmentation with nint_ym vector adds integrators at model measured outputs mathbfy^m. It creates the integral action when the estimator is used in a controller as state feedback. The method default_nint computes the default value of nint_ym. It can also be tweaked by following these rules on each measured output:\n\nUse 0 integrator if the model output is already integrating (else it will be unobservable)\nUse 1 integrator if the disturbances on the output are typically \"step-like\"\nUse 2 integrators if the disturbances on the output are typically \"ramp-like\" \n\nThe function init_estimstoch builds the stochastic model from nint_ym.\n\ntip: Tip\nIncreasing σQ_int values increases the integral action \"gain\".\n\nThe constructor pre-compute the steady-state Kalman gain K with the kalman function. It can sometimes fail, for example when model matrices are ill-conditioned. In such a case, you can try the alternative time-varying KalmanFilter.\n\n\n\n\n\nSteadyKalmanFilter(model, i_ym, nint_ym, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#KalmanFilter","page":"State Estimators","title":"KalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"KalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.KalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.KalmanFilter","text":"KalmanFilter(model::LinModel; )\n\nConstruct a time-varying Kalman Filter with the LinModel model.\n\nThe process model is identical to SteadyKalmanFilter. The matrix mathbfP_k(k+1) is the estimation error covariance of model states augmented with the stochastic ones (specified by nint_ym). 
Two keyword arguments modify its initial value with mathbfP_-1(0) = mathrmdiag mathbfP(0) mathbfP_int(0) .\n\nArguments\n\nmodel::LinModel : (deterministic) model for the estimations.\nσP0=fill(1/model.nx,model.nx) : main diagonal of the initial estimate covariance mathbfP(0), specified as a standard deviation vector.\nσP0_int=fill(1,sum(nint_ym)) : same than σP0 but for the stochastic model covariance mathbfP_int(0) (composed of output integrators).\n of SteadyKalmanFilter constructor.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = KalmanFilter(model, i_ym=[2], σR=[1], σP0=[100, 100], σQ_int=[0.01])\nKalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nKalmanFilter(model, i_ym, nint_ym, P̂0, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#Luenberger","page":"State Estimators","title":"Luenberger","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"Luenberger","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.Luenberger","page":"State Estimators","title":"ModelPredictiveControl.Luenberger","text":"Luenberger(\n model::LinModel; \n i_ym = 1:model.ny, \n nint_ym = default_nint(model, i_ym),\n p̂ = 1e-3*(0:(model.nx + sum(nint_ym)-1)) .+ 0.5)\n)\n\nConstruct a Luenberger observer with the LinModel model.\n\ni_ym provides the model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u. model matrices are augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see SteadyKalmanFilter Extended Help). The argument p̂ is a vector of model.nx + sum(nint_ym) elements specifying the observer poles/eigenvalues (near z=05 by default). The method computes the observer gain mathbfK with place.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);\n\njulia> estim = Luenberger(model, nint_ym=[1, 1], p̂=[0.61, 0.62, 0.63, 0.64])\nLuenberger estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#UnscentedKalmanFilter","page":"State Estimators","title":"UnscentedKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"UnscentedKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.UnscentedKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.UnscentedKalmanFilter","text":"UnscentedKalmanFilter(model::SimModel; )\n\nConstruct an unscented Kalman Filter with the SimModel model.\n\nBoth LinModel and NonLinModel are supported. The unscented Kalman filter is based on the process model :\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu(k) mathbfd(k)Big) \n + mathbfw(k) \n mathbfy^m(k) = mathbfh^mBig(mathbfx(k) mathbfd(k)Big) + mathbfv(k) \n mathbfy^u(k) = mathbfh^uBig(mathbfx(k) mathbfd(k)Big) \nendaligned\n\nSee SteadyKalmanFilter for details on mathbfv(k) mathbfw(k) noises and mathbfR mathbfQ covariances. 
The functions mathbff h are model state-space functions augmented with the stochastic model, which is specified by the numbers of output integrator nint_ym (see SteadyKalmanFilter for details). The mathbfh^m function represents the measured outputs of mathbfh function (and unmeasured ones, for mathbfh^u).\n\nArguments\n\nmodel::SimModel : (deterministic) model for the estimations.\nα=1e-3 : alpha parameter, spread of the state distribution (0 α 1).\nβ=2 : beta parameter, skewness and kurtosis of the states distribution (β 0).\nκ=0 : kappa parameter, another spread parameter (0 κ 3).\n of SteadyKalmanFilter constructor.\n of KalmanFilter constructor.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.1x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> estim = UnscentedKalmanFilter(model, σR=[1], nint_ym=[2], σP0_int=[1, 1])\nUnscentedKalmanFilter estimator with a sample time Ts = 10.0 s, NonLinModel and:\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nUnscentedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂, α, β, κ)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#ExtendedKalmanFilter","page":"State Estimators","title":"ExtendedKalmanFilter","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"ExtendedKalmanFilter","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.ExtendedKalmanFilter","page":"State Estimators","title":"ModelPredictiveControl.ExtendedKalmanFilter","text":"ExtendedKalmanFilter(model::SimModel; )\n\nConstruct an extended Kalman Filter with the SimModel model.\n\nBoth LinModel and NonLinModel are supported. The process model is identical to UnscentedKalmanFilter. The Jacobians of the augmented model mathbff h are computed with ForwardDiff.jl automatic differentiation.\n\nwarning: Warning\nSee Extended Help if you get an error like: MethodError: no method matching (::var\"##\")(::Vector{ForwardDiff.Dual}).\n\nArguments\n\nmodel::SimModel : (deterministic) model for the estimations.\n of SteadyKalmanFilter constructor.\n of KalmanFilter constructor.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.2x+u, (x,_)->-3x, 5.0, 1, 1, 1);\n\njulia> estim = ExtendedKalmanFilter(model, σQ=[2], σQ_int=[2], σP0=[0.1], σP0_int=[0.1])\nExtendedKalmanFilter estimator with a sample time Ts = 5.0 s, NonLinModel and:\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nAutomatic differentiation (AD) allows exact Jacobians. The NonLinModel f and h functions must be compatible with this feature though. 
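A common cause of that MethodError is over-restrictive argument types on the model functions; keeping the arguments generic lets the dual numbers pass through (an illustrative sketch based on the toy model of the example above):

f_bad(x::Vector{Float64}, u, _) = 0.2x + u   # fails inside ForwardDiff.jacobian: f is called with dual numbers, not Float64
f_ok(x, u, _) = 0.2x + u                     # generic arguments are AD-compatible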
See Automatic differentiation for common mistakes when writing these functions.\n\n\n\n\n\nExtendedKalmanFilter{M<:SimModel}(model::M, i_ym, nint_ym, P̂0, Q̂, R̂)\n\nConstruct the estimator from the augmented covariance matrices P̂0, Q̂ and R̂.\n\nThis syntax allows nonzero off-diagonal elements in mathbfP_-1(0) mathbfQ R.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#InternalModel","page":"State Estimators","title":"InternalModel","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"InternalModel","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.InternalModel","page":"State Estimators","title":"ModelPredictiveControl.InternalModel","text":"InternalModel(model::SimModel; i_ym=1:model.ny, stoch_ym=ss(1,1,1,1,model.Ts).*I)\n\nConstruct an internal model estimator based on model (LinModel or NonLinModel).\n\ni_ym provides the model output indices that are measured mathbfy^m, the rest are unmeasured mathbfy^u. model evaluates the deterministic predictions mathbfy_d, and stoch_ym, the stochastic predictions of the measured outputs mathbfy_s^m (the unmeasured ones being mathbfy_s^u=0). The predicted outputs sum both values: mathbfy = y_d + y_s.\n\nwarning: Warning\nInternalModel estimator does not work if model is integrating or unstable. The constructor verifies these aspects for LinModel but not for NonLinModel. Use any other state estimator in such cases.\n\nExamples\n\njulia> estim = InternalModel(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5), i_ym=[2])\nInternalModel estimator with a sample time Ts = 0.5 s, LinModel and:\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nstoch_ym is a TransferFunction or StateSpace object that models disturbances on mathbfy^m. Its input is a hypothetical zero mean white noise vector. stoch_ym supposes 1 integrator per measured output by default, assuming that the current stochastic estimate mathbfy_s^m(k) = mathbfy^m(k) - mathbfy_d^m(k) is constant in the future. This is the dynamic matrix control (DMC) strategy, which is simple but sometimes too aggressive. Additional poles and zeros in stoch_ym can mitigate this.\n\n\n\n\n\n","category":"type"},{"location":"public/state_estim/#Default-model-augmentation","page":"State Estimators","title":"Default model augmentation","text":"","category":"section"},{"location":"public/state_estim/","page":"State Estimators","title":"State Estimators","text":"default_nint","category":"page"},{"location":"public/state_estim/#ModelPredictiveControl.default_nint","page":"State Estimators","title":"ModelPredictiveControl.default_nint","text":"default_nint(model::LinModel, i_ym=1:model.ny)\n\nGet the default integrator quantity per measured output nint_ym for LinModel.\n\nThe measured output mathbfy^m indices are specified by the i_ym argument. By default, one integrator is added on each measured output. If mathbfA C matrices of the augmented model become unobservable, the integrator is removed. 
This approach works well for stable, integrating and unstable models (see Examples).\n\nExamples\n\njulia> model = LinModel(append(tf(3, [10, 1]), tf(2, [1, 0]), tf(4,[-5, 1])), 1.0);\n\njulia> nint_ym = default_nint(model)\n3-element Vector{Int64}:\n 1\n 0\n 1\n\n\n\n\n\ndefault_nint(model::SimModel, i_ym=1:model.ny)\n\nOne integrator on each measured output by default if model is not a LinModel.\n\n\n\n\n\n","category":"function"},{"location":"func_index/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"func_index/","page":"Index","title":"Index","text":"","category":"page"},{"location":"manual/linmpc/#man_lin","page":"Linear Design","title":"Manual: Linear Design","text":"","category":"section"},{"location":"manual/linmpc/#Linear-Model","page":"Linear Design","title":"Linear Model","text":"","category":"section"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The example considers a well-stirred tank with a cold and hot water inlet as a plant. The water flows out of an opening at the bottom of the tank. The manipulated inputs are the cold u_c and hot u_h water flow rates, and the measured outputs are the liquid level y_L and temperature y_T:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginaligned\n mathbfu = beginbmatrix u_c u_h endbmatrix \n mathbfy = beginbmatrix y_L y_T endbmatrix\nendaligned","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"At the steady-state operating points:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginaligned\n mathbfu_op = beginbmatrix 10 10 endbmatrix \n mathbfy_op = beginbmatrix 50 30 endbmatrix \nendaligned","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"the following linear model accurately describes the plant dynamics:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"beginbmatrix\n y_L(s) y_T(s)\nendbmatrix = \nbeginbmatrix\n frac19018s + 1 frac19018s + 1 3pt\n frac-0748s + 1 frac0748s + 1\nendbmatrix\nbeginbmatrix\n u_c(s) u_h(s)\nendbmatrix","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We first need to construct a LinModel object with setop! to handle the operating points:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using ModelPredictiveControl, ControlSystemsBase\nsys = [ tf(1.90, [18, 1]) tf(1.90, [18, 1]);\n tf(-0.74,[8, 1]) tf(0.74, [8, 1]) ]\nTs = 4.0\nmodel = setop!(LinModel(sys, Ts), uop=[10, 10], yop=[50, 30])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The model object will be used for two purposes: to construct our controller, and as a plant simulator to test the design.","category":"page"},{"location":"manual/linmpc/#Linear-Predictive-Controller","page":"Linear Design","title":"Linear Predictive Controller","text":"","category":"section"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"A predictive feedback will control both the water level y_L and temperature y_T in the tank, at a sampling time of 4 s. 
The tank level should also never fall below 45:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"y_L 45","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We design our LinMPC controllers by including the level constraint with setconstraint!:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"mpc = setconstraint!(LinMPC(model, Hp=15, Hc=2), ŷmin=[45, -Inf])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"By default, LinMPC controllers use OSQP to solve the problem and a SteadyKalmanFilter to estimate the plant states. Before closing the loop, we call initstate! with the actual plant inputs and measurements to ensure a bumpless transfer. Since model simulates our plant here, its output will initialize the states. LinModel objects are callable for this purpose (an alias for evaloutput):","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"u = model.uop\ny = model() # or equivalently : y = evaloutput(model)\ninitstate!(mpc, u, y)\nnothing # hide","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"We can then close the loop and test mpc performance on the simulator by imposing step changes on output setpoints mathbfr_y and on a load disturbance mathbfu_d:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"function test_mpc(mpc, model)\n N = 100\n ry, ud = [50, 30], [0, 0]\n u_data = zeros(model.nu, N)\n y_data = zeros(model.ny, N)\n ry_data = zeros(model.ny, N)\n for k = 0:N-1\n y = model() # simulated measurements\n k == 25 && (ry = [50, 35])\n k == 50 && (ry = [55, 30])\n k == 75 && (ud = [-15, 0])\n u = mpc(ry) # or equivalently : u = moveinput!(mpc, ry)\n u_data[:,k+1] = u\n y_data[:,k+1] = y\n ry_data[:,k+1] = ry \n updatestate!(mpc, u, y) # update mpc state estimate\n updatestate!(model, u + ud) # update simulator with disturbance\n end\n return u_data, y_data, ry_data\nend\nu_data, y_data, ry_data = test_mpc(mpc, model)\nt_data = Ts*(0:(size(y_data,2)-1))\nnothing # hide","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"The LinMPC objects are also callable as an alternative syntax for moveinput!. Calling updatestate! on the mpc object updates its internal state for the NEXT control period (this is by design, see Functions: State Estimators for justifications). That is why the call is done at the end of the for loop. 
The same logic applies for model.","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"Lastly, we plot the closed-loop test with the Plots package:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using Plots\nfunction plot_data(t_data, u_data, y_data, ry_data)\n p1 = plot(t_data, y_data[1,:], label=\"level\"); ylabel!(\"level\")\n plot!(t_data, ry_data[1,:], label=\"setpoint\", linestyle=:dash, linetype=:steppost)\n plot!(t_data, fill(45,size(t_data)), label=\"min\", linestyle=:dot, linewidth=1.5)\n p2 = plot(t_data, y_data[2,:], label=\"temp.\"); ylabel!(\"temp.\")\n plot!(t_data, ry_data[2,:],label=\"setpoint\", linestyle=:dash, linetype=:steppost)\n p3 = plot(t_data,u_data[1,:],label=\"cold\", linetype=:steppost); ylabel!(\"flow rate\")\n plot!(t_data,u_data[2,:],label=\"hot\", linetype=:steppost); xlabel!(\"time (s)\")\n return plot(p1, p2, p3, layout=(3,1), fmt=:svg)\nend\nplot_data(t_data, u_data, y_data, ry_data)","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"For some situations, when LinMPC matrices are small/medium and dense, DAQP optimizer may be more efficient. To install it, run:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using Pkg; Pkg.add(\"DAQP\")","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"Constructing a LinMPC with DAQP:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"using JuMP, DAQP\ndaqp = Model(DAQP.Optimizer)\nmpc2 = setconstraint!(LinMPC(model, Hp=15, Hc=2, optim=daqp), ŷmin=[45, -Inf])","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"leads to identical results here:","category":"page"},{"location":"manual/linmpc/","page":"Linear Design","title":"Linear Design","text":"setstate!(model, zeros(model.nx))\ninitstate!(mpc2, model.uop, model())\nu_data2, y_data2, ry_data2 = test_mpc(mpc2, model)\nplot_data(t_data, u_data2, y_data2, ry_data2)","category":"page"},{"location":"public/sim_model/#func_sim_model","page":"Plant Models","title":"Functions: Plant Models","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"Pages = [\"sim_model.md\"]","category":"page"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"The SimModel types represents discrete state-space models that can be used to construct StateEstimators and PredictiveControllers, or as plant simulators by calling evaloutput and updatestate! methods on SimModel instances (to test estimator/controller designs). For time simulations, the states x are stored inside SimModel instances. Use setstate! 
method to manually modify them.","category":"page"},{"location":"public/sim_model/#SimModel","page":"Plant Models","title":"SimModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"SimModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.SimModel","page":"Plant Models","title":"ModelPredictiveControl.SimModel","text":"Abstract supertype of LinModel and NonLinModel types.\n\n\n\n(model::SimModel)(d=Float64[])\n\nFunctor allowing callable SimModel object as an alias for evaloutput.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->-x + u, (x,_)->x .+ 20, 10.0, 1, 1, 1);\n\njulia> y = model()\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#LinModel","page":"Plant Models","title":"LinModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"LinModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.LinModel","page":"Plant Models","title":"ModelPredictiveControl.LinModel","text":"LinModel(sys::StateSpace[, Ts]; i_u=1:size(sys,2), i_d=Int[])\n\nConstruct a linear model from state-space model sys with sampling time Ts in second.\n\nTs can be omitted when sys is discrete-time. Its state-space matrices are:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB z(k) \n mathbfy(k) = mathbfC x(k) + mathbfD z(k)\nendaligned\n\nwith the state mathbfx and output mathbfy vectors. The mathbfz vector comprises the manipulated inputs mathbfu and measured disturbances mathbfd, in any order. i_u provides the indices of mathbfz that are manipulated, and i_d, the measured disturbances. See Extended Help if sys is continuous-time, or discrete-time with Ts ≠ sys.Ts.\n\nSee also ss, tf.\n\nExamples\n\njulia> model = LinModel(ss(0.4, 0.2, 0.3, 0, 0.1))\nDiscrete-time linear model with a sample time Ts = 0.1 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\nExtended Help\n\nState-space matrices are similar if sys is continuous (replace mathbfx(k+1) with mathbfx(t) and k with t on the LHS). In such a case, it's discretized with c2d and :zoh for manipulated inputs, and :tustin, for measured disturbances. 
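For instance, a minimal sketch of this implicit discretization (the first-order plant below is arbitrary): passing a continuous-time StateSpace together with an explicit Ts returns a model already sampled with the rules above:

julia> using ModelPredictiveControl, ControlSystemsBase

julia> sys_c = ss(-0.5, 1.0, 1.0, 0.0);   # continuous-time SISO state-space

julia> model = LinModel(sys_c, 2.0);      # discretized internally with :zoh at Ts = 2.0 s

julia> model.Ts
2.0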
Lastly, if sys is discrete and the provided argument Ts ≠ sys.Ts, the system is resampled by using the aforementioned discretization methods.\n\nThe constructor transforms the system to a more practical form (mathbfD_u=0 because of the zero-order hold):\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u(k) + mathbfB_d d(k) \n mathbfy(k) = mathbfC x(k) + mathbfD_d d(k)\nendaligned\n\n\n\n\n\nLinModel(sys::TransferFunction[, Ts]; i_u=1:size(sys,2), i_d=Int[])\n\nConvert to minimal realization state-space when sys is a transfer function.\n\nsys is equal to fracmathbfy(s)mathbfz(s) for continuous-time, and fracmathbfy(z)mathbfz(z), for discrete-time.\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]) tf(-2, [5, 1])], 0.5, i_d=[2])\nDiscrete-time linear model with a sample time Ts = 0.5 s and:\n 1 manipulated inputs u\n 2 states x\n 1 outputs y\n 1 measured disturbances d\n\n\n\n\n\nLinModel(sys::DelayLtiSystem, Ts; i_u=1:size(sys,2), i_d=Int[])\n\nDiscretize with zero-order hold when sys is a continuous system with delays.\n\nThe delays must be multiples of the sample time Ts.\n\nExamples\n\njulia> model = LinModel(tf(4, [10, 1])*delay(2), 0.5)\nDiscrete-time linear model with a sample time Ts = 0.5 s and:\n 1 manipulated inputs u\n 5 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#NonLinModel","page":"Plant Models","title":"NonLinModel","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"NonLinModel","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.NonLinModel","page":"Plant Models","title":"ModelPredictiveControl.NonLinModel","text":"NonLinModel(f::Function, h::Function, Ts, nu, nx, ny, nd=0)\n\nConstruct a nonlinear model from discrete-time state-space functions f and h.\n\nThe state update mathbff and output mathbfh functions are defined as :\n\n beginaligned\n mathbfx(k+1) = mathbffBig( mathbfx(k) mathbfu(k) mathbfd(k) Big) \n mathbfy(k) = mathbfhBig( mathbfx(k) mathbfd(k) Big)\n endaligned\n\nTs is the sampling time in second. nu, nx, ny and nd are the respective number of manipulated inputs, states, outputs and measured disturbances. \n\ntip: Tip\nReplace the d argument with _ if nd = 0 (see Examples below).\n\nNonlinear continuous-time state-space functions are not supported for now. In such a case, manually call a differential equation solver in f (see Manual).\n\nwarning: Warning\nf and h must be pure Julia functions to use the model in NonLinMPC, ExtendedKalmanFilter and MovingHorizonEstimator.\n\nSee also LinModel.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.1x+u, (x,_)->2x, 10.0, 1, 1, 1)\nDiscrete-time nonlinear model with a sample time Ts = 10.0 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/sim_model/#Set-Operating-Points","page":"Plant Models","title":"Set Operating Points","text":"","category":"section"},{"location":"public/sim_model/","page":"Plant Models","title":"Plant Models","text":"setop!","category":"page"},{"location":"public/sim_model/#ModelPredictiveControl.setop!","page":"Plant Models","title":"ModelPredictiveControl.setop!","text":"setop!(model::SimModel; uop=nothing, yop=nothing, dop=nothing)\n\nSet model inputs uop, outputs yop and measured disturbances dop operating points.\n\nThe state-space model with operating points (a.k.a. 
nominal values) is:\n\nbeginaligned\n mathbfx(k+1) = mathbfA x(k) + mathbfB_u u_0(k) + mathbfB_d d_0(k) \n mathbfy_0(k) = mathbfC x(k) + mathbfD_d d_0(k)\nendaligned\n\nin which the uop, yop and dop vectors evaluate:\n\nbeginaligned\n mathbfu_0(k) = mathbfu(k) - mathbfu_op \n mathbfy_0(k) = mathbfy(k) - mathbfy_op \n mathbfd_0(k) = mathbfd(k) - mathbfd_op \nendaligned\n\nThe structure is similar if model is a NonLinModel:\n\nbeginaligned\n mathbfx(k+1) = mathbffBig(mathbfx(k) mathbfu_0(k) mathbfd_0(k)Big)\n mathbfy_0(k) = mathbfhBig(mathbfx(k) mathbfd_0(k)Big)\nendaligned\n\nExamples\n\njulia> model = setop!(LinModel(tf(3, [10, 1]), 2.0), uop=[50], yop=[20])\nDiscrete-time linear model with a sample time Ts = 2.0 s and:\n 1 manipulated inputs u\n 1 states x\n 1 outputs y\n 0 measured disturbances d\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Functions:-Predictive-Controllers","page":"Predictive Controllers","title":"Functions: Predictive Controllers","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"Pages = [\"predictive_control.md\"]","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"All the predictive controllers in this module rely on a state estimator to compute the predictions. The default LinMPC estimator is a SteadyKalmanFilter, and NonLinMPC with nonlinear models, an UnscentedKalmanFilter. For simpler and more classical designs, an InternalModel structure is also available, that assumes by default that the current model mismatch estimation is constant in the future (same approach than dynamic matrix control, DMC).","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"info: Info\nThe nomenclature use capital letters for time series (and matrices) and hats for the predictions (and estimations, for state estimators).","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"To be precise, at the kth control period, the vectors that encompass the future measured disturbances mathbfd, model outputs mathbfy and setpoints mathbfr_y over the prediction horizon H_p are defined as:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfD = beginbmatrix\n mathbfd(k+1) mathbfd(k+2) vdots mathbfd(k+H_p)\n endbmatrix quad\n mathbfY = beginbmatrix\n mathbfy(k+1) mathbfy(k+2) vdots mathbfy(k+H_p)\n endbmatrix quad textand quad\n mathbfR_y = beginbmatrix\n mathbfr_y(k+1) mathbfr_y(k+2) vdots mathbfr_y(k+H_p)\n endbmatrix","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"The vectors for the manipulated input mathbfu are shifted by one time step:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfU = beginbmatrix\n mathbfu(k+0) mathbfu(k+1) vdots mathbfu(k+H_p-1)\n endbmatrix quad textand quad\n mathbfR_u = beginbmatrix\n mathbfr_u mathbfr_u vdots mathbfr_u\n endbmatrix","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"assuming constant input setpoints at mathbfr_u. 
Defining the manipulated input increment as mathbfΔu(k+j) = mathbfu(k+j) - mathbfu(k+j-1), the control horizon H_c enforces that mathbfΔu(k+j) = mathbf0 for j H_c. For this reason, the vector that collects them is truncated up to k+H_c-1:","category":"page"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":" mathbfΔU =\n beginbmatrix\n mathbfΔu(k+0) mathbfΔu(k+1) vdots mathbfΔu(k+H_c-1)\n endbmatrix","category":"page"},{"location":"public/predictive_control/#PredictiveController","page":"Predictive Controllers","title":"PredictiveController","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"PredictiveController","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.PredictiveController","page":"Predictive Controllers","title":"ModelPredictiveControl.PredictiveController","text":"Abstract supertype of all predictive controllers.\n\n\n\n(mpc::PredictiveController)(ry, d=Float64[]; kwargs...)\n\nFunctor allowing callable PredictiveController object as an alias for moveinput!.\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);\n\njulia> u = mpc([5]); round.(u, digits=3)\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#LinMPC","page":"Predictive Controllers","title":"LinMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"LinMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.LinMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.LinMPC","text":"LinMPC(model::LinModel; )\n\nConstruct a linear predictive controller based on LinModel model.\n\nThe controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU ϵ mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n + C ϵ^2\n\nin which the weight matrices are repeated H_p or H_c times:\n\nbeginaligned\n mathbfM_H_p = textdiagmathbf(MMM) \n mathbfN_H_c = textdiagmathbf(NNN) \n mathbfL_H_p = textdiagmathbf(LLL) \nendaligned\n\nand with the following nomenclature:\n\nVARIABLE DESCRIPTION SIZE\nH_p prediction horizon (integer) ()\nH_c control horizon (integer) ()\nmathbfΔU manipulated input increments over H_c (nu*Hc,)\nmathbfY predicted outputs over H_p (ny*Hp,)\nmathbfU manipulated inputs over H_p (nu*Hp,)\nmathbfR_y predicted output setpoints over H_p (ny*Hp,)\nmathbfR_u predicted manipulated input setpoints over H_p (nu*Hp,)\nmathbfM output setpoint tracking weights (ny*Hp, ny*Hp)\nmathbfN manipulated input increment weights (nu*Hc, nu*Hc)\nmathbfL manipulated input setpoint tracking weights (nu*Hp, nu*Hp)\nC slack variable weight ()\nϵ slack variable for constraint softening ()\n\nThe mathbfΔU vector includes the manipulated input increments mathbfΔu(k+j) = mathbfu(k+j) - mathbfu(k+j-1) from j=0 to H_c-1, the mathbfY vector, the output predictions mathbfy(k+j) from j=1 to H_p, and the mathbfU vector, the manipulated inputs mathbfu(k+j) from j=0 to H_p-1. 
The manipulated input setpoint predictions mathbfR_u are constant at mathbfr_u.\n\nThis method uses the default state estimator, a SteadyKalmanFilter with default arguments.\n\nArguments\n\nmodel::LinModel : model used for controller predictions and state estimations.\nHp=10+nk: prediction horizon H_p, nk is the number of delays in model.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nCwt=1e5 : slack variable weight C (scalar), use Cwt=Inf for hard constraints only.\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\noptim=JuMP.Model(OSQP.MathOptInterfaceOSQP.Optimizer) : quadratic optimizer used in the predictive controller, provided as a JuMP.Model (default to OSQP.jl optimizer).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);\n\njulia> mpc = LinMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nManipulated inputs setpoints mathbfr_u are not common but they can be interesting for over-actuated systems, when nu > ny (e.g. prioritize solutions with lower economical costs). The default Lwt value implies that this feature is disabled by default.\n\n\n\n\n\nLinMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct LinMPC.\n\nestim.model must be a LinModel. Else, a NonLinMPC is required. \n\nExamples\n\njulia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);\n\njulia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, KalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#ExplicitMPC","page":"Predictive Controllers","title":"ExplicitMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"ExplicitMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.ExplicitMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.ExplicitMPC","text":"ExplicitMPC(model::LinModel; )\n\nConstruct an explicit linear predictive controller based on LinModel model.\n\nThe controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n\nSee LinMPC for the variable definitions. This controller does not support constraints but the computational costs are extremely low (array division), therefore suitable for applications that require small sample times. 
This method uses the default state estimator, a SteadyKalmanFilter with default arguments.\n\nArguments\n\nmodel::LinModel : model used for controller predictions and state estimations.\nHp=10+nk: prediction horizon H_p, nk is the number of delays in model.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);\n\njulia> mpc = ExplicitMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nExplicitMPC controller with a sample time Ts = 4.0 s, SteadyKalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 4 states x̂\n 2 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\nExplicitMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct ExplicitMPC.\n\nestim.model must be a LinModel. Else, a NonLinMPC is required. \n\nExamples\n\njulia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);\n\njulia> mpc = ExplicitMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)\nExplicitMPC controller with a sample time Ts = 4.0 s, KalmanFilter estimator and:\n 30 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 3 states x̂\n 1 measured outputs ym\n 1 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#NonLinMPC","page":"Predictive Controllers","title":"NonLinMPC","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"NonLinMPC","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.NonLinMPC","page":"Predictive Controllers","title":"ModelPredictiveControl.NonLinMPC","text":"NonLinMPC(model::SimModel; )\n\nConstruct a nonlinear predictive controller based on SimModel model.\n\nBoth NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time k:\n\nmin_mathbfΔU ϵ mathbf(R_y - Y) mathbfM_H_p mathbf(R_y - Y) \n + mathbf(ΔU) mathbfN_H_c mathbf(ΔU) \n + mathbf(R_u - U) mathbfL_H_p mathbf(R_u - U) \n + C ϵ^2 + E J_E(mathbfU_E mathbfY_E mathbfD_E)\n\nSee LinMPC for the variable definitions. The custom economic function J_E can penalizes solutions with high economic costs. Setting all the weights to 0 except E creates a pure economic model predictive controller (EMPC). The arguments of J_E are the manipulated inputs, the predicted outputs and measured disturbances from k to k+H_p inclusively:\n\n mathbfU_E = beginbmatrix mathbfU mathbfu(k+H_p-1) endbmatrix text qquad\n mathbfY_E = beginbmatrix mathbfy(k) mathbfY endbmatrix text qquad\n mathbfD_E = beginbmatrix mathbfd(k) mathbfD endbmatrix\n\nsince H_c H_p implies that mathbfu(k+H_p) = mathbfu(k+H_p-1). The vector mathbfD includes the predicted measured disturbance over H_p.\n\ntip: Tip\nReplace any of the 3 arguments with _ if not needed (see JE default value below).\n\nThis method uses the default state estimator :\n\nif model is a LinModel, a SteadyKalmanFilter with default arguments;\nelse, an UnscentedKalmanFilter with default arguments. 
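As a hedged sketch of this economic term (the quadratic input-energy cost and the Ewt value below are made up for illustration, not a recommended tuning), a custom JE only needs to accept the three extended vectors and return a scalar:

julia> using ModelPredictiveControl

julia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);

julia> JE = (UE, ŶE, _) -> sum(UE.^2);   # hypothetical energy-like cost on the manipulated inputs

julia> empc = NonLinMPC(model, Hp=20, Hc=1, Ewt=0.01, JE=JE);

Setting Mwt, Nwt and Lwt to zero with a nonzero Ewt would turn this into the pure economic MPC mentioned above.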
\n\nwarning: Warning\nSee Extended Help if you get an error like: MethodError: no method matching Float64(::ForwardDiff.Dual).\n\nArguments\n\nmodel::SimModel : model used for controller predictions and state estimations.\nHp=10: prediction horizon H_p.\nHc=2 : control horizon H_c.\nMwt=fill(1.0,model.ny) : main diagonal of mathbfM weight matrix (vector).\nNwt=fill(0.1,model.nu) : main diagonal of mathbfN weight matrix (vector).\nLwt=fill(0.0,model.nu) : main diagonal of mathbfL weight matrix (vector).\nCwt=1e5 : slack variable weight C (scalar), use Cwt=Inf for hard constraints only.\nEwt=0.0 : economic costs weight E (scalar). \nJE=(_,_,_)->0.0 : economic function J_E(mathbfU_E mathbfY_E mathbfD_E).\nru=model.uop : manipulated input setpoints mathbfr_u (vector).\noptim=JuMP.Model(Ipopt.Optimizer) : nonlinear optimizer used in the predictive controller, provided as a JuMP.Model (default to Ipopt.jl optimizer).\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> mpc = NonLinMPC(model, Hp=20, Hc=1, Cwt=1e6)\nNonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:\n 20 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\nExtended Help\n\nNonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization and is not available in any other package, to my knowledge.\n\nThe optimizations rely on JuMP.jl automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel f and h functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.\n\n\n\n\n\nNonLinMPC(estim::StateEstimator; )\n\nUse custom state estimator estim to construct NonLinMPC.\n\nExamples\n\njulia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1);\n\njulia> estim = UnscentedKalmanFilter(model, σQ_int=[0.05]);\n\njulia> mpc = NonLinMPC(estim, Hp=20, Hc=1, Cwt=1e6)\nNonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:\n 20 prediction steps Hp\n 1 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"type"},{"location":"public/predictive_control/#Set-Constraint","page":"Predictive Controllers","title":"Set Constraint","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"setconstraint!","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.setconstraint!","page":"Predictive Controllers","title":"ModelPredictiveControl.setconstraint!","text":"setconstraint!(mpc::PredictiveController; )\n\nSet the constraint parameters of mpc predictive controller.\n\nThe predictive controllers support both soft and hard constraints, defined by:\n\nbeginalignat*3\n mathbfu_min - c_u_min ϵ mathbfu(k+j) mathbfu_max + c_u_max ϵ qquad j = 0 1 H_c - 1 \n mathbfΔu_min - c_Δu_min ϵ mathbfΔu(k+j) mathbfΔu_max + c_Δu_max ϵ qquad j = 0 1 H_c - 1 \n mathbfy_min - c_y_min ϵ mathbfy(k+j) mathbfy_max + c_y_max ϵ qquad j = 1 2 H_p \nendalignat*\n\nand also ϵ 0. All the constraint parameters are vector. 
Use ±Inf values when there is no bound. The constraint softness parameters mathbfc, also called equal concern for relaxation, are non-negative values that specify the softness of the associated bound. Use 0.0 values for hard constraints. The predicted output constraints mathbfy_min and mathbfy_max are soft by default.\n\nArguments\n\ninfo: Info\nThe default constraints are mentioned here for clarity but omitting a keyword argument will not re-assign to its default value (defaults are set at construction only).\n\numin=fill(-Inf,nu) : manipulated input lower bounds mathbfu_min \numax=fill(+Inf,nu) : manipulated input upper bounds mathbfu_max \nΔumin=fill(-Inf,nu) : manipulated input increment lower bounds mathbfΔu_min \nΔumax=fill(+Inf,nu) : manipulated input increment upper bounds mathbfΔu_max \nŷmin=fill(-Inf,ny) : predicted output lower bounds mathbfy_min \nŷmax=fill(+Inf,ny) : predicted output upper bounds mathbfy_max \nc_umin=fill(0.0,nu) : umin softness weights mathbfc_u_min \nc_umax=fill(0.0,nu) : umax softness weights mathbfc_u_max \nc_Δumin=fill(0.0,nu) : Δumin softness weights mathbfc_Δu_min \nc_Δumax=fill(0.0,nu) : Δumax softness weights mathbfc_Δu_max \nc_ŷmin=fill(1.0,ny) : ŷmin softness weights mathbfc_y_min \nc_ŷmax=fill(1.0,ny) : ŷmax softness weights mathbfc_y_max\n\nExamples\n\njulia> mpc = LinMPC(setop!(LinModel(tf(3, [30, 1]), 4), uop=[50], yop=[25]));\n\njulia> mpc = setconstraint!(mpc, umin=[0], umax=[100], c_umin=[0.0], c_umax=[0.0]);\n\njulia> mpc = setconstraint!(mpc, Δumin=[-10], Δumax=[+10], c_Δumin=[1.0], c_Δumax=[1.0])\nLinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFilter estimator and:\n 10 prediction steps Hp\n 2 control steps Hc\n 1 manipulated inputs u\n 2 states x̂\n 1 measured outputs ym\n 0 unmeasured outputs yu\n 0 measured disturbances d\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Move-Manipulated-Input-u","page":"Predictive Controllers","title":"Move Manipulated Input u","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"moveinput!","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.moveinput!","page":"Predictive Controllers","title":"ModelPredictiveControl.moveinput!","text":"moveinput!(\n mpc::PredictiveController, \n ry = mpc.estim.model.yop, \n d = Float64[];\n R̂y = repeat(ry, mpc.Hp), \n D̂ = repeat(d, mpc.Hp), \n ym = nothing\n)\n\nCompute the optimal manipulated input value u for the current control period.\n\nSolve the optimization problem of mpc PredictiveController and return the results mathbfu(k). Following the receding horizon principle, the algorithm discards the optimal future manipulated inputs mathbfu(k+1) mathbfu(k+2) The arguments ry and d are current output setpoints mathbfr_y(k) and measured disturbances mathbfd(k).\n\nThe keyword arguments R̂y and D̂ are the predicted output setpoints mathbfR_y and measured disturbances mathbfD. They are assumed constant in the future by default, that is mathbfr_y(k+j) = mathbfr_y(k) and mathbfd(k+j) = mathbfd(k) for j=1 to H_p. 
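A minimal sketch of a non-constant setpoint trajectory passed through the R̂y keyword (the ramp values and the short horizon are arbitrary):

julia> using ModelPredictiveControl, ControlSystemsBase

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=5, Hc=1);

julia> R̂y = [2.0, 4.0, 6.0, 8.0, 10.0];   # hypothetical ramp, ny*Hp = 5 elements

julia> u = moveinput!(mpc, [2.0]; R̂y=R̂y);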
Current measured output ym is only required if mpc.estim is a InternalModel.\n\nCalling a PredictiveController object calls this method.\n\nSee also LinMPC, NonLinMPC.\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);\n\njulia> ry = [5]; u = moveinput!(mpc, ry); round.(u, digits=3)\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\n","category":"function"},{"location":"public/predictive_control/#Get-Additional-Information","page":"Predictive Controllers","title":"Get Additional Information","text":"","category":"section"},{"location":"public/predictive_control/","page":"Predictive Controllers","title":"Predictive Controllers","text":"getinfo","category":"page"},{"location":"public/predictive_control/#ModelPredictiveControl.getinfo","page":"Predictive Controllers","title":"ModelPredictiveControl.getinfo","text":"getinfo(mpc::PredictiveController) -> sol_summary, info\n\nGet additional information about mpc controller optimum to ease troubleshooting.\n\nReturn the optimizer solution summary that can be printed, sol_summary, and the dictionary info with the following fields:\n\n:ΔU : optimal manipulated input increments over Hc (mathbfΔU)\n:ϵ : optimal slack variable (ϵ)\n:J : objective value optimum (J)\n:U : optimal manipulated inputs over Hp (mathbfU)\n:u : current optimal manipulated input (mathbfu)\n:d : current measured disturbance (mathbfd)\n:D̂ : predicted measured disturbances over Hp (mathbfD)\n:ŷ : current estimated output (mathbfy)\n:Ŷ : optimal predicted outputs over Hp (mathbfY = Y_d + Y_s)\n:Ŷd : optimal predicted deterministic output over Hp (mathbfY_d)\n:Ŷs : predicted stochastic output over Hp (mathbfY_s)\n:R̂y : predicted output setpoint over Hp (mathbfR_y)\n:R̂u : predicted manipulated input setpoint over Hp (mathbfR_u)\n\nExamples\n\njulia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1);\n\njulia> u = moveinput!(mpc, [10]);\n\njulia> sol_summary, info = getinfo(mpc); round.(info[:Ŷ], digits=3)\n1-element Vector{Float64}:\n 10.0\n\n\n\n\n\ngetinfo(mpc::NonLinMPC) -> sol_summary, info\n\nInvoke getinfo(::PredictiveController) and add :JE the economic optimum J_E.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Functions:-Generic-Functions","page":"Generic Functions","title":"Functions: Generic Functions","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"Pages = [\"generic_func.md\"]","category":"page"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"This page contains the documentation of functions that are generic to SimModel, StateEstimator and PredictiveController types.","category":"page"},{"location":"public/generic_func/#Evaluate-Output-y","page":"Generic Functions","title":"Evaluate Output y","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"evaloutput","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.evaloutput","page":"Generic Functions","title":"ModelPredictiveControl.evaloutput","text":"evaloutput(model::SimModel, d=Float64[])\n\nEvaluate SimModel outputs y from model.x states and measured disturbances d.\n\nCalling a SimModel object calls this evaloutput method.\n\nExamples\n\njulia> model = setop!(LinModel(tf(2, [10, 1]), 5.0), yop=[20]);\n\njulia> y = evaloutput(model)\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\nevaloutput(estim::StateEstimator, 
d=Float64[])\n\nEvaluate StateEstimator outputs ŷ from estim.x̂ states and disturbances d.\n\nCalling a StateEstimator object calls this evaloutput method.\n\nExamples\n\njulia> kf = SteadyKalmanFilter(setop!(LinModel(tf(2, [10, 1]), 5), yop=[20]));\n\njulia> ŷ = evaloutput(kf)\n1-element Vector{Float64}:\n 20.0\n\n\n\n\n\nevaloutput(estim::InternalModel, ym, d=Float64[])\n\nEvaluate InternalModel outputs ŷ from estim.x̂d states and measured outputs ym.\n\nInternalModel estimator needs current measured outputs mathbfy^m(k) to estimate its outputs mathbfy(k), since the strategy imposes that mathbfy^m(k) = mathbfy^m(k) is always true.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Update-State-x","page":"Generic Functions","title":"Update State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"updatestate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.updatestate!","page":"Generic Functions","title":"ModelPredictiveControl.updatestate!","text":"updatestate!(model::SimModel, u, d=Float64[])\n\nUpdate model.x states with current inputs u and measured disturbances d.\n\nExamples\n\njulia> model = LinModel(ss(1, 1, 1, 0, 1.0));\n\njulia> x = updatestate!(model, [1])\n1-element Vector{Float64}:\n 1.0\n\n\n\n\n\nupdatestate!(estim::StateEstimator, u, ym, d=Float64[])\n\nUpdate estim.x̂ estimate with current inputs u, measured outputs ym and dist. d. \n\nThe method removes the operating points with remove_op! and call update_estimate!.\n\nExamples\n\njulia> kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)));\n\njulia> x̂ = updatestate!(kf, [1], [0]) # x̂[2] is the integrator state (nint_ym argument)\n2-element Vector{Float64}:\n 0.5\n 0.0\n\n\n\n\n\nupdatestate!(mpc::PredictiveController, u, ym, d=Float64[])\n\nCall updatestate! on mpc.estim StateEstimator.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Init-State-x","page":"Generic Functions","title":"Init State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"initstate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.initstate!","page":"Generic Functions","title":"ModelPredictiveControl.initstate!","text":"initstate!(estim::StateEstimator, u, ym, d=Float64[])\n\nInit estim.x̂ states from current inputs u, measured outputs ym and disturbances d.\n\nThe method tries to find a good steady-state to initialize estim.x̂ estimate :\n\nIf estim.model is a LinModel, it evaluates estim.model steady-state (using steadystate) with current inputs u and measured disturbances d, and saves the result to estim.x̂[1:nx].\nElse, the current deterministic states estim.x̂[1:nx] are left unchanged (use setstate! to manually modify them). \n\nIt then estimates the measured outputs ŷm from these states, and the residual offset with current measured outputs (ym - ŷm) initializes the integrators of the stochastic model. This approach ensures that mathbfy^m(0) = mathbfy^m(0). 
For LinModel, it also ensures that the estimator starts at steady-state, resulting in a bumpless manual to automatic transfer for control applications.\n\nExamples\n\njulia> estim = SteadyKalmanFilter(LinModel(tf(3, [10, 1]), 0.5), nint_ym=[2]);\n\njulia> x̂ = initstate!(estim, [1], [3 - 0.1])\n3-element Vector{Float64}:\n 5.0000000000000115\n 0.0\n -0.10000000000000675\n\n\n\n\n\ninitstate!(estim::InternalModel, u, ym, d=Float64[])\n\nInit estim.x̂d / x̂s states from current inputs u, meas. outputs ym and disturb. d.\n\nThe deterministic state estim.x̂d initialization method is identical to initstate!(::StateEstimator). The stochastic states estim.x̂s are init at 0. \n\n\n\n\n\ninitstate!(mpc::PredictiveController, u, ym, d=Float64[])\n\nInit mpc.ΔŨ for warm-starting and the states of mpc.estim StateEstimator.\n\nBefore calling initstate!(::StateEstimator,_,_), it warm-starts mathbfΔU:\n\nIf model is a LinModel, the vector is filled with the analytical minimum J of the unconstrained problem.\nElse, the vector is filled with zeros.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Set-State-x","page":"Generic Functions","title":"Set State x","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"setstate!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.setstate!","page":"Generic Functions","title":"ModelPredictiveControl.setstate!","text":"setstate!(model::SimModel, x)\n\nSet model.x states to values specified by x. \n\n\n\n\n\nsetstate!(estim::StateEstimator, x̂)\n\nSet estim.x̂ states to values specified by x̂. \n\n\n\n\n\nsetstate!(mpc::PredictiveController, x̂)\n\nSet the estimate at mpc.estim.x̂.\n\n\n\n\n\n","category":"function"},{"location":"public/generic_func/#Quick-Simulation","page":"Generic Functions","title":"Quick Simulation","text":"","category":"section"},{"location":"public/generic_func/","page":"Generic Functions","title":"Generic Functions","text":"sim!","category":"page"},{"location":"public/generic_func/#ModelPredictiveControl.sim!","page":"Generic Functions","title":"ModelPredictiveControl.sim!","text":"sim!(plant::SimModel, N::Int, u=plant.uop.+1, d=plant.dop; x0=zeros(plant.nx))\n\nOpen-loop simulation of plant for N time steps, default to unit bump test on all inputs.\n\nThe manipulated inputs mathbfu and measured disturbances mathbfd are held constant at u and d values, respectively. The plant initial state mathbfx(0) is specified by x0 keyword arguments. The function returns SimResult instances that can be visualized by calling plot from Plots.jl on them (see Examples below). Note that the method mutates plant internal states.\n\nExamples\n\njulia> plant = NonLinModel((x,u,d)->0.1x+u+d, (x,_)->2x, 10.0, 1, 1, 1, 1);\n\njulia> res = sim!(plant, 15, [0], [0], x0=[1]);\n\njulia> using Plots; plot(res, plotu=false, plotd=false, plotx=true)\n\n\n\n\n\n\nsim!(\n estim::StateEstimator,\n N::Int,\n u = estim.model.uop .+ 1,\n d = estim.model.dop;\n \n)\n\nClosed-loop simulation of estim estimator for N time steps, default to input bumps.\n\nSee Arguments for the available options. The noises are provided as standard deviations σ vectors. 
The simulated sensor and process noises of plant are specified by y_noise and x_noise arguments, respectively.\n\nArguments\n\nestim::StateEstimator : state estimator to simulate.\nN::Int : simulation length in time steps.\nu = estim.model.uop .+ 1 : manipulated input mathbfu value.\nd = estim.model.dop : plant measured disturbance mathbfd value.\nplant::SimModel = estim.model : simulated plant model.\nu_step = zeros(plant.nu) : step load disturbance on plant inputs mathbfu.\nu_noise = zeros(plant.nu) : gaussian load disturbance on plant inputs mathbfu.\ny_step = zeros(plant.ny) : step disturbance on plant outputs mathbfy.\ny_noise = zeros(plant.ny) : additive gaussian noise on plant outputs mathbfy.\nd_step = zeros(plant.nd) : step on measured disturbances mathbfd.\nd_noise = zeros(plant.nd) : additive gaussian noise on measured dist. mathbfd.\nx_noise = zeros(plant.nx) : additive gaussian noise on plant states mathbfx.\nx0 = zeros(plant.nx) : plant initial state mathbfx(0).\nx̂0 = nothing : initial estimate mathbfx(0), initstate! is used if nothing.\nlastu = plant.uop : last plant input mathbfu for mathbfx initialization.\n\nExamples\n\njulia> model = LinModel(tf(3, [30, 1]), 0.5);\n\njulia> estim = KalmanFilter(model, σR=[0.5], σQ=[0.25], σQ_int=[0.01], σP0_int=[0.1]);\n\njulia> res = sim!(estim, 50, [0], y_noise=[0.5], x_noise=[0.25], x0=[-10], x̂0=[0, 0]);\n\njulia> using Plots; plot(res, plotŷ=true, plotu=false, plotxwithx̂=true)\n\n\n\n\n\n\nsim!(\n mpc::PredictiveController, \n N::Int,\n ry = mpc.estim.model.yop .+ 1, \n d = mpc.estim.model.dop; \n \n)\n\nClosed-loop simulation of mpc controller for N time steps, default to setpoint bumps.\n\nThe output setpoint mathbfr_y is held constant at ry. The keyword arguments are identical to sim!(::StateEstimator, ::Int).\n\nExamples\n\njulia> model = LinModel([tf(3, [30, 1]); tf(2, [5, 1])], 4);\n\njulia> mpc = setconstraint!(LinMPC(model, Mwt=[0, 1], Nwt=[0.01], Hp=30), ŷmin=[0, -Inf]);\n\njulia> res = sim!(mpc, 25, [0, 0], y_noise=[0.1], y_step=[-10, 0]);\n\njulia> using Plots; plot(res, plotry=true, plotŷ=true, plotŷmin=true, plotu=true)\n\n\n\n\n\n\n","category":"function"},{"location":"#ModelPredictiveControl.jl","page":"Home","title":"ModelPredictiveControl.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"A model predictive control package for Julia.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for the solving.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The objective is to provide a simple and clear framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface allows to easily test different solvers if the performance of the default settings is not satisfactory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The documentation is divided in two parts:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.\nFunctions — Documentation of methods and types exported by the package. 
The \"Internals\" section provides implementation details of functions that are not exported.","category":"page"},{"location":"#Manual","page":"Home","title":"Manual","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 2\nPages = [\n \"manual/installation.md\",\n \"manual/linmpc.md\",\n \"manual/nonlinmpc.md\",\n]","category":"page"},{"location":"#Functions:-Public","page":"Home","title":"Functions: Public","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 2\nPages = [\n \"public/sim_model.md\",\n \"public/state_estim.md\",\n \"public/predictive_control.md\",\n \"public/generic_func.md\",\n]","category":"page"},{"location":"#Functions:-Internals","page":"Home","title":"Functions: Internals","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Depth = 1\nPages = [\n \"internals/sim_model.md\",\n \"internals/state_estim.md\",\n \"internals/predictive_control.md\",\n]","category":"page"},{"location":"internals/sim_model/#Functions:-SimModel-Internals","page":"Plant Models","title":"Functions: SimModel Internals","text":"","category":"section"},{"location":"internals/sim_model/","page":"Plant Models","title":"Plant Models","text":"ModelPredictiveControl.steadystate\nModelPredictiveControl.f\nModelPredictiveControl.h","category":"page"},{"location":"internals/sim_model/#ModelPredictiveControl.steadystate","page":"Plant Models","title":"ModelPredictiveControl.steadystate","text":"steadystate(model::LinModel, u, d=Float64[])\n\nEvaluate the steady-state vector when model is a LinModel.\n\nFollowing setop! notation, the method evaluates the equilibrium mathbfx() from:\n\n mathbfx() = mathbf(I - A)^-1(B_u u_0 + B_d d_0)\n\nwith the manipulated inputs held constant at mathbfu_0 and, the measured disturbances, at mathbfd_0. The Moore-Penrose pseudo-inverse computes mathbf(I - A)^-1 to support integrating model (integrator states will be 0).\n\n\n\n\n\n","category":"function"},{"location":"internals/sim_model/#ModelPredictiveControl.f","page":"Plant Models","title":"ModelPredictiveControl.f","text":"f(model::LinModel, x, u, d)\n\nEvaluate mathbfA x + B_u u + B_d d when model is a LinModel.\n\n\n\n\n\nCall mathbff(x u d) with model.f function for NonLinModel.\n\n\n\n\n\n","category":"function"},{"location":"internals/sim_model/#ModelPredictiveControl.h","page":"Plant Models","title":"ModelPredictiveControl.h","text":"h(model::LinModel, x, u, d)\n\nEvaluate mathbfC x + D_d d when model is a LinModel.\n\n\n\n\n\nCall mathbfh(x d) with model.h function for NonLinModel.\n\n\n\n\n\n","category":"function"}] }