Upgrade to MadNLP 0.8 (#68)
- Upgrade to MadNLP 0.8
- test BieglerKKTSystem with MadNLPTests
- support CuVector in OPFModel
- change hashing function to sum
- add support for fixed variables in BieglerKKTSystem
frapac authored Mar 10, 2024
1 parent 4bb2155 commit 9cf6fbe
Showing 19 changed files with 349 additions and 281 deletions.
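The main user-facing change is MadNLP 0.8's solver constructor: the KKT system is now passed as a keyword argument rather than as a type parameter, and the `MadNLP.load_options` / `MadNLPSolver{T, KKT}` path used with MadNLP 0.7 goes away. A minimal before/after sketch in Julia, assuming a CPU `model::Argos.OPFModel` built as in the docs below (the variable name is illustrative):

```julia
using Argos, MadNLP

# MadNLP 0.7 style (removed by this commit):
# madnlp_options = Dict{Symbol, Any}(:linear_solver => MadNLP.LapackCPUSolver)
# opt_ipm, opt_linear, logger = MadNLP.load_options(; madnlp_options...)
# KKT = Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}}
# solver = MadNLP.MadNLPSolver{Float64, KKT}(model, opt_ipm, opt_linear; logger=logger)

# MadNLP 0.8 style (introduced by this commit): everything goes through keyword arguments.
solver = MadNLP.MadNLPSolver(
    model;
    kkt_system=Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}},
    linear_solver=MadNLP.LapackCPUSolver,
    callback=MadNLP.SparseCallback,
)
MadNLP.solve!(solver)
```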
13 changes: 7 additions & 6 deletions .ci/Project.toml
@@ -1,9 +1,3 @@
[compat]
CUDA = "4.1, 5"
FiniteDiff = "2.7"
Ipopt = "1"
MadNLP = "0.7"

[deps]
Argos = "ef244971-cf80-42b0-9762-2c2c832df5d5"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
@@ -17,10 +11,17 @@ LazyArtifacts = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
MadNLPGPU = "d72a61cc-809d-412f-99be-fd81f4b8a598"
MadNLPTests = "b52a2a03-04ab-4a5f-9698-6a2deff93217"
MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[compat]
CUDA = "4.1, 5"
FiniteDiff = "2.7"
Ipopt = "1"
MadNLP = "0.8"

[extras]
CUDA_Runtime_jll = "76a88914-d11a-5bdc-97e0-2f5a05c973a2"
7 changes: 3 additions & 4 deletions .ci/setup.jl
@@ -1,10 +1,9 @@

using Pkg
Pkg.instantiate()

using CUDA

argos_path = joinpath(@__DIR__, "..")
Pkg.develop(path=argos_path)

Pkg.instantiate()

using CUDA
CUDA.set_runtime_version!(v"11.8")
2 changes: 1 addition & 1 deletion Project.toml
@@ -23,7 +23,7 @@ ArgosCUDAExt = ["CUDA", "CUSOLVERRF"]
[compat]
ExaPF = "~0.9.3"
KernelAbstractions = "0.9"
MadNLP = "0.7"
MadNLP = "0.8"
MathOptInterface = "1"
NLPModels = "0.19, 0.20"
julia = "1.9"
3 changes: 1 addition & 2 deletions docs/src/lib/kkt.md
@@ -4,10 +4,9 @@ CurrentModule = Argos

# KKT systems

Argos implements two KKT systems [`MadNLP.AbstractKKTSystem`](https://madnlp.github.io/MadNLP.jl/dev/lib/kkt/#MadNLP.AbstractKKTSystem)
Argos implements a MadNLP KKT system, [`MadNLP.AbstractKKTSystem`](https://madnlp.github.io/MadNLP.jl/dev/lib/kkt/#MadNLP.AbstractKKTSystem),
whose operations can be offloaded to NVIDIA GPUs.

```@docs
BieglerKKTSystem
MixedAuglagKKTSystem
```
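Concretely, the KKT system documented here is parameterized by its storage types, which is what lets the same implementation run on the GPU. A side-by-side sketch of the two parameterizations used elsewhere in this commit (CPU in `docs/src/optim/biegler.md`, CUDA in `ext/api.jl`):

```julia
using Argos, CUDA

# CPU storage
KKT_cpu = Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}}

# CUDA storage (requires a functional CUDA.jl installation)
KKT_gpu = Argos.BieglerKKTSystem{Float64, CuVector{Int}, CuVector{Float64}, CuMatrix{Float64}}
```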
16 changes: 10 additions & 6 deletions docs/src/optim/biegler.md
@@ -52,13 +52,17 @@ KKT = Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}
and we instantiate MadNLP with:
```@example bieglermadnlp
using MadNLP
# This syntax is a bit too involved and should be improved in the future.
madnlp_options = Dict{Symbol, Any}()
madnlp_options[:linear_solver] = MadNLP.LapackCPUSolver
opt_ipm, opt_linear, logger = MadNLP.load_options(; madnlp_options...)
KKT = Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}}
solver = MadNLP.MadNLPSolver{Float64, KKT}(model, opt_ipm, opt_linear; logger=logger)
T = Float64
VI = Vector{Int}
VT = Vector{T}
MT = Matrix{T}
solver = MadNLP.MadNLPSolver(
model;
kkt_system=Argos.BieglerKKTSystem{T, VI, VT, MT},
linear_solver=LapackCPUSolver,
callback=MadNLP.SparseCallback,
)
```
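As a follow-up sketch (not part of the documented example itself), the solver built above is run with the standard MadNLP entry point, the same call used in `ext/api.jl` further down:

```julia
results = MadNLP.solve!(solver)
results.status    # termination status, e.g. MadNLP.SOLVE_SUCCEEDED
results.solution  # primal solution vector
```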
Note that we are again using Lapack as linear solver: indeed the resulting Biegler's KKT
system is dense (we use the same condensification procedure as in the
6 changes: 3 additions & 3 deletions docs/src/optim/reducedspace.md
@@ -56,7 +56,7 @@ a dense linear solver (as Lapack):
```@example reducedmadnlp
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DENSE_KKT_SYSTEM,
kkt_system=MadNLP.DenseKKTSystem,
linear_solver=LapackCPUSolver,
)
MadNLP.get_kkt(solver.kkt)
@@ -72,7 +72,7 @@ only proportional to the number of variables (here, `5`):
```@example reducedmadnlp
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DENSE_CONDENSED_KKT_SYSTEM,
kkt_system=MadNLP.DenseCondensedKKTSystem,
linear_solver=LapackCPUSolver,
)
MadNLP.get_kkt(solver.kkt)
@@ -105,7 +105,7 @@ We recommend changing the default tolerance to be above the tolerance
using MadNLPGPU
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DENSE_CONDENSED_KKT_SYSTEM,
kkt_system=MadNLP.DenseCondensedKKTSystem,
linear_solver=LapackGPUSolver,
    tol=1e-5,
)
15 changes: 0 additions & 15 deletions docs/src/quickstart/cpu.md
@@ -36,21 +36,6 @@ datafile = joinpath(INSTANCES_DIR, "case118.m")
```

## Full-space method

!!! tip
At each iteration of the algorithm,
`FullSpace` solves the KKT system with a sparse linear solver.
By default, MadNLP is using Umfpack, but we recommend installing
[MadNLPHSL](https://madnlp.github.io/MadNLP.jl/dev/installation/#HSL-linear-solver)
and uses ma27 (`linear_solver=Ma27Solver`) or ma57 (`linear_solver=Ma57Solver`).


```@repl quickstart_cpu
Argos.run_opf(datafile, Argos.FullSpace());
```

## Biegler's method (linearize-then-reduce)

!!! tip
50 changes: 23 additions & 27 deletions ext/api.jl
@@ -7,44 +7,40 @@ end
function Argos.run_opf_gpu(datafile::String, ::Argos.FullSpace; options...)
flp = Argos.FullSpaceEvaluator(datafile; device=CUDABackend())
model = Argos.OPFModel(Argos.bridge(flp))
ips = MadNLP.MadNLPSolver(
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.SparseKKTSystem,
callback=MadNLP.SparseCallback,
options...
)
MadNLP.solve!(ips)
return ips
MadNLP.solve!(solver)
return solver
end

function Argos.run_opf_gpu(datafile::String, ::Argos.BieglerReduction; options...)
flp = Argos.FullSpaceEvaluator(datafile; device=CUDABackend())
model = Argos.OPFModel(Argos.bridge(flp))

madnlp_options = Dict{Symbol, Any}(options...)
# madnlp_options[:linear_solver] = LapackGPUSolver
opt_ipm, opt_linear, logger = MadNLP.load_options(; madnlp_options...)

KKT = Argos.BieglerKKTSystem{Float64, CuVector{Int}, CuVector{Float64}, CuMatrix{Float64}}
ips = MadNLP.MadNLPSolver{Float64, KKT}(model, opt_ipm, opt_linear; logger=logger)
MadNLP.solve!(ips)
return ips
solver = MadNLP.MadNLPSolver(
model;
kkt_system=KKT,
callback=MadNLP.SparseCallback,
options...
)

MadNLP.solve!(solver)
return solver
end

function Argos.run_opf_gpu(datafile::String, ::Argos.DommelTinney; options...)
flp = Argos.ReducedSpaceEvaluator(datafile; device=CUDABackend(), nbatch_hessian=256)
model = Argos.OPFModel(Argos.bridge(flp))

madnlp_options = Dict{Symbol, Any}(options...)
# madnlp_options[:linear_solver] = LapackGPUSolver
madnlp_options[:kkt_system] = MadNLP.DENSE_CONDENSED_KKT_SYSTEM
# madnlp_options[:inertia_correction_method] = MadNLP.INERTIA_FREE
madnlp_options[:lapack_algorithm] = MadNLP.CHOLESKY

opt_ipm, opt_linear, logger = MadNLP.load_options(; madnlp_options...)

QN = MadNLP.ExactHessian{Float64, CuVector{Float64}}
KKT = MadNLP.DenseCondensedKKTSystem{Float64, CuVector{Float64}, CuMatrix{Float64}, QN}
ips = MadNLP.MadNLPSolver{Float64, KKT}(model, opt_ipm, opt_linear; logger=logger)
MadNLP.solve!(ips)

return ips
model = Argos.OPFModel(flp)
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DenseCondensedKKTSystem,
callback=MadNLP.DenseCallback,
options...
)
MadNLP.solve!(solver)
return solver
end
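For context, the three GPU entry points above share a single calling convention, with keyword options forwarded to `MadNLP.MadNLPSolver`. A hedged usage sketch; the `INSTANCES_DIR`/`case118.m` path is borrowed from `docs/src/quickstart/cpu.md` and is only illustrative, and the CUDA extension is assumed active (per `Project.toml`, `ArgosCUDAExt` requires CUDA and CUSOLVERRF):

```julia
using Argos, CUDA, CUSOLVERRF  # loading CUDA + CUSOLVERRF activates ArgosCUDAExt

datafile = joinpath(INSTANCES_DIR, "case118.m")  # illustrative path, as in the CPU quickstart

# Full-space IPM (sparse KKT system on the GPU)
solver = Argos.run_opf_gpu(datafile, Argos.FullSpace(); tol=1e-5)

# Biegler reduction (BieglerKKTSystem with CUDA arrays)
solver = Argos.run_opf_gpu(datafile, Argos.BieglerReduction(); tol=1e-5)

# Dommel-Tinney reduced space (dense condensed KKT system)
solver = Argos.run_opf_gpu(datafile, Argos.DommelTinney(); tol=1e-5)
```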
25 changes: 17 additions & 8 deletions ext/reduction.jl
@@ -42,18 +42,27 @@ function Argos.update!(K::Argos.HJDJ, A, D)
spgemm!('N', 'N', 1.0, K.Jt, A, 0.0, K.JtJ, 'O')
end

function MadNLP.set_aug_diagonal!(kkt::Argos.BieglerKKTSystem{T, VI, VT, MT}, ips::MadNLP.MadNLPSolver) where {T, VI<:CuVector{Int}, VT<:CuVector{T}, MT<:CuMatrix{T}}
function MadNLP.set_aug_diagonal!(kkt::Argos.BieglerKKTSystem{T, VI, VT, MT}, solver::MadNLP.MadNLPSolver{T, Vector{T}}) where {T, VI<:CuVector{Int}, VT<:CuVector{T}, MT<:CuMatrix{T}}
haskey(kkt.etc, :pr_diag_host) || (kkt.etc[:pr_diag_host] = Vector{T}(undef, length(kkt.pr_diag)))
pr_diag_h = kkt.etc[:pr_diag_host]::Vector{T}
    # Broadcasting does not work here, as MadNLP's arrays are allocated on the CPU
    # whereas pr_diag is allocated on the GPU
x = MadNLP.full(ips.x)
xl = MadNLP.full(ips.xl)
xu = MadNLP.full(ips.xu)
zl = MadNLP.full(ips.zl)
zu = MadNLP.full(ips.zu)
x = MadNLP.full(solver.x)
xl = MadNLP.full(solver.xl)
xu = MadNLP.full(solver.xu)
zl = MadNLP.full(solver.zl)
zu = MadNLP.full(solver.zu)

pr_diag_h .= zl ./ (x .- xl) .+ zu ./ (xu .- x)
copyto!(kkt.pr_diag, pr_diag_h)
fill!(kkt.reg, 0.0)
fill!(kkt.du_diag, 0.0)

kkt.l_diag .= solver.xl_r .- solver.x_lr
kkt.u_diag .= solver.x_ur .- solver.xu_r
copyto!(kkt.l_lower, solver.zl_r)
copyto!(kkt.u_lower, solver.zu_r)
copyto!(pr_diag_h, kkt.reg)
pr_diag_h[kkt.ind_lb] .-= kkt.l_lower ./ kkt.l_diag
pr_diag_h[kkt.ind_ub] .-= kkt.u_lower ./ kkt.u_diag

copyto!(kkt.pr_diag, pr_diag_h)
end
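For reference, the rewritten `set_aug_diagonal!` assembles the same primal diagonal as the deleted one-liner, only split across the fields MadNLP 0.8 stores separately (`reg`, plus `l_diag`/`l_lower` for the lower bounds and `u_diag`/`u_lower` for the upper bounds). In matrix notation (an algebra sketch, not code from the commit):

```math
\Sigma_x = \delta_{\mathrm{reg}} + Z_\ell \, (X - X_\ell)^{-1} + Z_u \, (X_u - X)^{-1},
```

which reduces to the old `zl ./ (x .- xl) .+ zu ./ (xu .- x)` when the regularization term `reg` is zero.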
15 changes: 7 additions & 8 deletions src/Algorithms/auglag.jl
@@ -46,7 +46,7 @@ function solve_subproblem!(
# Optimize with IPM
res = MadNLP.solve!(algo.optimizer)
return (
status=MadNLP._STATUS_CODES[res.status],
status=res.status,
iter=aug.counter.hessian - n_iter,
minimizer=res.solution,
)
@@ -122,7 +122,7 @@ end
end

local solution
status = MOI.ITERATION_LIMIT
status = MadNLP.MAXIMUM_ITERATIONS_EXCEEDED
mul = copy(aug.λ)

tic = time()
@@ -131,12 +131,11 @@
# Solve inner problem
solution = solve_subproblem!(algo, aug, uₖ; niter=i_out)

if (solution.status != MOI.OPTIMAL) &&
(solution.status != MOI.LOCALLY_SOLVED) &&
(solution.status != MOI.SLOW_PROGRESS) &&
(solution.status != MOI.ITERATION_LIMIT)
if (solution.status != MadNLP.SOLVE_SUCCEEDED) &&
(solution.status != MadNLP.SOLVED_TO_ACCEPTABLE_LEVEL) &&
(solution.status != MadNLP.MAXIMUM_ITERATIONS_EXCEEDED)
println("[AugLag] Fail to solve inner subproblem. Status: $(solution.status). Exiting.")
status = MOI.NUMERICAL_ERROR
status = MadNLP.INTERNAL_ERROR
break
end

@@ -170,7 +169,7 @@
push!(tracer, obj, primal_feas, dual_feas)

if (dual_feas < ε_dual) && (primal_feas < ε_primal)
status = MOI.OPTIMAL
    status = MadNLP.SOLVE_SUCCEEDED
break
end
end
2 changes: 0 additions & 2 deletions src/Evaluators/bridge_evaluator.jl
@@ -153,7 +153,6 @@ end

function jacobian!(nlp::BridgeDeviceEvaluator, jac, w)
jacobian!(nlp.inner, jac, nlp.buffers.u)
# copyto!(jac, nlp.buffers.J)
return
end

@@ -179,7 +178,6 @@ end
function hessian_lagrangian!(nlp::BridgeDeviceEvaluator, H, u, y, σ)
_copyto!(nlp.buffers.wc, 1, y, 1, length(nlp.buffers.wc))
hessian_lagrangian!(nlp.inner, H, nlp.buffers.u, nlp.buffers.wc, σ)
# copyto!(H, nlp.buffers.H)
return
end

2 changes: 1 addition & 1 deletion src/KKT/KKTsystems.jl
@@ -1,4 +1,4 @@

include("utils.jl")
include("auglag_kkt.jl")
# include("auglag_kkt.jl")
include("reduced_newton.jl")