Added tutorial, manual page
AstitvaAggarwal committed Jan 14, 2024
1 parent 3d11a70 commit 5864e65
Showing 7 changed files with 206 additions and 79 deletions.
5 changes: 5 additions & 0 deletions docs/Project.toml
@@ -1,13 +1,16 @@
[deps]
AdvancedHMC = "0bf59076-c3b1-5ca4-86bd-e02cd72cde3d"
DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DomainSets = "5b8099bc-c8ec-5219-889f-1d9e522a28bf"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
Integrals = "de52edbc-65ea-441a-8357-d3a637375a31"
IntegralsCubature = "c31f79ba-6e32-46d4-a52f-182a8ac42a54"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
MonteCarloMeasurements = "0987c9cc-fe09-11e8-30f0-b96dd679fdca"
NeuralPDE = "315f7962-48a3-4962-8226-d0f33b1235f0"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
@@ -20,6 +23,7 @@ Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"

[compat]
AdvancedHMC = "0.5"
DiffEqBase = "6.106"
Documenter = "1"
DomainSets = "0.6"
@@ -28,6 +32,7 @@ Integrals = "3.3"
IntegralsCubature = "=0.2.2"
Lux = "0.4, 0.5"
ModelingToolkit = "8.33"
MonteCarloMeasurements = "1"
NeuralPDE = "5.3"
Optimization = "3.9"
OptimizationOptimJL = "0.1"
2 changes: 2 additions & 0 deletions docs/pages.jl
@@ -4,6 +4,7 @@ pages = ["index.md",
#"examples/nnrode_example.md", # currently incorrect
],
"PDE PINN Tutorials" => Any["Introduction to NeuralPDE for PDEs" => "tutorials/pdesystem.md",
"Bayesian PINNs for PDEs" => "tutorials/low_level_2.md",
"Using GPUs" => "tutorials/gpu.md",
"Defining Systems of PDEs" => "tutorials/systems.md",
"Imposing Constraints" => "tutorials/constraints.md",
@@ -21,6 +22,7 @@ pages = ["index.md",
"examples/nonlinear_hyperbolic.md"],
"Manual" => Any["manual/ode.md",
"manual/pinns.md",
"manual/bpinns.md",
"manual/training_strategies.md",
"manual/adaptive_losses.md",
"manual/logging.md",
153 changes: 107 additions & 46 deletions docs/src/tutorials/low_level_2.md
@@ -1,75 +1,136 @@
# Using `ahmc_bayesian_pinn_pde` with the `BayesianPINN` Discretizer for the Kuramoto–Sivashinsky Equation

Consider the Kuramoto–Sivashinsky equation:

```math
∂_t u(x, t) + u(x, t) ∂_x u(x, t) + \alpha ∂^2_x u(x, t) + \beta ∂^3_x u(x, t) + \gamma ∂^4_x u(x, t) = 0 \, ,
```

where $\alpha = \gamma = 1$ and $\beta = 4$. The exact solution is:

```math
u_e(x, t) = 11 + 15 \tanh \theta - 15 \tanh^2 \theta - 15 \tanh^3 \theta \, ,
```

where $\theta = t - x/2$, with the initial and boundary conditions:

```math
\begin{align*}
u( x, 0) &= u_e( x, 0) \, ,\\
u( 10, t) &= u_e( 10, t) \, ,\\
u(-10, t) &= u_e(-10, t) \, ,\\
∂_x u( 10, t) &= ∂_x u_e( 10, t) \, ,\\
∂_x u(-10, t) &= ∂_x u_e(-10, t) \, .
\end{align*}
```
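
As a quick sanity check (not part of the tutorial itself), one can verify numerically that $u_e$ satisfies the PDE. Here is a sketch using nested automatic differentiation; it assumes ForwardDiff.jl is available in the environment:

```julia
using ForwardDiff

# The stated exact solution, with θ = t - x/2.
u_e(x, t) = 11 + 15tanh(t - x / 2) - 15tanh(t - x / 2)^2 - 15tanh(t - x / 2)^3

D(f) = x -> ForwardDiff.derivative(f, x)   # single-variable derivative operator

x0, t0 = 1.3, 0.4
f = x -> u_e(x, t0)                        # u_e as a function of x at fixed t
fx = D(f); fx2 = D(fx); fx3 = D(fx2); fx4 = D(fx3)
ut = ForwardDiff.derivative(t -> u_e(x0, t), t0)

# Residual of ∂_t u + u ∂_x u + α ∂²_x u + β ∂³_x u + γ ∂⁴_x u with α = γ = 1, β = 4;
# it should vanish up to floating-point error.
residual = ut + u_e(x0, t0) * fx(x0) + fx2(x0) + 4 * fx3(x0) + fx4(x0)
```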

We now solve this problem with Bayesian Physics-Informed Neural Networks. Here is an example of using the `BayesianPINN` discretization with `ahmc_bayesian_pinn_pde`:

```@example low_level_2
using NeuralPDE, Flux, Lux, ModelingToolkit, LinearAlgebra, AdvancedHMC
import ModelingToolkit: Interval, infimum, supremum
using Distributions, Plots, MonteCarloMeasurements
@parameters x, t, α
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dx2 = Differential(x)^2
Dx3 = Differential(x)^3
Dx4 = Differential(x)^4
# α is an unknown coefficient, estimated from data below (true value α = 1)
β = 4
γ = 1
eq = Dt(u(x, t)) + u(x, t) * Dx(u(x, t)) + α * Dx2(u(x, t)) + β * Dx3(u(x, t)) + γ * Dx4(u(x, t)) ~ 0
u_analytic(x, t; z = -x / 2 + t) = 11 + 15 * tanh(z) - 15 * tanh(z)^2 - 15 * tanh(z)^3
du(x, t; z = -x / 2 + t) = 15 / 2 * (tanh(z) + 1) * (3 * tanh(z) - 1) * sech(z)^2
# Initial and boundary conditions
bcs = [u(x, 0) ~ u_analytic(x, 0),
u(-10, t) ~ u_analytic(-10, t),
u(10, t) ~ u_analytic(10, t),
Dx(u(-10, t)) ~ du(-10, t),
Dx(u(10, t)) ~ du(10, t)]
# Space and time domains
domains = [x ∈ Interval(-10.0, 10.0),
t ∈ Interval(0.0, 1.0)]
# Discretization
dx = 0.4;
dt = 0.2;
# Function to compute analytical solution at a specific point (x, t)
function u_analytic_point(x, t)
z = -x / 2 + t
return 11 + 15 * tanh(z) - 15 * tanh(z)^2 - 15 * tanh(z)^3
end
# Function to generate the dataset matrix
function generate_dataset_matrix(domains, dx, dt)
x_values = -10:dx:10
t_values = 0.0:dt:1.0
dataset = []
for t in t_values
for x in x_values
u_value = u_analytic_point(x, t)
push!(dataset, [u_value, x, t])
end
end
return vcat([data' for data in dataset]...)
end
datasetpde = [generate_dataset_matrix(domains, dx, dt)]
plot(datasetpde[1][:, 2], datasetpde[1][:, 1], title="Dataset from Analytical Solution")
# Add noise to dataset
datasetpde[1][:, 1] = datasetpde[1][:, 1] .+ randn(size(datasetpde[1][:, 1])) .* 5 / 100 .*
datasetpde[1][:, 1]
plot!(datasetpde[1][:, 2], datasetpde[1][:, 1])
# Neural network
chain = Lux.Chain(Lux.Dense(2, 8, Lux.tanh),
Lux.Dense(8, 8, Lux.tanh),
Lux.Dense(8, 1))
discretization = NeuralPDE.BayesianPINN([chain],
GridTraining([dx, dt]), param_estim = true, dataset = [datasetpde, nothing])
@named pde_system = PDESystem(eq,
bcs,
domains,
[x, t],
[u(x, t)],
[α],
defaults = Dict([α => 0.5]))
sol1 = ahmc_bayesian_pinn_pde(pde_system,
discretization;
draw_samples = 100, Kernel = AdvancedHMC.NUTS(0.8),
bcstd = [0.2, 0.2, 0.2, 0.2, 0.2],
phystd = [1.0], l2std = [0.05], param = [Distributions.LogNormal(0.5, 2)],
priorsNNw = (0.0, 10.0),
saveats = [1 / 100.0, 1 / 100.0], progress = true)
```
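
Since `param_estim = true` was passed with a `LogNormal` prior for `α`, the sampler also returns posterior samples of the PDE parameter. Here is a minimal sketch of inspecting the estimate; it assumes the returned `BPINNsolution` carries an `estimated_de_params` field alongside the `estimated_nn_params` field used below (the true value is $\alpha = 1$):

```julia
# Posterior estimate of the unknown PDE parameter α (true value 1).
# `estimated_de_params` is assumed here by analogy with `estimated_nn_params`.
sol1.estimated_de_params
```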

And some analysis:

```@example low_level_2
phi = discretization.phi[1]
xs, ts = [infimum(d.domain):dx:supremum(d.domain) for (d, dx) in zip(domains, [dx / 10, dt])]
u_predict = [[first(pmean(phi([x, t], sol1.estimated_nn_params[1]))) for x in xs]
for t in ts]
u_real = [[u_analytic(x, t) for x in xs] for t in ts]
diff_u = [[abs(u_analytic(x, t) - first(pmean(phi([x, t], sol1.estimated_nn_params[1]))))
for x in xs]
for t in ts]
p1 = plot(xs, u_predict, title = "predict")
p2 = plot(xs, u_real, title = "analytic")
p3 = plot(xs, diff_u, title = "error")
plot(p1, p2, p3)
```
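
Because `phi` returns MonteCarloMeasurements `Particles` built from the posterior samples, the plots above can be extended with an uncertainty band. A sketch using `pmean` and `pstd` (both exported by MonteCarloMeasurements):

```julia
# Posterior mean ± 2 standard deviations at the final time slice.
t_end = ts[end]
preds = [phi([x, t_end], sol1.estimated_nn_params[1])[1] for x in xs]
plot(xs, pmean.(preds); ribbon = 2 .* pstd.(preds),
    label = "posterior mean ± 2σ", title = "t = $(round(t_end, digits = 2))")
plot!(xs, [u_analytic(x, t_end) for x in xs], label = "analytic")
```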

2 changes: 1 addition & 1 deletion src/NeuralPDE.jl
@@ -67,6 +67,6 @@ export NNODE, TerminalPDEProblem, NNPDEHan, NNPDENS, NNRODE,
AbstractAdaptiveLoss, NonAdaptiveLoss, GradientScaleAdaptiveLoss,
MiniMaxAdaptiveLoss, LogOptions,
ahmc_bayesian_pinn_ode, BNNODE, ahmc_bayesian_pinn_pde, vector_to_parameters,
BPINNsolution, BayesianPINN

end # module
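
With `BayesianPINN` now exported, the discretizer can be constructed without the module prefix. A minimal sketch (the chain and grid sizes here are hypothetical, not from the commit):

```julia
using NeuralPDE, Lux

# Construct the newly exported BayesianPINN discretizer directly.
chain = Lux.Chain(Lux.Dense(2, 8, tanh), Lux.Dense(8, 1))
discretization = BayesianPINN([chain], GridTraining([0.1, 0.1]))
```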
1 change: 1 addition & 0 deletions src/PDE_BPINN.jl
@@ -369,6 +369,7 @@ function ahmc_bayesian_pinn_pde(pde_system, discretization;
# PDE parameter estimation
nparameters = length(initial_θ)
ninv = length(param)
# add init_params for NN params
priors = [
MvNormal(priorsNNw[1] * ones(nparameters),
LinearAlgebra.Diagonal(abs2.(priorsNNw[2] .* ones(nparameters)))),
6 changes: 3 additions & 3 deletions src/pinn_types.jl
@@ -162,7 +162,7 @@ BayesianPINN(chain,
logger = nothing,
log_options = LogOptions(),
iteration = nothing,
dataset = nothing,
kwargs...) where {iip}
```
@@ -178,7 +178,7 @@ BayesianPINN(chain,
* `Dataset`: A vector of matrices, one matrix per dependent
variable; the first column of each matrix holds that dependent
variable and the remaining columns hold the independent variables
(layout sketched below). Needed for inverse problem solving.
* `init_params`: the initial parameters of the neural networks. This should match the
specification of the chosen `chain` library. For example, if a Flux.chain is used, then
`init_params` should match `Flux.destructure(chain)[1]` in shape. If `init_params` is not
Expand All @@ -189,7 +189,7 @@ BayesianPINN(chain,
of the neural network defining `phi`). By default, this is generated from the `chain`. This
should only be used to more directly impose functional information in the training problem,
for example imposing the boundary condition by the test function formulation.
* `adaptive_loss`: (still a work in progress) the choice for the adaptive loss function. See the
[adaptive loss page](@ref adaptive_loss) for more details. Defaults to no adaptivity.
* `additional_loss`: a function `additional_loss(phi, θ, p_)` where `phi` are the neural
network trial solutions, `θ` are the weights of the neural network(s), and `p_` are the
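
For illustration, the `dataset` layout described in the docstring above, for a single dependent variable $u(x, t)$, would look as follows (a sketch with made-up values):

```julia
# Column 1 holds the u values; the remaining columns hold x and t.
dataset_u = [0.12 -10.0 0.0;
             0.08 -9.6 0.0;
             0.05 -9.2 0.0]
dataset = [dataset_u]   # one matrix per dependent variable
```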