From 6eda2977b03978547a162c1649c6305b9f5ae6c5 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 10 Sep 2023 01:52:04 +0000 Subject: [PATCH] build based on a09f2d3 --- dev/compare/index.html | 2 +- dev/differentiation/index.html | 2 +- dev/differentiation_math/index.html | 2 +- dev/index.html | 8 ++++---- dev/interpolation/index.html | 2 +- dev/interpolation_math/index.html | 2 +- dev/search/index.html | 2 +- dev/swiss/index.html | 2 +- 8 files changed, 11 insertions(+), 11 deletions(-) diff --git a/dev/compare/index.html b/dev/compare/index.html index 5c24a0c..4d31865 100644 --- a/dev/compare/index.html +++ b/dev/compare/index.html @@ -567,4 +567,4 @@ 2.187066 seconds (8.27 k allocations: 76.855 MiB, 0.36% compilation time) julia> @time itp(xq, yq; method = Hiyoshi(2)); - 13.762652 seconds (9.26 k allocations: 76.920 MiB, 0.06% compilation time)

Conclusion

Overall, the smooth interpolants give the most accurate results, with Farin(1) and Hiyoshi(2) typically beating the other interpolants. Hiyoshi(2) is much slower than the other interpolants, though, so Farin(1) may be preferable if $C^1$ continuity at the data sites is sufficient. For generating derivatives, the Direct() method seems to beat the Iterative() method in most situations.

diff --git a/dev/differentiation/index.html b/dev/differentiation/index.html index ae92223..126f334 100644 --- a/dev/differentiation/index.html +++ b/dev/differentiation/index.html @@ -248,4 +248,4 @@ 7.479964687679311 julia> εH_nohull = rrmserr(f′′.(_x, _y), to_mat.(Hg), ∂, _x, _y) -38.884740966379056

The errors are smaller, though not by much.

diff --git a/dev/differentiation_math/index.html b/dev/differentiation_math/index.html index 163d035..9f4b07c 100644 --- a/dev/differentiation_math/index.html +++ b/dev/differentiation_math/index.html @@ -21,4 +21,4 @@ \overline{\boldsymbol g}_1 &= \gamma_i^\prime g_{i1}, \\ \overline{\boldsymbol g}_2 &= \gamma_i^\prime g_{i2}, \\ \boldsymbol\theta &= \begin{bmatrix} \frac{\partial f(\boldsymbol x_0)}{\partial x} & \frac{\partial f(\boldsymbol x_0)}{\partial y} & \frac{\partial^2 f(\boldsymbol x_0)}{\partial x^2} & \frac{\partial^2 f(\boldsymbol x_0)}{\partial y^2} & \frac{\partial^2 f(\boldsymbol x_0)}{\partial x\partial y} \end{bmatrix}^T. \end{align*}\]

To solve this linear system, let

\[\boldsymbol D = \begin{bmatrix} \overline{\boldsymbol A} \\ \overline{\boldsymbol B} \\ \overline{\boldsymbol C} \end{bmatrix}, \quad \boldsymbol c = \begin{bmatrix} \overline{\boldsymbol w} \\ \overline{\boldsymbol g}_1 \\ \overline{\boldsymbol g}_2 \end{bmatrix},\]

so that $\boldsymbol D^T\boldsymbol D\boldsymbol\theta = \boldsymbol D^T\boldsymbol c$. These are just the normal equations for $\boldsymbol D\boldsymbol \theta = \boldsymbol c$, so we can estimate the gradients and Hessians by simply solving the least squares problem $\boldsymbol D\boldsymbol \theta = \boldsymbol c$.

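As a concrete illustration, here is a short Julia sketch with a random matrix standing in for the assembled blocks (the real $\boldsymbol D$ and $\boldsymbol c$ are built from the natural neighbours and weights defined above). Julia's backslash operator returns the least squares solution directly, which is exactly the solution of the normal equations:

    using LinearAlgebra

    # Stand-ins for D = [Ā; B̄; C̄] and c = [w̄; ḡ₁; ḡ₂].
    D = randn(30, 5)
    c = randn(30)

    # For a rectangular D, `\` computes the least squares solution of Dθ = c,
    # i.e. the θ satisfying DᵀDθ = Dᵀc, without forming DᵀD explicitly.
    θ = D \ c
    @assert D' * D * θ ≈ D' * c
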
Generation Away from the Data Sites

It is possible to extend these ideas so that we can approximate the derivatives at any point $\boldsymbol x_0 \in \mathcal C(\boldsymbol X)$. Using the associated interpolant, simply approximate $z_0$ with the value of the interpolant at $\boldsymbol x_0$, and then replace $W_i$ by $\lambda_i/\|\boldsymbol x_i-\boldsymbol x_0\|$, where $\lambda_i$ is the Sibson coordinate at $\boldsymbol x_i$ relative to $\boldsymbol x_0$. If using a direct approach to approximate gradients and Hessians, Sibson coordinates cannot be used (because the weights cannot be extended out to $N_0^2$), and so $W_i$ remains as is in that case. Note that the $N_0$ neighbourhoods are now the sets of natural neighbours.

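In the package, this is what evaluating a differentiator away from the data sites does. A minimal sketch with synthetic, purely illustrative data:

    using NaturalNeighbours

    x, y = rand(50), rand(50)
    z = sin.(x) .* cos.(y)
    itp = interpolate(x, y, z; derivatives = true)  # generates gradients at the data sites
    ∂ = differentiate(itp, 1)                       # first-order differentiator

    # (0.3, 0.7) need not be a data site; the interpolant estimates z₀ there first.
    ∇ = ∂(0.3, 0.7; interpolant_method = Sibson(1))
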
diff --git a/dev/index.html b/dev/index.html index 4b0f843..5c493c1 100644 --- a/dev/index.html +++ b/dev/index.html @@ -3,16 +3,16 @@ interpolate(points, z; gradient=nothing, hessian=nothing, derivatives=false, kwargs...) interpolate(x::AbstractVector, y::AbstractVector, z; gradient=nothing, hessian=nothing, derivatives=false, kwargs...)

Construct an interpolant over the data z at the sites defined by the triangulation tri (or points, or (x, y)). See the Output section for a description of how to use the interpolant itp.

Keyword Arguments

Output

The returned value is a NaturalNeighboursInterpolant struct. This struct is callable, with the following methods defined:

(itp::NaturalNeighboursInterpolant)(x, y, id::Integer=1; parallel=false, method=Sibson(), project = true, kwargs...)
 (itp::NaturalNeighboursInterpolant)(vals::AbstractVector, x::AbstractVector, y::AbstractVector; parallel=true, method=Sibson(), project = true, kwargs...)
(itp::NaturalNeighboursInterpolant)(x::AbstractVector, y::AbstractVector; parallel=true, method=Sibson(), project = true, kwargs...)
  1. The first method is for scalars, with id referring to a thread id.
  2. This method is an in-place method for vectors, storing itp(x[i], y[i]) into vals[i].
  3. This method is similar to (2), but vals is constructed and returned.

In each method, method defines the method used for evaluating the interpolant, which is some AbstractInterpolator. For the first method, parallel is ignored, but for the latter two methods it defines whether to use multithreading or not for evaluating the interpolant at all the points. The kwargs... argument is passed into add_point! from DelaunayTriangulation.jl, e.g. you could pass some rng. Lastly, the project argument determines whether extrapolation is performed by projecting any exterior points onto the boundary of the convex hull of the data sites and performing two-point interpolation, or to simply replace any extrapolated values with Inf.

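For example, here is a minimal usage sketch of the three call forms; the data and query points are made up for illustration:

    using NaturalNeighbours

    x, y = rand(100), rand(100)
    z = x .* y .+ x .^ 2
    itp = interpolate(x, y, z)

    val = itp(0.5, 0.5; method = Sibson())                  # scalar evaluation
    xq, yq = rand(20), rand(20)
    vals = itp(xq, yq; method = Laplace())                  # vector evaluation, allocating
    buf = similar(vals)
    itp(buf, xq, yq; method = Triangle(), parallel = true)  # in-place vector evaluation
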
Warning

Until we implement ghost point extrapolation, behaviour near the convex hull of your data sites may in some cases be undesirable, despite the extrapolation method we describe above, even for points that are inside the convex hull. If you want to control this behaviour so that you discard any points that are very close to the convex hull, see identify_exterior_points and the tol keyword argument.

source
NaturalNeighbours.differentiateFunction
differentiate(itp::NaturalNeighboursInterpolant, order)

Differentiate a given interpolant itp up to degree order (1 or 2). The returned object is a NaturalNeighboursDifferentiator struct, which is callable.

For calling the resulting struct, we define the following methods:

(∂::NaturalNeighboursDifferentiator)(x, y, zᵢ, nc, id::Integer=1; parallel=false, method=default_diff_method(∂), kwargs...)
 (∂::NaturalNeighboursDifferentiator)(x, y, id::Integer=1; parallel=false, method=default_diff_method(∂), interpolant_method=Sibson(), rng=Random.default_rng(), project = true, kwargs...)
 (∂::NaturalNeighboursDifferentiator)(vals::AbstractVector, x::AbstractVector, y::AbstractVector; parallel=true, method=default_diff_method(∂), interpolant_method=Sibson(), kwargs...)
(∂::NaturalNeighboursDifferentiator{I, O})(x::AbstractVector, y::AbstractVector; parallel=true, method=default_diff_method(∂), interpolant_method=Sibson(), kwargs...) where {I, O}
  1. This method is useful if you already have an estimate for the function value, zᵢ, at the data site, (x, y), provided you also provide the NaturalCoordinates nc. id is the thread id.
  2. This method is for scalars, with id referring to a thread id.
  3. This method is an in-place method for vectors, storing ∂(x[i], y[i], 1) into vals[i].
  4. This method is similar to (3), but vals is constructed and returned.

The available keyword arguments are:

  • parallel=true: Whether to use multithreading. Ignored for the first two methods.
  • method=default_diff_method(∂): The method used for evaluating the derivatives. default_diff_method(∂) returns Direct(). The method must be an AbstractDifferentiator.
  • interpolant_method=Sibson(): The method used for evaluating the interpolant to estimate zᵢ for the latter three methods. See AbstractInterpolator for the available methods.
  • rng=Random.default_rng(): The random number generator used for estimating zᵢ for the latter three methods, or for constructing the natural coordinates.
  • project=false: Whether to project any extrapolated points onto the boundary of the convex hull of the data sites and perform two-point interpolation, or to simply replace any extrapolated values with Inf, when evaluating the interpolant in the latter three methods.
  • use_cubic_terms=true: If estimating second order derivatives, whether to use cubic terms. Only relevant for method == Direct().
  • alpha=0.1: If estimating second order derivatives, the weighting parameter used for estimating the second order derivatives. Only relevant for method == Iterative().
  • use_sibson_weight=true: Whether to weight the residuals in the associated least squares problems by the associated Sibson coordinates. Only relevant for method == Iterative() if order == 2.

The outputs are:

  • order == 1: The scalar methods return a Tuple of the form (∂x, ∂y), while the vector methods return a vector of Tuples of the form (∂x, ∂y).
  • order == 2: The scalar methods return a (∇, ℋ), where ∇ is a Tuple of the form (∂x, ∂y) and ℋ is a Tuple of the form (∂xx, ∂yy, ∂xy). The vector methods return a vector of (∇, ℋ)s. See the sketch below for an example of unpacking these outputs.
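
A minimal sketch of unpacking these outputs, with synthetic, purely illustrative data:

    using NaturalNeighbours

    x, y = rand(100), rand(100)
    z = x .^ 2 .+ y .^ 2
    itp = interpolate(x, y, z; derivatives = true)
    ∂ = differentiate(itp, 2)

    ∇, ℋ = ∂(0.4, 0.6; method = Direct(), interpolant_method = Sibson(1))
    ∂x, ∂y = ∇           # gradient components
    ∂xx, ∂yy, ∂xy = ℋ    # Hessian components, in the order (∂xx, ∂yy, ∂xy)
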
Warning

Until we implement ghost point extrapolation, behaviour near the convex hull of your data sites may in some cases be undesirable, despite the extrapolation method we describe above, even for points that are inside the convex hull. If you want to control this behaviour so that you discard any points that are very close to the convex hull, see identify_exterior_points and the tol keyword argument.

source
NaturalNeighbours.generate_gradientsFunction
generate_gradients(
     tri,
     z,
     derivative_caches=[DerivativeCache(tri) for _ in 1:Base.Threads.nthreads()],
     neighbour_caches=[NaturalNeighboursCache(tri) for _ in 1:Base.Threads.nthreads()];
     parallel=true
)

Generate gradients at the data sites defined by the triangulation tri with associated function values z.

Arguments

  • tri: A Triangulation object.
  • z: A vector of function values at the data sites.
  • derivative_caches=[DerivativeCache(tri) for _ in 1:Base.Threads.nthreads()]: A vector of DerivativeCache objects, one for each thread.
  • neighbour_caches=[NaturalNeighboursCache(tri) for _ in 1:Base.Threads.nthreads()]: A vector of NaturalNeighboursCache objects, one for each thread.

Keyword Arguments

  • parallel=true: Whether to use multithreading or not.

Output

  • ∇: A vector of gradients at the data sites. Each element is a Tuple defining the gradient entries.
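
A minimal usage sketch with synthetic data (generate_gradients is assumed to be exported; otherwise qualify it as NaturalNeighbours.generate_gradients):

    using NaturalNeighbours, DelaunayTriangulation

    points = [(rand(), rand()) for _ in 1:100]
    tri = triangulate(points)        # Delaunay triangulation of the data sites
    z = [sin(x) * y for (x, y) in points]
    ∇ = generate_gradients(tri, z)   # ∇[i] is a Tuple of the gradient entries at site i
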
source
NaturalNeighbours.generate_derivativesFunction
generate_derivatives(
     tri,
     z,
     derivative_caches=[DerivativeCache(tri) for _ in 1:Base.Threads.nthreads()],
@@ -22,4 +22,4 @@
     use_cubic_terms=true,
     alpha=0.1,
     initial_gradients=dwrap(method) == Direct() ? nothing : generate_gradients(tri, z, derivative_caches, neighbour_caches; method=dwrap(method), parallel, rng)
)

Generate derivatives at the data sites defined by the triangulation tri with associated function values z.

Arguments

  • tri: A Triangulation object.
  • z: A vector of function values at the data sites.
  • derivative_caches=[DerivativeCache(tri) for _ in 1:Base.Threads.nthreads()]: A vector of DerivativeCache objects, one for each thread.
  • neighbour_caches=[NaturalNeighboursCache(tri) for _ in 1:Base.Threads.nthreads()]: A vector of NaturalNeighboursCache objects, one for each thread.

Keyword Arguments

  • parallel=true: Whether to use multithreading or not.
  • method=Direct(): The method used for generating the derivatives. See AbstractDifferentiator.
  • use_cubic_terms=true: Whether to use cubic terms for estimating the second order derivatives. Only relevant for method == Direct().
  • alpha=0.1: The weighting parameter used for estimating the second order derivatives. Only relevant for method == Iterative().
  • initial_gradients=dwrap(method) == Direct() ? nothing : generate_gradients(tri, z, derivative_caches, neighbour_caches; method=dwrap(method), parallel): The initial gradients used for estimating the second order derivatives. Only relevant for method == Iterative().

Output

  • ∇: A vector of gradients at the data sites. Each element is a Tuple defining the gradient entries.
  • ℋ: A vector of Hessians at the data sites. Each element is a Tuple defining the Hessian entries in the form (H[1, 1], H[2, 2], H[1, 2]) (H[2, 1] is the same as H[1, 2]).
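
A minimal usage sketch with synthetic data, assuming the two outputs are returned together as (∇, ℋ):

    using NaturalNeighbours, DelaunayTriangulation

    points = [(rand(), rand()) for _ in 1:100]
    tri = triangulate(points)
    z = [x^2 + y^2 for (x, y) in points]
    ∇, ℋ = generate_derivatives(tri, z; method = Iterative(), alpha = 0.1)
    # ∇[i] is (∂x, ∂y) and ℋ[i] is (∂xx, ∂yy, ∂xy) at the i-th data site.
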
source
NaturalNeighbours.AbstractInterpolatorType
abstract type AbstractInterpolator{D}

Abstract type for defining the method used for evaluating an interpolant. D is, roughly, defined to be the smoothness at the data sites (currently only relevant for Sibson). The available subtypes are:

  • Sibson(d): Interpolate via the Sibson interpolant, with C(d) continuity at the data sites. Only defined for D ∈ (0, 1). If D == 1, gradients must be provided.
  • Triangle(d): Interpolate based on vertices of the triangle that the point resides in, with C(0) continuity at the data sites. D is ignored.
  • Nearest(d): Interpolate by returning the function value at the nearest data site. D doesn't mean much here (it could be D = ∞), and so it is ignored and replaced with 0.
  • Laplace(d): Interpolate via the Laplace interpolant, with C(0) continuity at the data sites. D is ignored.
  • Farin(d): Interpolate using the Farin interpolant, with C(1) continuity at the data sites. d is ignored.
  • Hiyoshi(d): Interpolate using the Hiyoshi interpolant, with C(d) continuity at the data sites. Currently, only defined for d == 2.

Our implementation of Sibson(0)'s coordinates follows this article with some simple modifications.

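For example, a sketch of selecting a method at evaluation time with synthetic data; the interpolant is constructed with derivatives = true so that the smooth methods have the gradients and Hessians they rely on:

    using NaturalNeighbours

    x, y = rand(100), rand(100)
    z = exp.(-x .^ 2 .- y .^ 2)
    itp = interpolate(x, y, z; derivatives = true)
    xq, yq = rand(50), rand(50)

    itp(xq, yq; method = Sibson())    # C(0), the default
    itp(xq, yq; method = Farin(1))    # C(1), uses gradients
    itp(xq, yq; method = Hiyoshi(2))  # C(2), uses gradients and Hessians
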
source
NaturalNeighbours.AbstractDifferentiatorType
abstract type AbstractDifferentiator end

Abstract type for defining the method used for differentiating an interpolant or generating derivatives at data sites.

  • Direct(): Generate derivatives directly with one least squares problem.
  • Iterative(): Generate derivatives iteratively: Gradients are estimated first, and then both gradients and Hessians are estimated with the initial gradients used to refine the results.
source
NaturalNeighbours.SibsonType
Sibson(d=0)

Interpolate using Sibson's coordinates with C(d) continuity at the data sites.

source
NaturalNeighbours.HiyoshiType
Hiyoshi(d=2)

Interpolate using Hiyoshi's C(2) interpolant. Hiyoshi's C(0) interpolant is not yet implemented. Unlike Farin(), which gets converted to Farin(1), Hiyoshi() does not get converted to Hiyoshi(2), so that possible later versions can be supported.

source
NaturalNeighbours.FarinType
Farin()

Interpolate using Farin's C(1) interpolant.

source
NaturalNeighbours.LaplaceType
Laplace()

Interpolate using Laplace's coordinates.

source
NaturalNeighbours.TriangleType
Triangle()

Interpolate using a piecewise linear interpolant over the underlying triangulation.

source
NaturalNeighbours.NearestType
Nearest()

Interpolate by taking the function value at the nearest data site.

source
NaturalNeighbours.DirectType
Direct()

Generate derivatives directly with one least squares problem.

source
NaturalNeighbours.IterativeType
Iterative()

Generate derivatives iteratively: Gradients are estimated first, and then both gradients and Hessians are estimated with the initial gradients used to refine the results.

source
NaturalNeighbours.identify_exterior_pointsFunction
identify_exterior_points(x, y, points, boundary_nodes; tol = 0.0)

Given a polygon described by (points, boundary_nodes), matching the specification of polygons in DelaunayTriangulation.jl (see here), returns a vector of indices of the points defined by (x, y) that are outside of the polygon.

Use tol to specify a tolerance for the distance to the polygon.

source
identify_exterior_points(x, y, itp::NaturalNeighboursInterpolant; tol = 0.0)

Returns the indices of the points defined by the vectors (x, y) that are outside of the underlying triangulation of the interpolant itp.

Use tol to specify a tolerance for the distance to the triangulation.
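
A minimal sketch of using this to discard extrapolated values, with synthetic data (setting the values to Inf mirrors the Switzerland example):

    using NaturalNeighbours

    x, y = rand(100), rand(100)
    z = x .- y
    itp = interpolate(x, y, z)

    xq, yq = 2rand(200) .- 0.5, 2rand(200) .- 0.5   # some queries fall outside the convex hull
    vals = itp(xq, yq)
    exterior_idx = identify_exterior_points(xq, yq, itp; tol = 1e-3)
    vals[exterior_idx] .= Inf                       # discard the extrapolated values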

source
diff --git a/dev/interpolation/index.html b/dev/interpolation/index.html index c104f90..e6f692b 100644 --- a/dev/interpolation/index.html +++ b/dev/interpolation/index.html @@ -133,4 +133,4 @@ 8.272516151708604 julia> esib1 = 100sqrt(sum((sib1_vals .- f.(_x, _y)).^2) / sum(sib_vals.^2)) -6.974149853003652 +6.974149853003652 diff --git a/dev/interpolation_math/index.html b/dev/interpolation_math/index.html index e1eb3d9..4e1a522 100644 --- a/dev/interpolation_math/index.html +++ b/dev/interpolation_math/index.html @@ -46,4 +46,4 @@ &+ \frac{1}{180}\left(z_{i, jk} + z_{i, j\ell} + z_{i, jm} + z_{i, k\ell} + z_{i, km} + z_{i, \ell m} + z_{j, i\ell} + \cdots + z_{mk\ell}\right), \end{align*}\]

where all the $i$, $j$, $k$, $\ell$, and $m$ are different. To evaluate $f^{\text{HIY}}$, we use the same relationship between $f^{\text{HIY}}$ and complete homogeneous symmetric polynomials to write

\[f^{\text{HIY}}(\boldsymbol x_0) = 120\sum_{1 \leq i \leq j \leq k \leq \ell \leq m \leq n} \tilde f_{ijk \ell m} \lambda_i\lambda_j\lambda_k\lambda_\ell \lambda_m,\]

where $\tilde f_{iiiii} = f_{iiiii}/5! = f_{iiiii}/120$, $\tilde f_{iiiij} = f_{iiiij}/24$, $\tilde f_{iijjj} = f_{iijjj}/12$, $\tilde f_{iijjk} = f_{iijjk}/4$, $\tilde f_{iijk\ell} = f_{iijk\ell}/2$, and $\tilde f_{ijk\ell m} = f_{ijk\ell m}$.

This interpolant has cubic precision, meaning it can recover cubic polynomials. Note that this sum has

\[\sum_{i=1}^n\sum_{j=i}^n\sum_{k=j}^n\sum_{\ell=k}^n\sum_{m=\ell}^n 1 = \frac{n^5}{120} + \frac{n^4}{12} + \cdots = \mathcal O(n^5)\]

terms, which could cause issues with many natural neighbours. For example, with $n = 20$ we have $n^5 = 3,200,000$. In fact, as discussed in Section 6.5 of Bobach's thesis, more than 150 million operations could be required with $n = 20$. We discuss benchmark results in the comparison section of the sidebar.

Regions of Influence

The region of influence for the natural neighbour coordinates associated with a point $\boldsymbol x_i$ is the interior of the union of all circumcircles coming from the triangles of the underlying triangulation that pass through $\boldsymbol x_i$. Below, we visualise this for the coordinates defined above. (This definition of a region of influence does not necessarily generalise to the triangle and nearest neighbour coordinates, but we still compare them.)

We take a set of data sites in $[-1, 1]^2$ such that all function values are zero except for $z_1$, the value at $\boldsymbol x_1 = \boldsymbol 0$. Using this setup, we obtain the following results (see also Figure 3.6 of Bobach's thesis linked previously):

Region of Influence

We can indeed see the effect of the region of influence about this single point $\boldsymbol x_1$. Note also that $f^{\text{SIB}1}$ is much smoother than the others.

Extrapolation

An important consideration is extrapolation. Currently, all the methods above assume that the query point $\boldsymbol x_0$ is in $\mathcal C(\boldsymbol X)$, and the interpolation near the boundary of $\mathcal C(\boldsymbol X)$ often has some weird effects. There are many approaches available for extrapolation, such as with ghost points, although these are not implemented in this package (yet!).

The approach we take for any point outside of $\mathcal C(\boldsymbol X)$, or on $\partial\mathcal C(\boldsymbol X)$, is to find the ghost triangle that $\boldsymbol x_0$ is in (ghost triangles are defined here in the DelaunayTriangulation.jl documentation), which will have some boundary edge $\boldsymbol e_{ij}$. (Similarly, if $\boldsymbol x_0 \in \partial \mathcal C(\boldsymbol X)$, $\boldsymbol e_{ij}$ is the boundary edge that it is on.) We then simply use two-point interpolation, letting

\[f(\boldsymbol x_0) \approx \lambda_iz_i + \lambda_jz_j,\]

where $\lambda_i = 1-t$, $\lambda_j = t$, $\ell = \|\boldsymbol x_i - \boldsymbol x_j\|$, and $t = [(x_0 - x_i)(x_j - x_i) + (y_0 - y_i)(y_j - y_i)]/\ell^2$. Note also that in this definition of $t$ we have projected $\boldsymbol x_0$ onto the line through $\boldsymbol x_i$ and $\boldsymbol x_j$ – this projection is not necessarily on $\boldsymbol e_{ij}$, though, so $t$ will not always be in $[0, 1]$, meaning the coordinates are not guaranteed to be (and probably won't be) convex.

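A direct transcription of this formula as a sketch (not the package's internal implementation):

    # Two-point extrapolation along the boundary edge e_ij, following the formula above.
    function two_point_extrapolate(x0, y0, xi, yi, zi, xj, yj, zj)
        ℓ2 = (xj - xi)^2 + (yj - yi)^2                            # ℓ²
        t = ((x0 - xi) * (xj - xi) + (y0 - yi) * (yj - yi)) / ℓ2  # projection parameter
        λi, λj = 1 - t, t                                         # not necessarily in [0, 1]
        return λi * zi + λj * zj
    end

    two_point_extrapolate(1.5, 0.0, 1.0, -1.0, 0.2, 1.0, 1.0, 0.6)  # returns 0.4
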
This extrapolation will not always be perfect, but it is good enough until we implement more sophisticated methods. If you want to disable this approach, just use the project = false keyword argument when evaluating your interpolant.

Similarly, if you have points defining a boundary of some domain that isn't necessarily convex, the function identify_exterior_points may be useful to you, provided you have represented your boundary as defined here in DelaunayTriangulation.jl. See the Switzerland example in the sidebar for more information.

diff --git a/dev/search/index.html b/dev/search/index.html index c77b621..897d718 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · NaturalNeighbours.jl
      diff --git a/dev/swiss/index.html b/dev/swiss/index.html index 4a266c5..ed7da8b 100644 --- a/dev/swiss/index.html +++ b/dev/swiss/index.html @@ -180,4 +180,4 @@ hiyoshi_vals_p[exterior_idx] .= Inf fig = plot_results(sibson_vals_p, sibson_1_vals_p, laplace_vals_p, triangle_vals_p, nearest_vals_p, farin_vals_p, hiyoshi_vals_p, query_triangles, interpolant, a, b, c, d, e, f, nx, ny, data, triangles, elevation_data)


      Perfect!
