Merge pull request #21 from DanielVandH/fix-typos
Fix typos
DanielVandH authored Oct 2, 2023
2 parents a4f8d3e + 602584c commit 4d2658d
Showing 6 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -18,7 +18,7 @@ makedocs(;
"Home" => "index.md",
"Examples" => [
"Interpolation" => "interpolation.md",
"Differentiaton" => "differentiation.md",
"Differentiation" => "differentiation.md",
"Switzerland Elevation Data" => "swiss.md"
],
"Comparison of Interpolation Methods" => "compare.md",
4 changes: 2 additions & 2 deletions docs/src/compare.md
@@ -539,7 +539,7 @@ fig = plot_errors(considered_fidx, considered_itp, gdf, interior_idx, :n_error,

Judging from these results, again the `Hiyoshi(2)` and `Farin(1)` methods have the best performance across all metrics.

-# Quantitative Global Analysis
+# Quantitative Global Analysis

Now we will use global metrics to assess the interpolation quality. A limitation of the above discussion is that we are considering a fixed data set. Here, we instead consider random data sets (with the same test functions) and weighted averages of the local errors. We will measure the errors as a function of the median edge length of the data set's underlying triangulation. Note that, in these random data sets, we will not maintain a convex hull of $[0, 1]^2$. Lastly, we will use a stricter tolerance on whether to classify a point as being inside the convex hull in this case, now using `tol = 0.1` rather than `tol = 0.01`. The global metric we use is $100\sqrt{\frac{\sum_i \|y_i - \hat y_i\|^2}{\sum_j \|\hat y_j\|^2}}$:
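For concreteness, this metric is a one-liner in Julia. The sketch below is an assumption of this edit, not code from the package; the names `y` (interpolated values) and `ŷ` (true values) are chosen here for illustration:

```julia
# Global error metric from above: 100 * sqrt(Σᵢ ‖yᵢ - ŷᵢ‖² / Σⱼ ‖ŷⱼ‖²),
# where y are the interpolated values and ŷ the true values.
global_error(y, ŷ) = 100 * sqrt(sum(abs2, y .- ŷ) / sum(abs2, ŷ))
```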

@@ -695,7 +695,7 @@ Once again, the `Hiyoshi(2)` and `Farin(1)` methods seem preferable, and `Direct

It is important to note that using the smooth interpolants comes at a cost of greater running time. If $n$ is the number of natural neighbours around a point $\boldsymbol x_0$, then computing $f^{\text{HIY}}(\boldsymbol x_0)$ is about $\mathcal O(n^5)$, and $f^{\text{FAR}}(\boldsymbol x_0)$ is $\mathcal O(n^3)$. Derivative generation also has this complexity when using these interpolants (since it involves solving a least squares problem). Of course, this complexity doesn't typically matter so much, since (1) many points are typically evaluated together using multithreading, and (2) in most triangulations, points have only six natural neighbours on average.
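As a rough illustration of these costs, one could time single evaluations with BenchmarkTools.jl. This is a hedged sketch, not a benchmark from the docs; it assumes an interpolant `itp` built with `derivatives=true` so that the smooth methods can be evaluated:

```julia
using BenchmarkTools
# Compare the cost of one evaluation across methods at a fixed point.
@btime $itp(0.5, 0.5; method=Sibson())   # piecewise-smooth baseline
@btime $itp(0.5, 0.5; method=Farin(1))   # ~O(n³) per point
@btime $itp(0.5, 0.5; method=Hiyoshi(2)) # ~O(n⁵) per point
```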

-Let us explore here how long it takes to compute the interpolant as a function of the number of natural neighbours. There are many ways to measure this properly, e.g. collecting large samples of computation times from random data sets, but here we take a simple approach where we contruct a triangulation with a point $\boldsymbol x_1 = \boldsymbol 0$ surrounded by $m$ points on a circle. This point $\boldsymbol x_1$ will have approximately $m$ natural neighbours. (Note that we do not care about the number of data points in the dataset since these interpolants are local.) The function we use for this is:
+Let us explore here how long it takes to compute the interpolant as a function of the number of natural neighbours. There are many ways to measure this properly, e.g. collecting large samples of computation times from random data sets, but here we take a simple approach where we construct a triangulation with a point $\boldsymbol x_1 = \boldsymbol 0$ surrounded by $m$ points on a circle. This point $\boldsymbol x_1$ will have approximately $m$ natural neighbours. (Note that we do not care about the number of data points in the dataset since these interpolants are local.) The function we use for this is:

```julia
function circular_example(m) # extra points are added outside of the circular barrier for derivative generation
    # ...
```
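The full definition is truncated above. The following is a hedged reconstruction of the idea only — not the package's actual `circular_example` — assuming DelaunayTriangulation.jl's `triangulate`:

```julia
using DelaunayTriangulation
function circular_example_sketch(m)
    # m points on the unit circle, so that x₁ = (0, 0) has ≈ m natural neighbours
    θ = range(0, 2π; length=m + 1)[1:end-1]
    inner = [(cos(t), sin(t)) for t in θ]
    # extra points outside the circular barrier, used for derivative generation
    outer = [(2cos(t + π / m), 2sin(t + π / m)) for t in θ]
    points = [(0.0, 0.0); inner; outer]
    return triangulate(points)
end
```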
4 changes: 2 additions & 2 deletions docs/src/differentiation.md
@@ -6,7 +6,7 @@ CurrentModule = NaturalNeighbours

The purpose of this example is to explore derivative generation. For this, it is important to note that we are thinking of _generating_ derivatives rather than _estimating_ them: Following [Alfeld (1989)](https://doi.org/10.1016/B978-0-12-460515-2.50005-6), derivative generation only seeks to find derivatives that best fit our assumptions of the data, i.e. that give a most satisfactory interpolant, rather than trying to find exact derivative values. The complete quote for this by [Alfeld (1989)](https://doi.org/10.1016/B978-0-12-460515-2.50005-6) is below:

-> It seems inevitable that in order to obtain an interpolant that is both local and smooth one has to supply derivative data. Typically, such data are not part of the interpolation problem and have to be made up from existing func tional data. This process is usually referred as derivative estimation, but this is probably a misnomer. The objective is not to estimate existing but unknown values of derivatives. Instead, it is to generate values that will yield a satisfactory interpolant. Even if an underlying primitive function did exist it might be preferable to use derivative values that differ from the exact ones. (For example, a maximum error might be decreased by using the "wrong" derivative values.) Therefore, I prefer the term derivative generation rather than derivative estimation.
+> It seems inevitable that in order to obtain an interpolant that is both local and smooth one has to supply derivative data. Typically, such data are not part of the interpolation problem and have to be made up from existing functional data. This process is usually referred as derivative estimation, but this is probably a misnomer. The objective is not to estimate existing but unknown values of derivatives. Instead, it is to generate values that will yield a satisfactory interpolant. Even if an underlying primitive function did exist it might be preferable to use derivative values that differ from the exact ones. (For example, a maximum error might be decreased by using the "wrong" derivative values.) Therefore, I prefer the term derivative generation rather than derivative estimation.

For the purpose of this exploration, we use Franke's test function. This function, introduced by [Franke and Nielson (1980)](https://doi.org/10.1002/nme.1620151110), is given by
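The display that follows is truncated in this diff; for reference, the standard Franke test function from the literature (reproduced here as an assumption, not from the truncated text) is

```math
f(x, y) = \frac{3}{4}\exp\left(-\frac{(9x-2)^2 + (9y-2)^2}{4}\right) + \frac{3}{4}\exp\left(-\frac{(9x+1)^2}{49} - \frac{9y+1}{10}\right) + \frac{1}{2}\exp\left(-\frac{(9x-7)^2 + (9y-3)^2}{4}\right) - \frac{1}{5}\exp\left(-(9x-4)^2 - (9y-7)^2\right)
```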

@@ -108,7 +108,7 @@ fig

# Generation at the Data Sites

-To start with the example, we consider generating the dervatives at the data sites.
+To start with the example, we consider generating the derivatives at the data sites.

## Gradients

2 changes: 1 addition & 1 deletion docs/src/differentiation_math.md
@@ -4,7 +4,7 @@ CurrentModule = NaturalNeighbours

# Differentiation

-In this section, we give some of the mathematical detail used for implementing derivative generation, following this [thesis](https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/2104/file/diss.bobach.natural.neighbor.20090615.pdf). The discussion that follows is primarily sourced from Chapter 6 of the linked thesis. While it is possible to generate derivatives of arbitary order, our discussion here in this section will be limited to gradient and Hessian generation. These ideas are implemented by the `generate_gradients` and `generate_derivatives` functions, which you should use via the `differentiate` function.
+In this section, we give some of the mathematical detail used for implementing derivative generation, following this [thesis](https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/2104/file/diss.bobach.natural.neighbor.20090615.pdf). The discussion that follows is primarily sourced from Chapter 6 of the linked thesis. While it is possible to generate derivatives of arbitrary order, our discussion here in this section will be limited to gradient and Hessian generation. These ideas are implemented by the `generate_gradients` and `generate_derivatives` functions, which you should use via the `differentiate` function.
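As a quick orientation, a typical workflow looks like the sketch below. This is hedged: the exact signatures should be checked against the package docs, and `x`, `y`, `z` are assumed data vectors rather than names from this page:

```julia
using NaturalNeighbours
itp = interpolate(x, y, z; derivatives=true) # generate derivatives at the data sites
∂ = differentiate(itp, 1)                    # order 1: gradients
∇f = ∂(0.5, 0.5)                             # generated gradient at (0.5, 0.5)
```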

# Generation at Data Sites

6 changes: 3 additions & 3 deletions docs/src/interpolation_math.md
@@ -68,7 +68,7 @@ To represent a point $\boldsymbol x_0$, we can use what are known as _nearest ne
```math
\lambda_i^{\text{NEAR}} = \begin{cases} 1 & \boldsymbol x_0 \in \mathcal V_i, \\ 0 & \text{otherwise}. \end{cases}
```

-The resulting scatterd data interpolant $f^{\text{NEAR}}$ is then just
+The resulting scattered data interpolant $f^{\text{NEAR}}$ is then just

```math
f^{\text{NEAR}}(\boldsymbol x) = z_i,
```

@@ -201,7 +201,7 @@ All the derived interpolants above are not differentiable at the data sites. Her

## Sibson's $C^1$ Interpolant

-Sibson's $C^1$ interpolant, which we call Sibson-1 interpolation, extends on Sibon's coordinates above, also called Sibson-0 coordinates, is $C^1$ at the data sites. A limitation of it is that it requires an estimate of the gradient $\boldsymbol \nabla_i$ at the data sites $\boldsymbol x_i$, which may be estimated using the derivative generation techniques describd in the sidebar.
+Sibson's $C^1$ interpolant, which we call Sibson-1 interpolation, builds on the Sibson coordinates above (also called Sibson-0 coordinates) and is $C^1$ at the data sites. A limitation of it is that it requires an estimate of the gradient $\boldsymbol \nabla_i$ at the data sites $\boldsymbol x_i$, which may be estimated using the derivative generation techniques described in the sidebar.

Following [Bobach's thesis](https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/2104/file/diss.bobach.natural.neighbor.20090615.pdf) or [Flötotto's thesis](https://theses.hal.science/tel-00832487/PDF/these-flototto.pdf), the Sibson-1 interpolant $f^{\text{SIB}1}$ is a linear combination of $f^{\text{SIB}0} \equiv f^{\text{SIB}}$ and another interpolant $\xi$. We define:

@@ -262,7 +262,7 @@ Let us describe how we actually evaluate $\sum_{i \in N_0}\sum_{j \in N_0}\sum_{
```math
f^{\text{FAR}}(\boldsymbol x_0) = \sum_{1 \leq i, j, k \leq n} f_{ijk}\lambda_i\lambda_j\lambda_k.
```

-This looks close to the definition of a [complete homogeneous symetric polynomial](https://en.wikipedia.org/wiki/Complete_homogeneous_symmetric_polynomial). This page shows the identity
+This looks close to the definition of a [complete homogeneous symmetric polynomial](https://en.wikipedia.org/wiki/Complete_homogeneous_symmetric_polynomial). This page shows the identity

```math
\sum_{1 \leq i \leq j \leq k \leq n} X_iX_kX_j = \sum_{1 \leq i, j, k \leq n} \frac{m_i!m_j!m_k!}{3!}X_iX_jX_k,
```
2 changes: 1 addition & 1 deletion src/differentiation/differentiate.jl
@@ -19,7 +19,7 @@ For calling the resulting struct, we define the following methods:
The available keyword arguments are:
- `parallel=true`: Whether to use multithreading. Ignored for the first two methods.
- `method=default_diff_method(∂)`: The default method for generating the derivatives; `default_diff_method(∂)` returns `Direct()`. The method must be an [`AbstractDifferentiator`](@ref).
-- `interpolant_method=Sibson()`: The method used for evaluating the interpolant to estimate `zᵢ` for the latter three methods. See [`AbstractInterpolator`](@ref) for the avaiable methods.
+- `interpolant_method=Sibson()`: The method used for evaluating the interpolant to estimate `zᵢ` for the latter three methods. See [`AbstractInterpolator`](@ref) for the available methods.
- `rng=Random.default_rng()`: The random number generator used for estimating `zᵢ` for the latter three methods, or for constructing the natural coordinates.
- `project=false`: Whether to project any extrapolated points onto the boundary of the convex hull of the data sites and perform two-point interpolation, or to simply replace any extrapolated values with `Inf`, when evaluating the interpolant in the latter three methods.
- `use_cubic_terms=true`: If estimating second order derivatives, whether to use cubic terms. Only relevant for `method == Direct()`.
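For illustration, a call using these keyword arguments might look like the following sketch. It is hedged: `∂` is assumed to come from `differentiate`, `xq` and `yq` are assumed query vectors, and the calling convention should be checked against the (truncated) method list above:

```julia
# Evaluate generated derivatives at many query points, choosing the
# differentiation and interpolation methods explicitly.
grads = ∂(xq, yq; method=Direct(), interpolant_method=Sibson(), parallel=true)
```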