Minor text edits, including typo corrections, to the technical articles for clarity

antonijw committed Mar 27, 2024
1 parent c0f0f28 commit 8b931f6
Showing 2 changed files with 23 additions and 30 deletions.
49 changes: 21 additions & 28 deletions documentation/AI-Verification-Convexity.md
@@ -147,7 +147,7 @@ activation functions evolving the state, such as $tanh$ activation layers
(see layer “nca\_0” in Fig 4). As with FICNNs, the weights in certain
parts of the network are constrained to be non-negative to maintain the
partial convexity property. In the figure above, the weight matrices for
-the fully connected layer “fc\_z\_+\_1” is constrained to be positive
+the fully connected layer “fc\_z\_+\_1” are constrained to be positive
(as indicated by the “\_+\_” in the layer name). All other fully
connected weight matrices in Fig 4 are unconstrained, giving freedom to
fit any purely feedforward network – see proposition 2 [1]. Note again that in our implementation, the final activation function, $g_k$, is not applied. This still guarantees partial convexity but removes the restriction that outputs of the network must be non-negative.
@@ -180,7 +180,7 @@ $f((1−\lambda)x+\lambda y) \leq (1−\lambda)f(x)+ \lambda f(y)$. Interval

$$ f(x) \leq max(f(a),f(b)) $$

-To find the minimum of $f$ on the interval, you could use a optimization routine, such as projected gradient descent, interior-point
+To find the minimum of $f$ on the interval, you could use an optimization routine, such as projected gradient descent, interior-point
methods, or barrier methods. However, you can use the properties of
convex functions to accelerate the search in certain scenarios.

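As a minimal sketch of the interval upper bound, the following MATLAB snippet uses a placeholder convex function handle `f` in place of a forward pass through a 1-D input convex network; the function and interval are illustrative assumptions, not the repository's code.

```matlab
% Certified upper bound of a convex f on [a, b]: evaluate only the endpoints.
% f is a placeholder convex function; in practice it would be a forward pass
% through an input convex neural network with a single input.
f = @(x) exp(x) - 2*x;        % illustrative convex function
a = -1; b = 2;
ub = max(f(a), f(b));         % convexity gives f(x) <= max(f(a), f(b)) on [a, b]
```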
@@ -190,7 +190,7 @@ If $f(a) \gt f(b)$, then either the minimum is at $x=b$ or
the minimum lies strictly in the interior of the interval,
$x \in (a,b)$. To assess whether the minimum is at $x=b$, look at the derivative, $\nabla f(x)$, at the interval bounds. If $f$ is not differentiable
at the interval bounds, for example the network has relu activation
-functions that defines a set of non-differentiable points in $\mathbb{R}$, evaluate
+functions that define a set of non-differentiable points in $\mathbb{R}$, evaluate
both the left and right derivatives of $f$ at the interval bounds instead.
Then examine the sign of the directional derivatives at the interval bounds,
directed to the interior of the interval: $sgn( \nabla f(a), -\nabla f(b) ) = (\pm , \pm)$. Note that the sign of 0 is taken as positive in this discussion.
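To illustrate the sign test for the differentiable case, here is a minimal MATLAB sketch; `f` and `df` are placeholder handles standing in for the network's forward pass and its derivative. A non-negative inward derivative at an endpoint certifies the minimum there, and the remaining signature $(-,-)$ falls back to a 1-D optimization.

```matlab
% Endpoint sign test for a differentiable convex f on [a, b].
% f and df are placeholders for the network forward pass and its derivative.
% The sign of 0 is treated as positive, as in the discussion above.
f  = @(x) exp(x) - 2*x;
df = @(x) exp(x) - 2;
a = -1; b = 2;
if df(a) >= 0                     % inward derivative at a: sgn(+) => minimum at a
    fmin = f(a);
elseif -df(b) >= 0                % inward derivative at b: sgn(+) => minimum at b
    fmin = f(b);
else                              % signature (-,-): minimum lies strictly in (a, b)
    fmin = f(fminbnd(f, a, b));   % fall back to a 1-D optimization routine
end
```

For the illustrative function above, both inward derivatives are negative, so the sketch takes the fall-back branch and finds the interior minimum.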
@@ -240,7 +240,7 @@ possible sign combinations since, at $x=b$, convexity means that $-\nabla f(b+\e

In the case that $f(a) = f(b)$, either the function is
constant, in which case the minimum is $f(a) = f(b)$, or the minimum again
-lies at the interior. If $sgn(\nabla f(a)) = +$, then $\nabla f(a) = 0$ else this violates convexity since $f(a) = f(b)$. Similar is true for
+lies in the interior. If $sgn(\nabla f(a)) = +$, then $\nabla f(a) = 0$, otherwise this violates convexity since $f(a) = f(b)$. The same is true for
$-sgn(\nabla f(b)) = +$. In this case, all sign combinations are possible
owing to possible non-differentiability of $f$ at the interval bounds:

@@ -262,8 +262,8 @@ convex functions.

This idea can be extended to many intervals. Take a 1-dimensional ICNN. Consider subdividing the
operational design domain into a union of intervals $I_i$, where $I_i = [a_i,a_{i+1}]$ and $a_i \lt a_{i+1}$. A tight lower and upper bound on each interval can be computed with a
-single forward pass through the network of all interval bounds values in the union of intervals, a
-single backward pass through the network to compute derivatives at the interval bounds values, and
+single forward pass through the network of all interval boundary values in the union of intervals, a
+single backward pass through the network to compute derivatives at the interval boundary values, and
one final convex optimization on the interval containing the global
minimum. Furthermore, since bounds are computed at forward and
backward passes through the network, you can compute a 'boundedness metric' during
@@ -279,29 +279,28 @@ and $sgn(0) = +$.
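To make the multi-interval procedure described above concrete, here is a minimal MATLAB sketch over a hypothetical partition of a 1-D operational design domain. The handles `f` and `df` and the grid `edges` are illustrative assumptions; the implementation described above batches all boundary values into a single forward and a single backward pass through the network.

```matlab
% Tight bounds for a convex f over a union of intervals [a_i, a_(i+1)].
% f and df are placeholders; 'edges' is an illustrative partition.
f  = @(x) exp(x) - 2*x;
df = @(x) exp(x) - 2;
edges = linspace(-1, 2, 7);                 % a_1 < a_2 < ... < a_7
nI = numel(edges) - 1;
lb = zeros(1, nI);  ub = zeros(1, nI);
for i = 1:nI
    a = edges(i);  b = edges(i+1);
    ub(i) = max(f(a), f(b));                % upper bound on interval i
    if df(a) >= 0                           % minimum at the left endpoint
        lb(i) = f(a);
    elseif -df(b) >= 0                      % minimum at the right endpoint
        lb(i) = f(b);
    else                                    % interval containing the global minimum
        lb(i) = f(fminbnd(f, a, b));        % one final convex optimization
    end
end
```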
The previous discussion focused on 1-dimensional convex functions, however, this idea extends to n-dimensional convex functions, $f:\mathbb{R}^n \rightarrow \mathbb{R}$. Note that a vector valued convex function is
convex in each output, so it is sufficient to keep the target as $\mathbb{R}$. In the discussion in this section, take the convex set to be the n-dimensional hypercube, $H_n$, with vertices, $V_n = \{(\pm 1,\pm 1, \dots,\pm 1)\}$. General convex hulls will be discussed later.

-An important property of convex functions in n-dimensions is that every 1-dimension restriction also defines a convex function. This is easily seen from the
+An important property of convex functions in n-dimensions is that every 1-dimensional restriction also defines a convex function. This is easily seen from the
definition. Define $g:\mathbb{R} \rightarrow \mathbb{R}$ as $g(t) = f(t\hat{n}) \text{ where } \hat{n}$ is
some unit vector in $\mathbb{R}^n$. Then, by definition of convexity of $f$, letting $x = t\hat{n}$ and $y = t'\hat{n}$, it follows that,

$$ g((1−\lambda)t+\lambda t') \leq (1−\lambda)g(t)+ \lambda g(t') $$

-Note that the restriction to 1-dimensional convex function will be used several times in the following discussion.
+Note that the restriction to 1-dimensional convex functions will be used several times in the following discussion.

To determine an upper bound of $f$ on the hypercube, note that any point in $H_n$ can be expressed as a convex combination of its vertices, i.e., for $z \in H_n$, it follows that $z = \sum_i \lambda_i v_i$ where $\sum_i \lambda_i = 1$ and $v_i \in V_n$. Therefore, using the definition of convexity in the first inequality, and that the $\lambda_i$ are non-negative and sum to one in the second inequality,

$$ f(z) = f(\sum_i \lambda_i v_i) \leq \sum \lambda_i f(v_i) \leq \underset{v \in V_n}{\text{max }} f(v) $$

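A minimal sketch of this vertex-based upper bound follows; the convex function on $\mathbb{R}^n$ and the dimension are illustrative assumptions, standing in for a batched forward pass of an ICNN over the $2^n$ vertices.

```matlab
% Upper bound of a convex f over the hypercube [-1, 1]^n from its vertices.
% f is a placeholder convex function applied to the columns of an n-by-m matrix.
f = @(X) sum(X.^2, 1) + max(X, [], 1);   % illustrative convex function
n = 3;
V = 2*(dec2bin(0:2^n - 1) - '0')' - 1;   % n-by-2^n matrix of vertices (+/- 1)
ub = max(f(V));                          % f(z) <= max over vertices for z in H_n
```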
-Consider now the lower bound of $f$ over the hypercube. Here we take the
-approach of looking for cases where there is a guarantee that the
-minimum lies at a vertex of the hypercube and when this guarantee cannot
-be met, falling back to solving the convex optimization over this
-hypercubic domain. For the n-dimensional approach, we will split the
+Consider now the lower bound of $f$ over a hypercubic grid. Here we take the
+approach of looking for hypercubes where there is a guarantee that the
+minimum lies at a vertex of the hypercube and when this guarantee is not met, fall back to solving the convex optimization over that particular
+hypercube. For the n-dimensional approach, we will split the
discussion into differentiable and non-differentiable $f$, and consider
these separately.

**Multi-Dimensional Differentiable Convex Functions**

-Consider the derivatives evaluated at each vertex of the hypercube. For each $\nabla f(v)$, $v \in V_n$, take the directional derivatives,
+Consider the derivatives evaluated at each vertex of a hypercube. For each $\nabla f(v)$, $v \in V_n$, take the directional derivatives,
pointing inward along a hypercubic edge. Without loss of generality,
recall $V_n = \{(±1,±1,…,±1) \in \mathbb{R}^n\}$ and therefore
the hypercube is aligned along the standard basis vectors
@@ -340,10 +339,8 @@ derivative along the line at $w$, pointing inwards, is given by,

$$ \hat{n} \cdot \nabla f(w) = \sum_i -|n_i|\cdot sgn(w_i) \cdot \nabla_i f(w) = \sum_i |n_i| \cdot (-sgn(w_i) \cdot \nabla_i f(w)) \geq 0 $$

-is positive, as $\hat{n} = - |n_i| \cdot sgn(w_i) \cdot e_i $.
-The properties proved previously can then by applied to this 1-dimensional restriction, i.e., if the
-gradient of $f$ as the interval bounds of an interval is positive, then $f$ has
-a minimum value at this interval bounds. Hence, a vertex with inward
+and is positive, as $\hat{n} = - \sum_i |n_i| \cdot sgn(w_i) \cdot e_i $.
+The properties proved previously can then be applied to this 1-dimensional restriction. Hence, a vertex with inward
directional derivative signature $(+,+,…,+)$ is a lower bound for $f$ over the hypercube. ◼

If there are multiple vertices sharing this signature, then since every
@@ -354,9 +351,9 @@ at vertices sharing these signatures so it is sufficient to select any.

If no vertex has signature $(+,+,…,+)$, solve for the minimum using
a convex optimization routine over this hypercube. Since all local minima are
-global minima, there is at least one hypercube requiring this solution.
+global minima, there is at least one hypercube requiring this approach.
If the function has a flat section at its minima, there may be other
-hypercubes in the operational design domain, also without a vertex with all positive signature. Note that empirically,
+hypercubes, also without a vertex with all positive signature. Note that empirically,
this seldom happens for convex neural networks as it requires fine
tuning of the parameters to create such a landscape.

@@ -380,7 +377,7 @@ As depicted in figure 7, the vertices $w$ of the square (hypercube of dimension
bisecting these directional derivatives, into the interior of the square, has a negative gradient. This is
because the vertex is at the intersection of two planes and is a
non-differentiable point, so the derivative through this point is path
-dependent. This is a well-known observation but this breaks the assertion that this vertex if the minimum of $f$ over this
+dependent. This is a well-known property of non-differentiable functions and breaks the assertion that this vertex is the minimum of $f$ over this
square region. From this example, it is clear the minimum lies at the apex at $(0,0)$.

To ameliorate this issue, in the case that the convex function is
@@ -391,13 +388,9 @@ $relu$ operations. In practice, this means that a vertex may be a
non-differentiable point if the network has pre-activations to $relu$
layers that have exact zeros. In practice, this is seldom the case. The
probability of this occurring can be further reduced by offsetting any
-hypercube or hypercubic grid origin by a small random perturbation. It
-is assumed during training, for efficiency of computing bounds during training, that the convex neural network is differentiable everywhere. For final post-training analysis, this implementation checks the $relu$
-pre-activations for any exact zeros for all vertices. If there are
-any zeros in these pre-activations, lower bounds for hypercubes that contain that vertex are recomputed using
-an minimization routine. As a demonstration that these bounds are
-correct, in the examples, we also run the minimization optimization routine on every
-hypercube to show that bounds agree.
+hypercube or hypercubic grid origin by a small random perturbation. If there are
+any zeros in these pre-activations, lower bounds for hypercubes that contain that vertex can be recomputed using
+a convex optimization routine instead.

As a final comment, for general convex hulls, the argument for the upper bound of the function over the hull extends trivially: the bound is the largest function value over the set of points defining the hull. The lower bound should be determined using an optimization routine, constrained to the set of points in the convex hull.

4 changes: 2 additions & 2 deletions documentation/AI-Verification-Monotonicity.md
@@ -27,7 +27,7 @@ To circumvent these challenges, an alternative approach is to construct neural n
- **Constrained Weights**: Ensuring that all weights in the network are non-negative can guarantee monotonicity. You can achieve this by using techniques like weight clipping or transforming weights during training.
- **Architectural Considerations**: Designing network architectures that facilitate monotonic behavior. For example, architectures that avoid certain types of skip connections or layer types that could introduce non-monotonic behavior.

-The approach taken in this repository is to utilize a combination of these three aspects and is based on the construction outlined in [1]. Ref [1] discusses the derivation in the context of row vector representations of network inputs however MATLAB utilizes a column vector representation of network inputs. This means that the 1-norm discussed in [1] is replaced by the $\infty$-norm for implementations in MATLAB.
+The approach taken in this repository is to utilize a combination of activation function, weight, and architectural restrictions and is based on the construction outlined in [1]. Ref [1] discusses the derivation in the context of row vector representations of network inputs; however, MATLAB utilizes a column vector representation of network inputs. This means that the 1-norm discussed in [1] is replaced by the $\infty$-norm for implementations in MATLAB.

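As an illustration of the weight-constraint technique listed above, here is a minimal sketch (not the repository's training code; the matrices and learning rate are hypothetical) of projecting a layer's weights back onto the non-negative orthant after each gradient update.

```matlab
% Keeping a layer's weights non-negative by projection after each update.
% W and gradW are placeholder values for a fully connected layer's weights
% and the gradient of the training loss with respect to them.
W = rand(8, 4);               % hypothetical initial weights
gradW = randn(8, 4);          % hypothetical loss gradient
lr = 1e-2;                    % hypothetical learning rate
W = W - lr * gradW;           % unconstrained gradient step
W = max(W, 0);                % clip back onto the non-negative orthant
```

An alternative with the same effect is to reparameterize the weights through a non-negative transform, for example an elementwise exponential, and train the unconstrained parameters.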
Note that for different choices of p-norm, the derivation in [1] still yields a monotonic function $f$; however, there may be couplings between the magnitudes of the partial derivatives (shown for p=2 in [1]). By default, the implementation in this repository sets $p=\infty$ for monotonic networks, but other values are explored as these may yield better fits.

@@ -50,7 +50,7 @@ The main challenge with expressive monotonic networks is to balance the inherent

For networks constructed to be monotonic, verification becomes more straightforward and comes down to architectural and weight inspection, i.e., provided that the network architecture is of a specified monotonic topology and that the weights in the network are appropriately related (see [1]), the network is monotonic.

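A minimal sketch of such a weight inspection follows, checking that every weight matrix in a hypothetical list is elementwise non-negative; the architectural check, i.e., that the topology matches the construction in [1], is assumed to have been performed separately.

```matlab
% Weight inspection: confirm all weight matrices are elementwise non-negative.
% 'weights' is a placeholder cell array of a network's weight matrices.
weights = {rand(16, 4), rand(8, 16), rand(1, 8)};
allNonNegative = all(cellfun(@(W) all(W(:) >= 0), weights));
```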
-In summary, while verifying monotonicity in general neural networks is complex due to non-linearities and high dimensionality, constructing networks with inherent monotonic properties simplifies verification. By using monotonic activation functions and ensuring non-negative weights, you can design networks that are guaranteed to be monotonic, thus facilitating the verification process and making the network more suitable for applications where monotonic behavior is essential.
+In summary, while verifying monotonicity in general neural networks is complex due to non-linearities and high dimensionality, constructing networks with inherent monotonic properties simplifies verification. By using constrained architectures and weights, you can design networks that are guaranteed to be monotonic, thus facilitating the verification process and making the network more suitable for applications where monotonic behavior is essential.

**References**
