I recently published a blog post on fitting cubic beziers. For some applications, such as merging two beziers into one, it is sufficient. For other applications, the end goal is representation of the source curve by a sequence of bezier segments.
A standard approach to this problem is to attempt to fit the curve with a single bezier, then measure the error. If that is below the threshold, we're done. If it's above, then split in half (where "half" can be defined by the original parameterization, or by arc length, or something else; usually the exact detail doesn't matter much) and apply the algorithm recursively. In general this produces somewhere around 1.5 as many segments as necessary. To pick a simple example, assume that the source curve can be rendered using 5 bezier segments. The recursive subdivision approach will generally create 8. That said, the recursive subdivision approach is simple and fast, so it will likely be useful in some contexts.
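The recursive baseline can be sketched as follows; `fit_bezier` and `measure_error` are hypothetical stand-ins for the fitting and error-measurement routines from the blog post, and the split is by parameter midpoint for simplicity.

```python
# Sketch of the recursive subdivision baseline (not the optimal method).
# `fit_bezier(curve, t0, t1)` returns a candidate bezier for the range;
# `measure_error(curve, candidate, t0, t1)` returns its fit error.
def fit_subdivide(curve, t0, t1, threshold, fit_bezier, measure_error):
    """Recursively fit curve on [t0, t1], splitting in half on failure."""
    candidate = fit_bezier(curve, t0, t1)
    if measure_error(curve, candidate, t0, t1) <= threshold:
        return [candidate]
    tm = 0.5 * (t0 + t1)  # split by parameter; arc length also works
    return (fit_subdivide(curve, t0, tm, threshold, fit_bezier, measure_error)
            + fit_subdivide(curve, tm, t1, threshold, fit_bezier, measure_error))
```

Because each failure doubles the segment count on that span, the result tends to overshoot the minimal count, which is the ~1.5x factor mentioned above.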
This issue discusses how to produce a globally optimum bezier representation, subject to the error threshold.
Definition of the problem
We'll pose a slightly tricky statement of the problem; it is carefully engineered to make the problem more tractable.
Generate a sequence of beziers that approximates the source curve. The number of segments should be minimal, ie there is no sequence with fewer segments for which the L2 error of each segment is below the error threshold. Within that constraint, minimize the maximum L2 error across the segments. Each segment is required to be G1-consistent with the source curve and also match its area exactly.
Note that this is very similar to minimizing Fréchet distance, but subtly different. If there is an application for which Fréchet minimization is absolutely required, I think it's likely we could use this solution as a starting point, then fine-tune, for example using a gradient descent mechanism. I find it hard to imagine a use case where that would actually be important.
A few assumptions. The L2 error is assumed to be monotonic when one endpoint is fixed and the other is varied. (Intuition is that if this were not true, then a de Casteljau subdivision of the lower-error fit for the longer segment would be less error. However, making that argument mathematically rigorous is tricky, as the subdivided bezier may not be G1-consistent with the source curve. Thus, numerical techniques should be chosen to be robust against failure of strict monotonicity.)
Thus, a minimax optimum is when all beziers have the same L2 error wrt the source curve. Any perturbation would reduce one error but increase another, thus increasing the maximum.
Outline of the solution
The solution should be both fast and accurate. Thus, it breaks down into three major components. First, an intermediate representation of a (close) approximation of the source curve, designed for efficient query operations (there will be a lot of queries to that curve). Second, a doubly nested solver to determine the optimum value of the subdivision points. And third, a method to determine the optimum bezier (and calculate its error) for each segment; that's already explained in the blog post.
Query structure
The intermediate curve representation supports two queries. One resolves a position, parameterized by arc length, to a point and tangent. The second resolves an interval, parameterized by arc length of each endpoint, to signed area and raw first moments. A central part of this design is a clever approach to serve both queries in effectively O(1) time, regardless of the complexity of the source curve, at the cost of some preprocessing. I'll also consider different points in the tradeoff space that make queries more expensive but reduce preprocessing cost.
The simpler approach first. Essentially, the curve is an array of segments, each covering an equal share of arc length, ie total arc length / n. Each segment is a G1 cubic Hermite interpolation of the arc length parameterization of the source curve. Another way of saying that is that the first derivative at the endpoints has the same tangent direction as the source curve, and its magnitude is the length of the segment, ie arc length / n. I expect error to scale as O(n^4) but with a constant factor somewhat better than, eg, piecewise quadratic bezier or piecewise Euler spiral. Obviously this is something to check empirically.
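A minimal sketch of constructing one such segment as a cubic bezier, assuming the endpoints and unit tangents have already been sampled from the source curve (the function name and signature are illustrative, not an existing API):

```python
import numpy as np

# One G1 Hermite segment: endpoints plus unit tangents from the source
# curve, with the derivative magnitude set to the segment arc length L.
def hermite_segment(p0, tan0, p1, tan1, L):
    """Cubic bezier control points (4x2) for a Hermite segment of arc length L."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    # A cubic's endpoint derivative is 3 * (c1 - c0), so the inner control
    # points sit at a third of the desired derivative from each endpoint.
    c1 = p0 + (L / 3.0) * np.asarray(tan0, float)
    c2 = p1 - (L / 3.0) * np.asarray(tan1, float)
    return np.array([p0, c1, c2, p1])
```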
Then, querying point and tangent from arc length is simple: multiply the query arc length by n. The integer part is an index into the array of segments, and the fractional part is the t parameter for the resulting cubic bezier.
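In code, that lookup might be sketched like this, assuming each segment is stored as a 4x2 array of cubic bezier control points (names are illustrative):

```python
import numpy as np

def eval_cubic(pts, t):
    """Evaluate a cubic bezier (4x2 array of control points) at parameter t."""
    mt = 1.0 - t
    return (mt**3 * pts[0] + 3 * mt**2 * t * pts[1]
            + 3 * mt * t**2 * pts[2] + t**3 * pts[3])

def query_point(segments, total_len, s):
    """Map arc length s to (segment index, local t), then evaluate."""
    n = len(segments)
    u = s / total_len * n             # normalize, then scale by segment count
    i = min(int(u), n - 1)            # clamp so s == total_len hits the last segment
    return eval_cubic(segments[i], u - i)
```

The tangent query is the same index/fraction computation applied to the bezier derivative instead of the point.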
A clever aspect to this design is the range query: in addition to the bezier, also store the prefix sum of the area and first moments. Then, to resolve a range query, compute the area and moments of the fractional pieces at each end, and add the difference of those values sampled at the integer values. This allows resolution of all queries in O(1) time.
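A sketch of that prefix-sum idea, reducing a range query to the difference of two O(1) cumulative lookups; `piece` and `prefix` are hypothetical names for the per-segment partial integrals and their running totals:

```python
# `piece(i, t)` returns the quantity (signed area or a moment) accumulated
# over segment i from its start to local parameter t; `prefix[i]` is the
# total of that quantity over segments 0..i-1.
def cumulative(prefix, piece, n, total_len, s):
    """Quantity accumulated from the curve start to arc length s."""
    u = s / total_len * n
    i = min(int(u), n - 1)
    return prefix[i] + piece(i, u - i)

def range_query(prefix, piece, n, total_len, s0, s1):
    """Quantity over [s0, s1]: difference of two O(1) cumulative lookups."""
    return (cumulative(prefix, piece, n, total_len, s1)
            - cumulative(prefix, piece, n, total_len, s0))
```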
A concern is the number of subdivisions required if the source curve has a cusp or region of huge curvature variation - all segments are the same length, therefore the segment length must be chosen to satisfy the error threshold for the worst case. Below are two alternatives to address this concern, though the simpler approach is probably good enough for many applications, and the worst thing that can happen is excessive time spent on preprocessing.
Nested solver for segment endpoints
The determination of segment endpoints is a doubly nested loop of bisection-style solvers. We'll consider the inner loop first as it's fairly straightforward. Given a starting position and an error target, determine the end position. Given the assumption that error is monotonic, that can readily be solved by bisection (or similar but better methods, see below).
The outer loop has an initial phase of determining n and a lower bound on the target error. That walks the curve from the beginning to the end, using the inner loop with the target error. All segments but the last will have that error, and the last will have some smaller error. This is then a solution with the minimal number of segments, and it meets the error threshold, but the fact that the last segment is short is unsatisfying. The error of the last segment is a lower bound.
Next is a bisection to find the actual error. Each iteration consists of a curve walk as above with n - 1 segments, then the error is computed for the last segment. The "value" for the bisection algorithm is the error of the last segment minus the error of all previous segments.
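The outer loop might be sketched like this, with `advance` standing in for the inner solver (given a start arc length and an error target, return the segment's end arc length) and `seg_error` for the fit-error measurement. Plain bisection is shown for simplicity; the names are assumptions, not an existing API.

```python
def walk(advance, total_len, err, max_segs=10000):
    """Walk the curve, greedily emitting segments at error target err."""
    breaks = [0.0]
    while breaks[-1] < total_len and len(breaks) <= max_segs:
        breaks.append(advance(breaks[-1], err))
    return breaks

def solve_breakpoints(advance, seg_error, total_len, threshold, iters=60):
    """Find breakpoints so all segments share (approximately) equal error."""
    breaks = walk(advance, total_len, threshold)
    n = len(breaks) - 1                      # minimal segment count
    lo = seg_error(breaks[-2], total_len)    # lower bound: the short last segment
    hi = threshold
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        b = [0.0]
        for _ in range(n - 1):               # walk n - 1 segments at error mid
            b.append(advance(b[-1], mid))
        if seg_error(b[-1], total_len) > mid:
            lo = mid    # last segment's error still too big: raise the shared error
        else:
            hi = mid
    return b + [total_len]
```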
I said bisection here, but the ITP method will likely yield significantly faster results. Another trick is to consistently use the sixth root of error in place of raw error, as we expect O(n^6) scaling of the bezier fit, so this should make the resulting curves closer to linear.
Error calculation
I already do something like this in the notebook (not yet published) for the curve fitting post, but will describe it here. Compute the L2 error of a bezier wrt the source curve as follows. The goal is to compute the integral of (error vector)^2 over a normalized arc length parameterization of both curves. To do that the straightforward way requires solving an inverse arc length parameterization of the bezier, which is potentially slow.
Instead, compute Legendre-Gauss quadrature, using the t parameter of the bezier as the integration parameter. For each t compute the corresponding s by doing a forward arc length evaluation on the bezier. Then find the point on the bezier (query by t) and the point on the source curve (query by s) and just evaluate Δx^2 + Δy^2. Then multiply by ds/dt, which is the norm of the derivative of the bezier. Do the dot product of those values with the w parameters from LGQ, et voilà, a good, reasonably cheap numerical integration.
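A sketch of that quadrature, under the assumption that the forward arc length map on the bezier and the source-curve query are supplied as callables (their names are placeholders):

```python
import numpy as np

def cubic_and_deriv(pts, t):
    """Point and derivative of a cubic bezier (4x2 array) at parameter t."""
    mt = 1.0 - t
    p = (mt**3 * pts[0] + 3 * mt**2 * t * pts[1]
         + 3 * mt * t**2 * pts[2] + t**3 * pts[3])
    d = (3 * mt**2 * (pts[1] - pts[0]) + 6 * mt * t * (pts[2] - pts[1])
         + 3 * t**2 * (pts[3] - pts[2]))
    return p, d

def l2_error(pts, source_point, bez_arclen, order=16):
    """Integral of |bezier(t) - source(s(t))|^2 ds by Legendre-Gauss quadrature.
    source_point(s) queries the intermediate curve by arc length;
    bez_arclen(t) is forward arc length on the bezier (both assumed given)."""
    x, w = np.polynomial.legendre.leggauss(order)
    t = 0.5 * (x + 1.0)                # map nodes from [-1, 1] to [0, 1]
    total = 0.0
    for ti, wi in zip(t, 0.5 * w):
        p, d = cubic_and_deriv(pts, ti)
        q = source_point(bez_arclen(ti))
        # squared error times ds/dt (the speed of the bezier)
        total += wi * np.sum((p - q) ** 2) * np.hypot(d[0], d[1])
    return total
```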
This is the inner loop of a bunch of nested solver loops, so it's important to be fast. As a result, query of the source curve needs to be fast, as does arc length calculation on the bezier. Fortunately, we have good approaches for both.
Alternative query structures
The fact that all segments are equal length is not great when there's a cusp - the segment near the cusp needs to be short, therefore all segments need to be short.
One approach is a variable size segment, and binary search of cumulative arc length to find the right segment. Each segment could be the same as above. A somewhat more exotic approach is to store a reference to the source curve and parameters for a function to solve the inverse arc length problem (ie map s within the segment to t on the source curve). One approach is to model the arc length as piecewise quadratic, in particular the integral of a piecewise linear representation of the magnitude of the first derivative of the source curve, and use the quadratic formula to solve for t on each query. This will generally model cusps well, so shouldn't require very many segments. Also, because it references the source curve, the query of position and tangent will always produce a value on the curve (though it's not clear this is an important distinction, as errors in measuring arc length probably have a very similar effect).
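A sketch of that per-segment inverse, assuming the magnitude of the first derivative is modeled as linear in t across the segment (so arc length is quadratic in t), using the numerically stable form of the quadratic formula:

```python
import math

# Invert s(t) = v0*t + (v1 - v0)*t^2/2 for t in [0, 1], where v0 and v1 are
# the modeled speeds |dC/dt| at the segment's start and end (assumed names).
def inv_arclen(v0, v1, s):
    """Local t for arc length s within one segment of the model."""
    a = 0.5 * (v1 - v0)
    if abs(a) < 1e-12:                 # nearly constant speed: linear fallback
        return s / v0
    # Rationalized quadratic formula avoids cancellation when v0 is small,
    # which is exactly the near-cusp case this model is meant to handle.
    disc = math.sqrt(v0 * v0 + 4.0 * a * s)
    return 2.0 * s / (v0 + disc)
```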
Another approach is one reminiscent of a radix tree. At each level in the tree, each node represents a constant amount of arc length. A node can either be a cubic segment or an array similar to what was described above. Thus, query time is proportional to tree depth, but the constant factors should be good.
My main concern is complexity, so it's not clear which approach is best. A nontrivial part of that complexity is determining the error of the C1 cubic Hermite approximation relative to the source curve, and thus the appropriate value for n. The radix tree is more forgiving in that regard; one can be fairly sloppy in the choice of the radix, and use a simple "does it meet the error threshold" test to decide whether to subdivide.
Last thoughts
This all assumes the curve is smooth. If it contains corners, those should be determined before starting. The same goes for cusps, though in theory the bezier curve fitting should be able to match a curve containing a cusp in the middle.
For font applications, it's generally a good idea to subdivide at horizontal and vertical extrema, as well. Thus, each curve segment goes through 90 degrees of angular deviation at most; two or at most three beziers should be sufficient to represent that for most fonts.
The doubly nested loop solution is fairly similar to what I described in section 9.6.3 of my thesis. I do think the use of the ITP method is an improvement, as I'd be worried about robustness of the secant method as recommended in the thesis; one of the great advantages of ITP is that its worst-case behavior is no worse than bisection.