
Support calculation of only active constraint derivatives #439

@A-CGray


Description of feature

NLPQLP supports computing the derivatives of only the constraints it currently considers active, which it communicates through the active array passed to pyOptSparse's nlgrad function. It would be great if pyOptSparse supported this capability, as it has the potential to speed up problems with many constraints.


To demonstrate, NLPQLP converges just fine with the following:

def nlgrad(m, me, mmax, n, f, g, df, dg, x, active, wa):
    # Callback handed to NLPQLP; `self` and `np` come from the enclosing scope
    gobj, gcon, fail = self._masterFunc(x, ["gobj", "gcon"])
    df[0:n] = gobj.copy()
    # Only copy the jacobian rows for constraints NLPQLP marks as active
    activeConIndices = np.where(active[0:m] != 0)[0]
    dg[activeConIndices, 0:n] = -gcon.copy()[activeConIndices]
    return df, dg

Clearly, though, this implementation doesn't actually save any time, since it still computes the entire constraint jacobian.
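The masked assignment in the snippet above can be illustrated in isolation. A minimal sketch with made-up sizes (the active flags, gcon, and dg arrays here are stand-ins, not NLPQLP's actual data):

```python
import numpy as np

m, n = 4, 3                       # number of constraints / design variables
gcon = np.arange(m * n, dtype=float).reshape(m, n)  # stand-in jacobian
dg = np.zeros((m, n))             # buffer NLPQLP would hand us
active = np.array([1, 0, 1, 0])   # stand-in active flags from NLPQLP

# Only copy (negated) jacobian rows for the active constraints
activeConIndices = np.where(active[0:m] != 0)[0]
dg[activeConIndices, 0:n] = -gcon[activeConIndices]

# Rows 0 and 2 are filled; rows 1 and 3 remain zero
```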

Potential solution

The implementation would be quite involved, requiring at least the following:

  • Mapping the active constraints as seen by NLPQLP back to the form defined by the user (e.g. undoing the conversion from two-sided to one-sided constraints)
  • Determining a way to pass the active constraint information to the user's sens function (e.g. sens(self, xdict, funcs, activeCon)), while still allowing the current sens signature (sens(self, xdict, funcs))
  • Determining a standard for how the user is expected to pass back the active constraint values (I can see this being complicated for large sparse constraints if we want to support computing derivatives for only the active entries of those constraints)
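For the second point, one backward-compatible option would be to inspect the user's sens signature and forward the active-constraint information only when the function accepts it. A minimal sketch, where call_sens and the activeCon argument name are hypothetical, not existing pyOptSparse API:

```python
import inspect

def call_sens(sens, xdict, funcs, activeCon):
    """Call a user-supplied sens function, forwarding the active-constraint
    info only if its signature accepts an `activeCon` argument.
    (Hypothetical helper, not part of the current pyOptSparse API.)"""
    if "activeCon" in inspect.signature(sens).parameters:
        return sens(xdict, funcs, activeCon)
    return sens(xdict, funcs)

# A legacy sens that knows nothing about active constraints
def sens_old(xdict, funcs):
    return {"con": "all rows"}

# A new-style sens that can skip inactive constraints
def sens_new(xdict, funcs, activeCon):
    return {"con": f"rows {activeCon}"}
```

With this, existing user scripts keep working unchanged, while new scripts can opt in to the active-set information simply by adding the extra argument.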
