NLoptMMAV1#
- class zfit.minimize.NLoptMMAV1(tol=None, verbosity=None, maxiter=None, strategy=None, criterion=None, name='NLopt MMA')[source]#
Bases: `NLoptBaseMinimizerV1`
Method-of-moving-asymptotes for gradient-based local minimization.
Globally-convergent method-of-moving-asymptotes (MMA) for gradient-based local minimization. The algorithm is described in:
Krister Svanberg, “A class of globally convergent optimization methods based on conservative convex separable approximations,” SIAM J. Optim. 12 (2), p. 555-573 (2002).
This is an improved CCSA (“conservative convex separable approximation”) variant of the original MMA algorithm published by Svanberg in 1987, which has become popular for topology optimization. (Note: “globally convergent” does not mean that this algorithm converges to the global optimum; it means that it is guaranteed to converge to some local minimum from any feasible starting point.)
At each point x, MMA forms a local approximation using the gradient of f and the constraint functions, plus a quadratic “penalty” term to make the approximations “conservative” (upper bounds for the exact functions). The precise approximation MMA forms is difficult to describe in a few words, because it includes nonlinear terms consisting of poles at some distance from x (outside of the current trust region), almost a kind of Padé approximant. The main point is that the approximation is both convex and separable, making it trivial to solve the approximate optimization by a dual method. Optimizing the approximation leads to a new candidate point x. The objective and constraints are evaluated at the candidate point. If the approximations were indeed conservative (upper bounds for the actual functions at the candidate point), then the process is restarted at the new x. Otherwise, the approximations are made more conservative (by increasing the penalty term) and re-optimized.
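The accept-or-penalize logic can be illustrated with a deliberately simplified sketch. This is not the NLopt implementation: it is one-dimensional, unconstrained, and uses a plain quadratic penalty in place of MMA's pole-based asymptote approximations, but the conservative outer loop works the same way.

```python
def conservative_step(f, grad, x, rho=1.0, rho_growth=2.0):
    """One outer iteration: propose a candidate from a convex local model;
    if the model was not a true upper bound there, increase the penalty."""
    while True:
        g = grad(x)
        candidate = x - g / rho  # minimizer of the quadratic model
        model = f(x) + g * (candidate - x) + 0.5 * rho * (candidate - x) ** 2
        if f(candidate) <= model + 1e-12:  # model was indeed conservative
            return candidate
        rho *= rho_growth  # make the model more conservative and retry


def f(x):
    return x ** 4


def grad(x):
    return 4 * x ** 3


x = 2.0
for _ in range(60):
    x = conservative_step(f, grad, x)
print(x)  # approaches the local minimum at 0
```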
- Parameters
tol (float | None) – Termination value for the convergence/stopping criterion of the algorithm in order to determine if the minimum has been found. Defaults to 1e-3.
verbosity (int | None) –
Verbosity of the minimizer. Has to be between 0 and 10. The verbosity has the meaning:
- a value of 0 means quiet and no output
- above 0 up to 5, information that is good to know but without flooding the user, corresponding to an “INFO” level
- a value above 5 starts printing out considerably more and is used more for debugging purposes
- setting the verbosity to 10 will print out every evaluation of the loss function and gradient
Some minimizers offer additional output which is also distributed as above but may duplicate certain printed values.
maxiter (int | str | None) – Approximate number of iterations. This corresponds to roughly the maximum number of evaluations of the `value`, `gradient` or `hessian`.
strategy (ZfitStrategy | None) – A class of type `ZfitStrategy` that takes no input arguments in the init. Determines the behavior of the minimizer in certain situations, most notably when encountering NaNs. It can also implement a callback function.
criterion (ConvergenceCriterion | None) – Criterion of the minimum. This is an estimated measure for the distance to the minimum and can include the relative or absolute changes of the parameters, function value, gradients and more. If the value of the criterion is smaller than `loss.errordef * tol`, the algorithm stops and it is assumed that the minimum has been found.
name (str) – Human-readable name of the minimizer.
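A minimal usage sketch, assuming the simple-callable form of the loss described under `minimize()` below; the quadratic test function, starting values and settings are illustrative only:

```python
import numpy as np
import zfit

# The callable does not use TensorFlow, so switch off graph and autograd
# mode (see the note under minimize() below).
zfit.run.set_graph_mode(False)
zfit.run.set_autograd_mode(False)


def loss(x):
    # simple quadratic bowl with its minimum at (1, -2)
    return np.sum((np.asarray(x) - np.array([1.0, -2.0])) ** 2)


loss.errordef = 0.5

minimizer = zfit.minimize.NLoptMMAV1(tol=1e-4, verbosity=5)
result = minimizer.minimize(loss, params=[0.0, 0.0])
print(result.params)
```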
- create_criterion(loss=None, params=None)#
Create a criterion instance for the given loss and parameters.
- Parameters
loss (ZfitLoss | None) – Loss that is used for the criterion. Can be None if called inside `_minimize`.
params (ztyping.ParametersType | None) – Parameters that will be associated with the loss in this order. Can be None if called within `_minimize`.
- Return type
ConvergenceCriterion
- Returns
ConvergenceCriterion to check if the function converged.
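A hedged sketch of building and querying the criterion by hand after a fit; the `converged` method on the returned criterion is an assumption about the `ConvergenceCriterion` interface, not something documented here:

```python
import numpy as np
import zfit


def loss(x):
    return np.sum(np.asarray(x) ** 2)


loss.errordef = 0.5

minimizer = zfit.minimize.NLoptMMAV1()
result = minimizer.minimize(loss, [1.0, 2.0])

# Build the criterion for the fitted loss/parameters and query it.
criterion = minimizer.create_criterion(loss=result.loss, params=list(result.params))
print(criterion.converged(result))  # assumed interface: True once below loss.errordef * tol
```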
- create_evaluator(loss=None, params=None, numpy_converter=None, strategy=None)#
Make a loss evaluator using the strategy and more from the minimizer.
Convenience factory for the loss evaluator. This wraps the loss to return a numpy array, to catch NaNs, stop on maxiter and evaluate the gradient and hessian without the need to specify the order every time.
- Parameters
loss (ZfitLoss | None) – Loss to be wrapped. Can be None if called inside `_minimize`.
params (ztyping.ParametersType | None) – Parameters that will be associated with the loss in this order. Can be None if called within `_minimize`.
strategy (ZfitStrategy | None) – Instance of a Strategy that will be used during the evaluation.
- Returns
The evaluator that wraps the Loss and Strategy with the current parameters.
- Return type
LossEval
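A hypothetical sketch of the evaluator factory; the `value` and `value_gradient` method names on the returned `LossEval` are assumptions, as is passing a plain callable with initial values:

```python
import numpy as np
import zfit


def loss(x):
    return np.sum(np.asarray(x) ** 2)


loss.errordef = 0.5

minimizer = zfit.minimize.NLoptMMAV1()
# Per the docstring, the evaluator returns numpy arrays, catches NaNs
# and enforces maxiter.
evaluator = minimizer.create_evaluator(loss=loss, params=[1.0, 2.0])
val = evaluator.value(np.array([1.0, 2.0]))                 # assumed method name
val, grad = evaluator.value_gradient(np.array([1.0, 2.0]))  # assumed method name
```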
- minimize(loss, params=None, init=None)#
Fully minimize the `loss` with respect to `params`, optionally using information from `init`.
The minimizer changes the parameter values in order to minimize the loss function until the convergence criterion value is less than the tolerance. This is a stateless function that can take a `FitResult` in order to initialize the minimization.
- Parameters
loss (ZfitLoss | Callable) – Loss to be minimized until convergence is reached. Usually a `ZfitLoss`. Alternatively, a simple callable that takes an array as argument and has an `errordef` attribute can be used; it can be any arbitrary function:

```python
def loss(x):
    return -x ** 2

loss.errordef = 0.5  # as an example
minimizer.minimize(loss, [2, 5])
```

If TensorFlow is not used inside the function, make sure to set `zfit.run.set_graph_mode(False)` and `zfit.run.set_autograd_mode(False)`.
A `FitResult` can also be provided as the only argument to the method, in which case the loss as well as the parameters to be minimized are taken from it. This makes it easy to chain minimization algorithms.
params (ztyping.ParamsTypeOpt | None) –
The parameters with respect to which to minimize the `loss`. If `None`, the parameters will be taken from the `loss`.
In order to fix the parameter values to a specific value (and thereby make them independent of their current value), a dictionary mapping a parameter to a value can be given.
If `loss` is a callable, `params` can also be (instead of `Parameters`):
- an array of initial values
- for more control, a `dict` with the keys:
  - `value` (required): array-like initial values
  - `name`: list of unique names of the parameters
  - `lower`: array-like lower limits of the parameters
  - `upper`: array-like upper limits of the parameters
  - `step_size`: array-like initial step size of the parameters (approximately the expected uncertainty)
This will internally create a single parameter for each value that can be accessed in the `FitResult` via `params`. Repeated calls can therefore (in the current implementation) cause a memory increase. The recommended way is to re-use parameters (taken from the `FitResult` attribute `params`).
init (ZfitResult | None) –
A result of a previous minimization that provides auxiliary information such as the starting point for the parameters, the approximation of the covariance and more. Which information is used can depend on the specific minimizer implementation.
In general, the assumption is that the loss provided is similar enough to the one provided in `init`.
What is assumed to be close:
- the parameters at the minimum of `loss` will be close to the parameter values at the minimum of `init`
- the covariance matrix, or in general the shape, of `init` and of the `loss` at its minimum
What is explicitly _not_ assumed to be the same:
- the absolute value of the loss function: if `init` has a function value `fmin` at its minimum x, it is not assumed that `loss` will have the same or a similar value at x
- the parameters used in the minimization; they may differ in order or in which ones are fixed
- Return type
FitResult
- Returns
The fit result containing all information about the minimization.
Examples
Using the ability to restart a minimization with a previous result allows using a more global search algorithm with a high tolerance, followed by an additional local minimization to polish the found minimum.

```python
result_approx = minimizer_global.minimize(loss, params)
result = minimizer_local.minimize(result_approx)
```
For a simple usage with a callable only, the parameters can be given as an array of initial values.
```python
def func(x):
    return np.log(np.sum(x ** 2))

func.errordef = 0.5
params = [1.1, 3.5, 8.35]  # initial values
result = minimizer.minimize(func, params)
```
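For more control with a callable-only fit, the dict form of `params` described above can be used; a sketch in which the names, limits and step sizes are illustrative only:

```python
import numpy as np
import zfit


def func(x):
    return np.log(np.sum(x ** 2))


func.errordef = 0.5

params = {
    "value": [1.1, 3.5, 8.35],       # required: initial values
    "name": ["a", "b", "c"],         # unique parameter names
    "lower": [-10.0, -10.0, -10.0],  # lower limits
    "upper": [10.0, 10.0, 10.0],     # upper limits
    "step_size": [0.1, 0.1, 0.1],    # approximately the expected uncertainty
}

minimizer = zfit.minimize.NLoptMMAV1()
result = minimizer.minimize(func, params)
```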