zfit supplies wrappers for minimizers from multiple libraries. Most of them are local minimizers (such as Minuit, IpyoptV1 or ScipyLBFGSBV1), while there are also a few global ones such as NLoptISRESV1 or NLoptStoGOV1.
While the former are usually faster and preferred, they depend more strongly on the initial values than the latter. Especially in higher dimensions, a global search of the parameter space can increase the minimization time drastically and is often infeasible. It is also possible to couple the minimizers by first performing an approximate global minimization and then polishing the found minimum with a local minimizer, as sketched below.
All minimizers support similar arguments, most notably tol, which denotes the termination value: the minimization terminates once the value of the convergence criterion, which defaults to the EDM (the same criterion that is also used in Minuit), falls below tol.
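For illustration, the same tol can be handed to any of the wrappers; a minimal sketch (which wrappers are available depends on the installed optional dependencies)::

    import zfit

    # The same convergence tolerance works for any of the minimizer wrappers.
    minuit = zfit.minimize.Minuit(tol=1e-4)
    lbfgsb = zfit.minimize.ScipyLBFGSBV1(tol=1e-4)
    isres = zfit.minimize.NLoptISRESV1(tol=1e-4)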
Apart from that, there are also a few minimizer-specific arguments that differ from minimizer to minimizer.
They all have the exact same minimization method, minimize(), which takes a loss, parameters and (optionally) a FitResult, from which it can take information for a better start into the minimization.
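The following sketch illustrates both the common interface and the global-then-local coupling mentioned above. The toy model is purely illustrative, and the keyword used to pass the previous FitResult (here init) is an assumption about the exact argument name::

    import numpy as np
    import zfit

    # Toy setup: an unbinned NLL of a Gaussian fitted to normal data.
    obs = zfit.Space("x", limits=(-5, 5))
    mu = zfit.Parameter("mu", 1.0, -4, 4)
    sigma = zfit.Parameter("sigma", 1.2, 0.1, 10)
    gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)
    data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.5, 1.0, size=1000))
    nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)

    # Every wrapper exposes the same minimize method.
    result = zfit.minimize.Minuit().minimize(nll)
    print(result.params)

    # Coupling: approximate global search first, then polish locally,
    # starting from the information in the previous FitResult (the init
    # keyword is an assumption about the exact argument name).
    rough = zfit.minimize.NLoptISRESV1().minimize(nll)
    polished = zfit.minimize.Minuit().minimize(nll, init=rough)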
Minuit

Minuit is a longstanding and well-proven minimizer of the quasi-Newton (variable-metric) class, implemented in `iminuit`_.
Ipyopt

Ipopt is a gradient-based minimizer that performs large-scale nonlinear optimization of continuous systems.
Scipy

- Local, gradient-based quasi-Newton algorithm using the limited-memory BFGS (L-BFGS-B) approximation.
- Trust-region based local minimizer.
- Local minimizer using the modified Powell algorithm.
- Local, gradient-based minimizer using the Sequential Least Squares Programming (SLSQP) algorithm.
- Local, gradient-based minimization algorithm using a truncated Newton method.
NLopt

- Local, gradient-based quasi-Newton minimizer using the low-storage BFGS Hessian approximation.
- Local, gradient-based truncated Newton minimizer using an inexact algorithm.
- Local, gradient-based minimizer using sequential quadratic programming.
- Method of moving asymptotes (MMA) for gradient-based local minimization.
- MMA-like minimizer with simpler, quadratic local approximations.
- Local, derivative-free minimizer that improves on the Nelder-Mead algorithm.
- Derivative-free simplex minimizer using a linear approximation with trust-region steps.
- Global minimizer using local optimization by randomly selecting points.
- Global minimizer which divides the space into smaller rectangles and uses a local BFGS variant inside.
- Derivative-free local minimizer that iteratively constructs a quadratic approximation of the loss.
- Improved Stochastic Ranking Evolution Strategy (ISRES) using a mutation rule and differential variation.
- Global minimizer using an evolutionary algorithm.
- Local, gradient-based minimizer using a shifted limited-memory variable-metric method.
Tensorflow