Minuit#

class zfit.minimize.Minuit(tol=None, mode=None, gradient=None, verbosity=None, options=None, maxiter=None, criterion=None, strategy=None, name=None, use_minuit_grad=None, minuit_grad=None, minimize_strategy=None, ncall=None, minimizer_options=None)[source]#

Bases: BaseMinimizer, GraphCachable

Minuit is a longstanding and well proven algorithm of the L-BFGS-B class implemented in iminuit.

Deprecated: SOME ARGUMENTS ARE DEPRECATED: (minimizer_options). They will be removed in a future version. Instructions for updating: Use options instead.

Deprecated: SOME ARGUMENTS ARE DEPRECATED: (ncall). They will be removed in a future version. Instructions for updating: Use maxiter instead.

Deprecated: SOME ARGUMENTS ARE DEPRECATED: (minimize_strategy). They will be removed in a future version. Instructions for updating: Use mode instead.

Deprecated: SOME ARGUMENTS ARE DEPRECATED: (minuit_grad). They will be removed in a future version. Instructions for updating: Use gradient instead.

Deprecated: SOME ARGUMENTS ARE DEPRECATED: (use_minuit_grad). They will be removed in a future version. Instructions for updating: Use gradient instead.

The package iminuit is the fast, interactive minimizer based on the Minuit2 C++ library; the latter is maintained by CERN’s ROOT team. It is an especially robust minimizer that finds the global minimum quite reliably. It is, however, like all local minimizers, still rather dependent on sufficiently close initial values.

Parameters:
  • tol (float | None) – Termination value for the convergence/stopping criterion of the algorithm in order to determine if the minimum has been found. Defaults to 1e-3.

  • mode (int | None) –

    A number used by minuit to define the internal minimization strategy, either 0, 1 or 2. As explained in the iminuit docs, they mean:

    • 0: the fastest; the number of function calls required to minimize scales linearly with the number of fitted parameters. The Hesse matrix is not computed during the minimization (only an approximation that is continuously updated). When the number of fitted parameters is > 10, you should prefer this strategy.

    • 1 (default with Minuit gradient): medium in speed. The number of function calls required scales quadratically with the number of fitted parameters. The different scaling comes from the fact that the Hesse matrix is explicitly computed in a Newton step if Minuit detects significant correlations between parameters.

    • 2: the same quadratic scaling as strategy 1 but even slower. The Hesse matrix is always explicitly computed in each Newton step.

  • gradient (bool | str | None) – If True, iminuit uses its internal numerical gradient calculation instead of the (analytic/numerical) gradient provided by TensorFlow/zfit. If False or 'zfit', the latter is used. For smaller datasets with less stable losses, the internal Minuit gradient often performs better while the zfit provided gradient improves the convergence rate for larger (10’000+) datasets.

  • verbosity (int | None) –

    Verbosity of the minimizer. Has to be between 0 and 10. The verbosity has the meaning:

    • a value of 0 means quiet and no output.

    • above 0 up to 5, information that is good to know but without flooding the user, corresponding to an “INFO” level.

    • a value above 5 starts printing out considerably more and is used more for debugging purposes.

    • setting the verbosity to 10 will print out every evaluation of the loss function and gradient.

    Some minimizers offer additional output, which is also distributed as above but may duplicate certain printed values. For Minuit, a verbosity at around 7 also changes the iminuit internal verbosity.

  • options (Mapping[str, object] | None) – Additional options that will be directly passed into migrad()

  • maxiter (int | None) – Approximate number of iterations. This corresponds to roughly the maximum number of evaluations of the value, gradient or hessian.

  • criterion (ConvergenceCriterion | None) – Criterion of the minimum. This is an estimated measure for the distance to the minimum and can include the relative or absolute changes of the parameters, function value, gradients and more. If the value of the criterion is smaller than loss.errordef * tol, the algorithm stops and it is assumed that the minimum has been found.

  • strategy (ZfitStrategy | None) – A class of type ZfitStrategy that takes no input arguments in the init. Determines the behavior of the minimizer in certain situations, most notably when encountering NaNs. It can also implement a callback function.

  • name (str | None) – Human-readable name of the minimizer.

  • use_minuit_grad (bool | None) – deprecated, legacy.

  • minuit_grad – deprecated, legacy.

  • minimize_strategy – deprecated, legacy.

  • ncall – deprecated, legacy.

  • minimizer_options – deprecated, legacy.
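
For illustration, a typical construction and use could look like the following (a minimal sketch; loss and params are assumed to be an existing zfit loss and its parameters):

import zfit

# all keyword arguments are documented above; the values here are illustrative
minimizer = zfit.minimize.Minuit(tol=1e-3, mode=1, gradient=False, verbosity=5)
result = minimizer.minimize(loss, params)  # loss and params defined elsewhere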

add_cache_deps(cache_deps, allow_non_cachable=True)#

Add dependencies that render the cache invalid if they change.

Parameters:
  • cache_deps (ztyping.CacherOrCachersType) –

  • allow_non_cachable (bool) – If True, allow cache_deps to be non-cachables. If False, any of the cache_deps that is not a ZfitGraphCachable will raise an error.

Raises:

TypeError – if one of the cache_deps is not a ZfitGraphCachable and allow_non_cachable is False.
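
A minimal sketch, assuming obj is some cachable zfit object (i.e. one implementing ZfitGraphCachable):

minimizer.add_cache_deps([obj])  # if obj changes, the cache of the minimizer is invalidated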

create_criterion(loss=None, params=None)#

Create a criterion instance for the given loss and parameters.

Parameters:
  • loss (ZfitLoss | None) – Loss that is used for the criterion. Can be None if called inside _minimize

  • params (Optional[Iterable[TypeVar(ParameterType, bound= Dict[str, zfit.core.interfaces.ZfitParameter])]]) – Parameters that will be associated with the loss in this order. Can be None if called within _minimize.

Return type:

ConvergenceCriterion

Returns:

ConvergenceCriterion to check if the function converged.
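
A sketch of creating and querying the criterion by hand; the converged method and last_value attribute are taken here as assumptions about the ConvergenceCriterion interface, and loss, params and result (a FitResult) are assumed to exist:

criterion = minimizer.create_criterion(loss, params)
# check a given FitResult against the criterion
if criterion.converged(result):
    print("converged, criterion value:", criterion.last_value)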

create_evaluator(loss=None, params=None, numpy_converter=None, strategy=None)#

Make a loss evaluator using the strategy and more from the minimizer.

Convenience factory for the loss evaluator. This wraps the loss to return a numpy array, to catch NaNs, stop on maxiter and evaluate the gradient and hessian without the need to specify the order every time.

Parameters:
  • loss (ZfitLoss | None) – Loss to be wrapped. Can be None if called inside _minimize

  • params (Optional[Iterable[TypeVar(ParameterType, bound= Dict[str, zfit.core.interfaces.ZfitParameter])]]) – Parameters that will be associated with the loss in this order. Can be None if called within _minimize.

  • strategy (ZfitStrategy | None) – Instance of a Strategy that will be used during the evaluation.

Returns:

The evaluator that wraps the Loss and Strategy with the current parameters.

Return type:

LossEval
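
A sketch of the evaluator in use; the method names value and value_gradient are assumptions based on the description above, and loss and params are assumed to exist:

import numpy as np

evaluator = minimizer.create_evaluator(loss, params)
x = np.asarray([1.0, 2.0])  # some parameter values
val = evaluator.value(x)  # loss value as a numpy array; NaNs handled via the strategy
val, grad = evaluator.value_gradient(x)  # value and gradient in one call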

minimize(loss, params=None, init=None)#

Fully minimize the loss with respect to params, optionally using information from init.

The minimizer changes the parameter values in order to minimize the loss function until the convergence criterion value is less than the tolerance. This is a stateless function that can take a FitResult in order to initialize the minimization.

Parameters:
  • loss (ZfitLoss | Callable) – Loss to be minimized until convergence is reached. Usually a ZfitLoss.

    • If this is a simple callable, it has to take an array as argument and have an errordef attribute. The attribute can be set on any arbitrary function:

      def loss(x):
          return - x ** 2

      loss.errordef = 0.5  # as an example
      minimizer.minimize(loss, [2, 5])

      If TensorFlow is not used inside the function, make sure to set zfit.run.set_graph_mode(False) and zfit.run.set_autograd_mode(False).

    • A FitResult can be provided as the only argument to the method, in which case the loss as well as the parameters to be minimized are taken from it. This makes it easy to chain minimization algorithms.

  • params (Optional[Iterable[ZfitParameter]]) –

    The parameters with respect to which to minimize the loss. If None, the parameters will be taken from the loss.

    In order to fix the parameter values to a specific value (and thereby make them independent of their current value), a dictionary mapping a parameter to a value can be given.

    If loss is a callable, params can also be (instead of Parameters):

    • an array of initial values

    • for more control, a dict with the keys:

      • value (required): array-like initial values.

      • name: list of unique names of the parameters.

      • lower: array-like lower limits of the parameters.

      • upper: array-like upper limits of the parameters.

      • step_size: array-like initial step sizes of the parameters (approximately the expected uncertainty).

    This will internally create a single parameter for each value that can be accessed in the FitResult via params. Repeated calls can therefore (in the current implementation) cause a memory increase. The recommended way is to re-use parameters (just taken from the FitResult attribute params). The dict form is shown in the last example below.

  • init (ZfitResult | None) –

    A result of a previous minimization that provides auxiliary information such as the starting point for the parameters, the approximation of the covariance and more. Which information is used can depend on the specific minimizer implementation.

    In general, the assumption is that the loss provided is similar enough to the one provided in init.

    What is assumed to be close:

    • the parameters at the minimum of loss will be close to the parameter values at the minimum of init.

    • Covariance matrix, or in general the shape, of init to the loss at its minimum.

    What is explicitly _not_ assumed to be the same:

    • absolute value of the loss function. If init has a function value at minimum x of fmin, it is not assumed that loss will have the same/similar value at x.

    • the parameters that are used in the minimization may differ in their order or in which ones are fixed.

Return type:

FitResult

Returns:

The fit result containing all information about the minimization.

Examples

Using the ability to restart a minimization with a previous result makes it possible to use a more global search algorithm with a high tolerance, followed by an additional local minimization to polish the found minimum.

result_approx = minimizer_global.minimize(loss, params)
result = minimizer_local.minimize(result_approx)

For a simple usage with a callable only, the parameters can be given as an array of initial values.

import numpy as np

def func(x):
    return np.log(np.sum(x ** 2))

func.errordef = 0.5
params = [1.1, 3.5, 8.35]  # initial values
result = minimizer.minimize(func, params)
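
The dictionary form of params described above works the same way (a sketch; all numeric values and names are illustrative):

import numpy as np

def func(x):
    return np.log(np.sum(x ** 2))

func.errordef = 0.5
params = {
    'value': [1.1, 3.5, 8.35],    # required: initial values
    'name': ['a', 'b', 'c'],      # unique names of the parameters
    'lower': [-10., -10., -10.],  # lower limits
    'upper': [10., 10., 10.],     # upper limits
    'step_size': [0.1, 0.1, 0.1], # approximately the expected uncertainty
}
result = minimizer.minimize(func, params)
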
register_cacher(cacher)#

Register a cacher that caches values produced by this instance; a dependent.

Parameters:

cacher (ztyping.CacherOrCachersType) –

reset_cache_self()#

Clear the cache of self and all dependent cachers.