BinnedChi2#

class zfit.loss.BinnedChi2(model, data, constraints=None, options=None)[source]#

Bases: BaseBinned

Binned \(\chi^2\) loss, using the \(N_{tot}\) from the data.

\[\chi^2 = \sum_{\mathrm{bins}} \left( \frac{N_\mathrm{PDF,bin} - N_\mathrm{Data,bin}}{\sigma_\mathrm{Data,bin}} \right)^2\]

where

\[N_\mathrm{PDF,bin} = \mathrm{pdf}(\text{integral}) \cdot N_\mathrm{Data,tot}\]

\[\sigma_\mathrm{bin} = \text{variance}\]

where the variance is taken from the variances of the binned data.

If the dataset has empty bins, the errors will be zero and \(\chi^2\) is undefined. Two possibilities are available and can be given as an option:

  • “empty”: “ignore” will ignore all bins with zero entries; they won’t contribute to the loss

  • “errors”: “expected” will use the expected counts from the model with a Poissonian uncertainty
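The formula above and the two empty-bin options can be sketched in plain NumPy (an illustrative toy, not zfit internals; the bin contents are made up):

```python
import numpy as np

# Toy binned data: observed counts per bin (one bin is empty).
data_counts = np.array([12.0, 30.0, 0.0, 25.0])
variances = data_counts.copy()          # Poisson-like: variance ~ counts
n_tot = data_counts.sum()

# Hypothetical model prediction: relative counts (normalized probabilities).
rel_counts = np.array([0.2, 0.4, 0.05, 0.35])
model_counts = rel_counts * n_tot       # N_PDF,bin = pdf(integral) * N_Data,tot

# Option "empty": "ignore" -- bins with zero entries are dropped from the sum.
mask = data_counts > 0
chi2_ignore = np.sum((model_counts[mask] - data_counts[mask]) ** 2
                     / variances[mask])

# Option "errors": "expected" -- use the model counts as Poissonian variance,
# which stays finite even for empty data bins.
chi2_expected = np.sum((model_counts - data_counts) ** 2 / model_counts)
```

Note how the "expected" variant keeps the empty bin in the sum, so it still penalizes a model that predicts entries where none were observed.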

    Args:

    model: Binned PDF(s) that return the normalized probability (rel_counts or counts) for data under the given parameters. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

    data: Binned dataset that will be given to the model. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

    constraints: Auxiliary measurements (“constraints”) that add a likelihood term to the loss.

    \[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]

    Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.

    Constraints can also be used to restrict the loss by adding any kind of penalty.

    options: Additional options (as a dict) for the loss.

    Current possibilities include:

    • ‘subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation before the summation. This brings the initial loss value closer to 0 and, especially for large datasets, increases the numerical stability.

      The value will be stored with ‘subtr_const_value’ and can also be given directly.

      The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare the absolute values of different likelihoods as the constant may differ! Use create_new in order to have a comparable likelihood between different losses, or use the full argument in the value function to calculate the full loss with all constants.

    These settings may be extended over time. To make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
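The effect of ‘subtr_const’ can be illustrated with a toy Gaussian NLL in plain NumPy (not zfit internals): a constant fixed at the first evaluation is subtracted from every later one, which shifts the loss towards 0 but leaves all differences between points, and hence the minimum, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=100_000)

def nll(mu):
    # Unit-width Gaussian negative log likelihood (full, with all constants).
    return np.sum(0.5 * (x - mu) ** 2 + 0.5 * np.log(2.0 * np.pi))

# 'subtr_const': fix a constant at the very first evaluation ...
subtr_const_value = nll(1.05)

# ... and subtract it from every subsequent evaluation.
def nll_subtracted(mu):
    return nll(mu) - subtr_const_value

# Differences between points (what the minimizer cares about) are preserved,
# even though the absolute values are now very different.
d_full = nll(0.9) - nll(1.1)
d_sub = nll_subtracted(0.9) - nll_subtracted(1.1)
```

This also shows why absolute values of two such losses are not comparable: each may carry a different subtracted constant.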

__call__(_x=None)#

Calculate the loss value with the given input for the free parameters.

Parameters:
  • *positional* – Array-like argument to set the parameters. The order of the values corresponds to the position of the parameters in get_params() (called without any arguments). For more detailed control, it is always possible to wrap value() and set the desired parameters manually.

  • full – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

Return type:

array

Returns:

Calculated loss value as a scalar.
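The positional convention can be pictured with a small plain-Python sketch (the parameter names and the toy loss are hypothetical; zfit matches the array values to the get_params() order in the same way):

```python
# Hypothetical free parameters and a stand-in for get_params() ordering.
params = {"mu": 1.0, "sigma": 2.0}
param_order = list(params)               # ["mu", "sigma"]

def loss(p):
    # Toy loss depending on the current parameter values.
    return (p["mu"] - 3.0) ** 2 + (p["sigma"] - 1.0) ** 2

def call_loss(values):
    # Assign values positionally, mirroring loss(x) with an array-like x.
    for name, value in zip(param_order, values):
        params[name] = value
    return loss(params)

result = call_loss([3.0, 1.0])   # sets mu=3.0, sigma=1.0, then evaluates
```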

add_cache_deps(cache_deps, allow_non_cachable=True)#

Add dependencies that render the cache invalid if they change.

Parameters:
  • cache_deps (ztyping.CacherOrCachersType) –

  • allow_non_cachable (bool) – If True, allow cache_dependents to be non-cachables. If False, any cache_dependents that is not a ZfitGraphCachable will raise an error.

Raises:

TypeError – if one of the cache_dependents is not a ZfitGraphCachable _and_ allow_non_cachable is False.

create_new(model=None, data=None, constraints=None, options=None)#

Create a new binned loss of this type. This is preferable over creating a new instance in most cases.

Internals, such as certain optimizations, will be shared, and therefore the loss is made comparable.

If something is not given, it will be taken from the current loss.

Parameters:
  • model (ztyping.BinnedPDFInputType) – Binned PDF(s) that return the normalized probability (rel_counts or counts) for data under the given parameters. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

  • data (ztyping.BinnedDataInputType) – Binned dataset that will be given to the model. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

  • constraints (ConstraintsInputType) –

    Auxiliary measurements (“constraints”) that add a likelihood term to the loss.

    \[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]

    Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.

    Constraints can also be used to restrict the loss by adding any kind of penalty.

  • options (OptionsInputType) –

    Additional options (as a dict) for the loss. Current possibilities include:

    • ‘subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation before the summation. This brings the initial loss value closer to 0 and, especially for large datasets, increases the numerical stability.

      The value will be stored with ‘subtr_const_value’ and can also be given directly.

      The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare the absolute values of different likelihoods as the constant may differ! Use create_new in order to have a comparable likelihood between different losses, or use the full argument in the value function to calculate the full loss with all constants.

    These settings may be extended over time. To make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.

Returns:

The newly created loss of this type.

property dtype: DType#

The dtype of the object.

get_cache_deps(only_floating=True)#

Return a set of all independent Parameter that this object depends on.

Parameters:

only_floating (bool) – If True, only return floating Parameter

Return type:

OrderedSet

get_dependencies(only_floating: bool = True) → ztyping.DependentsType#

DEPRECATED FUNCTION

Deprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use get_params instead if you want to retrieve the independent parameters, or get_cache_deps in case you need the numerical cache dependents (advanced).

Return type:

OrderedSet

get_params(floating=True, is_yield=None, extract_independent=True, only_floating=<class 'zfit.util.checks.NotSpecified'>)#

Recursively collect parameters that this object depends on according to the filter criteria.

Which parameters should be included can be steered using the arguments as a filter.
  • None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.

  • True: only return parameters that fulfil this criterion.

  • False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.

Parameters:
  • floating (bool | None) – if a parameter is floating, e.g. if floating() returns True

  • is_yield (bool | None) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they do also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.

  • extract_independent (bool | None) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.

Return type:

set[ZfitParameter]
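The three filter states can be mimicked with a small plain-Python sketch (hypothetical parameter dicts, not zfit objects):

```python
# Tri-state filter as used by get_params:
# True -> only matching, False -> only non-matching, None -> no filtering.
def filter_params(params, floating=True):
    if floating is None:
        return list(params)
    return [p for p in params if p["floating"] is floating]

params = [
    {"name": "mu", "floating": True},
    {"name": "sigma", "floating": False},
]

only_floating = filter_params(params, floating=True)    # just mu
only_fixed = filter_params(params, floating=False)      # just sigma
everything = filter_params(params, floating=None)       # both
```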

gradient(params=None, *, numgrad=None)#

Calculate the gradient of the loss with respect to the given parameters.

Parameters:
  • params (TypeVar(ParamTypeInput, zfit.core.interfaces.ZfitParameter, Union[int, float, complex, Tensor, zfit.core.interfaces.ZfitParameter])) – The parameters with respect to which the gradient is calculated. If None, all parameters are used.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc). Default will fall back to what the loss is set to.

Return type:

list[Tensor]

Returns:

The gradient of the loss with respect to the given parameters.

hessian(params=None, hessian=None, *, numgrad=None)#

Calculate the hessian of the loss with respect to the given parameters.

Parameters:
  • params – The parameters with respect to which the hessian is calculated. If None, all parameters are used.

  • hessian – Can be ‘full’ or ‘diag’.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc). Default will fall back to what the loss is set to.
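What numgrad=True falls back to can be pictured as central finite differences (an illustrative sketch, not zfit’s actual implementation):

```python
import numpy as np

def num_gradient(f, x, eps=1e-5):
    # Central-difference gradient: (f(x+e) - f(x-e)) / (2e) per coordinate.
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def num_hessian_diag(f, x, eps=1e-4):
    # 'diag' Hessian: only the second derivative along each parameter.
    x = np.asarray(x, dtype=float)
    diag = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        diag[i] = (f(x + step) - 2 * f(x) + f(x - step)) / eps**2
    return diag

# Quadratic toy loss: exact gradient is 2*(x - 1), Hessian diagonal is [2, 2].
loss = lambda x: np.sum((x - 1.0) ** 2)
g = num_gradient(loss, [0.0, 2.0])
h = num_hessian_diag(loss, [0.0, 2.0])
```

Each loss evaluation costs a full pass over the data, which is why this route is usually slower than automatic differentiation when called repeatedly.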

register_cacher(cacher)#

Register a cacher that caches values produced by this instance; a dependent.

Parameters:

cacher (ztyping.CacherOrCachersType) –

reset_cache_self()#

Clear the cache of self and all dependent cachers.

value(*, full=None)#

Calculate the loss value with the current values of the free parameters.

Parameters:

full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

Return type:

Tensor

Returns:

Calculated loss value as a scalar.

value_gradient(params=None, *, full=None, numgrad=None)#

Calculate the loss value and the gradient with the current values of the free parameters.

Parameters:
  • params (TypeVar(ParamTypeInput, zfit.core.interfaces.ZfitParameter, Union[int, float, complex, Tensor, zfit.core.interfaces.ZfitParameter])) – The parameters to calculate the gradient for. If not given, all free parameters are used.

  • full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

  • numgrad (bool | None) – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc). Default will fall back to what the loss is set to.

Return type:

tuple[Tensor, Tensor]

Returns:

Calculated loss value as a scalar and the gradient as a tensor.

value_gradient_hessian(params=None, hessian=None, *, full=None, numgrad=None)#

Calculate the loss value, the gradient and the hessian with the current values of the free parameters.

Parameters:
  • params (TypeVar(ParamTypeInput, zfit.core.interfaces.ZfitParameter, Union[int, float, complex, Tensor, zfit.core.interfaces.ZfitParameter])) – The parameters to calculate the gradient for. If not given, all free parameters are used.

  • hessian – Can be ‘full’ or ‘diag’.

  • full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc). Default will fall back to what the loss is set to.

Return type:

tuple[Tensor, Tensor, Tensor]

Returns:

Calculated loss value as a scalar, the gradient as a tensor and the hessian as a tensor.