ExtendedUnbinnedNLL#

class zfit.loss.ExtendedUnbinnedNLL(model, data, fit_range=None, constraints=None, options=None)[source]#

Bases: BaseUnbinnedNLL

An unbinned negative log likelihood with an additional Poisson term for the number of events in the dataset.

The unbinned log likelihood can be written as

\[\mathcal{L}_{\text{non-extended}}(x | \theta) = \prod_{i} f_{\theta} (x_i)\]

where \(x_i\) is a single event from the dataset and \(f_{\theta}\) is the model.

The extended likelihood has an additional term, the Poisson probability of observing \(N_{data}\) events given the expected yield \(N_{tot}\) of the model,

\[\mathcal{L}_{\text{extended term}} = \mathrm{poiss}(N_{data}; N_{tot}) = N_{tot}^{N_{data}} \frac{e^{-N_{tot}}}{N_{data}!}\]

and the extended likelihood is the product of both.

A simultaneous fit can be performed by giving one or more models and datasets to the loss; the number of models has to match the number of datasets

\[\mathcal{L}_{\text{simultaneous}}(\theta | \{data_0, data_1, ..., data_n\}) = \prod_{i} \mathcal{L}(\theta_i, data_i)\]

where \(\theta_i\) is the set of parameters used for the \(i\)-th model, a subset of \(\theta\).

For optimization purposes, it is often easier to minimize rather than maximize and to apply a log transformation. The actual loss is therefore given by

\[\mathcal{L} = - \sum_{i=1}^{n} \ln(f(\theta|x_i))\]

hence the name “negative log …”.

If the dataset has weights, a weighted likelihood will be constructed instead

\[\mathcal{L} = - \sum_{i=1}^{n} w_i \cdot \ln(f(\theta|x_i))\]

Note that this is no longer a real likelihood! Uncertainties can still be calculated with hesse (which applies a correction), but profiling methods will yield wrong results. The minimum itself, however, is fully valid.
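A minimal construction sketch, assuming only the public zfit API; the observable range, parameter names such as n_sig and the toy dataset are illustrative, not taken from this page:

import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))

mu = zfit.Parameter("mu", 0.2, -1.0, 1.0)
sigma = zfit.Parameter("sigma", 1.1, 0.1, 10.0)
n_sig = zfit.Parameter("n_sig", 900, 0, 10_000)  # the yield

# create_extended attaches the yield so the Poisson term can be built
model = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs).create_extended(n_sig)

# passing weights=... here would instead build the weighted (pseudo-)likelihood
data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.0, 1.0, size=1000))

nll = zfit.loss.ExtendedUnbinnedNLL(model=model, data=data)
result = zfit.minimize.Minuit().minimize(nll)  # fits shape and yield together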

__call__(_x=None)#

Calculate the loss value with the given input for the free parameters.

Parameters:
  • *positional* – Array-like argument to set the parameters. The order of the values corresponds to the position of the parameters in get_params() (called without any arguments). For more detailed control, it is always possible to wrap value() and set the desired parameters manually.

  • full – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

Return type:

array

Returns:

Calculated loss value as a scalar.
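A short sketch of the positional call, reusing the nll loss from the construction sketch above; the value order follows get_params():

import numpy as np

print([p.name for p in nll.get_params()])    # e.g. ['mu', 'sigma', 'n_sig']
loss_val = nll(np.array([0.2, 1.1, 900.0]))  # sets the parameters, then evaluates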

add_cache_deps(cache_deps, allow_non_cachable=True)#

Add dependencies that render the cache invalid if they change.

Parameters:
  • cache_deps (ztyping.CacherOrCachersType) –

  • allow_non_cachable (bool) – If True, allow cache_deps to be non-cachable. If False, any cache dependency that is not a ZfitGraphCachable will raise an error.

Raises:

TypeError – if one of the cache dependencies is not a ZfitGraphCachable and allow_non_cachable is False.

create_new(model=None, data=None, fit_range=None, constraints=None, options=None)#

Create a new loss from the current loss, replacing whatever is explicitly given as arguments.

This creates a “copy” of the current loss but replaces any argument that is explicitly given. Equivalent to creating a new instance but with some arguments taken over from the current one.

A loss consists of more than a model and data (and constraints); it can carry internal optimizations and other state that may alter the behavior of a naive re-instantiation in unpredictable ways.

Parameters:
  • model (ZfitPDF | Iterable[ZfitPDF] | None) – If not given, the current one will be used. PDFs that return the normalized probability for data under the given parameters. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

  • data (ZfitData | Iterable[ZfitData] | None) – If not given, the current one will be used. Dataset that will be given to the model. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.

  • fit_range

  • constraints – If not given, the current ones will be used. Auxiliary measurements (“constraints”) that add a likelihood term to the loss.

    \[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]

    Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.

    Constraints can also be used to restrict the loss by adding any kind of penalty.

  • options – If not given, the current ones will be used. Additional options (as a dict) for the loss.

    Current possibilities include:

    • ’subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation before the summation. This brings the initial loss value closer to 0 and increases the numerical stability, especially for large datasets.

      The value will be stored with ‘subtr_const_value’ and can also be given directly.

      The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, the absolute values of different likelihoods cannot be compared directly, as the constant may differ! Use create_new in order to have a comparable likelihood between different losses (see the sketch below), or use the full argument in the value function to calculate the full loss with all constants.

    These settings may be extended over time. In order to make sure that a loss stays the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
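A sketch of deriving a comparable loss on a second dataset with create_new so that settings such as the subtracted constant carry over; nll and obs are from the construction sketch above, new_array is a hypothetical second sample:

new_data = zfit.Data.from_numpy(obs=obs, array=new_array)
nll2 = nll.create_new(data=new_data)  # same model, constraints and options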

property dtype: DType#

The dtype of the object.

classmethod from_asdf(asdf_obj, *, reuse_params=None)#

Load an object from an asdf file.

Parameters:
  • asdf_obj – Object.

  • reuse_params – If parameters are given, they will be reused: a given parameter is used as the parameter with the same name; for any parameter not given, a new one will be created.

classmethod from_dict(dict_, *, reuse_params=None)#

Creates an object from a dictionary structure as generated by to_dict.

Parameters:
  • dict_ – Dictionary structure.

  • reuse_params – If parameters are given, they will be reused: a given parameter is used as the parameter with the same name; for any parameter not given, a new one will be created.

Returns:

The deserialized object.

classmethod from_json(json, *, reuse_params=None)#

Load an object from a json string.

Parameters:
  • json (str) – Serialized object in a JSON string.

  • reuse_params – If parameters are given, they will be reused: a given parameter is used as the parameter with the same name; for any parameter not given, a new one will be created.

Return type:

object

Returns:

The deserialized object.
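A sketch of a JSON round trip, assuming the loss supports human-readable serialization and that reuse_params accepts the existing parameter objects; nll is the loss from the construction sketch above:

json_str = nll.to_json()
nll_loaded = zfit.loss.ExtendedUnbinnedNLL.from_json(
    json_str,
    reuse_params=nll.get_params(),  # reuse existing parameters instead of creating new ones
)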

get_cache_deps(only_floating=True)#

Return a set of all independent Parameters that this object depends on.

Parameters:

only_floating (bool) – If True, only return floating Parameters.

Return type:

OrderedSet

get_dependencies(only_floating: bool = True) → ztyping.DependentsType#

Deprecated: this function will be removed in a future version. Use get_params instead to retrieve the independent parameters, or get_cache_deps in case you need the numerical cache dependents (advanced).

Return type:

OrderedSet

get_params(floating=True, is_yield=None, extract_independent=True, only_floating=NotSpecified)#

Recursively collect parameters that this object depends on according to the filter criteria.

Which parameters should be included can be steered using the arguments as a filter.
  • None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.

  • True: only return parameters that fulfil this criterion.

  • False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.

Parameters:
  • floating (bool | None) – if a parameter is floating, e.g. if floating() returns True

  • is_yield (bool | None) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.

  • extract_independent (bool | None) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.

Return type:

set[ZfitParameter]
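A sketch of the filter semantics, using the nll loss from the construction sketch above:

floating = nll.get_params(floating=True)    # only floating parameters
fixed = nll.get_params(floating=False)      # only fixed parameters
yields = nll.get_params(is_yield=True)      # only the yield(s), here n_sig
everything = nll.get_params(floating=None)  # no filter on floating at all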

classmethod get_repr()#

Abstract representation of the object for serialization.

This object knows how to serialize and deserialize itself and is used by the to_json, from_json, to_dict and from_dict methods.

Returns:

The representation of the object.

Return type:

pydantic.BaseModel

gradient(params=None, *, numgrad=None)#

Calculate the gradient of the loss with respect to the given parameters.

Parameters:
  • params (ParamTypeInput) – The parameters with respect to which the gradient is calculated. If None, all parameters are used.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, not written in znp.*, etc.). Default will fall back to what the loss is set to.

Return type:

list[Tensor]

Returns:

The gradient of the loss with respect to the given parameters.
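A sketch using the nll loss and the mu parameter from the construction sketch above:

grad_all = nll.gradient()              # one entry per free parameter
grad_mu = nll.gradient(params=[mu])    # restricted to a subset
grad_num = nll.gradient(numgrad=True)  # numerical fallback if autodiff fails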

hessian(params=None, hessian=None, *, numgrad=None)#

Calculate the hessian of the loss with respect to the given parameters.

Parameters:
  • params – The parameters with respect to which the hessian is calculated. If None, all parameters are used.

  • hessian – Can be ‘full’ or ‘diag’.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, not written in znp.*, etc.). Default will fall back to what the loss is set to.
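A sketch with the nll loss from the construction sketch above; that ‘full’ is the behavior when hessian is not given is an assumption here:

hess_full = nll.hessian()                # full matrix (assumed default)
hess_diag = nll.hessian(hessian="diag")  # only the diagonal entries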

register_cacher(cacher)#

Register a cacher that caches values produced by this instance, i.e. a dependent.

Parameters:

cacher (ztyping.CacherOrCachersType) –

reset_cache_self()#

Clear the cache of self and all dependent cachers.

to_asdf()#

Convert the object to an asdf file.

to_dict()#

Convert the object to a nested dictionary structure.

Returns:

The dictionary structure.

Return type:

dict

to_json()#

Convert the object to a json string.

Returns:

The json string.

Return type:

str

to_yaml()#

Convert the object to a yaml string.

Returns:

The yaml string.

Return type:

str

value(*, full=None)#

Calculate the loss value with the current values of the free parameters.

Parameters:

full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

Return type:

Tensor

Returns:

Calculated loss value as a scalar.
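A sketch contrasting the two modes with the nll loss from the construction sketch above; with ‘subtr_const’ enabled, only the full value is comparable across losses:

val_optim = nll.value()          # constants possibly subtracted, optimizer-friendly
val_full = nll.value(full=True)  # full NLL including all constants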

value_gradient(params=None, *, full=None, numgrad=None)#

Calculate the loss value and the gradient with the current values of the free parameters.

Parameters:
  • params (ParamTypeInput) – The parameters to calculate the gradient for. If not given, all free parameters are used.

  • full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

  • numgrad (bool | None) – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, not written in znp.*, etc.). Default will fall back to what the loss is set to.

Return type:

tuple[Tensor, Tensor]

Returns:

Calculated loss value as a scalar and the gradient as a tensor.
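A sketch with the nll loss from the construction sketch above; fetching both in one call is often cheaper than separate value() and gradient() calls:

val, grad = nll.value_gradient()
val_full, grad_full = nll.value_gradient(full=True)  # constants don’t change the gradient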

value_gradient_hessian(params=None, hessian=None, *, full=None, numgrad=None)#

Calculate the loss value, the gradient and the hessian with the current values of the free parameters.

Parameters:
  • params (ParamTypeInput) – The parameters to calculate the gradient for. If not given, all free parameters are used.

  • hessian – Can be ‘full’ or ‘diag’.

  • full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.

  • numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, not written in znp.*, etc.). Default will fall back to what the loss is set to.

Return type:

tuple[Tensor, Tensor, Tensor]

Returns:

Calculated loss value as a scalar, the gradient as a tensor and the hessian as a tensor.
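A sketch with the nll loss from the construction sketch above, combining all three quantities in one call:

val, grad, hess = nll.value_gradient_hessian()
val, grad, hess_diag = nll.value_gradient_hessian(hessian="diag")  # diagonal Hessian only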