UnbinnedNLL¶
-
class zfit.loss.UnbinnedNLL(model, data, fit_range=None, constraints=None, options=None)[source]¶
Bases: zfit.core.loss.BaseLoss
Unbinned Negative Log Likelihood.
A simultaneous fit can be performed by giving one or more model, data and fit_range to the loss; the length of each has to match the length of the others.
- Parameters
model (Union[ZfitPDF, Iterable[ZfitPDF]]) – If not given, the current one will be used. PDFs that return the normalized probability for data under the given parameters. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
data (Union[ZfitData, Iterable[ZfitData]]) – If not given, the current one will be used. Dataset that will be given to the model. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
fit_range – The fitting range. It’s the norm_range for the models (if they have a norm_range) and the data_range for the data.
constraints – If not given, the current one will be used. Auxiliary measurements that add a term to the loss, or terms that restrict the loss in another way, such as penalties.
options – If not given, the current one will be used. Additional options (as a dict) for the loss. Current possibilities include:
’subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation, before the summation. This brings the initial loss value closer to 0 and improves the numerical stability, especially for large datasets. The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum, as the absolute value of the NLL is meaningless. However, with this switch on, the absolute values of different likelihoods cannot be compared directly, as the constant may differ! Use create_new in order to have comparable likelihoods between different losses.
These settings may extend over time. To make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
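The effect of ’subtr_const’ can be illustrated with a minimal, self-contained sketch in plain Python (the `unbinned_nll` helper and toy Gaussian below are hypothetical stand-ins, not zfit's actual implementation): the loss is the negative sum of log-densities over all data points, and subtracting the average log-density shifts the value toward 0 without moving the minimum in the model parameters.

```python
import math

def unbinned_nll(pdf, data, subtr_const_value=None):
    """Sketch of an unbinned NLL: -sum(log pdf(x)) over all data points.

    If subtr_const_value is given, it is subtracted from each point's
    log-density before the summation, mimicking the 'subtr_const' option.
    """
    if subtr_const_value is None:
        subtr_const_value = 0.0
    return -sum(math.log(pdf(x)) - subtr_const_value for x in data)

def gauss(x, mu=0.0, sigma=1.0):
    # Toy normalized Gaussian density (hypothetical stand-in for a ZfitPDF).
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

data = [-0.5, 0.1, 0.3, 1.2]
nll = unbinned_nll(gauss, data)

# Subtracting the average log-density of the (first) evaluation brings the
# loss value close to 0; the location of the minimum is unchanged.
avg_logp = sum(math.log(gauss(x)) for x in data) / len(data)
shifted = unbinned_nll(gauss, data, subtr_const_value=avg_logp)
```

Because the constant here is the exact average over the same data, the shifted loss evaluates to 0 at this point; in zfit the constant is fixed at the first evaluation and then kept, so later evaluations differ from the raw NLL only by that offset.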
-
create_new
(model=<zfit.util.checks.NotSpecified object>, data=<zfit.util.checks.NotSpecified object>, fit_range=<zfit.util.checks.NotSpecified object>, constraints=<zfit.util.checks.NotSpecified object>, options=<zfit.util.checks.NotSpecified object>)[source]¶ Create a new loss from the current loss, replacing whatever is explicitly given as an argument.
This creates a “copy” of the current loss in which any explicitly given argument is replaced; it is equivalent to creating a new instance, but with some arguments taken over. A loss consists of more than a model and data (and constraints): it can carry internal optimizations and other state that would alter the behavior of a naive re-instantiation in unpredictable ways.
- Parameters
model (Union[ZfitPDF, Iterable[ZfitPDF], None]) – If not given, the current one will be used. PDFs that return the normalized probability for data under the given parameters. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
data (Union[ZfitData, Iterable[ZfitData], None]) – If not given, the current one will be used. Dataset that will be given to the model. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
fit_range –
constraints – If not given, the current one will be used. Auxiliary measurements that add a term to the loss, or terms that restrict the loss in another way, such as penalties.
options – If not given, the current one will be used. Additional options (as a dict) for the loss. Current possibilities include:
’subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation, before the summation. This brings the initial loss value closer to 0 and improves the numerical stability, especially for large datasets. The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum, as the absolute value of the NLL is meaningless. However, with this switch on, the absolute values of different likelihoods cannot be compared directly, as the constant may differ! Use create_new in order to have comparable likelihoods between different losses.
These settings may extend over time. To make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
Returns:
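The <zfit.util.checks.NotSpecified object> defaults in the signature reflect a common sentinel pattern: a unique object distinguishes “argument not given” (keep the current value) from an explicit None (replace the value). A minimal sketch of the idea, using a hypothetical Loss class rather than zfit's actual code:

```python
class NotSpecified:
    """Sentinel distinguishing 'argument not given' from an explicit None."""
    _instance = None

    def __new__(cls):
        # Singleton so identity checks ('is') are reliable.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

NOT_SPECIFIED = NotSpecified()

class Loss:
    # Hypothetical stand-in for a zfit loss, holding model, data, constraints.
    def __init__(self, model, data, constraints=None):
        self.model, self.data, self.constraints = model, data, constraints

    def create_new(self, model=NOT_SPECIFIED, data=NOT_SPECIFIED,
                   constraints=NOT_SPECIFIED):
        # Any argument left at the sentinel falls back to the current value;
        # an explicit None (e.g. constraints=None) is honored as a replacement.
        if model is NOT_SPECIFIED:
            model = self.model
        if data is NOT_SPECIFIED:
            data = self.data
        if constraints is NOT_SPECIFIED:
            constraints = self.constraints
        return Loss(model, data, constraints)

loss = Loss(model="gauss", data=[1, 2, 3], constraints="penalty")
new = loss.create_new(data=[4, 5])  # model and constraints are taken over
```

In the real create_new, the taken-over state additionally includes internal constants such as the stored ‘subtr_const_value’, which is what makes the resulting losses comparable.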
-
add_cache_deps
(cache_deps, allow_non_cachable=True)¶ Add dependencies that render the cache invalid if they change.
- Parameters
cache_deps (Union[ForwardRef, Iterable[ForwardRef]]) –
allow_non_cachable (bool) – If True, allow cache_dependents to be non-cachables. If False, any cache_dependents that is not a ZfitCachable will raise an error.
- Raises
TypeError – if one of the cache_dependents is not a ZfitCachable _and_ allow_non_cachable is False.
-
property
dtype¶ The dtype of the object.
- Return type
DType
-
get_cache_deps
(only_floating=True)¶ Return a set of all independent Parameter that this object depends on.
- Parameters
only_floating (bool) – If True, only return floating Parameter.
- Return type
OrderedSet
-
get_dependencies
(only_floating=True)¶ DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use get_params instead if you want to retrieve the independent parameters or get_cache_deps in case you need the numerical cache dependents (advanced).
- Return type
OrderedSet
-
get_params
(floating=True, is_yield=None, extract_independent=True, only_floating=<class 'zfit.util.checks.NotSpecified'>)¶ Recursively collect parameters that this object depends on according to the filter criteria.
- Which parameters should be included can be steered using the arguments as a filter.
None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.
True: only return parameters that fulfil this criterion.
False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.
- Parameters
floating (Optional[bool]) – if a parameter is floating, e.g. if floating() returns True
is_yield (Optional[bool]) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.
extract_independent (Optional[bool]) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.
- Return type
Set[ZfitParameter]
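The three-state filter semantics above (None / True / False) can be sketched in plain Python; the Param class and standalone get_params below are hypothetical stand-ins, not zfit's actual classes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so they fit in a set
class Param:
    name: str
    floating: bool

def get_params(params, floating=True):
    # None: do not filter on this criterion at all.
    # True/False: keep only parameters whose attribute matches the value.
    if floating is None:
        return set(params)
    return {p for p in params if p.floating is floating}

params = [Param("mu", True), Param("sigma", True), Param("n_sig", False)]
```

With floating=None all three parameters are returned; floating=True yields only the floating ones and floating=False only the fixed one, mirroring the filter table above.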
-
gradients
(*args, **kwargs)¶ DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use gradient instead.
-
register_cacher
(cacher)¶ Register a cacher that caches values produced by this instance, i.e. a dependent.
- Parameters
cacher (Union[ForwardRef, Iterable[ForwardRef]]) –
-
reset_cache_self
()¶ Clear the cache of self and all dependent cachers.
-
value_gradients
(*args, **kwargs)¶ DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient instead.
-
value_gradients_hessian
(*args, **kwargs)¶ DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient_hessian instead.