ExtendedBinnedNLL¶
- class zfit.loss.ExtendedBinnedNLL(model, data, constraints=None, options=None)[source]¶
Bases:
zfit._loss.binnedloss.BaseBinned
Extended binned likelihood using the expected number of events per bin with a Poisson probability.
A scaled Poisson distribution is used as described by Bohm and Zech, NIMA 748 (2014) 1-6.
The binned likelihood is defined as
\[\mathcal{L} = \prod_{i} \mathrm{Poiss}(N_{databin_i}; N_{modelbin_i}) = \prod_{i} N_{modelbin_i}^{N_{databin_i}} \frac{e^{- N_{modelbin_i}}}{N_{databin_i}!}\]where \(databin_i\) is the \(i^{th}\) bin in the data and \(modelbin_i\) is the \(i^{th}\) bin of the model, the expected counts.
A simultaneous fit can be performed by giving one or more model, data pairs to the loss. The length of each has to match the length of the others.\[\mathcal{L}_{simultaneous}(\theta | \{data_0, data_1, ..., data_n\}) = \prod_{i} \mathcal{L}(\theta_i, data_i)\]where \(\theta_i\) is a set of parameters and a subset of \(\theta\).
For optimization purposes, it is often easier to minimize a function after a log transformation. The actual loss is given by
\[\mathcal{L} = - \sum_{i}^{n} \ln(f(\theta|x_i))\]and is therefore called a “negative log-likelihood”.
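The per-bin Poisson terms above can be sketched directly in plain Python. This is a minimal illustration of the math, not the zfit implementation; the bin counts are made up:

```python
import math

def extended_binned_nll(model_counts, data_counts):
    """Negative log of prod_i Poiss(n_i; mu_i), with mu_i the expected
    (model) counts and n_i the observed (data) counts per bin."""
    nll = 0.0
    for mu, n in zip(model_counts, data_counts):
        # -log( mu^n * exp(-mu) / n! ) = mu - n*log(mu) + log(n!)
        nll += mu - n * math.log(mu) + math.lgamma(n + 1)
    return nll

data = [102, 97, 51]          # observed counts per bin (made up)
good = [100.0, 100.0, 50.0]   # expectation close to the data
bad = [80.0, 120.0, 70.0]     # expectation far from the data

# The loss is smaller for the model expectation closer to the data:
assert extended_binned_nll(good, data) < extended_binned_nll(bad, data)
```

The minimum of each per-bin term sits at \(mu_i = n_i\), so the loss is minimized when the expected counts match the observed ones.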
- Args:
model: Binned PDF(s) that return the normalized probability (rel_counts or counts) for data under the given parameters. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
data: Binned dataset that will be given to the model. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
- constraints:Auxiliary measurements (“constraints”)
that add a likelihood term to the loss.
\[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.
Constraints can also be used to restrict the loss by adding any kinds of penalties.
- options:Additional options (as a dict) for the loss.
Current possibilities include:
‘subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density of the very first evaluation before the summation. This brings the initial loss value closer to 0 and increases the numerical stability, especially for large datasets.
The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare the absolute values of different likelihoods, as the constant may differ! Use create_new in order to have a comparable likelihood between different losses.
These settings may extend over time. In order to make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
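Why the constant subtraction is harmless for minimization can be shown with a toy one-parameter Poisson NLL (plain Python, not zfit internals): shifting the loss by a constant changes its value but not its argmin.

```python
import math

def nll(mu, counts):
    # Simple Poisson NLL in a single parameter mu (toy example).
    return sum(mu - n * math.log(mu) + math.lgamma(n + 1) for n in counts)

counts = [4, 6, 5, 5]
grid = [x / 100 for x in range(300, 801)]   # scan mu from 3.0 to 8.0
plain = [nll(mu, counts) for mu in grid]
offset = plain[0]                           # analogue of 'subtr_const_value'
shifted = [v - offset for v in plain]

# Same minimizing mu, different absolute loss values:
assert grid[plain.index(min(plain))] == grid[shifted.index(min(shifted))]
assert min(plain) != min(shifted)
```

This is exactly why two losses are only comparable in absolute value when they share the same subtracted constant, which create_new takes care of.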
- __call__(_x=None)¶
Calculate the loss value with the given input for the free parameters.
- Parameters
*positional* – Array-like argument to set the parameters. The order of the values corresponds to the position of the parameters in get_params() (called without any arguments). For more detailed control, it is always possible to wrap value() and set the desired parameters manually.
- Return type
Tensor
- Returns
Calculated loss value as a scalar.
- add_cache_deps(cache_deps, allow_non_cachable=True)¶
Add dependencies that render the cache invalid if they change.
- Parameters
cache_deps (Union[ForwardRef, Iterable[ForwardRef]]) –
allow_non_cachable (bool) – If True, allow cache_dependents to be non-cachables. If False, any cache_dependents that is not a ZfitCachable will raise an error.
- Raises
TypeError – if one of the cache_dependents is not a ZfitCachable _and_ allow_non_cachable is False.
- create_new(model=None, data=None, constraints=None, options=None)¶
Create a new binned loss of this type. This is preferable over creating a new instance in most cases.
Internals, such as certain optimizations, will be shared and therefore the loss is made comparable.
If something is not given, it will be taken from the current loss.
- Args:
model: Binned PDF(s) that return the normalized probability (rel_counts or counts) for data under the given parameters. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
data: Binned dataset that will be given to the model. If multiple model and data are given, they will be used in the same order to do a simultaneous fit.
- constraints:Auxiliary measurements (“constraints”)
that add a likelihood term to the loss.
\[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.
Constraints can also be used to restrict the loss by adding any kinds of penalties.
- options:Additional options (as a dict) for the loss.
Current possibilities include:
‘subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density of the very first evaluation before the summation. This brings the initial loss value closer to 0 and increases the numerical stability, especially for large datasets.
The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare the absolute values of different likelihoods, as the constant may differ! Use create_new in order to have a comparable likelihood between different losses.
These settings may extend over time. In order to make sure that a loss is the same under the same data, use create_new instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
Returns: the new loss instance.
- property dtype: tensorflow.python.framework.dtypes.DType¶
The dtype of the object.
- Return type
DType
- get_cache_deps(only_floating=True)¶
Return a set of all independent Parameter that this object depends on.
- Parameters
only_floating (bool) – If True, only return floating Parameter.
- Return type
OrderedSet
- get_dependencies(only_floating=True)¶
DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use get_params instead if you want to retrieve the independent parameters, or get_cache_deps in case you need the numerical cache dependents (advanced).
- get_params(floating=True, is_yield=None, extract_independent=True, only_floating=<class 'zfit.util.checks.NotSpecified'>)¶
Recursively collect parameters that this object depends on according to the filter criteria.
- Which parameters should be included can be steered using the arguments as a filter.
- None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.
- True: only return parameters that fulfil this criterion.
- False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.
- Parameters
floating (Optional[bool]) – if a parameter is floating, e.g. if floating() returns True
is_yield (Optional[bool]) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they do also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.
extract_independent (Optional[bool]) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.
- Return type
Set[ZfitParameter]
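The three-state filter (None / True / False) described above can be sketched as follows. The Param class and filter_params helper are hypothetical stand-ins for illustration, not zfit's internals:

```python
def filter_params(params, floating=True):
    """None: no filter; True/False: keep params whose .floating matches."""
    if floating is None:
        return set(params)
    return {p for p in params if p.floating is floating}

class Param:
    """Hypothetical minimal parameter with a floating flag."""
    def __init__(self, name, floating):
        self.name, self.floating = name, floating

mu = Param("mu", floating=True)
sigma = Param("sigma", floating=False)

assert filter_params([mu, sigma], floating=None) == {mu, sigma}   # no filter
assert filter_params([mu, sigma], floating=True) == {mu}          # only floating
assert filter_params([mu, sigma], floating=False) == {sigma}      # only fixed
```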
- gradients(*args, **kwargs)¶
DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use gradient instead.
- register_cacher(cacher)¶
Register a cacher that caches values produced by this instance; a dependent.
- Parameters
cacher (Union[ForwardRef, Iterable[ForwardRef]]) –
- reset_cache_self()¶
Clear the cache of self and all dependent cachers.
- value_gradients(*args, **kwargs)¶
DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient instead.
- value_gradients_hessian(*args, **kwargs)¶
DEPRECATED FUNCTION
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient_hessian instead.