ExtendedBinnedChi2#
- class zfit.loss.ExtendedBinnedChi2(model, data, constraints=None, options=None)[source]#
Bases:
BaseBinned
Binned Chi2 loss, using \(N_\mathrm{tot}\) from the PDF.
\[\chi^2 = \sum_{\mathrm{bins}} \left( \frac{N_\mathrm{PDF,bin} - N_\mathrm{Data,bin}}{\sigma_\mathrm{Data,bin}} \right)^2\]
where
\[N_\mathrm{PDF,bin} = \mathrm{pdf}(\text{integral}) \cdot N_\mathrm{PDF,expected}\]
\[\sigma_\mathrm{Data,bin} = \text{variance}\]
with variance the value of `variances` of the binned data.
If the dataset has empty bins, the errors will be zero and \(\chi^2\) is undefined. Two possibilities are available and can be given as an option (see the sketch after the list):
“empty”: “ignore” will ignore all bins with zero entries and won’t count to the loss
“errors”: “expected” will use the expected counts from the model with a Poissonian uncertainty
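For example, a minimal sketch of constructing the loss with these options; `model` and `data` are hypothetical placeholders for an extended binned PDF and a matching binned dataset:

```python
import zfit

# `model` and `data` are assumed to exist: an extended binned PDF and
# a binned dataset on the same binned space.
loss = zfit.loss.ExtendedBinnedChi2(
    model=model,
    data=data,
    options={"empty": "ignore", "errors": "expected"},  # handling of empty bins
)
```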
- Parameters:
model (ztyping.BinnedPDFInputType) – Binned PDF(s) that return the normalized probability (`rel_counts` or `counts`) for data under the given parameters. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.
data: Binned dataset that will be given to the model. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.
constraints: Auxiliary measurements (“constraints”) that add a likelihood term to the loss.
\[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]
Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.
Constraints can also be used to restrict the loss by adding any kind of penalty (see the sketch after this parameter list).
options: Additional options (as a dict) for the loss. Current possibilities include:
’subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation before the summation. This brings the initial loss value closer to 0 and increases, especially for large datasets, the numerical stability.
The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare different likelihoods’ absolute values as the constant may differ! Use `create_new` in order to have a comparable likelihood between different losses, or use the `full` argument in the `value` function to calculate the full loss with all constants.
These settings may extend over time. In order to make sure that a loss is the same under the same data, make sure to use `create_new` instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
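A hedged sketch of adding a constraint; the `GaussianConstraint` argument names `params`, `observation`, and `uncertainty` are assumptions that may differ between zfit versions, and `mu` is a hypothetical parameter:

```python
# Auxiliary measurement: mu was measured elsewhere as 1.2 +/- 0.1
# (argument names are assumptions, check your zfit version).
constraint = zfit.constraint.GaussianConstraint(
    params=mu, observation=1.2, uncertainty=0.1
)
loss = zfit.loss.ExtendedBinnedChi2(model=model, data=data, constraints=constraint)
```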
- __call__(_x=None)#
Calculate the loss value with the given input for the free parameters.
- Parameters:
*positional* – Array-like argument to set the parameters. The order of the values corresponds to the position of the parameters in `get_params()` (called without any arguments). For more detailed control, it is always possible to wrap `value()` and set the desired parameters manually.
full – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.
- Return type:
array
- Returns:
Calculated loss value as a scalar.
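A short sketch of calling the loss directly, assuming `loss` from above; the numeric values are placeholders:

```python
# Evaluate at the current parameter values.
current = loss()

# Evaluate with explicit values, ordered as in loss.get_params().
shifted = loss([0.5, 1.2])
```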
- add_cache_deps(cache_deps, allow_non_cachable=True)#
Add dependencies that render the cache invalid if they change.
- add_constraints(constraints)#
DEPRECATED FUNCTION
Deprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: use `create_new` instead and fill the constraints there.
- create_new(model=None, data=None, constraints=None, options=None)#
Create a new binned loss of this type. This is preferable to creating a new instance in most cases.
Internals, such as certain optimizations, will be shared and therefore the loss is made comparable.
If something is not given, it will be taken from the current loss.
- Parameters:
model (ztyping.BinnedPDFInputType) – Binned PDF(s) that return the normalized probability (`rel_counts` or `counts`) for data under the given parameters. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.
data (ztyping.BinnedDataInputType) – Binned dataset that will be given to the model. If multiple models and datasets are given, they will be used in the same order to do a simultaneous fit.
constraints (ConstraintsInputType) – Auxiliary measurements (“constraints”) that add a likelihood term to the loss.
\[\mathcal{L}(\theta) = \mathcal{L}_{unconstrained} \prod_{i} f_{constr_i}(\theta)\]
Usually, an auxiliary measurement – by its very nature – should only be added once to the loss. zfit does not automatically deduplicate constraints if they are given multiple times, leaving the freedom for arbitrary constructs.
Constraints can also be used to restrict the loss by adding any kind of penalty.
options (OptionsInputType) – Additional options (as a dict) for the loss. Current possibilities include:
’subtr_const’ (default True): subtract from each point’s log probability density a constant that is approximately equal to the average log probability density in the very first evaluation before the summation. This brings the initial loss value closer to 0 and increases, especially for large datasets, the numerical stability.
The value will be stored with ‘subtr_const_value’ and can also be given directly.
The subtraction should not affect the minimum as the absolute value of the NLL is meaningless. However, with this switch on, one cannot directly compare different likelihoods’ absolute values as the constant may differ! Use `create_new` in order to have a comparable likelihood between different losses, or use the `full` argument in the `value` function to calculate the full loss with all constants.
These settings may extend over time. In order to make sure that a loss is the same under the same data, make sure to use `create_new` instead of instantiating a new loss, as the former will automatically take over any relevant constants and behavior.
- Returns:
A new loss of this type; anything not given is taken from the current loss.
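A minimal sketch, assuming `loss` from above and a hypothetical second binned dataset `data2`:

```python
# Model, constraints and options are taken over from `loss`; only the
# data changes, keeping the two losses comparable.
loss2 = loss.create_new(data=data2)
```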
- property dtype: DType#
The dtype of the object.
- get_params(floating=True, is_yield=None, extract_independent=True, *, autograd=None)#
Recursively collect parameters that this object depends on according to the filter criteria.
- Which parameters should be included can be steered using the arguments as a filter:
- None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.
- True: only return parameters that fulfil this criterion.
- False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.
- Parameters:
floating (bool | None) – if a parameter is floating, e.g. if floating() returns True
is_yield (bool | None) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they do also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.
extract_independent (bool | None) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.
- Return type:
set[ZfitParameter]
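For example, assuming `loss` from above:

```python
# All floating (free) parameters the loss depends on.
floating_params = loss.get_params(floating=True)

# Only the fixed parameters.
fixed_params = loss.get_params(floating=False)
```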
- gradient(params=None, *, numgrad=None, paramvals=None)#
Calculate the gradient of the loss with respect to the given parameters.
- Parameters:
params (ParamTypeInput) – The parameters with respect to which the gradient is calculated. If None, all parameters are used.
numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc.). Default will fall back to what the loss is set to.
paramvals (ParamTypeInput) – Mapping of the parameter names to the actual values. The parameter names refer to the names of the parameters, typically Parameter, as returned by get_params(). If no params are given, the current default values of the parameters are used.
- Return type:
Tensor
- Returns:
The gradient of the loss with respect to the given parameters.
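A minimal sketch, assuming `loss` from above:

```python
# Gradient with respect to all free parameters.
grad = loss.gradient()

# Numerical gradient as a fallback if the automatic one fails.
grad_num = loss.gradient(numgrad=True)
```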
- hessian(params=None, hessian=None, *, numgrad=None, paramvals=None)#
Calculate the hessian of the loss with respect to the given parameters.
- Parameters:
params (ParamTypeInput) – The parameters with respect to which the hessian is calculated. If None, all parameters are used.
hessian – Can be ‘full’ or ‘diag’.
numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc.). Default will fall back to what the loss is set to.
- Return type:
Tensor
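A minimal sketch, assuming `loss` from above:

```python
# Full Hessian matrix with respect to all free parameters.
hess = loss.hessian(hessian="full")

# Diagonal only, cheaper when off-diagonal terms are not needed.
hess_diag = loss.hessian(hessian="diag")
```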
- register_cacher(cacher)#
Register a cacher that caches values produced by this instance; a dependent.
- Parameters:
cacher (ztyping.CacherOrCachersType)
- reset_cache_self()#
Clear the cache of self and all dependent cachers.
- value(*, params=None, full=None)#
Calculate the loss value with the current values of the free parameters.
- Parameters:
params (ParamTypeInput) – Mapping of the parameter names to the actual values. The parameter names refer to the names of the parameters, typically Parameter, as returned by get_params(). If no params are given, the current default values of the parameters are used.
full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.
- Return type:
Tensor
- Returns:
Calculated loss value as a scalar.
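For example, assuming `loss` from above:

```python
# Default: constants may be removed for numerical stability.
val = loss.value()

# Full loss including all constants, comparable across losses.
val_full = loss.value(full=True)
```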
- value_gradient(params=None, *, full=None, numgrad=None, paramvals=None)#
Calculate the loss value and the gradient with the current values of the free parameters.
- Parameters:
params (ParamTypeInput) – The parameters to calculate the gradient for. If not given, all free parameters are used.
full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.
numgrad (bool | None) – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc.). Default will fall back to what the loss is set to.
paramvals (ParamTypeInput) – Mapping of the parameter names to the actual values. The parameter names refer to the names of the parameters, typically Parameter, as returned by get_params(). If no params are given, the current default values of the parameters are used.
- Return type:
tuple[Tensor, Tensor]
- Returns:
Calculated loss value as a scalar and the gradient as a tensor.
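A minimal sketch, assuming `loss` from above:

```python
# Value and gradient in one combined evaluation.
val, grad = loss.value_gradient()
```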
- value_gradient_hessian(params=None, *, hessian=None, full=None, numgrad=None, paramvals=None)#
Calculate the loss value, the gradient and the hessian with the current values of the free parameters.
- Parameters:
params (ParamTypeInput) – The parameters to calculate the gradient for. If not given, all free parameters are used.
hessian – Can be ‘full’ or ‘diag’.
full (bool | None) – If True, return the full loss value, otherwise allow for the removal of constants and only return the part that depends on the parameters. Constants don’t matter for the task of optimization, but they can greatly help with the numerical stability of the loss function.
numgrad – If True, calculate the numerical gradient/Hessian instead of using the automatic one. This is usually slower if called repeatedly but can be used if the automatic gradient fails (e.g. if the model is not differentiable, written not in znp.* etc.). Default will fall back to what the loss is set to.
paramvals (ParamTypeInput) – Mapping of the parameter names to the actual values. The parameter names refer to the names of the parameters, typically Parameter, as returned by get_params(). If no params are given, the current default values of the parameters are used.
- Return type:
tuple[Tensor, Tensor, Tensor]
- Returns:
Calculated loss value as a scalar, the gradient as a tensor and the hessian as a tensor.
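A minimal sketch, assuming `loss` from above:

```python
# Value, gradient and diagonal Hessian in one pass.
val, grad, hess = loss.value_gradient_hessian(hessian="diag")
```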