SimpleLoss¶

class zfit.loss.SimpleLoss(func, params=None, errordef=None, deps=<zfit.util.checks.NotSpecified object>, dependents=<zfit.util.checks.NotSpecified object>)[source]¶

Bases: zfit.core.loss.BaseLoss
Loss from a (function returning a) Tensor. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: (deps). They will be removed in a future version. Instructions for updating: Use params instead.

This allows for a very generic loss function, as the function's only restriction is that it should depend on zfit.Parameter.
- Parameters
  func (Callable) – Callable that constructs the loss and returns a tensor without taking an argument.
  params (Iterable[ForwardRef]) – The dependents (independent zfit.Parameter) of the loss. Essentially the (free) parameters that func depends on.
  errordef (Optional[float]) – Definition of which change in the loss corresponds to a change of 1 sigma. For example, 1 for a chi-squared loss, 0.5 for a negative log-likelihood.
Usage:
import zfit
import tensorflow as tf
from zfit import z

param1 = zfit.Parameter('param1', 5, 1, 10)

# we can build a model here if we want, but in principle, it's not necessary
x = z.random.uniform(shape=(100,))
y = x * z.random.normal(mean=4, stddev=0.1, shape=x.shape)

def squared_loss():
    y_pred = x * param1  # this is very simple, but we can of course use any
                         # zfit PDF or Func inside
    squared = (y_pred - y) ** 2
    mse = tf.reduce_mean(squared)
    return mse

loss = zfit.loss.SimpleLoss(squared_loss, param1)
which can then be used in conjunction with any zfit minimizer such as Minuit
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(loss)
-
add_cache_deps(cache_deps, allow_non_cachable=True)¶

Add dependencies that render the cache invalid if they change.

- Parameters
  cache_deps (Union[ForwardRef, Iterable[ForwardRef]]) –
  allow_non_cachable (bool) – If True, allow cache_dependents to be non-cachables. If False, any cache_dependents that is not a ZfitCachable will raise an error.

- Raises
  TypeError – if one of the cache_dependents is not a ZfitCachable _and_ allow_non_cachable is False.
-
property dtype¶

The dtype of the object.

- Return type
  DType
-
get_cache_deps(only_floating=True)¶

Return a set of all independent Parameter that this object depends on.

- Parameters
  only_floating (bool) – If True, only return floating Parameter.

- Return type
  OrderedSet
-
get_dependencies(only_floating=True)¶

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use get_params instead if you want to retrieve the independent parameters or get_cache_deps in case you need the numerical cache dependents (advanced).

- Return type
  OrderedSet
-
get_params(floating=True, is_yield=None, extract_independent=True, only_floating=<class 'zfit.util.checks.NotSpecified'>)¶

Recursively collect parameters that this object depends on according to the filter criteria.

Which parameters should be included can be steered using the arguments as a filter.

- None: do not filter on this. E.g. floating=None will return parameters that are floating as well as parameters that are fixed.
- True: only return parameters that fulfil this criterion.
- False: only return parameters that do not fulfil this criterion. E.g. floating=False will return only parameters that are not floating.

- Parameters
  floating (Optional[bool]) – if a parameter is floating, e.g. if floating() returns True
  is_yield (Optional[bool]) – if a parameter is a yield of the _current_ model. This won’t be applied recursively, but may include yields if they do also represent a parameter parametrizing the shape. So if the yield of the current model depends on other yields (or also non-yields), this will be included. If, however, just submodels depend on a yield (as their yield) and it is not correlated to the output of our model, they won’t be included.
  extract_independent (Optional[bool]) – If the parameter is an independent parameter, i.e. if it is a ZfitIndependentParameter.

- Return type
  Set[ZfitParameter]
-
gradients(*args, **kwargs)¶

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use gradient instead.
-
register_cacher(cacher)¶

Register a cacher that caches values produced by this instance; a dependent.

- Parameters
  cacher (Union[ForwardRef, Iterable[ForwardRef]]) –
-
reset_cache_self()¶

Clear the cache of self and all dependent cachers.
-
value_gradients(*args, **kwargs)¶

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient instead.

-
value_gradients_hessian(*args, **kwargs)¶

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use value_gradient_hessian instead.