Introduction to zfit#

zfit is a model building and fitting library, mostly for likelihood fits and with advanced features for HEP analyses.

It aims to define an API and workflow that form a stable core, allowing other libraries to be built on top of it. Furthermore, it uses JIT compilation and autograd (current backend TensorFlow, future JAX (?)) to significantly speed up the computation.

Workflow#

There are five mostly independent parts in zfit: data, model, loss, minimization, and the fit result.

[figure: the zfit workflow]

import matplotlib.pyplot as plt
import mplhep
import numpy as np
import zfit
# znp is a subset of numpy functions with a numpy interface but actually using the zfit backend (currently TF)
import zfit.z.numpy as znp
import zfit_physics as zphysics  # extension (for user contribution) containing physics PDFs
from zfit import z  # z is the backend, for gradients, JIT etc

Data#

This component generally plays a minor role in zfit: it mostly provides a unified interface for data.

Preprocessing is therefore not part of zfit and should be done beforehand. Python offers many great possibilities to do so (e.g. Pandas).

zfit knows unbinned and binned datasets.

The unbinned Data can load data from various sources, such as NumPy, Pandas DataFrames, TensorFlow Tensors and ROOT files (using uproot). For convenience, it can also be converted back directly with to_pandas. The constructors are named from_numpy, from_root etc.
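
For instance, a minimal sketch of the pandas round trip (the DataFrame here is made up for illustration):

import pandas as pd

df = pd.DataFrame({'obs1': np.random.normal(size=100)})
data_from_df = zfit.Data.from_pandas(df, obs='obs1')  # constructor named as described above
df_back = data_from_df.to_pandas()  # and back again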

A Data needs not only the data itself but also the observables: the human-readable string identifiers of the axes (corresponding to “columns” of a Pandas DataFrame). It is convenient to define the Space not only with the observable but also with a limit: this can directly be re-used as the normalization range in the PDF.

First, let’s define our observables

obs = zfit.Space('obs1', -5, 10)
# or more explicitly
obs = zfit.Space(obs='obs1', lower=-5, upper=10)

This Space has limits and offers the following functionality (a short sketch follows the list):

  • volume: calculate the volume (in one dimension, the distance between upper and lower)

  • inside(): return a boolean Tensor corresponding to whether the value is inside the Space

  • filter(): filter the input values to only return the ones inside
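
A minimal sketch of these helpers (the values are arbitrary):

test_values = znp.array([-7.0, 0.0, 3.0, 12.0])
print(obs.volume)               # 15, the distance between -5 and 10
print(obs.inside(test_values))  # [False  True  True False]
print(obs.filter(test_values))  # [0. 3.]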

size_normal = 10_000
data_normal_np = np.random.normal(size=size_normal, scale=2)

data_normal = zfit.Data(data_normal_np, obs=obs)

The main functionality is

  • nevents: attribute that returns the number of events in the object

  • space: a Space that defines the limits of the data; data outside will be cut (!)

  • n_obs: the number of dimensions in the dataset

  • with_obs: returns a dataset, possibly a subset, with only the given obs

  • with_weights: returns a dataset with the given weights (both are sketched after the indexing example below)

  • weights: event weights

Indexing with data[obs] returns a Data object with only the given observables and behaves similarly to a Pandas DataFrame: a string, i.e. a single observable, returns an array, while a list of strings returns a Data object with the given observables.

Furthermore, value returns a Tensor with shape (nevents, n_obs).

print(f"We have {data_normal.nevents} events in our dataset with the minimum of {np.min(data_normal.value())}")  # remember! The obs cut out some of the data
We have 9922 events in our dataset with the minimum of -4.994344351401815

data_normal.n_obs
1
data_normal["obs1"]  # indexing Pandas DataFrame style
<tf.Tensor: shape=(9922,), dtype=float64, numpy=
array([-0.25431733,  1.33504241, -1.10180075, ...,  0.24971481,
        4.61162467,  1.74609291])>
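
As promised above, a brief sketch of with_obs and with_weights (the unit weights are dummies for illustration):

data_sub = data_normal.with_obs('obs1')  # trivial here, but selects/reorders observables in general
data_weighted = data_normal.with_weights(np.ones(data_normal.nevents))  # attach unit weights
print(data_weighted.weights[:3])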

Model#

Building models is by far the largest part of zfit. We will therefore cover an essential part, the possibility to build custom models, in an extra chapter. Let’s start out with the idea that you define your parameters and your observable space; the latter is the expected input data.

PDFs are normalized (over a specified range); this is the main model and is what we are going to use throughout the tutorials.

A PDF is defined by

(1)#\[\begin{align} \mathrm{PDF}_{f(x)}(x; \theta) = \frac{f(x; \theta)}{\int_{a}^{b} f(x; \theta) \,\mathrm{d}x} \end{align}\]

where a and b define the normalization range (norm), over which (by inserting into the above definition) the integral of the PDF is unity.

Let’s create a simple Gaussian PDF. We already defined the Space for the data before; now we only need the parameters. These are different objects than a Space.

Parameter#

A Parameter (there are different kinds actually, more on that later) takes the following arguments as input: Parameter(name, initial value[, lower limit, upper limit]) where the limits are highly recommended but not mandatory. Furthermore, a step_size can be given (ideally around the expected uncertainty; e.g. for large yields or small values, setting it can help a lot). Also, a floating argument is supported, indicating whether the parameter is allowed to float in the fit or not (just omitting the limits does not make a parameter constant).

Parameters have a unique name, which serves as the identifier, e.g. in fit results.

mu = zfit.Parameter('mu', 1, -3, 3, step_size=0.2)
sigma_num = zfit.Parameter('sigma42', 1, 0.1, 10, floating=False)

Now we have everything to create a Gaussian PDF:

gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma_num)

Since this holds all the parameters and the observables are well-defined, we can retrieve them

gauss.n_obs  # dimensions
1
gauss.obs
('obs1',)
gauss.space
<zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]])), binned=False>
gauss.norm
<zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]])), binned=False>

As we’ve seen, the obs we defined is the space of Gauss: this acts as the default limits whenever needed (e.g. for sampling). gauss also has a norm, which by default also equals the obs given; however, we can explicitly change that.
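
As a minimal sketch (the evaluation point is arbitrary), the normalization range can also be overridden per call via the norm argument:

narrow_norm = zfit.Space('obs1', -1, 1)
gauss.pdf(znp.array([0.5]), norm=narrow_norm)  # normalized over (-1, 1) instead of the default obs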

We can also access the parameters of the PDF in two ways, depending on our intention: either by name (the parameterization name, e.g. mu and sigma, as defined in the Gauss), which is useful if we are interested in the parameter that describes the shape

gauss.params
{'mu': <zfit.Parameter 'mu' floating=True value=1>,
 'sigma': <zfit.Parameter 'sigma42' floating=False value=1>}

or to retrieve all the parameters that the PDF depends on. While this may sound trivial now, we will see later that models can depend on other models (e.g. sums) and parameters on other parameters. There is one function that automatically retrieves all dependencies, get_params. It takes three arguments to filter:

  • floating: whether to filter only floating parameters, only non-floating or don’t discriminate

  • is_yield: if it is a yield, or not a yield, or both

  • extract_independent: whether to recursively collect all parameters. This, and the explanation for why independent, can be found later on in the Simultaneous tutorial.

Usually, the default is exactly what we want if we look for all free parameters that this PDF depends on; the filters are sketched right after the default call.

gauss.get_params()
OrderedSet([<zfit.Parameter 'mu' floating=True value=1>])
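
The filters can be used directly; for example, to retrieve only the fixed parameters (a sketch using the arguments listed above):

gauss.get_params(floating=False)  # only the fixed sigma42 parameter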

The difference will also be clear if we e.g. use the same parameter twice:

gauss_only_mu = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=mu)
print(f"params={gauss_only_mu.params}")
print(f"get_params={gauss_only_mu.get_params()}")
params={'mu': <zfit.Parameter 'mu' floating=True value=1>, 'sigma': <zfit.Parameter 'mu' floating=True value=1>}

get_params=OrderedSet([<zfit.Parameter 'mu' floating=True value=1>])

Functionality#

PDFs provide a few useful methods. The main features of a zfit PDF are:

  • pdf: the normalized value of the PDF. It takes an argument norm that can be set to False, in which case we retrieve the unnormalized value (a sketch follows the integration example below)

  • integrate: given a certain range, the PDF is integrated. Like pdf, it takes a norm argument; if set to False, the unnormalized pdf is integrated

  • sample: samples from the pdf and returns a Data object

integral = gauss.integrate(limits=(-1, 3))  # corresponds to 2 sigma integral
integral
<tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.95449974])>
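
And the announced sketch of pdf with and without normalization (the evaluation point is arbitrary):

x_test = znp.array([1.0])
gauss.pdf(x_test)              # normalized value
gauss.pdf(x_test, norm=False)  # unnormalized value of the shape f(x)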

Tensors#

As we see, many zfit functions return Tensors. Usually, these are array-like objects that resemble numpy arrays closely enough to be used directly in place of actual numpy arrays.

If explicitly needed, we can always safely convert them to a numpy array by calling np.asarray(tensor).

More information about the backend can be found in Extended tutorial on TensorFlow

np.sqrt(integral)  # works out-of-the-box
array([0.97698502])

They also have shapes and dtypes, can be sliced, etc., so do not convert them unless you need to. More on this can be seen later in the talk about zfit and TensorFlow 2.0.

sample = gauss.sample(n=1000)  # default space taken as limits
sample
<zfit.Data: Data obs=('obs1',) shape=(1000, 1)>
sample['obs1'][:10]
# sample.value()[:10, 0]  # "value" returns a numpy array-like object
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([1.58814014, 1.86415489, 2.29062885, 0.83554616, 0.57065959,
       1.66506381, 1.07723188, 1.2664886 , 0.60385018, 0.96153973])>
sample.n_obs
1
sample.obs
('obs1',)

We see that sample also returns a zfit Data object with the same space it was sampled in. This can directly be used, e.g.

probs = gauss.pdf(sample)
probs[:10]
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([0.33558066, 0.27463279, 0.17346144, 0.39358388, 0.36381669,
       0.31978913, 0.39775426, 0.38502515, 0.36883501, 0.39864733])>

NOTE: In case you want to do this repeatedly (e.g. for toy studies), there is a more efficient way (see later on).
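
As a teaser, a minimal sketch of that approach, assuming the create_sampler API: the sampler keeps its internal storage and only redraws the numbers when asked to.

sampler = gauss.create_sampler(n=1000)
for _ in range(3):      # e.g. a toy-study loop
    sampler.resample()  # redraw the sample in-place
    probs = gauss.pdf(sampler)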

Plotting#

So far, we have a dataset and a PDF. Before we go for fitting, we can make a plot. This functionality is not directly provided in zfit (but can be added to zfit-physics). It is, however, simple enough to do by hand:

def plot_model(model, data, scale=1, plot_data=True):  # we will use scale later on

    nbins = 50

    lower, upper = data.data_range.limit1d
    x = znp.linspace(lower, upper, num=1000)  # np.linspace also works
    # scale the pdf (a density) to the expected counts per bin; note that this
    # uses the global size_normal rather than data.nevents
    y = model.pdf(x) * size_normal / nbins * data.data_range.volume
    y *= scale
    plt.plot(x, y, label='model')
    if plot_data:
        # legacy way, also works
        # data_plot = data.value()[:, 0]  # we could also use the `to_pandas` method
        # plt.hist(data_plot, bins=nbins)
        # modern way
        data_binned = data.to_binned(nbins)
        mplhep.histplot(data_binned, label='data')
    plt.legend()
plot_model(gauss, data_normal)
[figure: the Gaussian model curve overlaid on the binned data]

We can of course do better (and will continuously improve the plots later on), but this is quite simple and gives us the full power of matplotlib/mplhep.

Different models#

zfit offers a selection of predefined models (extended by zfit-physics, which contains physics-specific models such as ARGUS-shaped ones).

print(zfit.pdf.__all__)
['BasePDF', 'BaseFunctor', 'Exponential', 'Voigt', 'CrystalBall', 'DoubleCB', 'GeneralizedCB', 'GaussExpTail', 'GeneralizedGaussExpTail', 'Gauss', 'GeneralizedGauss', 'BifurGauss', 'Uniform', 'TruncatedGauss', 'WrapDistribution', 'Cauchy', 'Poisson', 'QGauss', 'ChiSquared', 'StudentT', 'Gamma', 'JohnsonSU', 'Bernstein', 'Chebyshev', 'Legendre', 'Chebyshev2', 'Hermite', 'Laguerre', 'RecursivePolynomial', 'ProductPDF', 'SumPDF', 'GaussianKDE1DimV1', 'KDE1DimGrid', 'KDE1DimExact', 'KDE1DimFFT', 'KDE1DimISJ', 'FFTConvPDFV1', 'ConditionalPDFV1', 'ZPDF', 'SimplePDF', 'SimpleFunctorPDF', 'UnbinnedFromBinnedPDF', 'BinnedFromUnbinnedPDF', 'HistogramPDF', 'SplineMorphingPDF', 'BinwiseScaleModifier', 'BinnedSumPDF', 'BaseBinnedFunctorPDF', 'BaseBinnedPDF', 'SplinePDF', 'TruncatedPDF', 'LogNormal', 'CachedPDF']

print(zphysics.pdf.__all__)
['Argus', 'RelativisticBreitWigner', 'CMSShape', 'Cruijff', 'ErfExp', 'Novosibirsk', 'Tsallis']

To create a more realistic model, we can build some components for a mass fit with a

  • signal component: CrystalBall

  • combinatorial background: Exponential

  • partially reconstructed background on the left: Kernel Density Estimation

Extended tutorial on different KDEs in zfit

Shape fit#

mass_obs = zfit.Space('mass', 0, 1000)
# Signal component

mu_sig = zfit.Parameter('mu_sig', 400, 100, 600)
sigma_sig = zfit.Parameter('sigma_sig', 50, 1, 100)
alpha_sig = zfit.Parameter('alpha_sig', 200, 100, 400, floating=False)  # won't be used in the fit
npar_sig = zfit.Parameter('n sig', 4, 0.1, 30, floating=False)
signal = zfit.pdf.CrystalBall(obs=mass_obs, mu=mu_sig, sigma=sigma_sig, alpha=alpha_sig, n=npar_sig)
# combinatorial background

lam = zfit.Parameter('lambda', -0.01, -0.05, -0.0001)
comb_bkg = zfit.pdf.Exponential(lam, obs=mass_obs)
part_reco_data = np.random.normal(loc=150, scale=160, size=70_000)
part_reco = zfit.pdf.KDE1DimGrid(obs=mass_obs, data=part_reco_data)

Composing models#

We can also compose multiple models together. Here we’ll stick to one-dimensional models; the extension to multiple dimensions is explained in the “custom models tutorial”.

Here we will use a SumPDF. This takes pdfs and fractions. If we provide n pdfs and:

  • n - 1 fracs: the nth fraction will be 1 - sum(fracs)

  • n fracs: no normalization attempt is made by SumPDF. If the fracs do not sum to one, this can lead to bad fitting behavior, as there is one degree of freedom too many

Extended tutorial on composed models, including multidimensional products

sig_frac = zfit.Parameter('sig_frac', 0.3, 0, 1)
comb_bkg_frac = zfit.Parameter('comb_bkg_frac', 0.25, 0, 1)
model = zfit.pdf.SumPDF([signal, comb_bkg, part_reco], [sig_frac, comb_bkg_frac])

In order to have a corresponding data sample, we can just create one. Since we want to fit to this dataset later on, we will create it with slightly different values. Therefore, we can use the ability of a parameter to be set temporarily to a certain value with

print(f"before: {sig_frac}")
with sig_frac.set_value(0.25):
    print(f"new value: {sig_frac}")
print(f"after 'with': {sig_frac}")
before: <zfit.Parameter 'sig_frac' floating=True value=0.3>

new value: <zfit.Parameter 'sig_frac' floating=True value=0.25>

after 'with': <zfit.Parameter 'sig_frac' floating=True value=0.3>

While this is useful, it does not scale well; for that, we can use the zfit.param.set_values helper. (Side note: instead of a list of values, we can also pass a FitResult; the given parameters then take their values from the result.)

with zfit.param.set_values([mu_sig, sigma_sig, sig_frac, comb_bkg_frac, lam], [370, 34, 0.08, 0.31, -0.0025]):
    data = model.sample(n=10_000)
plot_model(model, data);
[figure: the composite model overlaid on the sampled data]

Plotting the components is not difficult now: we can either just plot the pdfs separately (as we can still access them) or, in a generalized manner, by accessing the pdfs attribute:

def plot_comp_model(model, data):
    for mod, frac in zip(model.pdfs, model.params.values()):
        plot_model(mod, data, scale=frac, plot_data=False)
    plot_model(model, data)
plot_comp_model(model, data)
[figure: the model and its components overlaid on the data]

Now we can add legends etc. By the way, did you notice that the frac params are actually zfit Parameters? We just used them as if they were Python scalars, and it works. Note also the third fraction in the printout below: it is a ComposedParameter, automatically set to 1 - sig_frac - comb_bkg_frac = 0.45.

print(model.params)
{'frac_0': <zfit.Parameter 'sig_frac' floating=True value=0.3>, 'frac_1': <zfit.Parameter 'comb_bkg_frac' floating=True value=0.25>, 'frac_2': <zfit.ComposedParameter 'Composed_autoparam_1' params=[('frac_0', 'sig_frac'), ('frac_1', 'comb_bkg_frac')] value=0.45>}

Extended PDFs#

So far, we have only looked at normalized PDFs that do contain information about the shape but not about the absolute scale. We can make a PDF extended by adding a yield to it.

The behavior of the new, extended PDF does NOT change: any methods we called before will act the same and return the same values; the only exception is that some may now require one argument fewer. What changes is that the flag model.is_extended now returns True. Furthermore, we now have a few more methods that we can use, which would have raised an error before:

  • get_yield: return the yield parameter (notice that the yield is not added to the shape parameters params)

  • ext_{pdf,integrate}: these methods return the same as the versions used before, however, multiplied by the yield

  • sample is still the same, but does not require the argument n anymore. By default, n is now sampled from a Poisson distribution around the yield (these extras are sketched below, once the extended models are built).

The SumPDF now does not strictly need fracs anymore: if all input PDFs are extended, the sum will be as well and use the (normalized) yields as fracs

The preferred way to create an extended PDF is to use extended=yield in the PDF constructor.

Alternatively, one can use extended_pdf = pdf.create_extended(yield). (Since this relies on copying the PDF, which may not work for various reasons, there is also a pdf.set_yield(yield) method that sets the yield in-place. This won’t lead to ambiguities, as everything is supposed to work the same.)

yield_model = zfit.Parameter('yield_model', 10000, 0, 20000, step_size=10)
# model_ext = model.create_extended(yield_model)  # using an existing model

# when creating a new model 
model_ext = zfit.pdf.SumPDF([signal, comb_bkg, part_reco], [sig_frac, comb_bkg_frac], extended=yield_model)

Alternatively, we can create the models as extended and sum them up

sig_yield = zfit.Parameter('sig_yield', 2000, 0, 10000, step_size=1)
sig_ext = signal.create_extended(sig_yield)

comb_bkg_yield = zfit.Parameter('comb_bkg_yield', 6000, 0, 10000, step_size=1)
comb_bkg_ext = comb_bkg.create_extended(comb_bkg_yield)

part_reco_yield = zfit.Parameter('part_reco_yield', 2000, 0, 10000, step_size=1)
part_reco.set_yield(part_reco_yield)  # unfortunately, `create_extended` does not work here; no problem, it won't change anything
part_reco_ext = part_reco
model_ext_sum = zfit.pdf.SumPDF([sig_ext, comb_bkg_ext, part_reco_ext])
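
With the extended sum at hand, here is the promised sketch of the extended extras (the evaluation point is arbitrary):

model_ext_sum.get_yield()                  # the total yield parameter
model_ext_sum.ext_pdf(znp.array([250.0]))  # pdf multiplied by the yield
sample_ext = model_ext_sum.sample()        # n is drawn as a Poisson around the yield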

Loss#

A loss combines the model and the data, for example to build a likelihood. Furthermore, it can contain constraints, i.e. additions to the likelihood. Currently, if the Data has weights, these are automatically taken into account.

nll_gauss = zfit.loss.UnbinnedNLL(gauss, data_normal)

The loss has several attributes, making it transparent to higher-level libraries. We can calculate its value using value.

nll_gauss.value()
<tf.Tensor: shape=(), dtype=float64, numpy=33274.949302190376>

Notice that due to graph building, this will take significantly longer on the first run. Rerun the cell above and it will be way faster.

Furthermore, the loss also provides the possibility to calculate the gradient or, as often needed, the value and the gradient together (sketched below).

nll_gauss.gradient()
<tf.Tensor: shape=(1,), dtype=float64, numpy=array([9583.07235599])>
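
The combined call is a sketch, assuming the value_gradient method that complements value and gradient:

value, gradient = nll_gauss.value_gradient()  # both in one evaluation, as minimizers typically need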

We can access the data and models (and possible constraints)

nll_gauss.model
[<zfit.<class 'zfit.models.dist_tfp.Gauss'>  params=[mu, sigma42]]
nll_gauss.data
[<zfit.Data: Data obs=('obs1',) shape=(9922, 1)>]
nll_gauss.constraints
[]

Similar to the models, we can also get the parameters via get_params.

nll_gauss.get_params()
OrderedSet([<zfit.Parameter 'mu' floating=True value=1>])

Extended loss#

More interestingly, we can now build a loss for our composite sum model using the sampled data. Since we created an extended model, we can now also create an extended likelihood, taking into account a Poisson term to match the yield to the number of events.

nll = zfit.loss.ExtendedUnbinnedNLL(model_ext_sum, data)
nll.get_params()
OrderedSet([<zfit.Parameter 'sig_yield' floating=True value=2000>, <zfit.Parameter 'comb_bkg_yield' floating=True value=6000>, <zfit.Parameter 'part_reco_yield' floating=True value=2000>, <zfit.Parameter 'mu_sig' floating=True value=400>, <zfit.Parameter 'sigma_sig' floating=True value=50>, <zfit.Parameter 'lambda' floating=True value=-0.01>])

Minimization#

While a likelihood is interesting, we usually want to minimize it. Therefore, we can use the minimizers in zfit, most notably Minuit, a wrapper around the iminuit minimizer.

The philosophy is to create a minimizer instance that is mostly stateless, e.g. it does not remember the position.

Since iminuit provides a very reliable and stable minimizer, it is usually the recommended choice. Others are implemented as well and could easily be wrapped; however, their convergence is usually not as stable.

Minuit has a few options (an explicit example follows the default construction below):

  • tol: the Estimated Distance to Minimum (EDM) criterion for convergence (default 1e-3)

  • verbosity: between 0 and 10, 5 is normal, 7 is verbose, 10 is maximum

  • gradient: if True, uses the Minuit numerical gradient instead of the TensorFlow gradient. This is usually more stable for smaller fits; furthermore, the TensorFlow gradient can (based on experience) sometimes be wrong. Passing "zfit" uses the TensorFlow gradient.

minimizer = zfit.minimize.Minuit()
# minimizer = zfit.minimize.NLoptLBFGSV1()
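
For illustration, a sketch with the options above set explicitly (the values are arbitrary and minimizer_verbose is a made-up name):

minimizer_verbose = zfit.minimize.Minuit(tol=1e-3, verbosity=5, gradient=True)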

Other minimizers#

There are a couple of other minimizers, such as those from SciPy and NLopt, that are wrapped and can be used.

They all provide a highly standardized interface, good default values and unambiguous docstrings, i.e. every method has its own docstring.

When to stop: a crucial problem is the question of convergence. iminuit has the EDM, a comparably reliable metric. Other minimizers in zfit use the same stopping criterion, to be able to compare results, but that is often very inefficient: many minimizers do not internally use the Hessian, which is required for the EDM, or do not provide it. This means the Hessian has to be calculated separately (expensive!). Better stopping criteria ideas are welcome!

help(zfit.minimize.ScipyNewtonCGV1.__init__)
Help on function __init__ in module zfit.minimizers.minimizers_scipy:

__init__(self, tol: 'float | None' = None, gradient: 'Callable | str | None' = None, hessian: 'None | (Callable | str | scipy.optimize.HessianUpdateStrategy)' = None, verbosity: 'int | None' = None, maxiter: 'int | str | None' = None, criterion: 'ConvergenceCriterion | None' = None, strategy: 'ZfitStrategy | None' = None, name: 'str' = 'SciPy Newton-CG ') -> 'None'
    WARNING! This algorithm seems unstable and may does not perform well!

    |@doc:minimizer.scipy.info| This implenemtation wraps the minimizers in
    `SciPy optimize <https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html>`_. |@docend:minimizer.scipy.info|


    Args:
        tol: |@doc:minimizer.tol| Termination value for the
               convergence/stopping criterion of the algorithm
               in order to determine if the minimum has
               been found. Defaults to 1e-3. |@docend:minimizer.tol|
        gradient: |@doc:minimizer.scipy.gradient| Define the method to use for the gradient computation
               that the minimizer should use. This can be the
               gradient provided by the loss itself or
               method from the minimizer.
               In general, using the zfit provided automatic gradient is
               more precise and needs less computation time for the
               evaluation compared to a numerical method, but it may not always be
               possible. In this case, zfit switches to a generic, numerical gradient
               which in general performs worse than if the minimizer has its own
               numerical gradient.
               The following are possible choices:

               If set to ``False`` or ``'zfit'`` (or ``None``; default), the
               gradient of the loss (usually the automatic gradient) will be used;
               the minimizer won't use an internal algorithm. |@docend:minimizer.scipy.gradient|
               |@doc:minimizer.scipy.gradient.internal| ``True`` tells the minimizer to use its default internal
               gradient estimation. This can be specified more clearly using the
               arguments ``'2-point'`` and ``'3-point'``, which specify the
               numerical algorithm the minimizer should use in order to
               estimate the gradient. |@docend:minimizer.scipy.gradient.internal|

        hessian: |@doc:minimizer.scipy.hessian| Define the method to use for the hessian computation
               that the minimizer should use. This can be the
               hessian provided by the loss itself or
               method from the minimizer.

               While the exact gradient can speed up the convergence and is
               often beneficial, this ain't true for the computation of the
               (inverse) Hessian matrix.
               Due to the :math:`n^2` number of entries (compared to :math:`n` in the
               gradient) from the :math:`n` parameters, this can grow quite
               large and become computationally expensive.

               Therefore, many algorithms use an approximated (inverse)
               Hessian matrix making use of the gradient updates instead
               of calculating the exact matrix. This turns out to be
               precise enough and usually considerably speeds up the
               convergence.

               The following are possible choices:

               If set to ``False`` or ``'zfit'``, the
               hessian defined in the loss (usually using automatic differentiation)
               will be used;
               the minimizer won't use an internal algorithm. |@docend:minimizer.scipy.hessian|
               |@doc:minimizer.scipy.hessian.internal| A :class:`~scipy.optimize.HessianUpdateStrategy` that holds
               an approximation of the hessian. For example
               :class:`~scipy.optimize.BFGS` (which performs usually best)
               or :class:`~scipy.optimize.SR1`
               (sometimes unstable updates).
               ``True``  (or ``None``; default) tells the minimizer
               to use its default internal
               hessian approximation.
               Arguments ``'2-point'`` and ``'3-point'`` specify which
               numerical algorithm the minimizer should use in order to
               estimate the hessian. This is only possible if the
               gradient is provided by zfit and not an internal numerical
               method is already used to determine it. |@docend:minimizer.scipy.hessian.internal|
        verbosity: |@doc:minimizer.verbosity| Verbosity of the minimizer. Has to be between 0 and 10.
          The verbosity has the meaning:

           - a value of 0 means quiet and no output
           - above 0 up to 5, information that is good to know but without
             flooding the user, corresponding to a "INFO" level.
           - A value above 5 starts printing out considerably more and
             is used more for debugging purposes.
           - Setting the verbosity to 10 will print out every
             evaluation of the loss function and gradient.

           Some minimizers offer additional output which is also
           distributed as above but may duplicate certain printed values. |@docend:minimizer.verbosity|
        maxiter: |@doc:minimizer.maxiter| Approximate number of iterations.
               This corresponds to roughly the maximum number of
               evaluations of the ``value``, 'gradient`` or ``hessian``. |@docend:minimizer.maxiter|
        criterion: |@doc:minimizer.criterion| Criterion of the minimum. This is an
               estimated measure for the distance to the
               minimum and can include the relative
               or absolute changes of the parameters,
               function value, gradients and more.
               If the value of the criterion is smaller
               than ``loss.errordef * tol``, the algorithm
               stopps and it is assumed that the minimum
               has been found. |@docend:minimizer.criterion|
        strategy: |@doc:minimizer.strategy| A class of type ``ZfitStrategy`` that takes no
               input arguments in the init. Determines the behavior of the minimizer in
               certain situations, most notably when encountering
               NaNs. It can also implement a callback function. |@docend:minimizer.strategy|
        name: |@doc:minimizer.name| Human-readable name of the minimizer. |@docend:minimizer.name|

print(zfit.minimize.__all__)
['ScipyDogleg', 'ScipyCOBYLA', 'ScipyTrustNCG', 'ScipyTrustKrylov', 'WrapOptimizer', 'Adam', 'Minuit', 'LevenbergMarquardt', 'ScipyBaseMinimizerV1', 'ScipyLBFGSBV1', 'ScipyTrustConstrV1', 'ScipyPowellV1', 'ScipySLSQPV1', 'ScipyNewtonCGV1', 'ScipyTruncNCV1', 'ScipyNelderMeadV1', 'ScipyBaseMinimizer', 'ScipyLBFGSB', 'ScipyTrustConstr', 'ScipyPowell', 'ScipySLSQP', 'ScipyNewtonCG', 'ScipyTruncNC', 'ScipyNelderMead', 'ScipyBFGS', 'NLoptBaseMinimizerV1', 'NLoptLBFGSV1', 'NLoptTruncNewtonV1', 'NLoptSLSQPV1', 'NLoptMMAV1', 'NLoptCCSAQV1', 'NLoptShiftVarV1', 'NLoptMLSLV1', 'NLoptStoGOV1', 'NLoptESCHV1', 'NLoptISRESV1', 'NLoptSubplexV1', 'NLoptBOBYQAV1', 'NLoptCOBYLAV1', 'NLoptBaseMinimizer', 'NLoptLBFGS', 'NLoptTruncNewton', 'NLoptSLSQP', 'NLoptMMA', 'NLoptCCSAQ', 'NLoptShiftVar', 'NLoptMLSL', 'NLoptStoGO', 'NLoptESCH', 'NLoptISRES', 'NLoptSubplex', 'NLoptBOBYQA', 'NLoptCOBYLA', 'Ipyopt', 'LevenbergMarquardt', 'BaseMinimizer', 'minimize_supports', 'DefaultStrategy', 'DefaultToyStrategy', 'PushbackStrategy', 'EDM']

For the minimization, we can call minimize, which takes a

  • loss as we created above

  • optionally: the parameters to minimize

By default, minimize uses all the free floating parameters (obtained with get_params). We can also explicitly specify which ones to use by giving them (or better, objects that depend on them) to minimize; note, however, that non-floating parameters, even if given explicitly to minimize, won’t be minimized (a sketch follows below).
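
For instance, a sketch restricting the fit to an explicit subset of parameters (kept as a comment, since the actual fit follows below):

# result_partial = minimizer.minimize(nll, params=[sig_yield, comb_bkg_yield])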

iminuit compatibility#

As a sidenote, a zfit loss provides the same API as required by iminuit. Therefore, we can also directly invoke iminuit and use it.

The advantage can be that iminuit offers a few things out of the box, like contour drawings, which zfit does not. The disadvantage is that the zfit workflow is exited.

import iminuit

iminuit_min = iminuit.Minuit(nll, nll.get_params())
# iminuit_min.migrad()  # this fails sometimes, as iminuit doesn't use the limits and is therefore unstable
result = minimizer.minimize(nll)
plot_comp_model(model_ext_sum, data)

Fit result#

The result of every minimization is stored in a FitResult. This is the last stage of the zfit workflow and serves as the interface to other libraries. Its main purpose is to store the values of the fit, to reference the objects that were used and to perform (simple) uncertainty estimation.

Extended tutorial on the FitResult

print(result)

This gives an overview of the whole result. Often we’re mostly interested in the parameters and their values, which we can access with the params attribute.

print(result.params)

This is a dict which stores any knowledge about the parameters and can be accessed by the parameter (object) itself:

result.params[mu_sig]

‘value’ is the value at the minimum. To obtain other information about the minimization process, result contains more attributes:

  • valid: whether the result is actually valid or not (most important flag to check)

  • fmin: the function minimum

  • edm: estimated distance to minimum

  • info: contains a lot of information, especially the original information returned by a specific minimizer

  • converged: if the fit converged

result.fmin

Info provides additional information on the minimization, which can be minimizer-dependent. For example, if iminuit was used, the instance can be retrieved.

print(result.info['minuit'])  # the actual instance used in minimization

Estimating uncertainties#

The FitResult has mainly two methods to estimate the uncertainty:

  • a profile likelihood method (like MINOS)

  • Hessian approximation of the likelihood (like HESSE)

When using Minuit, this uses (currently) its own implementation. However, zfit has its own implementation, which can be invoked by changing the method name.

Hesse also supports corrections for weights.

We can explicitly specify which parameters to calculate; by default, it is done for all.

result.hesse(name='hesse')
# result.hesse(method='hesse_np', name='numpy_hesse')
# result.hesse(method='minuit_hesse', name='iminuit_hesse')

We get the result directly returned. This is also added to result.params for each parameter and is nicely displayed with an added column

print(result.params)
errors, new_result = result.errors(params=[sig_yield, part_reco_yield, mu_sig],
                                   name='errors')  # just using three, but we could also omit argument -> all used
errorszfit, new_result_zfit = result.errors(params=[sig_yield, part_reco_yield, mu_sig], method='zfit_errors', name='errorszfit')
print(errors)
print(result)

What is ‘new_result’?#

When profiling a likelihood, as is done in the algorithm used by errors, a new minimum can be found. If this is the case, the new minimum will be returned; otherwise new_result is None. Furthermore, the current result would be rendered invalid by setting the flag valid to False. Note: this behavior only applies to the zfit internal error estimator.

We can also access the covariance matrix of the parameters

result.covariance()

Bonus: serialization#

Human-readable serialization is experimentally supported in zfit. zfit aims to follow the HS3 standard, which is currently in development.

Extended tutorial on serialization

model.to_dict()
gauss.to_yaml()
zfit.hs3.dumps(nll)

End of zfit#

This is where zfit finishes and other libraries take over.

Beginning of hepstats#

hepstats is a library containing statistical tools and utilities for high energy physics. In particular, you can perform statistical inference using the models and likelihood functions constructed in zfit.

Short example: let’s compute, for instance, a confidence interval at 68% confidence level on the mean of the Gaussian defined above.

from hepstats.hypotests import ConfidenceInterval
from hepstats.hypotests.calculators import AsymptoticCalculator
from hepstats.hypotests.parameters import POIarray
calculator = AsymptoticCalculator(input=result, minimizer=minimizer)
value = result.params[mu_sig]["value"]
error = result.params[mu_sig]["hesse"]["error"]

mean_scan = POIarray(mu_sig, np.linspace(value - 1.5 * error, value + 1.5 * error, 10))
ci = ConfidenceInterval(calculator, mean_scan)
ci.interval()
from utils import one_minus_cl_plot

ax = one_minus_cl_plot(ci)
ax.set_xlabel("mean")

There will be more on hepstats later.