zfit.run.set_autograd_mode#

run.set_autograd_mode()#

Use automatic or numerical gradients.

zfit runs on top of TensorFlow, a modern, powerful computing engine very similar in design to NumPy. An interactive tutorial can be found at zfit/zfit-tutorials.

Automatic gradient

A strong feature of TensorFlow is the possibility to derive an analytic expression for the gradient by successively applying the chain rule to all of its operations. This is independent of whether the code is run in graph or eager execution, but it requires all dynamic operations to be tf.* operations. Constants (values that never change) are exempt: multiplying by a constant does not require it to be a tf.constant(...); a Python scalar works just as well. For example, it is also fine to build a fixed template shape with NumPy (SciPy etc.), since the template stays constant (in graph mode this requires wrapping the call in z.numpy_function, but that is a separate story about graph mode).
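A rough illustration of this point in plain TensorFlow (not the zfit API itself): the constant NumPy template enters the differentiable expression directly, while the dynamic part uses tf.* operations.

```python
import numpy as np
import tensorflow as tf

# A fixed template: computed once with NumPy, it never changes,
# so it does not need to be a tf.* operation for autograd to work.
template = np.linspace(0.0, 1.0, 5)

x = tf.Variable(2.0, dtype=tf.float64)
with tf.GradientTape() as tape:
    # The dynamic part (depending on x) uses tf.* operations; the
    # constant template enters as a plain NumPy array.
    y = tf.reduce_sum(x * template)

print(tape.gradient(y, x))  # analytic gradient: sum(template) = 2.5
```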

To allow dynamic NumPy operations in a component (preferably wrapped with z.numpy_function instead of forcing eager execution) and still retrieve a meaningful gradient, a numerical gradient has to be used. In general, this is achieved by setting autograd to False; any derivative is then computed numerically. Furthermore, some minimizers (e.g. Minuit) have their own way of calculating gradients, which can be faster. Disabling autograd and using zfit's built-in numerical calculation of the gradients and Hessian can be less stable and may raise errors.
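A minimal sketch of this setup, assuming z.numpy_function mirrors the (func, inp, Tout) signature of tf.numpy_function; the model function below is purely illustrative.

```python
import scipy.special
import tensorflow as tf
import zfit
from zfit import z

# Dynamic SciPy code: TensorFlow cannot trace an analytic gradient
# through it, so switch to numerical gradients.
zfit.run.set_autograd_mode(False)

def bessel_np(x):
    # plain NumPy/SciPy code, evaluated outside the TensorFlow graph
    return scipy.special.j0(x)

@z.function
def model(x):
    # Wrap the dynamic NumPy call so it also works in graph mode
    # (assuming z.numpy_function takes (func, inp, Tout), like
    # tf.numpy_function).
    return z.numpy_function(bessel_np, [x], tf.float64)
```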

Parameters:

autograd (bool | None) – Whether the automatic gradient feature of TensorFlow should be used or a numerical procedure instead. If any non-constant Python (numpy, scipy,…) code is used inside, this should be set to False so that the numerical procedure is used.
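A short usage sketch of toggling the mode via the plain setter call:

```python
import zfit

zfit.run.set_autograd_mode(False)  # use numerical gradients
# ... build a loss containing dynamic NumPy code and minimize ...
zfit.run.set_autograd_mode(True)   # back to TensorFlow's automatic gradients
```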