Use automatic or numerical gradients.
zfit runs on top of TensorFlow, a modern, powerful computing engine very similar in design to NumPy. An interactive tutorial can be found at zfit/zfit-tutorials.
A strong feature of TensorFlow is the possibility to derive an analytic expression for the gradient by successively applying the chain rule to all of its operations. This is independent of whether the code is run in graph or eager execution, but it requires all dynamic operations to be tf.* operations. For example, multiplying by a constant (constant as in never changing) does not require the constant to be a tf.constant(...); it can be a plain Python scalar. It is also fine to use a fixed template shape built with NumPy (SciPy etc.), as the template shape will stay constant (this requires, though, the use of a z.numpy_function to work, but that is another story about graph mode or not).
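The chain-rule mechanism behind such automatic gradients can be illustrated with a tiny forward-mode autodiff sketch in plain Python. The `Dual` class below is purely illustrative (it is not part of zfit or TensorFlow): each operation propagates a value together with its derivative, so the gradient of a composite expression falls out automatically. Note how a constant multiplier participates as a plain scalar, analogous to the point above.

```python
class Dual:
    """Minimal forward-mode autodiff value: tracks f(x) and f'(x)."""

    def __init__(self, value, grad=0.0):
        self.value = value
        self.grad = grad

    def __mul__(self, other):
        if isinstance(other, Dual):  # product rule
            return Dual(self.value * other.value,
                        self.grad * other.value + self.value * other.grad)
        # constant factor: a plain Python scalar is fine, no wrapping needed
        return Dual(self.value * other, self.grad * other)

    __rmul__ = __mul__

    def __add__(self, other):
        if isinstance(other, Dual):
            return Dual(self.value + other.value, self.grad + other.grad)
        return Dual(self.value + other, self.grad)  # adding a constant

    __radd__ = __add__


# d/dx (3 * x * x + 2) at x = 4 is 6 * x = 24
x = Dual(4.0, grad=1.0)  # seed dx/dx = 1
y = 3 * x * x + 2
print(y.value, y.grad)  # 50.0 24.0
```

TensorFlow does the same bookkeeping in reverse mode over its computation graph, which is why every dynamic step must be an operation it can trace.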
To allow dynamic NumPy operations in a component, preferably wrapped with z.numpy_function instead of forcing eager execution, and to still retrieve a meaningful gradient, a numerical gradient has to be used. In general, this can be achieved by setting autograd to False. Any derivative requested will then be computed numerically. Furthermore, some minimizers (e.g.
Minuit) have their own way of calculating gradients, which can be faster. Disabling autograd and using zfit's built-in numerical calculation of the gradient and Hessian can be less stable and may raise errors.
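What "numerically computed" means, and why it can be less stable, can be sketched with central finite differences in plain Python (an illustration, not the actual zfit implementation; the `loss` function is a made-up toy). The second part shows the typical failure mode: the result depends on the step size, and a step chosen too small is dominated by floating-point cancellation.

```python
def numerical_gradient(f, params, h=1e-5):
    """Central-difference gradient of f at params (a list of floats)."""
    grad = []
    for i in range(len(params)):
        up = list(params)
        up[i] += h
        down = list(params)
        down[i] -= h
        grad.append((f(up) - f(down)) / (2 * h))
    return grad


# Toy chi2-like loss with its minimum at (1, 2).
def loss(p):
    return (p[0] - 1.0) ** 2 + 3.0 * (p[1] - 2.0) ** 2


print(numerical_gradient(loss, [2.0, 2.0]))  # close to [2.0, 0.0]


# Instability: a second derivative (the Hessian in 1D) computed by finite
# differences is sensitive to the step size h.
def second_derivative(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2


quartic = lambda x: x ** 4  # exact second derivative at x = 1 is 12

print(second_derivative(quartic, 1.0, 1e-4))  # close to 12
print(second_derivative(quartic, 1.0, 1e-8))  # far off: cancellation dominates
```

Analytic (autograd) derivatives have no such step-size trade-off, which is why they are preferred whenever all operations are traceable.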