derivkit package#
Module contents#
Provides all derivkit methods.
- class derivkit.CalculusKit(function: Callable[[Sequence[float] | ndarray], float | ndarray[tuple[Any, ...], dtype[floating]]], x0: Sequence[float] | ndarray)#
Bases: object
Provides access to gradient, Jacobian, and Hessian tensors.
Initialises class with function and expansion point.
- Parameters:
function – The function to be differentiated. Accepts a 1D array-like. Must return either a scalar (for gradient/Hessian) or a 1D array (for Jacobian).
x0 – Point at which to evaluate derivatives (shape (p,)) for p input parameters.
- gradient(*args, **kwargs) ndarray[tuple[Any, ...], dtype[floating]]#
Returns the gradient of a scalar-valued function.
This is a wrapper around derivkit.calculus.build_gradient(), with the function and theta0 arguments fixed to the values provided at initialization of CalculusKit. Any additional positional or keyword arguments are forwarded to derivkit.calculus.build_gradient().
Refer to the documentation of derivkit.calculus.build_gradient() for available options.
- hessian(*args, **kwargs) ndarray[tuple[Any, ...], dtype[floating]]#
Returns the Hessian of a scalar-valued function.
This is a wrapper around derivkit.calculus.build_hessian(), with the function and theta0 arguments fixed to the values provided at initialization of CalculusKit. Any additional positional or keyword arguments are forwarded to derivkit.calculus.build_hessian().
Refer to the documentation of derivkit.calculus.build_hessian() for available options.
- hessian_diag(*args, **kwargs) ndarray[tuple[Any, ...], dtype[floating]]#
Returns the diagonal of the Hessian of a scalar-valued function.
This is a wrapper around derivkit.calculus.build_hessian_diag(), with the function and theta0 arguments fixed to the values provided at initialization of CalculusKit. Any additional positional or keyword arguments are forwarded to derivkit.calculus.build_hessian_diag().
Refer to the documentation of derivkit.calculus.build_hessian_diag() for available options.
- hyper_hessian(*args, **kwargs) ndarray[tuple[Any, ...], dtype[floating]]#
Returns the third-derivative tensor of a function.
This is a wrapper around derivkit.calculus.build_hyper_hessian(), with the function and theta0 arguments fixed to the values provided at initialization of CalculusKit. Any additional positional or keyword arguments are forwarded to derivkit.calculus.build_hyper_hessian().
Refer to the documentation of derivkit.calculus.build_hyper_hessian() for available options.
- jacobian(*args, **kwargs) ndarray[tuple[Any, ...], dtype[floating]]#
Returns the Jacobian of a vector-valued function.
This is a wrapper around derivkit.calculus.build_jacobian(), with the function and theta0 arguments fixed to the values provided at initialization of CalculusKit. Any additional positional or keyword arguments are forwarded to derivkit.calculus.build_jacobian().
Refer to the documentation of derivkit.calculus.build_jacobian() for available options.
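A minimal usage sketch, assuming a toy scalar model and default wrapper options (illustrative values):
>>> import numpy as np
>>> from derivkit import CalculusKit
>>> def scalar_model(theta):
...     return theta[0] ** 2 + 3.0 * theta[0] * theta[1]
>>> ck = CalculusKit(scalar_model, np.array([1.0, 2.0]))
>>> grad = ck.gradient()   # approx. [8.0, 3.0] for this model
>>> hess = ck.hessian()    # approx. [[2.0, 3.0], [3.0, 0.0]]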
- class derivkit.DerivativeKit(function: Callable[[float | ndarray], Any] | None = None, x0: float | ndarray | None = None, *, tab_x: ArrayLike | None = None, tab_y: ArrayLike | None = None)#
Bases: object
Unified interface for computing numerical derivatives.
The class provides a simple way to evaluate derivatives using any of DerivKit’s available backends (e.g., adaptive fit or finite difference). By default, the adaptive-fit method is used.
You can supply either a function and x0, or tabulated values tab_x/tab_y together with x0 if you want to differentiate a tabulated function. The chosen backend is invoked when you call the .differentiate() method.
Example
>>> import numpy as np
>>> from derivkit import DerivativeKit
>>> dk = DerivativeKit(np.cos, x0=1.0)
>>> deriv = dk.differentiate(order=1)  # uses the default "adaptive" method
- function#
The callable to differentiate.
- x0#
The point or points at which the derivative is evaluated.
- default_method#
The backend used when no method is specified.
Initializes the DerivativeKit with a target function and expansion point.
- Parameters:
function – The function to be differentiated. Must accept a single float and return a scalar or array-like output.
x0 – Point or array of points at which to evaluate the derivative.
tab_x – Optional tabulated x values for creating a tabulated_model.one_d.Tabulated1DModel.
tab_y – Optional tabulated y values for creating a tabulated_model.one_d.Tabulated1DModel.
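A sketch of the tabulated usage, assuming an illustrative sample grid:
>>> import numpy as np
>>> from derivkit import DerivativeKit
>>> xs = np.linspace(0.0, np.pi, 201)
>>> ys = np.sin(xs)
>>> dk_tab = DerivativeKit(x0=1.0, tab_x=xs, tab_y=ys)
>>> deriv = dk_tab.differentiate(order=1)  # approx. np.cos(1.0)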
- differentiate(*, method: str | None = None, **kwargs: Any) Any#
Compute derivatives using the chosen method.
Forwards all keyword arguments to the engine’s .differentiate().
- Parameters:
method – Method name or alias (e.g., "adaptive", "finite", "fd"). Default is "adaptive".
**kwargs – Passed through to the chosen engine.
- Returns:
The derivative result from the underlying engine.
If x0 is a single value, returns the usual derivative output.
If x0 is an array of points, returns an array where the first dimension indexes the points in x0. For example, if you pass 5 points and each derivative has shape (2, 3), the result has shape (5, 2, 3).
Notes
Thread-level parallelism across derivative evaluations can be controlled by passing n_workers via **kwargs. Note that this does not launch separate Python processes. All work occurs within a single process using worker threads.
- Raises:
ValueError – If method is not recognized.
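A sketch of the stacked-output behaviour described above, with illustrative values:
>>> import numpy as np
>>> from derivkit import DerivativeKit
>>> points = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
>>> dk = DerivativeKit(np.cos, x0=points)
>>> derivs = dk.differentiate(order=1)  # shape (5,): one scalar derivative per point in x0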
- class derivkit.ForecastKit(function: Callable[[Sequence[float] | ndarray], ndarray] | None, theta0: Sequence[float] | ndarray, cov: ndarray | Callable[[ndarray], ndarray], *, cache_theta: bool = True, cache_theta_number_decimal_places: int = 14, cache_theta_maxsize: int | None = 4096)#
Bases: object
Provides access to forecasting workflows.
Initialises the ForecastKit with model, fiducials, and covariance.
As an optimization, the class can cache function values, which speeds up repeated function calls and their associated derivatives. In this case, the input parameters are truncated to a fixed number of decimal places (see cache_theta_number_decimal_places).
- Parameters:
function – Callable returning the model mean vector \(\mu(\theta)\). May be None if you only plan to use covariance-only workflows (e.g. generalized Fisher with term="cov"). Required for fisher(), dali(), and fisher_bias().
theta0 – Fiducial parameter values of shape (p,) where p is the number of parameters.
cov – Covariance specification. Supported forms are:
cov=C0: a fixed covariance matrix \(C(\theta_0)\) with shape (n_obs, n_obs), where n_obs is the number of observables.
cov=cov_fn: a callable with cov_fn(theta) returning the covariance matrix \(C(\theta)\) evaluated at the parameter vector theta, with shape (n_obs, n_obs). The covariance at theta0 is evaluated once and cached.
cache_theta – A flag which, if set to True, turns on caching of function values.
cache_theta_number_decimal_places – The number of decimal places of the input parameters that are retained for caching.
cache_theta_maxsize – The maximum size of the cache.
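A construction sketch with a toy linear mean model and a fixed diagonal covariance (illustrative values):
>>> import numpy as np
>>> from derivkit import ForecastKit
>>> def mean_model(theta):
...     # two observables, linear in two parameters
...     return np.array([theta[0] + theta[1], theta[0] - theta[1]])
>>> theta0 = np.array([1.0, 0.5])
>>> cov0 = 0.01 * np.eye(2)  # fixed covariance with shape (n_obs, n_obs)
>>> fk = ForecastKit(mean_model, theta0, cov0)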
- dali(*, method: str | None = None, forecast_order: int = 2, n_workers: int = 1, **dk_kwargs: Any) dict[int, tuple[ndarray[tuple[Any, ...], dtype[float64]], ...]]#
Builds the DALI expansion for the given model up to the given order.
- Parameters:
method – Method name or alias (e.g., "adaptive", "finite"). If None, the derivkit.derivative_kit.DerivativeKit default is used.
forecast_order – The requested order of the forecast. Currently supported values and their meaning are given in derivkit.forecasting.forecast_core.SUPPORTED_FORECAST_ORDERS.
n_workers – Number of workers for per-parameter parallelization/threads. Default 1 (serial). Inner batch evaluation is kept serial to avoid oversubscription.
**dk_kwargs – Additional keyword arguments passed to derivkit.calculus_kit.CalculusKit.
- Returns:
A dict mapping order -> multiplet for all order = 1..forecast_order. For each forecast order k, the returned multiplet contains the tensors introduced at that order. Concretely:
order 1: (F_{(1,1)},) (Fisher matrix)
order 2: (D_{(2,1)}, D_{(2,2)})
order 3: (T_{(3,1)}, T_{(3,2)}, T_{(3,3)})
Here D_{(k,l)} and T_{(k,l)} denote contractions of the k-th and l-th order derivatives via the inverse covariance. Each tensor axis has length p = len(self.theta0). The additional tensors at order k have parameter-axis ranks from k+1 through 2*k.
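Continuing the toy ForecastKit sketch above, the returned dict can be unpacked per order:
>>> tensors = fk.dali(forecast_order=2)
>>> (fisher_matrix,) = tensors[1]  # order 1: the Fisher matrix F_{(1,1)}
>>> d21, d22 = tensors[2]          # order 2: D_{(2,1)} and D_{(2,2)}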
- delta_chi2_dali(*, theta: ndarray, dali: dict[int, tuple[ndarray, ...]], forecast_order: int | None = 2) float#
Computes a displacement chi-squared under the DALI approximation.
This evaluates a scalar delta_chi2 from the displacement d = theta - theta0 using the provided DALI tensors.
The expansion point is taken from ForecastKit.theta0.
- Parameters:
theta – Evaluation point in parameter space with shape (p,).
dali – DALI tensors as returned by ForecastKit.dali().
forecast_order – Order of the forecast to use for the DALI contractions.
- Returns:
Scalar delta chi-squared value.
- delta_chi2_fisher(*, theta: ndarray, fisher: ndarray) float#
Computes a displacement chi-squared under the Fisher approximation.
This evaluates the standard quadratic form delta_chi2 = (theta - theta0)^T @ F @ (theta - theta0) using the provided Fisher matrix and the stored expansion point ForecastKit.theta0.
- Parameters:
theta – Evaluation point in parameter space with shape (p,).
fisher – Fisher matrix with shape (p, p).
- Returns:
Scalar delta chi-squared value.
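Continuing the toy sketch above, with an illustrative displacement:
>>> F = fk.fisher()
>>> dchi2 = fk.delta_chi2_fisher(theta=theta0 + np.array([0.02, -0.01]), fisher=F)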
- delta_nu(data_unbiased: ndarray, data_biased: ndarray) ndarray#
Computes the difference between two data vectors.
This helper is used in Fisher-bias calculations and any other workflow where two data vectors are compared: it takes a pair of vectors (for example, a version with a systematic and one without) and returns their difference as a 1D array whose length matches the number of observables implied by cov. It works with both 1D inputs and 2D arrays (for example, correlation-by-ell) and flattens 2D inputs using NumPy’s row-major (“C”) order, which is the standard convention throughout the DerivKit package.
- Parameters:
data_unbiased – Reference data vector without the systematic. Can be 1D or 2D. If 1D, it must follow the row-major (“C”) flattening convention used throughout the package.
data_biased – Data vector that includes the systematic effect. Can be 1D or 2D. If 1D, it must follow the row-major (“C”) flattening convention used throughout the package.
- Returns:
A 1D NumPy array of length n_observables representing the mismatch between the two input data vectors. This is simply the element-wise difference between the input with the systematic and the input without the systematic, flattened if necessary to match the expected observable ordering.
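A sketch with two illustrative observables, continuing the toy setup above:
>>> data_unbiased = np.array([1.50, 0.50])         # without the systematic
>>> data_biased = np.array([1.52, 0.47])           # with the systematic
>>> dnu = fk.delta_nu(data_unbiased, data_biased)  # 1D array of length n_observables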
- fisher(*, method: str | None = None, n_workers: int = 1, **dk_kwargs: Any) ndarray#
Computes the Fisher information matrix for a given model and covariance.
- Parameters:
method – Derivative method name or alias (e.g., "adaptive", "finite"). If None, the derivkit.derivative_kit.DerivativeKit default is used.
n_workers – Number of workers for per-parameter parallelisation. Default is 1 (serial).
**dk_kwargs – Additional keyword arguments forwarded to derivkit.derivative_kit.DerivativeKit.differentiate().
- Returns:
Fisher matrix with shape (n_parameters, n_parameters).
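Continuing the toy sketch above, the backend and parallelism can be chosen per call (illustrative options):
>>> F = fk.fisher(method="finite", n_workers=2)  # shape (p, p)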
- fisher_bias(*, fisher_matrix: ndarray, delta_nu: ndarray, method: str | None = None, n_workers: int = 1, rcond: float = 1e-12, **dk_kwargs: Any) tuple[ndarray, ndarray]#
Estimates parameter bias using the stored model, expansion point, and covariance.
This function takes a model, an expansion point, a covariance matrix, a Fisher matrix, and a data-vector difference delta_nu and maps that difference into parameter space. A common use case is the classic “Fisher bias” setup, where one asks how a systematic-induced change in the data would shift inferred parameters.
Internally, the function evaluates the model response at the expansion point and uses the covariance and Fisher matrix to compute both the parameter-space bias vector and the corresponding shifts. See https://arxiv.org/abs/0710.5171 for details.
- Parameters:
fisher_matrix – Square matrix describing information about the parameters. Its shape must be (p, p), where p is the number of parameters.
delta_nu – Difference between a biased and an unbiased data vector, for example \(\Delta\nu = \nu_{\mathrm{biased}} - \nu_{\mathrm{unbiased}}\). Accepts a 1D array of length n or a 2D array that will be flattened in row-major order (“C”) to length n, where n is the number of observables. If supplied as a 1D array, it must already follow the same row-major (“C”) flattening convention used throughout the package.
n_workers – Number of workers used by the internal derivative routine when forming the Jacobian.
method – Method name or alias (e.g., "adaptive", "finite"). If None, the derivkit.derivative_kit.DerivativeKit default is used.
rcond – Regularization cutoff for pseudoinverse. Default is 1e-12.
**dk_kwargs – Additional keyword arguments passed to derivkit.derivative_kit.DerivativeKit.differentiate().
- Returns:
A tuple (bias_vec, delta_theta) of 1D arrays with length p, where bias_vec is the parameter-space bias vector and delta_theta are the corresponding parameter shifts.
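Continuing the toy sketch above, with the illustrative data vectors from the delta_nu() sketch:
>>> F = fk.fisher()
>>> dnu = fk.delta_nu(data_unbiased, data_biased)
>>> bias_vec, delta_theta = fk.fisher_bias(fisher_matrix=F, delta_nu=dnu)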
- gaussian_fisher(*, method: str | None = None, n_workers: int = 1, rcond: float = 1e-12, symmetrize_dcov: bool = True, **dk_kwargs: Any) ndarray#
Computes the generalized Fisher matrix for parameter-dependent mean and/or covariance.
This function computes the generalized Fisher matrix for a Gaussian likelihood with parameter-dependent mean and/or covariance. Uses derivkit.forecasting.fisher_gaussian.build_gaussian_fisher_matrix().
- Parameters:
method – Derivative method name or alias (e.g., "adaptive", "finite").
n_workers – Number of workers for per-parameter parallelisation.
rcond – Regularization cutoff for pseudoinverse fallback in linear solves.
symmetrize_dcov – If True, symmetrize each covariance derivative via \(\tfrac{1}{2}(C_{,i} + C_{,i}^{\mathsf{T}})\).
**dk_kwargs – Forwarded to the internal derivative calls.
- Returns:
Fisher matrix with shape (p, p), with p the number of parameters.
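A sketch of the parameter-dependent-covariance case, assuming the cov_fn form documented in the constructor and the toy mean model above (illustrative):
>>> def cov_fn(theta):
...     # covariance that depends mildly on the first parameter
...     return (0.01 + 0.001 * theta[0] ** 2) * np.eye(2)
>>> fk_cov = ForecastKit(mean_model, theta0, cov_fn)
>>> F_gen = fk_cov.gaussian_fisher()  # shape (p, p)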
- getdist_dali_emcee(*, dali: dict[int, tuple[ndarray, ...]], names: Sequence[str], labels: Sequence[str], **kwargs: Any)#
Returns GetDist getdist.MCSamples from emcee sampling of a DALI posterior.
This is a thin wrapper around derivkit.forecasting.getdist_dali_samples.dali_to_getdist_emcee() that fixes the expansion point to self.theta0.
- Parameters:
dali – DALI tensors as returned by ForecastKit.dali().
names – Parameter names for GetDist (length p).
labels – Parameter labels for GetDist (length p).
**kwargs – Forwarded to derivkit.forecasting.getdist_dali_samples.dali_to_getdist_emcee() (e.g. n_steps, burn, thin, n_walkers, init_scale, seed, prior_terms, prior_bounds, logprior, sampler_bounds, label).
- Returns:
A getdist.MCSamples containing MCMC chains.
- getdist_dali_importance(*, dali: dict[int, tuple[ndarray, ...]], names: Sequence[str], labels: Sequence[str], **kwargs: Any)#
Returns GetDist getdist.MCSamples for a DALI posterior via importance sampling.
This is a thin wrapper around derivkit.forecasting.getdist_dali_samples.dali_to_getdist_importance() that fixes the expansion point to self.theta0.
- Parameters:
dali – DALI tensors as returned by ForecastKit.dali().
names – Parameter names for GetDist (length p).
labels – Parameter labels for GetDist (length p).
**kwargs – Forwarded to derivkit.forecasting.getdist_dali_samples.dali_to_getdist_importance() (e.g. n_samples, kernel_scale, seed, prior_terms, prior_bounds, logprior, sampler_bounds, label).
- Returns:
A getdist.MCSamples with importance weights.
- getdist_fisher_gaussian(*, fisher: ndarray, names: Sequence[str] | None = None, labels: Sequence[str] | None = None, **kwargs: Any)#
Converts a Fisher Gaussian into a GetDist getdist.gaussian_mixtures.GaussianND.
This is a thin wrapper around derivkit.forecasting.getdist_fisher_samples.fisher_to_getdist_gaussiannd() that fixes the mean to the stored expansion point self.theta0.
- Parameters:
fisher – Fisher matrix with shape (p, p) evaluated at self.theta0.
names – Optional parameter names (length p).
labels – Optional parameter labels (length p).
**kwargs – Forwarded to derivkit.forecasting.getdist_fisher_samples.fisher_to_getdist_gaussiannd() (e.g. label, rcond).
- Returns:
A getdist.gaussian_mixtures.GaussianND with mean self.theta0 and covariance given by the (pseudo-)inverse Fisher matrix.
- getdist_fisher_samples(*, fisher: ndarray, names: Sequence[str], labels: Sequence[str], **kwargs: Any)#
Draws GetDist getdist.MCSamples from the Fisher Gaussian at self.theta0.
This is a thin wrapper around derivkit.forecasting.getdist_fisher_samples.fisher_to_getdist_samples() that fixes the sampling center to the stored expansion point self.theta0.
- Parameters:
fisher – Fisher matrix with shape (p, p) evaluated at self.theta0.
names – Parameter names for GetDist (length p).
labels – Parameter labels for GetDist (length p).
**kwargs – Forwarded to derivkit.forecasting.getdist_fisher_samples.fisher_to_getdist_samples() (e.g. n_samples, seed, kernel_scale, prior_terms, prior_bounds, logprior, hard_bounds, store_loglikes, label).
- Returns:
A getdist.MCSamples object containing samples drawn from the Fisher Gaussian.
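A sketch of the GetDist hand-off, continuing the toy sketch above (requires getdist; names and options are illustrative):
>>> samples = fk.getdist_fisher_samples(
...     fisher=F,
...     names=["a", "b"],
...     labels=["a", "b"],
...     n_samples=20000,
...     seed=42,
... )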
- laplace_approximation(*, neg_logposterior: Callable[[ndarray], float], theta_map: Sequence[float] | ndarray | None = None, method: str | None = None, n_workers: int = 1, ensure_spd: bool = True, rcond: float = 1e-12, **dk_kwargs: Any) dict[str, Any]#
Computes a Laplace (Gaussian) approximation around theta_map.
The Laplace approximation replaces the posterior near its peak with a Gaussian. It does this by measuring the local curvature of the negative log-posterior using its Hessian at theta_map. The Hessian acts like a local precision matrix, and its inverse is the approximate covariance.
If theta_map is not provided, this uses the stored expansion point self.theta0.
- Parameters:
neg_logposterior – Callable that accepts a 1D float64 parameter vector and returns a scalar negative log-posterior value.
theta_map – Expansion point for the approximation. This is often the maximum a posteriori estimate (MAP). If None, uses self.theta0.
method – Derivative method name/alias forwarded to the Hessian builder.
n_workers – Outer parallelism forwarded to Hessian construction.
ensure_spd – If True, attempt to regularize the Hessian to be symmetric positive definite (SPD) by adding diagonal jitter.
rcond – Cutoff for small singular values used by the pseudoinverse fallback when computing the covariance.
**dk_kwargs – Additional keyword arguments forwarded to derivkit.derivative_kit.DerivativeKit.differentiate().
- Returns:
Dictionary with the Laplace approximation outputs (theta_map, neg_logposterior_at_map, hessian, cov, and jitter).
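A sketch with a toy quadratic negative log-posterior centred on theta0, continuing the setup above (illustrative):
>>> def neg_logpost(theta):
...     d = np.asarray(theta) - theta0
...     return 50.0 * float(d @ d)
>>> laplace = fk.laplace_approximation(neg_logposterior=neg_logpost, theta_map=theta0)
>>> cov_approx = laplace["cov"]  # inverse of the (regularized) Hessian at theta_map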
- laplace_covariance(hessian: ndarray, *, rcond: float = 1e-12) ndarray#
Computes the Laplace covariance matrix from a Hessian.
In the Laplace (Gaussian) approximation, the Hessian of the negative log-posterior at the expansion point acts like a local precision matrix. The approximate posterior covariance is the matrix inverse of that Hessian.
- Parameters:
hessian – 2D square Hessian matrix.
rcond – Cutoff for small singular values used by the pseudoinverse fallback.
- Returns:
A 2D symmetric covariance matrix with the same shape as hessian.
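For a diagonal Hessian the Laplace covariance is the reciprocal of each curvature, as in this illustrative sketch:
>>> H = np.diag([4.0, 25.0])
>>> fk.laplace_covariance(H)  # approx. diag([0.25, 0.04])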
- laplace_hessian(*, neg_logposterior: Callable[[ndarray], float], theta_map: Sequence[float] | ndarray | None = None, method: str | None = None, n_workers: int = 1, **dk_kwargs: Any) ndarray#
Computes the Hessian of the negative log-posterior at theta_map.
The Hessian at theta_map measures the local curvature of the posterior peak. In the Laplace approximation, this Hessian plays the role of a local precision matrix, and its inverse provides a fast Gaussian estimate of parameter uncertainties and correlations.
If theta_map is not provided, this uses the stored expansion point self.theta0.
- Parameters:
neg_logposterior – Callable returning the scalar negative log-posterior.
theta_map – Point where the curvature is evaluated (typically the MAP). If None, uses self.theta0.
method – Derivative method name/alias forwarded to the calculus machinery.
n_workers – Outer parallelism forwarded to Hessian construction.
**dk_kwargs – Additional keyword arguments forwarded to derivkit.derivative_kit.DerivativeKit.differentiate().
- Returns:
A symmetric 2D array with shape (p, p) giving the Hessian of neg_logposterior evaluated at theta_map.
- logposterior_dali(*, theta: ndarray, dali: dict[int, tuple[ndarray, ...]], forecast_order: int | None = 2, prior_terms: Sequence[tuple[str, dict[str, Any]] | dict[str, Any]] | None = None, prior_bounds: Sequence[tuple[float | None, float | None]] | None = None, logprior: Callable[[ndarray], float] | None = None) float#
Computes the log posterior (up to a constant) under the DALI approximation.
If no prior is provided, this returns the DALI log-likelihood expansion with a flat prior and no hard cutoffs. Priors may be provided either as a pre-built logprior(theta) callable or as a lightweight prior specification via prior_terms and/or prior_bounds.
The expansion point is taken from the stored self.theta0.
- Parameters:
theta – Evaluation point in parameter space with shape (p,).
dali – DALI tensors as returned by ForecastKit.dali().
forecast_order – Order of the forecast to use for the DALI contractions.
prior_terms – Prior term specification passed to the underlying prior builder. Use this only if logprior is not provided.
prior_bounds – Global hard bounds passed to the underlying prior builder. Use this only if logprior is not provided.
logprior – Optional custom log-prior callable. If it returns a non-finite value, the posterior is treated as zero at that point and the function returns -np.inf. Cannot be used together with prior_terms or prior_bounds.
- Returns:
Scalar log posterior value, defined up to an additive constant.
- logposterior_fisher(*, theta: ndarray, fisher: ndarray, prior_terms: Sequence[tuple[str, dict[str, Any]] | dict[str, Any]] | None = None, prior_bounds: Sequence[tuple[float | None, float | None]] | None = None, logprior: Callable[[ndarray], float] | None = None) float#
Computes the log posterior under the Fisher approximation.
The returned value is defined up to an additive constant in log space. This corresponds to an overall multiplicative normalization of the posterior density in probability space.
If no prior is provided, this returns the Fisher log-likelihood expansion with a flat prior and no hard cutoffs. Priors may be provided either as a pre-built logprior(theta) callable or as a lightweight prior specification via prior_terms and/or prior_bounds.
The expansion point is taken from the stored self.theta0.
- Parameters:
theta – Evaluation point in parameter space with shape (p,).
fisher – Fisher matrix with shape (p, p).
prior_terms – Prior term specification passed to the underlying prior builder. Use this only if logprior is not provided.
prior_bounds – Global hard bounds passed to the underlying prior builder. Use this only if logprior is not provided.
logprior – Optional custom log-prior callable. If it returns a non-finite value, the posterior is treated as zero at that point and the function returns -np.inf. Cannot be used together with prior_terms or prior_bounds.
- Returns:
Scalar log posterior value, defined up to an additive constant.
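Continuing the toy sketch above, with illustrative hard bounds on both parameters:
>>> lp = fk.logposterior_fisher(
...     theta=theta0 + np.array([0.02, -0.01]),
...     fisher=F,
...     prior_bounds=[(0.0, None), (0.0, None)],
... )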
- negative_logposterior(theta: Sequence[float] | ndarray, *, logposterior: Callable[[ndarray], float]) float#
Computes the negative log-posterior at theta.
This converts a log-posterior callable into the objective used by MAP estimation and curvature-based methods. It simply returns -logposterior(theta) and validates that the result is finite.
- Parameters:
theta – 1D array-like parameter vector.
logposterior – Callable that accepts a 1D float64 array and returns a scalar float.
- Returns:
Negative log-posterior value as a float.
- xy_fisher(*, x0: ndarray, mu_xy: Callable[[ndarray, ndarray], ndarray], cov_xy: ndarray, method: str | None = None, n_workers: int = 1, rcond: float = 1e-12, symmetrize_dcov: bool = True, check_cyy_consistency: bool = True, atol: float = 0.0, rtol: float = 1e-12, **dk_kwargs: Any) ndarray#
Computes the X–Y Gaussian Fisher matrix (noisy inputs and outputs).
This implements the X–Y Gaussian Fisher formalism where the measured inputs and outputs are both noisy and may be correlated. Input uncertainty is propagated into an effective output covariance via a local linearization of mu_xy(x, theta) with respect to x at the measured inputs x0.
- Parameters:
x0 – Measured inputs with shape (n_x,).
mu_xy – Model mean callable mu_xy(x, theta) -> y.
cov_xy – Full covariance for the stacked vector [x, y] with shape (n_x + n_y, n_x + n_y). Note that this is the joint covariance of the input and output measurements, ordered as [x0, y0] (inputs first, then outputs), so it contains the blocks Cxx, Cxy, and Cyy.
method – Derivative method name/alias forwarded to the derivative backend.
n_workers – Number of workers for derivative evaluations.
rcond – Regularization cutoff used in linear solves / pseudoinverse fallback.
symmetrize_dcov – If True, symmetrize each covariance derivative.
check_cyy_consistency – If True and this ForecastKit was initialized with a fixed output covariance cov=self.cov0 (i.e. \(C_{yy}\)), verify that it matches the \(C_{yy}\) block implied by cov_xy.
atol – Absolute tolerance for the \(C_{yy}\) consistency check.
rtol – Relative tolerance for the \(C_{yy}\) consistency check.
**dk_kwargs – Forwarded to the derivative backend.
- Returns:
Fisher matrix with shape (p, p), where p is the number of parameters.
- Raises:
ValueError – If check_cyy_consistency=True and self.cov0 is compatible with a \(C_{yy}\) block but differs from the \(C_{yy}\) block extracted from cov_xy beyond rtol/atol.
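A sketch of the X–Y setup with illustrative shapes (three inputs, three outputs, two parameters); the output covariance passed to ForecastKit matches the Cyy block of cov_xy:
>>> x_meas = np.array([0.1, 0.2, 0.3])      # n_x = 3 measured inputs
>>> def mu_xy(x, theta):
...     return theta[0] + theta[1] * x      # n_y = 3 model outputs
>>> cov_xy = 0.01 * np.eye(6)               # joint covariance of [x, y], inputs first
>>> fk_xy = ForecastKit(lambda theta: theta[0] + theta[1] * x_meas,
...                     np.array([0.0, 1.0]), 0.01 * np.eye(3))
>>> F_xy = fk_xy.xy_fisher(x0=x_meas, mu_xy=mu_xy, cov_xy=cov_xy)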
- class derivkit.LikelihoodKit(data: ArrayLike, model_parameters: ArrayLike)#
Bases: object
High-level interface for Gaussian and Poissonian likelihoods.
The class stores data and model_parameters and provides methods to evaluate the corresponding likelihoods.
Initialises the likelihood object.
- Parameters:
data – Observed data values. The expected shape depends on the particular likelihood. For the Gaussian likelihood, data is 1D or 2D, where axis 0 represents different samples and axis 1 the values. For the Poissonian likelihood, data is reshaped to align with model_parameters.
model_parameters – Theoretical model values. For the Gaussian likelihood, this is a 1D array of parameters used as the mean of the multivariate normal. For the Poissonian likelihood, this is the expected counts (Poisson means).
- gaussian(cov: ArrayLike, *, return_log: bool = True) tuple[tuple[ndarray[tuple[Any, ...], dtype[float64]], ...], ndarray[tuple[Any, ...], dtype[float64]]]#
Evaluates a Gaussian likelihood for the stored data and parameters.
- Parameters:
cov – Covariance matrix. May be a scalar, a 1D vector of diagonal variances, or a full 2D covariance matrix. It will be symmetrized and normalized internally.
return_log – If True, return the log-likelihood instead of the probability density function.
- Returns:
A tuple (coordinate_grids, probabilities), where coordinate_grids is a tuple of 1D arrays giving the evaluation coordinates for each dimension, and probabilities is an array with the values of the multivariate Gaussian probability density (or log-density) evaluated on the Cartesian product of those coordinates.
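A usage sketch with illustrative data, model parameters, and a diagonal covariance:
>>> import numpy as np
>>> from derivkit import LikelihoodKit
>>> data = np.array([[0.9, 2.1], [1.1, 1.9]])   # axis 0: samples, axis 1: values
>>> lk = LikelihoodKit(data, np.array([1.0, 2.0]))
>>> grids, log_probs = lk.gaussian(cov=np.array([0.1, 0.1]), return_log=True)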
- poissonian(*, return_log: bool = True) tuple[ndarray, ndarray]#
Evaluates a Poissonian likelihood for the stored data and parameters.
- Parameters:
return_log – If True, return the log-likelihood instead of the probability mass function.
- Returns:
A tuple (counts, probabilities), where counts is the data reshaped to align with the model parameters, and probabilities is an array of Poisson probabilities (or log-probabilities) computed from counts and model_parameters.
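A usage sketch with illustrative integer counts and Poisson means (imports as in the sketch above):
>>> lk_counts = LikelihoodKit(np.array([3, 7, 2]), np.array([2.5, 6.0, 3.0]))
>>> counts, log_probs = lk_counts.poissonian(return_log=True)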