derivkit.forecasting.priors_core module#
Prior utilities (core priors + unified builder).
Priors are represented as functions that evaluate how compatible a parameter vector is with the prior assumptions. They return a single log-prior value, with negative infinity used to exclude invalid parameter values.
The return value is interpreted as a log-density defined up to an additive
constant. Returning -np.inf denotes zero probability (hard exclusion).
These priors are intended for use when constructing log-posteriors for sampling or when evaluating posterior surfaces. Plotting tools such as GetDist only visualize the distributions they are given and do not apply priors implicitly. As a result, priors must be included explicitly: either by adding them when generating samples, or, in the case of Gaussian approximations, by incorporating them directly into the Fisher matrix.
- derivkit.forecasting.priors_core.build_prior(*, terms: Sequence[tuple[str, dict[str, Any]] | dict[str, Any]] | None = None, bounds: Sequence[tuple[float | floating | None, float | floating | None]] | None = None) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Build and return a single log-prior callable from a simple specification.
This function combines one or more prior components into a single
`logprior(theta) -> float` callable by summing their log-densities and optionally applying global hard bounds. Each prior component (a “term”) specifies one distribution, such as a Gaussian or log-uniform prior, and may also include its own hard bounds.
- Parameters:
terms – Optional list of prior terms. Each term is specified either as
`("prior_name", params)` or as a dictionary with keys `"name"`, `"params"`, and optional per-term `"bounds"`.
bounds – Optional global hard bounds applied to the combined prior.
- Returns:
A callable that evaluates the combined log-prior at a given parameter vector.
- Behavior:
- If `terms` is `None` or empty:
  - If `bounds` is `None`, returns an improper flat prior.
  - If `bounds` is provided, returns a uniform prior over `bounds`.
- If `terms` is provided:
  - Prior terms are summed in log-space.
  - Global `bounds` are applied after all terms are combined.
Examples
```python
build_prior()
build_prior(bounds=[(0.0, None), (None, None)])
build_prior(terms=[("gaussian_diag", {"mean": mu, "sigma": sig})])
```
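To make the combination rule concrete, here is a minimal stand-in (not the library implementation) that sums term log-densities and then applies global hard bounds, mirroring the behavior described above; the term callables here are toy examples:

```python
import math

def combine_log_priors(terms, bounds=None):
    """Toy version of the builder's combination rule: sum term
    log-densities, then apply global hard bounds."""
    def logprior(theta):
        total = sum(term(theta) for term in terms)
        if bounds is not None:
            for value, (lo, hi) in zip(theta, bounds):
                if (lo is not None and value < lo) or (hi is not None and value > hi):
                    return -math.inf
        return total
    return logprior

# Two toy terms: a standard-normal log-density on theta[0] and a flat term.
terms = [lambda t: -0.5 * t[0] ** 2, lambda t: 0.0]
lp = combine_log_priors(terms, bounds=[(0.0, None), (None, None)])
print(lp([1.0, 5.0]))   # -0.5
print(lp([-1.0, 5.0]))  # -inf (violates the lower bound on theta[0])
```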
- derivkit.forecasting.priors_core.prior_beta(*, index: int, alpha: float | floating, beta: float | floating) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a Beta distribution prior for a single parameter in `(0, 1)`.
This prior uses the Beta density on `x in (0, 1)`, with shape parameters `alpha > 0` and `beta > 0`. The returned callable evaluates the corresponding log-density up to an additive constant (the normalization constant does not depend on `x` and is therefore omitted). The (unnormalized) density is proportional to:
`x**(alpha - 1) * (1 - x)**(beta - 1)`
- Parameters:
index – Index of the parameter to which the prior applies.
alpha – Alpha shape parameter (must be greater than 0).
beta – Beta shape parameter (must be greater than 0).
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If `alpha` or `beta` is not positive.
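As a check on the formula above, a minimal stand-in (not the library code) for the log-density such a prior evaluates might look like:

```python
import math

def log_beta_unnorm(x, alpha, beta):
    # log of x**(alpha - 1) * (1 - x)**(beta - 1); -inf outside (0, 1)
    if not 0.0 < x < 1.0:
        return -math.inf
    return (alpha - 1.0) * math.log(x) + (beta - 1.0) * math.log(1.0 - x)

print(log_beta_unnorm(0.5, 2.0, 2.0))  # log(0.25), about -1.386
print(log_beta_unnorm(1.5, 2.0, 2.0))  # -inf: outside (0, 1)
```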
- derivkit.forecasting.priors_core.prior_gaussian(*, mean: ndarray[tuple[Any, ...], dtype[floating]], cov: ndarray[tuple[Any, ...], dtype[floating]] | None = None, inv_cov: ndarray[tuple[Any, ...], dtype[floating]] | None = None) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a multivariate Gaussian prior (up to an additive constant).
This prior has a density proportional to `exp(-0.5 * (theta - mean)^T @ inv_cov @ (theta - mean))`, where `inv_cov` is the inverse of the provided covariance matrix and `theta` is the parameter vector.
- Parameters:
mean – Mean vector.
cov – Covariance matrix (provide exactly one of `cov` or `inv_cov`).
inv_cov – Inverse covariance matrix (provide exactly one of `cov` or `inv_cov`).
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If neither or both of `cov` and `inv_cov` are provided, or if the provided covariance/inverse covariance cannot be normalized/validated.
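The quadratic form in the exponent can be sketched directly; this toy implementation (not the library code) evaluates the documented log-density given an inverse covariance:

```python
import numpy as np

def log_gaussian_unnorm(theta, mean, inv_cov):
    # -0.5 * (theta - mean)^T @ inv_cov @ (theta - mean)
    d = np.asarray(theta, dtype=float) - np.asarray(mean, dtype=float)
    return float(-0.5 * d @ inv_cov @ d)

# diagonal covariance diag(2, 0.5), so the inverse is diag(0.5, 2)
inv_cov = np.linalg.inv(np.array([[2.0, 0.0], [0.0, 0.5]]))
print(log_gaussian_unnorm([1.0, 1.0], [0.0, 0.0], inv_cov))  # -1.25
```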
- derivkit.forecasting.priors_core.prior_gaussian_diag(*, mean: ndarray[tuple[Any, ...], dtype[floating]], sigma: ndarray[tuple[Any, ...], dtype[floating]]) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a diagonal Gaussian prior (up to an additive constant).
This prior has a density proportional to `exp(-0.5 * sum_i ((x_i - mean_i) / sigma_i)^2)`, with independent components.
- Parameters:
mean – Mean vector.
sigma – Standard deviation vector (must be positive).
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If mean and sigma have incompatible shapes, or if any sigma entries are non-positive.
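For the diagonal case the sum over components is simple enough to write out by hand; a minimal stand-in (not the library code) for the documented log-density:

```python
import math

def log_gaussian_diag_unnorm(theta, mean, sigma):
    # -0.5 * sum_i ((x_i - mean_i) / sigma_i)**2, independent components
    if any(s <= 0 for s in sigma):
        raise ValueError("sigma entries must be positive")
    return -0.5 * sum(((x - m) / s) ** 2 for x, m, s in zip(theta, mean, sigma))

print(log_gaussian_diag_unnorm([1.0, 2.0], [0.0, 0.0], [1.0, 2.0]))  # -1.0
```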
- derivkit.forecasting.priors_core.prior_gaussian_mixture(*, means: ndarray[tuple[Any, ...], dtype[floating]], covs: ndarray[tuple[Any, ...], dtype[floating]] | None = None, inv_covs: ndarray[tuple[Any, ...], dtype[floating]] | None = None, weights: ndarray[tuple[Any, ...], dtype[floating]] | None = None, log_weights: ndarray[tuple[Any, ...], dtype[floating]] | None = None, include_component_norm: bool = True) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a Gaussian mixture prior (up to an additive constant).
This prior has a density proportional to a weighted sum of Gaussian components.
The mixture is:
p(theta) = sum_n w_n * N(theta | mean_n, cov_n)
where `N(theta | mean, cov)` is the multivariate Gaussian density with the specified mean and covariance; `w_n` are the mixture weights (non-negative, summing to 1); and the sum runs over the `n = 1..N` components.
Provide exactly one of:
- `covs`: shape `(n, p, p)`
- `inv_covs`: shape `(n, p, p)`
Here, `n` is the number of components and `p` is the parameter dimension.
Provide exactly one of:
- `weights`: shape `(n,)`, non-negative (normalized internally)
- `log_weights`: shape `(n,)` (normalized internally in log-space)
- Parameters:
means – Component means with shape `(n, p)`.
covs – Component covariances with shape `(n, p, p)`.
inv_covs – Component inverse covariances with shape `(n, p, p)`.
weights – Mixture weights with shape `(n,)`. Can include zeros.
log_weights – Log-weights with shape `(n,)`. Can include `-inf` entries.
include_component_norm – If `True` (default), include the per-component Gaussian normalization factor proportional to \(|C_n|^{-1/2}\). This is important for mixtures when covariances differ.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If inputs have incompatible shapes, if both/neither of `covs`/`inv_covs` are provided, if both/neither of `weights`/`log_weights` are provided, if weights are invalid, or if covariance inputs are not compatible with `include_component_norm=True`.
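The mixture formula combines per-component Gaussian terms with their \(|C_n|^{-1/2}\) normalization via a log-sum-exp; this toy sketch (not the library implementation, and without the input validation described above) shows the arithmetic:

```python
import numpy as np

def log_mixture_unnorm(theta, means, covs, weights):
    # log of sum_n w_n * N(theta | mean_n, cov_n), up to the shared
    # (2*pi)^(-p/2) factor; keeps the |C_n|^(-1/2) per-component norm
    theta = np.asarray(theta, dtype=float)
    log_terms = []
    for mean, cov, w in zip(means, covs, weights):
        d = theta - np.asarray(mean, dtype=float)
        quad = -0.5 * d @ np.linalg.inv(cov) @ d
        log_norm = -0.5 * np.linalg.slogdet(cov)[1]  # -0.5 * log|C_n|
        log_terms.append(np.log(w) + log_norm + quad)
    m = max(log_terms)  # log-sum-exp for numerical stability
    return float(m + np.log(sum(np.exp(t - m) for t in log_terms)))

# two equally weighted unit-variance components at 0 and 3 (p = 1)
val = log_mixture_unnorm([0.0], [[0.0], [3.0]], [np.eye(1), np.eye(1)], [0.5, 0.5])
print(val)
```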
- derivkit.forecasting.priors_core.prior_half_cauchy(*, index: int, scale: float | floating) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a half-Cauchy prior for a single non-negative parameter.
This prior has a density proportional to `1 / (1 + (x/scale)^2)` for `x >= 0`, with `x` being the parameter.
- Parameters:
index – Index of the parameter to which the prior applies.
scale – Scale parameter of the half-Cauchy distribution.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If scale is not positive.
- derivkit.forecasting.priors_core.prior_half_normal(*, index: int, sigma: float | floating) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a half-normal prior for a single non-negative parameter.
This prior is a commonly used weakly informative choice for non-negative amplitude or scale parameters. It concentrates probability near zero while allowing larger values with Gaussian tails, making it suitable when the parameter is expected to be small but not exactly zero.
The half-normal distribution is obtained by taking the absolute value of a zero-mean normal distribution, `|N(0, sigma)|`, with `N` being a normal random variable and `sigma` the standard deviation.
The (unnormalized) density is proportional to `exp(-0.5 * (x / sigma)**2)` for `x >= 0`.
- Parameters:
index – Index of the parameter to which the prior applies.
sigma – Standard deviation of the underlying normal distribution.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If sigma is not positive.
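A minimal stand-in (not the library code) for the documented half-normal log-density, including the hard exclusion of negative values:

```python
import math

def log_half_normal_unnorm(x, sigma):
    # -0.5 * (x / sigma)**2 for x >= 0, -inf for x < 0
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    if x < 0:
        return -math.inf
    return -0.5 * (x / sigma) ** 2

print(log_half_normal_unnorm(1.0, 1.0))   # -0.5
print(log_half_normal_unnorm(-0.1, 1.0))  # -inf
```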
- derivkit.forecasting.priors_core.prior_jeffreys(*, index: int) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a Jeffreys prior for a single positive scale parameter.
This prior encodes reparameterization invariance for a positive scale parameter: inference does not depend on the choice of units used to describe the parameter. It is commonly used to express ignorance about the absolute scale of a quantity.
For a positive scale parameter \(x\), the (improper) prior density is proportional to \(1/x\) on \((0, \infty)\). In practice, this has the same functional form as the log-uniform prior; the separate name exists to emphasize the statistical motivation.
- Parameters:
index – Index of the parameter to which the prior applies.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- derivkit.forecasting.priors_core.prior_log_normal(*, index: int, mean_log: float | floating, sigma_log: float | floating) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a log-normal prior for a single positive parameter.
This prior has a density proportional to `exp(-0.5 * ((log(x) - mean_log) / sigma_log) ** 2) / x` for `x > 0`.
- Parameters:
index – Index of the parameter to which the prior applies.
mean_log – Mean of the underlying normal distribution in log-space.
sigma_log – Standard deviation of the underlying normal distribution in log-space.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
- Raises:
ValueError – If sigma_log is not positive.
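The `1/x` factor is what distinguishes this from a Gaussian in `log(x)`; a toy sketch (not the library code) of the documented log-density makes it explicit:

```python
import math

def log_lognormal_unnorm(x, mean_log, sigma_log):
    # log of exp(-0.5 * ((log(x) - mean_log) / sigma_log)**2) / x, for x > 0
    if x <= 0:
        return -math.inf
    z = (math.log(x) - mean_log) / sigma_log
    return -0.5 * z * z - math.log(x)  # the -log(x) is the Jacobian-like 1/x factor

print(log_lognormal_unnorm(math.e, 0.0, 1.0))  # about -1.5
print(log_lognormal_unnorm(-1.0, 0.0, 1.0))    # -inf
```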
- derivkit.forecasting.priors_core.prior_log_uniform(*, index: int) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a log-uniform prior for a single positive parameter.
This prior assigns equal weight to multiplicative (relative) changes in the parameter rather than additive changes. It is commonly used for positive, scale-like parameters when no preferred scale is known.
For a positive parameter \(x\), the (improper) prior density is proportional to \(1/x\) on \((0, \infty)\). This has the same functional form as the Jeffreys prior for a positive scale parameter; see `prior_jeffreys` for that interpretation.
- Parameters:
index – Index of the parameter to which the prior applies.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
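The \(1/x\) density corresponds to a constant log-density of \(-\log x\); a minimal stand-in (not the library code):

```python
import math

def log_log_uniform_unnorm(x):
    # improper density 1/x on (0, inf): log-density is -log(x)
    if x <= 0:
        return -math.inf
    return -math.log(x)

# equal weight to multiplicative changes: e.g. the same mass sits on
# [a, 2a] for any a > 0, since the log-density is flat in log(x)
print(log_log_uniform_unnorm(2.0))  # -log(2), about -0.693
```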
- derivkit.forecasting.priors_core.prior_none() Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs an improper flat prior (constant log-density).
This prior has a density proportional to 1 everywhere.
- Returns:
A callable that evaluates the log-prior at a given parameter vector (always `0.0`).
- derivkit.forecasting.priors_core.prior_uniform(*, bounds: Sequence[tuple[float | floating | None, float | floating | None]]) Callable[[ndarray[tuple[Any, ...], dtype[floating]]], float]#
Constructs a uniform prior with hard bounds.
This prior has a density proportional to 1 within the specified bounds and zero outside. The log-density is constant (up to an additive constant) within the bounds and `-np.inf` outside.
- Parameters:
bounds – Sequence of (min, max) pairs for each parameter. Use None for unbounded sides.
- Returns:
A callable that evaluates the log-prior at a given parameter vector.
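The bound-checking logic can be sketched in a few lines; this toy version (not the library implementation) shows how `None` sides are treated as unbounded:

```python
import math

def log_uniform_bounds(theta, bounds):
    # 0.0 inside all (min, max) bounds, -inf outside; None means unbounded
    for value, (lo, hi) in zip(theta, bounds):
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            return -math.inf
    return 0.0

bounds = [(0.0, 1.0), (None, None)]
print(log_uniform_bounds([0.5, 10.0], bounds))  # 0.0
print(log_uniform_bounds([1.5, 10.0], bounds))  # -inf
```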