Torch-MvNorm’s documentation
- Integrate multivariate normal densities (CDFs)
- Easily obtain partial derivatives of CDFs with respect to location, mean and covariance (and higher-order derivatives)
- Manipulate quantities within a tensor-based framework (e.g. broadcasting is fully supported), as sketched below
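A minimal sketch of the broadcasting behaviour, assuming only the multivariate_normal_cdf signature documented below (the shapes and variable names are illustrative):

>>> import torch
>>> from mvnorm import multivariate_normal_cdf
>>> d = 3
>>> x = torch.randn(10, d)                # a batch of 10 upper integration limits
>>> A = torch.randn(d, d)
>>> C = A @ A.t() + d * torch.eye(d)      # a single positive-definite covariance, shared by the batch
>>> p = multivariate_normal_cdf(x, covariance_matrix=C)
>>> p.shape                               # the covariance broadcasts against the batch of limits
torch.Size([10])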
mvnorm.integration
Controls the integration parameters:
- integration.maxpts, the maximum number of density evaluations (default 1000×d);
- integration.abseps, the absolute error tolerance (default 1e-6);
- integration.releps, the relative error tolerance (default 1e-6);
- integration.n_jobs, the number of jobs for joblib.Parallel (default 1).
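A hedged usage sketch: only the parameter names above are documented, and it is assumed here that mvnorm.integration exposes them as directly assignable attributes.

>>> import mvnorm
>>> mvnorm.integration.maxpts = 10000     # allow more density evaluations on hard integrals
>>> mvnorm.integration.abseps = 1e-8      # tighter absolute error tolerance
>>> mvnorm.integration.releps = 1e-8      # tighter relative error tolerance
>>> mvnorm.integration.n_jobs = 4         # parallelise with joblib.Parallel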
mvnorm.multivariate_normal_cdf(value, loc=0.0, covariance_matrix=None, diagonality_tolerance=0.0)
Compute orthant probabilities P(Z_i < value_i, i = 1,...,d) for a multivariate normal random vector Z. Closed-form backward differentiation with respect to value, loc or covariance_matrix is supported.
- Parameters
value (torch.Tensor) – upper integration limits. It can have a batch shape. The last dimension must be equal to d, the dimension of the Gaussian vector.
loc (torch.Tensor, optional) – mean of the Gaussian vector. Defaults to zeros. Can have a batch shape. The last dimension must be equal to d, the dimension of the Gaussian vector. If a float is provided, the value is repeated for all d components.
covariance_matrix (torch.Tensor, optional) – covariance matrix of the Gaussian vector. Can have a batch shape. The last two dimensions must be equal to d. Identity matrix by default.
diagonality_tolerance (float, optional) – skip the expensive numerical integration if the maximum absolute off-diagonal value is below this tolerance, in which case the covariance is treated as diagonal (default 0.0). If there is a batch of covariances (e.g. covariance_matrix has shape [N,d,d]), the numerical integrations are skipped only if all covariances are considered diagonal. The diagonality check can be disabled with a negative value.
- Returns
probability – the probability of the event Z < value. Its shape is the broadcasted batch shape (just a scalar if the batch shape is []). Closed-form derivatives are implemented if value, loc or covariance_matrix require a gradient.
- Return type
torch.Tensor
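For reference, in the notation above the returned probability is the Gaussian orthant integral (x stands for value, μ for loc and Σ for covariance_matrix):

P(Z < x) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_d} \frac{1}{\sqrt{(2\pi)^d \,\lvert\Sigma\rvert}} \exp\!\Big(-\tfrac{1}{2}(z-\mu)^\top \Sigma^{-1} (z-\mu)\Big) \, \mathrm{d}z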
Notes
Parameters value and covariance_matrix, as well as the returned probability tensor, are broadcast to their common batch shape. See PyTorch’s broadcasting semantics. The integration is performed with SciPy’s implementation of the method of A. Genz [1]. Partial derivatives are computed using closed-form formulas, see e.g. Marmin et al. [2], p. 13.
References
[1] Alan Genz and Frank Bretz, “Comparison of Methods for the Computation of Multivariate t-Probabilities”, Journal of Computational and Graphical Statistics 11, pp. 950-971, 2002.
[2] Sébastien Marmin, Clément Chevalier and David Ginsbourger, “Differentiating the multipoint Expected Improvement for optimal batch design”, International Workshop on Machine Learning, Optimization and Big Data, Taormina, Italy, 2015.
Examples
>>> import torch
>>> from torch.autograd import grad
>>> from mvnorm import multivariate_normal_cdf as Phi
>>> n = 4
>>> x = 1 + torch.randn(n)
>>> x.requires_grad = True
>>> # Make a positive semi-definite matrix
>>> A = torch.randn(n, n)
>>> C = 1/n*torch.matmul(A, A.t())
>>> p = Phi(x, covariance_matrix=C)
>>> p
tensor(0.3721, grad_fn=<PhiHighDimBackward>)
>>> grad(p, (x,))[0]
tensor([0.0085, 0.2510, 0.1272, 0.0332])
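Continuing from the example above, a hedged sketch of differentiation with respect to loc and covariance_matrix (only the documented signature is assumed; outputs that depend on the random draw are omitted):

>>> loc = torch.zeros(n, requires_grad=True)
>>> C = (torch.matmul(A, A.t()) / n).detach().requires_grad_(True)   # leaf tensor, so its gradient can be requested
>>> p = Phi(x, loc=loc, covariance_matrix=C)
>>> dloc, dC = grad(p, (loc, C))          # closed-form partial derivatives w.r.t. mean and covariance
>>> dloc.shape, dC.shape
(torch.Size([4]), torch.Size([4, 4]))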