svd

xitorch.linalg.svd(A: xitorch._core.linop.LinearOperator, k: Optional[int] = None, mode: str = 'uppest', bck_options: Mapping[str, Any] = {}, method: Optional[Union[str, Callable]] = None, **fwd_options) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor]

Perform the singular value decomposition (SVD):

\[\mathbf{A} = \mathbf{U\Sigma V}^H\]

where \(\mathbf{U}\) and \(\mathbf{V}\) are semi-unitary matrices and \(\mathbf{\Sigma}\) is a diagonal matrix containing real non-negative numbers. This function can handle derivatives for degenerate singular values by setting non-zero degen_atol and degen_rtol in the backward options, using the expressions in [1].
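These properties can be checked concretely with a dense matrix using PyTorch's reference SVD, which follows the same \(\mathbf{U\Sigma V}^H\) convention:

```python
import torch

# Check the SVD properties stated above on a dense matrix.
torch.manual_seed(0)
A = torch.randn(5, 3, dtype=torch.float64)

U, S, Vh = torch.linalg.svd(A, full_matrices=False)  # U: (5,3), S: (3,), Vh: (3,3)

# U and V are semi-unitary: U^H U = I and V^H V = I
assert torch.allclose(U.T @ U, torch.eye(3, dtype=torch.float64), atol=1e-12)
assert torch.allclose(Vh @ Vh.T, torch.eye(3, dtype=torch.float64), atol=1e-12)

# Sigma contains real non-negative numbers
assert bool((S >= 0).all())

# The factors reconstruct A
assert torch.allclose(U @ torch.diag(S) @ Vh, A, atol=1e-12)
```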

Parameters
  • A (xitorch.LinearOperator) – The linear operator to be decomposed. It has a shape of (*BA, m, n) where (*BA) is the batched dimension of A.

  • k (int or None) – The number of singular pairs to be obtained. If None, it will be min(*A.shape[-2:])

  • mode (str) – "lowest" or "uppermost"/"uppest". If "lowest", it will take the lowest k singular pairs. If "uppest" (or "uppermost"), it will take the uppermost k.

  • bck_options (dict) –

    Method-specific options for solve(), which is used in the backpropagation calculation, with some additional arguments for computing the backward derivatives:

    • degen_atol (float or None): Minimum absolute difference between two singular values to be treated as degenerate. If None, it is torch.finfo(dtype).eps**0.6. If 0.0, no special treatment on degeneracy is applied. (default: None)

    • degen_rtol (float or None): Minimum relative difference between two singular values to be treated as degenerate. If None, it is torch.finfo(dtype).eps**0.4. If 0.0, no special treatment on degeneracy is applied. (default: None)

    Note: the default values of degen_atol and degen_rtol are going to change in the future, so for forward compatibility, please specify these values explicitly.

  • method (str or callable or None) – Method for the svd (same options as symeig()). If None, it defaults to "exacteig".

  • **fwd_options – Method-specific options (see method section below).

Returns

It will return u, s, vh with shapes (*BA, m, k), (*BA, k), and (*BA, k, n), respectively.

Return type

tuple of tensors (u, s, vh)

Note

This is a naive implementation via the symmetric eigendecomposition of A.H @ A or A @ A.H (depending on which one is cheaper).
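The idea in the note can be reproduced with plain PyTorch (a sketch of the technique, not xitorch's actual code): the singular values of A are the square roots of the eigenvalues of A.H @ A.

```python
import torch

# SVD via symmetric eigendecomposition of A^T A (A is real here, so A^H = A^T).
# A^T A is n x n, cheaper than the m x m product A A^T when n < m.
torch.manual_seed(0)
A = torch.randn(6, 4, dtype=torch.float64)

evals, V = torch.linalg.eigh(A.T @ A)   # eigenvalues in ascending order
s = evals.clamp(min=0).sqrt()           # singular values = sqrt(eigenvalues)
U = (A @ V) / s                         # left vectors from u_i = A v_i / s_i

# The factors reconstruct A, and s matches torch's reference singular values
# (svdvals returns them in descending order, hence the flip).
assert torch.allclose((U * s) @ V.T, A, atol=1e-8)
assert torch.allclose(s.flip(0), torch.linalg.svdvals(A), atol=1e-8)
```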

References

[1] Muhammad F. Kasim, “Derivatives of partial eigendecomposition of a real symmetric matrix for degenerate cases”. arXiv:2011.04366 (2020). https://arxiv.org/abs/2011.04366

method="exacteig"
svd(..., method="exacteig")

Eigendecomposition using explicit matrix construction. There are no additional options for this method.

Warning

  • As this method constructs the linear operator explicitly, it might require a large amount of memory.

method="davidson"
svd(..., method="davidson", *, M=None, max_niter=1000, nguess=None, v_init="randn", max_addition=None, min_eps=1e-06, verbose=False)

Using the Davidson method for large sparse matrix eigendecomposition [2].

Parameters
  • max_niter (int) – Maximum number of iterations

  • v_init (str) – Mode of the initial guess ("randn", "rand", "eye")

  • max_addition (int or None) – Maximum number of new guesses to be added to the collected vectors. If None, set to neig.

  • min_eps (float) – Residual error threshold below which the iteration stops

  • verbose (bool) – Option to be verbose

References

[2] P. Arbenz, “Lecture Notes on Solving Large Scale Eigenvalue Problems”. http://people.inf.ethz.ch/arbenz/ewp/Lnotes/chapter12.pdf