tensor.linalg – Linear Algebra Operations#
The pytensor.tensor.linalg module exposes the user-facing linear
algebra API.
Constructors#
- pytensor.tensor.linalg.block_diag(*matrices)[source]#
Construct a block diagonal matrix from a sequence of input tensors.
Given the inputs A, B, and C, the output will have these arrays arranged on the diagonal:

[[A, 0, 0],
 [0, B, 0],
 [0, 0, C]]
- Parameters:
A (tensors) – Input tensors to form the block diagonal matrix. The last two dimensions of each input will be used, and all inputs must have at least 2 dimensions.
B (tensors) – Input tensors to form the block diagonal matrix. The last two dimensions of each input will be used, and all inputs must have at least 2 dimensions.
... (C) – Input tensors to form the block diagonal matrix. The last two dimensions of each input will be used, and all inputs must have at least 2 dimensions.
- Returns:
out – The block diagonal matrix formed from the input matrices.
- Return type:
tensor
Examples
Create a block diagonal matrix from two 2x2 matrices:

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor.linalg import block_diag

A = pt.as_tensor_variable(np.array([[1, 2], [3, 4]]))
B = pt.as_tensor_variable(np.array([[5, 6], [7, 8]]))

result = block_diag(A, B)
print(result.eval())

Out:
array([[1, 2, 0, 0],
       [3, 4, 0, 0],
       [0, 0, 5, 6],
       [0, 0, 7, 8]])
Decomposition#
- pytensor.tensor.linalg.cholesky(x, lower=True, *, check_finite=True, overwrite_a=False, on_error='nan')[source]#
Return a triangular matrix square root of a positive semi-definite matrix x.

L = cholesky(X, lower=True) implies dot(L, L.T) == X.
- Parameters:
x (tensor_like) –
lower (bool, default=True) – Whether to return the lower or upper Cholesky factor.
check_finite (bool) – Unused by PyTensor. PyTensor will return nan if the operation fails.
overwrite_a (bool, ignored) – Whether to use the same memory for the output as a. This argument is ignored, and is present here only for consistency with scipy.linalg.cholesky.
on_error (['raise', 'nan']) – If on_error is set to ‘raise’, this Op will raise a scipy.linalg.LinAlgError if the matrix is not positive definite. If on_error is set to ‘nan’, it will return a matrix containing nans instead.
- Returns:
Lower or upper triangular Cholesky factor of x
- Return type:
Example
import pytensor
import pytensor.tensor as pt
import numpy as np

x = pt.tensor('x', shape=(5, 5), dtype='float64')
L = pt.linalg.cholesky(x)
f = pytensor.function([x], L)

x_value = np.random.normal(size=(5, 5))
x_value = x_value @ x_value.T  # Ensures x is positive definite
L_value = f(x_value)
assert np.allclose(L_value @ L_value.T, x_value)
- pytensor.tensor.linalg.lu(a, permute_l=False, check_finite=True, p_indices=False, overwrite_a=False)[source]#
Factorize a matrix as the product of a unit lower triangular matrix and an upper triangular matrix:
\[A = P L U\]
Where P is a permutation matrix, L is lower triangular with unit diagonal elements, and U is upper triangular.
- Parameters:
a (TensorLike) – Matrix to be factorized
permute_l (bool) – If True, L is a product of permutation and unit lower triangular matrices. Only two values, PL and U, will be returned in this case, and PL will not be lower triangular.
check_finite (bool) – Unused by PyTensor. PyTensor will return nan if the operation fails.
p_indices (bool) – If True, return integer matrix indices for the permutation matrix. Otherwise, return the permutation matrix itself.
overwrite_a (bool) – Ignored by PyTensor. PyTensor will always perform the computation in-place if possible.
- Returns:
P (TensorVariable) – Permutation matrix, or array of integer indices for permutation matrix. Not returned if permute_l is True.
L (TensorVariable) – Lower triangular matrix, or product of permutation and unit lower triangular matrices if permute_l is True.
U (TensorVariable) – Upper triangular matrix
- pytensor.tensor.linalg.lu_factor(a, *, check_finite=True, overwrite_a=False)[source]#
LU factorization with partial pivoting.
- Parameters:
a (TensorLike) – Matrix to be factorized
check_finite (bool) – Unused by PyTensor. PyTensor will return nan if the operation fails.
overwrite_a (bool) – Unused by PyTensor. PyTensor will always perform the operation in-place if possible.
- Returns:
LU (TensorVariable) – LU decomposition of a
pivots (TensorVariable) – An array of integers representing the pivot indices
- pytensor.tensor.linalg.qr(A, mode='full', overwrite_a=False, pivoting=False, lwork=None)[source]#
QR Decomposition of input matrix a.

The QR decomposition of a matrix A is a factorization of the form \(A = QR\), where Q is an orthogonal matrix (\(Q Q^T = I\)) and R is an upper triangular matrix. This decomposition is useful in various numerical methods, including solving linear systems and least squares problems.
- Parameters:
A (TensorLike) – Input matrix of shape (M, N) to be decomposed.
mode (str, one of "full", "economic", "r", or "raw") –
How the QR decomposition is computed and returned. Choosing the mode can avoid unnecessary computations, depending on which of the return matrices are needed. Given an input matrix of shape (M, N), with K = min(M, N), the choices are:
- ”full” (or “complete”): returns Q and R with dimensions (M, M) and (M, N).
- ”economic” (or “reduced”): returns Q and R with dimensions (M, K) and (K, N), where K = min(M, N).
- ”r”: returns only R with dimensions (K, N).
- ”raw”: returns H and tau with dimensions (N, M) and (K,), where H is the matrix of Householder reflections, and tau is the vector of Householder coefficients.
pivoting (bool, default False) – If True, also return a vector of rank-revealing permutations P such that A[:, P] = QR.
overwrite_a (bool, ignored) – Ignored. Included only for consistency with the function signature of scipy.linalg.qr. PyTensor will always automatically overwrite the input matrix A if it is safe to do so.
lwork (int, ignored) – Ignored. Included only for consistency with the function signature of scipy.linalg.qr. PyTensor will automatically determine the optimal workspace size for the QR decomposition.
- Returns:
Q or H (TensorVariable, optional) – A matrix with orthonormal columns. When mode = ‘complete’, the result is an orthogonal/unitary matrix depending on whether a is real/complex; the determinant may be +1 or -1 in that case. If mode = ‘raw’, it is the matrix of Householder reflections. If mode = ‘r’, Q is not returned.
R or tau (TensorVariable, optional) – Upper-triangular matrix. If mode = ‘raw’, it is the vector of Householder coefficients.
- pytensor.tensor.linalg.svd(a, full_matrices=True, compute_uv=True)[source]#
Compute the singular value decomposition (SVD) of a matrix. This function performs the SVD on the CPU.
- Parameters:
full_matrices (bool, optional) – If True (default), u and v have the shapes (M, M) and (N, N), respectively. Otherwise, the shapes are (M, K) and (K, N), respectively, where K = min(M, N).
compute_uv (bool, optional) – Whether or not to compute u and v in addition to s. True by default.
- Returns:
U, s, V
- Return type:
matrices
- pytensor.tensor.linalg.eig(x)[source]#
Return the eigenvalues and right eigenvectors of a square array.
Note that regardless of the input dtype, the eigenvalues and eigenvectors are returned as complex numbers. As a result, the gradient of this operation is not implemented (because PyTensor does not support autodiff for complex values yet).
If you know that your input has strictly real-valued eigenvalues (e.g. it is a symmetric matrix), use pytensor.tensor.linalg.eigh instead.
- Parameters:
x (TensorLike) – Square matrix, or array of such matrices
- pytensor.tensor.linalg.eigh(a, b=None, lower=True, UPLO=None, driver='evr')[source]#
Return the eigenvalues and eigenvectors of a symmetric/Hermitian matrix.
- Parameters:
a (TensorLike) – Symmetric/Hermitian matrix (or batch thereof).
b (TensorLike, optional) – Second matrix for the generalized eigenvalue problem A v = w B v. Must be positive-definite. If None, the standard eigenvalue problem is solved.
lower (bool) – Whether to use the lower or upper triangle of a (and b, if provided). Default is True.
UPLO ({'L', 'U'}, optional) – Whether to use the lower or upper triangle of a (and b, if provided). Default is ‘L’ (lower). UPLO is deprecated and will be removed in a future version. Use the lower argument instead.
driver ({'evr', 'evd'}, optional) – LAPACK driver to use. 'evr' (default) uses the MRRR algorithm, the fastest general-purpose driver; this is the default used by SciPy. 'evd' uses divide-and-conquer, matching NumPy, JAX, and MLX.
- Returns:
w (Variable) – Eigenvalues of the system, in ascending order.
v (Variable) – Eigenvectors of the system, ordered to match the eigenvalues.
- pytensor.tensor.linalg.eigvalsh(a, b=None, lower=True)[source]#
Compute the eigenvalues of a symmetric/Hermitian matrix.
This is identical to eigh(a, b, lower)[0], but more efficient when only the eigenvalues are needed.
- Parameters:
a (TensorLike) – Symmetric/Hermitian matrix (or batch thereof).
b (TensorLike, optional) – Second matrix for the generalized eigenvalue problem A v = w B v. Must be positive-definite. If None, the standard eigenvalue problem is solved.
lower (bool, optional) – Whether to use the lower or upper triangle of a (and b). Default True.
- Returns:
w – Eigenvalues of the system, in ascending order.
- Return type:
- pytensor.tensor.linalg.schur(A, output='real', sort=None)[source]#
Schur Decomposition of input matrix A.

The Schur decomposition of a matrix A is a factorization of the form \(A = Z T Z^H\), where Z is a unitary matrix and T is either upper-triangular (for the complex Schur form) or quasi-upper-triangular (for the real Schur form with output=’real’).
- Parameters:
A (TensorLike) – Input square matrix of shape (M, M) to be decomposed.
output (str, one of "real" or "complex") – For real-valued A, if output=’real’, then the Schur form is quasi-upper-triangular. If output=’complex’, the Schur form is upper-triangular. For complex-valued A, the Schur form is always upper-triangular regardless of the output parameter.
sort (str or None, optional) –
Specifies whether the upper eigenvalues should be sorted. Available options:
None (default): eigenvalues are not sorted
’lhp’: left half-plane (real(λ) < 0)
’rhp’: right half-plane (real(λ) >= 0)
’iuc’: inside unit circle (abs(λ) <= 1)
’ouc’: outside unit circle (abs(λ) > 1)
- Returns:
T (TensorVariable) – Schur form of A. An upper-triangular matrix (or quasi-upper-triangular if output=’real’).
Z (TensorVariable) – Unitary Schur transformation matrix such that A = Z @ T @ Z.conj().T
- pytensor.tensor.linalg.qz(A, B, output='real', sort=None, return_eigenvalues=False)[source]#
QZ Decomposition of input matrix pair (A, B).

The QZ decomposition (also known as the generalized Schur decomposition) of a matrix pair (A, B) is a factorization of the form \(A = Q H Z^H\) and \(B = Q K Z^H\), where Q and Z are unitary matrices, and H and K are upper-triangular matrices.
- Parameters:
A (TensorLike) – First input square matrix of shape (M, M) to be decomposed.
B (TensorLike) – Second input square matrix of shape (M, M) to be decomposed.
output (str, one of "real" or "complex") – For real-valued A and B, if output=’real’, then the Schur forms are quasi-upper-triangular. If output=’complex’, the Schur forms are upper-triangular. For complex-valued A and B, the Schur forms are always upper-triangular regardless of the output parameter.
sort (str or None, optional) – Specifies whether the generalized eigenvalues should be sorted. Available options:
- None (default): eigenvalues are not sorted
- ‘lhp’: left half-plane (real(λ) < 0)
- ‘rhp’: right half-plane (real(λ) >= 0)
- ‘iuc’: inside unit circle (abs(λ) <= 1)
- ‘ouc’: outside unit circle (abs(λ) > 1)
return_eigenvalues (bool, default False) – If True, the function also returns the generalized eigenvalues as two arrays alpha and beta, where the generalized eigenvalues are given by the ratio alpha / beta.
- Returns:
H (TensorVariable) – Schur form of A. An upper-triangular matrix (or quasi-upper-triangular if output=’real’).
K (TensorVariable) – Schur form of B. An upper-triangular matrix (or quasi-upper-triangular if output=’real’).
Q (TensorVariable) – Unitary matrix such that A = Q @ H @ Z.conj().T and B = Q @ K @ Z.conj().T.
Z (TensorVariable) – Unitary matrix such that A = Q @ H @ Z.conj().T and B = Q @ K @ Z.conj().T.
alpha (TensorVariable, optional) – Numerators of the generalized eigenvalues (returned if return_eigenvalues is True).
beta (TensorVariable, optional) – Denominators of the generalized eigenvalues (returned if return_eigenvalues is True).
Notes
Unlike scipy.linalg.qz, a sort keyword is allowed here. Behavior in this case follows that of scipy.linalg.ordqz.
Inverse#
- pytensor.tensor.linalg.inv(*inputs, name=None, return_list=False, **kwargs)[source]#
Compute the inverse of a square matrix, generalized to work over batched (leading) dimensions.
- pytensor.tensor.linalg.pinv(x, hermitian=False)[source]#
Computes the pseudo-inverse of a matrix \(A\).
The pseudo-inverse of a matrix \(A\), denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\).
Note that \(Ax = AA^+b\), so \(AA^+\) is close to the identity matrix. This method is not faster than matrix_inverse. Its strength comes from the fact that it works for non-square matrices. If you have a square matrix, though, matrix_inverse can be both more exact and faster to compute. Also, this op does not get optimized into a solve op.
- pytensor.tensor.linalg.tensorinv(a, ind=2)[source]#
PyTensor utilization of numpy.linalg.tensorinv.

Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i.e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation.
- Parameters:
a (array_like) – Tensor to ‘invert’. Its shape must be ‘square’, i.e., prod(a.shape[:ind]) == prod(a.shape[ind:]).
ind (int, optional) – Number of first indices that are involved in the inverse sum. Must be a positive integer; default is 2.
- Returns:
b – a’s tensordot inverse, shape a.shape[ind:] + a.shape[:ind].
- Return type:
ndarray
- Raises:
LinAlgError – If a is singular or not ‘square’ (in the above sense).
Products#
- pytensor.tensor.linalg.kron(a, b)[source]#
Kronecker product.
Same as np.kron(a, b)
- Parameters:
a (array_like) –
b (array_like) –
- Return type:
array_like with a.ndim + b.ndim - 2 dimensions
- pytensor.tensor.linalg.matrix_dot(*args)[source]#
Shorthand for product between several dots.
Given \(N\) matrices \(A_0, A_1, \ldots, A_N\), matrix_dot will generate the matrix product between all in the given order, namely \(A_0 \cdot A_1 \cdot A_2 \cdot \ldots \cdot A_N\).
- pytensor.tensor.linalg.matrix_power(M, n)[source]#
Raise a square matrix, M, to the (integer) power n.

This implementation uses exponentiation by squaring, which is significantly faster than the naive implementation. The time complexity for exponentiation by squaring is \(\mathcal{O}((n \log M)^k)\).
- Parameters:
M (TensorVariable) –
n (int) –
Solve#
- pytensor.tensor.linalg.solve(a, b, *, lower=False, overwrite_a=False, overwrite_b=False, check_finite=True, assume_a='gen', transposed=False, b_ndim=None)[source]#
Solves the linear equation set a * x = b for the unknown x, for a square matrix a.

If the data matrix is known to be of a particular type, then supplying the corresponding string to the assume_a key chooses the dedicated solver. The available options are:

- diagonal: ‘diagonal’
- tridiagonal: ‘tridiagonal’
- banded: ‘banded’
- upper triangular: ‘upper triangular’
- lower triangular: ‘lower triangular’
- symmetric: ‘symmetric’ (or ‘sym’)
- hermitian: ‘hermitian’ (or ‘her’)
- positive definite: ‘positive definite’ (or ‘pos’)
- general: ‘general’ (or ‘gen’)
If omitted, 'general' is the default structure.

The datatype of the arrays defines which solver is called, regardless of the values. In other words, even when the complex array entries have precisely zero imaginary parts, the complex solver will be called based on the data type of the array.
- Parameters:
a ((..., N, N) array_like) – Square input data
b ((..., N, NRHS) array_like) – Input data for the right hand side.
lower (bool, default False) – Ignored unless assume_a is one of 'sym', 'her', or 'pos'. If True, the calculation uses only the data in the lower triangle of a; entries above the diagonal are ignored. If False (default), the calculation uses only the data in the upper triangle of a; entries below the diagonal are ignored.
overwrite_a (bool) – Unused by PyTensor. PyTensor will always perform the operation in-place if possible.
overwrite_b (bool) – Unused by PyTensor. PyTensor will always perform the operation in-place if possible.
check_finite (bool) – Unused by PyTensor. PyTensor returns nan if the operation fails.
assume_a (str, optional) – Valid entries are explained above.
transposed (bool, default False) – If True, solves the system A^T x = b. Default is False.
b_ndim (int) – Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted. By default, we assume b_ndim = b.ndim is 2 if b.ndim > 1, else 1.
- pytensor.tensor.linalg.solve_triangular(a, b, *, trans=0, lower=False, unit_diagonal=False, check_finite=True, b_ndim=None)[source]#
Solve the equation a x = b for x, assuming a is a triangular matrix.
- Parameters:
a (TensorVariable) – Square input data
b (TensorVariable) – Input data for the right hand side.
lower (bool, optional) – Use only data contained in the lower triangle of a. Default is to use the upper triangle.
trans ({0, 1, 2, 'N', 'T', 'C'}, optional) – Type of system to solve:
- 0 or ‘N’: a x = b
- 1 or ‘T’: a^T x = b
- 2 or ‘C’: a^H x = b
unit_diagonal (bool, optional) – If True, diagonal elements of a are assumed to be 1 and will not be referenced.
check_finite (bool, optional) – Unused by PyTensor. PyTensor will return nan if the operation fails.
b_ndim (int) – Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted.
- pytensor.tensor.linalg.lu_solve(LU_and_pivots, b, trans=False, b_ndim=None, check_finite=True, overwrite_b=False)[source]#
Solve a system of linear equations given the LU decomposition of the matrix.
- Parameters:
LU_and_pivots (tuple[TensorLike, TensorLike]) – LU decomposition of the matrix, as returned by lu_factor
b (TensorLike) – Right-hand side of the equation
trans (bool) – If True, solve A^T x = b, instead of Ax = b. Default is False
b_ndim (int, optional) – The number of core dimensions in b. Used to distinguish between a batch of vectors (b_ndim=1) and a matrix of vectors (b_ndim=2). Default is None, which will infer the number of core dimensions from the input.
check_finite (bool) – Unused by PyTensor. PyTensor will return nan if the operation fails.
overwrite_b (bool) – Ignored by PyTensor. PyTensor will always compute in-place when possible.
- pytensor.tensor.linalg.cho_solve(c_and_lower, b, *, b_ndim=None)[source]#
Solve the linear equations A x = b, given the Cholesky factorization of A.
- Parameters:
c_and_lower (tuple of (TensorLike, bool)) – Cholesky factorization of a, as given by cho_factor
b (TensorLike) – Right-hand side
check_finite (bool) – Unused by PyTensor. PyTensor will return nan if the operation fails.
b_ndim (int) – Whether the core case of b is a vector (1) or matrix (2). This will influence how batched dimensions are interpreted.
- pytensor.tensor.linalg.tensorsolve(a, b, axes=None)[source]#
PyTensor utilization of numpy.linalg.tensorsolve.
Solve the tensor equation a x = b for x. It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=len(b.shape)).
- Parameters:
a (array_like) – Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’).
b (array_like) – Right-hand tensor, which can be of any shape.
axes (tuple of ints, optional) – Axes in a to reorder to the right, before inversion. If None (default), no reordering is done.
- Returns:
x
- Return type:
ndarray, shape Q
- Raises:
LinAlgError – If a is singular or not ‘square’ (in the above sense).
- pytensor.tensor.linalg.tridiagonal_lu_factor(a)[source]#
Return the LU decomposition of a tridiagonal matrix A, as used to solve tridiagonal systems (LAPACK’s gttrf).
- Parameters:
a – The input matrix.
- Returns:
The LU factorization of A.
- Return type:
dl, d, du, du2, ipiv
- pytensor.tensor.linalg.tridiagonal_lu_solve(a_diagonals, b, *, b_ndim, transposed=False)[source]#
Solve a tridiagonal system of equations using LU factorized inputs (LAPACK’s gttrs).
- Parameters:
a_diagonals – The outputs of tridiagonal_lu_factor(A).
b – The right-hand side vector or matrix.
b_ndim – The number of dimensions of the right-hand side.
transposed – Whether to solve the transposed system.
- Returns:
The solution vector or matrix.
- Return type:
- pytensor.tensor.linalg.solve_continuous_lyapunov(A, Q)[source]#
Solve the continuous Lyapunov equation \(A X + X A^H + Q = 0\).
Note that the Lyapunov equation is a special case of the Sylvester equation, with \(B = A^H\). This function thus simply calls solve_sylvester with the appropriate arguments.
- Parameters:
A (TensorLike) – Square matrix of shape N x N.
Q (TensorLike) – Square matrix of shape N x N.
- Returns:
X – Square matrix of shape N x N
- Return type:
- pytensor.tensor.linalg.solve_discrete_lyapunov(A, Q, method='bilinear')[source]#
Solve the discrete Lyapunov equation \(A X A^H - X = Q\).
- Parameters:
A (TensorLike) – Square matrix of shape N x N
Q (TensorLike) – Square matrix of shape N x N
method (str, one of "direct" or "bilinear") – Solver method used. "direct" solves the problem directly via matrix inversion. This has a pure PyTensor implementation and can thus be cross-compiled to supported backends, and should be preferred when N is not large. The direct method scales poorly with the size of N, and the bilinear method can be used in these cases.
- Returns:
X – Square matrix of shape N x N. Solution to the Lyapunov equation
- Return type:
- pytensor.tensor.linalg.solve_discrete_are(A, B, Q, R)[source]#
Solve the discrete Algebraic Riccati equation \(A^TXA - X - (A^TXB)(R + B^TXB)^{-1}(B^TXA) + Q = 0\).
Discrete-time Algebraic Riccati equations arise in the context of optimal control and filtering problems, as the solution to Linear-Quadratic Regulators (LQR), Linear-Quadratic-Gaussian (LQG) control problems, and as the steady-state covariance of the Kalman Filter.
Such problems typically have many solutions, but we are generally only interested in the unique stabilizing solution. This stable solution, if it exists, will be returned by this function.
- Parameters:
A (TensorLike) – Square matrix of shape M x M
B (TensorLike) – Matrix of shape M x N
Q (TensorLike) – Symmetric square matrix of shape M x M
R (TensorLike) – Square matrix of shape N x N
- Returns:
X – Square matrix of shape M x M, representing the solution to the DARE
- Return type:
Notes
This function is copied from the scipy implementation, found here: scipy/scipy
Notes are also adapted from the scipy documentation.
The equation is solved by forming the extended symplectic matrix pencil as described in [1], \(H - \lambda J\), given by the block matrices:

\[\begin{split}H = \begin{bmatrix} A & 0 & B \\ -Q & I & 0 \\ 0 & 0 & R \end{bmatrix}, \quad J = \begin{bmatrix} I & 0 & 0 \\ 0 & A^H & 0 \\ 0 & -B^H & 0 \end{bmatrix}\end{split}\]

The stable invariant subspace of the pencil is then computed via the QZ decomposition. Failure conditions are linked to the symmetry of the solution matrix \(U_2 U_1^{-1}\), as described in [1] and [2]. When the solution is not symmetric, NaNs are returned.
[3] describes a balancing procedure for Hamiltonian matrices that can improve numerical stability. This procedure is not yet implemented in this function.
References
- pytensor.tensor.linalg.solve_sylvester(A, B, Q)[source]#
Solve the Sylvester equation \(AX + XB = Q\) for \(X\).
Following scipy notation, this function solves the continuous-time Sylvester equation.
- Parameters:
A (TensorLike) – Square matrix of shape M x M.
B (TensorLike) – Square matrix of shape N x N.
Q (TensorLike) – Matrix of shape M x N.
- Returns:
X – Matrix of shape M x N.
- Return type:
Summary#
- pytensor.tensor.linalg.det(*inputs, name=None, return_list=False, **kwargs)[source]#
Compute the determinant of a square matrix, generalized to work over batched (leading) dimensions.
- pytensor.tensor.linalg.slogdet(x)[source]#
Compute the sign and (natural) logarithm of the determinant of an array.
Returns a naive graph which is optimized later using rewrites with the det operation.
- Parameters:
x ((..., M, M) tensor or tensor_like) – Input tensor, has to be square.
- Returns:
A tuple with the following attributes
sign ((…) tensor_like) – A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1.
logabsdet ((…) tensor_like) – The natural log of the absolute value of the determinant.
If the determinant is zero, then sign will be 0 and logabsdet will be -inf. In all cases, the determinant is equal to sign * exp(logabsdet).
- pytensor.tensor.linalg.norm(x, ord=None, axis=None, keepdims=False)[source]#
Matrix or vector norm.
- Parameters:
x (TensorVariable) – Tensor to take norm of.
ord (float, str or int, optional) –
Order of norm. If ord is a str, it must be one of the following:
- ’fro’ or ‘f’: Frobenius norm
- ’nuc’: nuclear norm
- ’inf’: Infinity norm
- ’-inf’: Negative infinity norm

If an integer, order can be one of -2, -1, 0, 1, or 2. Otherwise ord must be a float.

Default is the Frobenius (L2) norm.
axis (tuple of int, optional) – Axes over which to compute the norm. If None, norm of entire matrix (or vector) is computed. Row or column norms can be computed by passing a single integer; this will treat a matrix like a batch of vectors.
keepdims (bool) – If True, dummy axes will be inserted into the output so that norm.ndim == x.ndim. Default is False.
- Returns:
Norm of x along axes specified by axis.
- Return type:
Notes
Batched dimensions are supported to the left of the core dimensions. For example, if x is a 3D tensor with shape (2, 3, 4), then norm(x) will compute the norm of each 3x4 matrix in the batch.

If the input is a 2D tensor that should be treated as a batch of vectors, the axis argument must be specified.