Given a tensor $A_{ijk}$, I can choose to merge the right two indices into a single combined index $(jk)$, which I will write as $A_{i(jk)}$. Performing an SVD decomposition between the left index and the combined right index gives:

$A_{i(jk)} = U_{i\alpha}\, S_{\alpha}\, V_{\alpha (jk)}$,

where we sum over repeated indices (i.e. $\alpha$).
Question: Is there a clean & efficient Pythonic way of performing this operation, while preserving the explicit dependence on the original/unmerged indices in the final tensor? (Meaning $V$ has 3 dimensions, $V_{\alpha jk}$.) Don't assume the dimensions of A's indices are equal.
If it helps, I don't need a general method for decomposing tensors with more than 3 dimensions. Thanks
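For what it's worth, here is a minimal sketch of the operation described above (merge, SVD, un-merge), using illustrative dimensions `i, j, k = 2, 5, 3` that I've chosen arbitrarily:

```python
import numpy as np

# Example dimensions; deliberately unequal, as the question requires.
i, j, k = 2, 5, 3
A = np.random.randn(i, j, k)

# Merge the right two indices (j, k) into one combined index of size j*k.
A_merged = A.reshape(i, j * k)

# Ordinary matrix SVD on the (i, j*k) matrix.
U, S, Vh = np.linalg.svd(A_merged, full_matrices=False)

# Restore the explicit (j, k) dependence: Vh has shape (r, j*k) -> (r, j, k),
# where r = min(i, j*k) is the number of singular values.
r = S.shape[0]
V = Vh.reshape(r, j, k)

# Check the reconstruction: A_ijk = sum_alpha U_{i,alpha} S_alpha V_{alpha,j,k}
A_rec = np.einsum('ia,a,ajk->ijk', U, S, V)
assert np.allclose(A, A_rec)
```

Because `reshape` only changes the view of the data (row-major by default), merging and un-merging the indices is cheap and exactly inverts itself.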
In the np.linalg.svd docs, I can see that there exists a method for decomposing higher rank tensors. They give an example:
A = np.random.randn(2, 5, 3) # I modified their example to fit my question.
U, S, V = np.linalg.svd(A, full_matrices=False)
U.shape, S.shape, V.shape
Which decomposes into tensors of shapes ((2, 5, 3), (2, 3), (2, 3, 3)). This looks like

$A_{ijk} = \sum_\alpha U_{ij\alpha}\, S_{i\alpha}\, V_{i\alpha k}$,

with $S$ having no zero elements (and thus not diagonal). This appears not to be an SVD decomposition in any way I'm familiar with. Perhaps I'm missing something?
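If I understand the docs correctly, what `np.linalg.svd` actually does with a 3-D input is a *batched* matrix SVD, not a tensor decomposition: the leading axis is treated as a stack, and each 5x3 slice gets its own independent SVD. A quick check of that reading:

```python
import numpy as np

A = np.random.randn(2, 5, 3)
U, S, Vh = np.linalg.svd(A, full_matrices=False)

for n in range(A.shape[0]):
    # Each slice separately satisfies the ordinary matrix SVD relation,
    # so S[n] is the vector of singular values of the n-th 5x3 matrix.
    assert np.allclose(A[n], U[n] @ np.diag(S[n]) @ Vh[n])
```

So the shapes `(2, 5, 3), (2, 3), (2, 3, 3)` are two stacked copies of the usual `(5, 3), (3,), (3, 3)` results, which is why `S` is a dense 2-D array rather than a diagonal matrix. This is different from the merge-then-SVD operation in the question.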