I have a tensor in a condensed format representing a sparse 3-D matrix, and I need to convert it to the dense matrix it actually represents. In my case, each row of any 2-D slice of the matrix can contain only one non-zero element, so for each of these rows the data consists of the value and the column index where it appears. For example, the tensor
inp = torch.tensor([[ 1,  2],
                    [ 3,  4],
                    [-1,  0],
                    [45,  1]])
represents a 4x5 matrix A (the first dimension comes from the first dimension of the tensor, the second from the metadata), where A[0][2] = 1, A[1][4] = 3, A[2][0] = -1, and A[3][1] = 45.
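To make the encoding explicit, the first column of inp holds the values and the second column holds the column indices:

import torch

values = inp[:, 0]   # non-zero value in each row:      tensor([ 1,  3, -1, 45])
cols   = inp[:, 1]   # column where that value appears: tensor([ 2,  4,  0,  1])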
This is just one 2-D slice of my matrix, and I have a variable number of these slices. For a single 2-D slice, I was able to do the conversion as described above using sparse_coo_tensor:
>>> torch.sparse_coo_tensor(torch.stack([torch.arange(0, 4), inp.t()[1]]), inp.t()[0], [4,5]).to_dense()
tensor([[ 0,  0,  1,  0,  0],
        [ 0,  0,  0,  0,  3],
        [-1,  0,  0,  0,  0],
        [ 0, 45,  0,  0,  0]])
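For comparison, an in-place scatter_ version of the same 2-D conversion also seems to work, though I'm not sure it's any clearer (a sketch; the explicit zeros buffer and dtype handling are my own choices):

values, cols = inp[:, 0], inp[:, 1]
out = torch.zeros(4, 5, dtype=inp.dtype)
# writes out[i, cols[i]] = values[i] for every row i
out.scatter_(1, cols.unsqueeze(1), values.unsqueeze(1))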
Is this the best way to accomplish the conversion? Is there a simpler, more readable alternative? And how do I extend it to a 3-D matrix without looping? For the 3-D case, you can imagine the input to be something like
inp_list = torch.stack([inp, inp, inp, inp])
and the desired output would be the above output stacked 4 times.
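To make the target concrete, here is the looping version I want to avoid (to_dense_2d is just a hypothetical helper name wrapping my 2-D approach):

def to_dense_2d(slice_2d, n_cols=5):
    n_rows = slice_2d.shape[0]
    idx = torch.stack([torch.arange(n_rows), slice_2d[:, 1]])
    return torch.sparse_coo_tensor(idx, slice_2d[:, 0], [n_rows, n_cols]).to_dense()

# desired result: shape (4, 4, 5), i.e. the 2-D output stacked 4 times
out = torch.stack([to_dense_2d(s) for s in inp_list])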
I feel like I should be able to do something if I create an index array correctly, but I cannot think of a way to do this without using some kind of looping.
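For concreteness, my understanding is that a single sparse_coo_tensor call for the whole 3-D case would need indices of shape (3, 16), one row each for (slice, row, column), plus the 16 flat values. Flattening the values and columns is easy; the slice and row index rows are where I get stuck:

n_slices, n_rows, n_cols = inp_list.shape[0], inp_list.shape[1], 5
values = inp_list[:, :, 0].flatten()   # shape (16,)
cols   = inp_list[:, :, 1].flatten()   # shape (16,)
# still needed, without a loop:
#   slice_idx = [0,0,0,0, 1,1,1,1, 2,2,2,2, 3,3,3,3]
#   row_idx   = [0,1,2,3, 0,1,2,3, 0,1,2,3, 0,1,2,3]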