I’m trying to understand the workflow of PyTorch’s ConvTranspose1d with groups > 1, mainly focusing on how the grouped transposed-convolution weights are combined with the padded input. I’ve experimented with my code, but I can’t understand how the result was calculated.
Here is my experiment:
import torch
import torch.nn as nn
weight = torch.tensor([1,1,2] * 8).reshape(4,2,3)
transpose_conv = nn.ConvTranspose1d(4, 4, 3, stride=1, padding=2, groups=2,bias=False)
x = torch.tensor([1,1,1,1,1,1, 1,1,2, 1,1,2]).reshape(1,4,3).type(torch.float32)
with torch.no_grad():
    transpose_conv.weight.copy_(weight.reshape(4, 2, 3))
print(transpose_conv(x))
The result was tensor([[[ 8.],[ 8.],[10.],[10.]]]). I can’t understand how this result was calculated; could you please explain the process?
Short answer
We need to keep in mind that, for a convolution, the kernel values need to be flipped/reversed before taking the weighted sum with the signal (in your case, [1, 1, 2] becomes [2, 1, 1]; but also see the last section, An implementation detail?, below). The factor 2 below arises because each output channel sums the contributions of the 2 input channels in its group, and those contributions are identical here. The resulting calculations are:
- For your first two channels: 2 * dot([1, 1, 1], [2, 1, 1]) = 8.
- For your last two channels: 2 * dot([1, 1, 2], [2, 1, 1]) = 10.
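As a quick numeric check of these two sums, here is a minimal sketch (only the flipped kernel and the two signals from above are used; nothing here is part of your layer):
import torch
# Quick check of the two weighted sums from the short answer.
flipped_kernel = torch.tensor([2., 1., 1.])                        # [1, 1, 2] reversed
print(2 * torch.dot(torch.tensor([1., 1., 1.]), flipped_kernel))   # tensor(8.)
print(2 * torch.dot(torch.tensor([1., 1., 2.]), flipped_kernel))   # tensor(10.)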
Long answer
Let’s take this apart:
- Your input x is a 3-element signal with 4 channels, with the first two channels containing the values [1, 1, 1] and the last two channels containing the values [1, 1, 2].
- Since you have groups=2, the 2nd example regarding groups from ConvTranspose1d’s documentation applies: “At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels (2 per group in your case) and producing half the output channels (again 2 per group in your case), and both subsequently concatenated.” In other words: the first half of your convolution kernel’s weights will convolve the two channels containing [1, 1, 1], the last half of your weights will convolve the two channels containing [1, 1, 2] (see the first sketch after this list).
- Your weight tensor that contains the convolution kernel’s weights has a shape of (4, 2, 3). With your setup, this translates to: you have two groups of weights of shape (2, 2, 3), each convolving 2 input channels to produce 2 output channels with a kernel size of 3.
- Regarding padding, we follow the note on padding from ConvTranspose1d’s documentation: “The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input” (I guess this should be “sides”). This translates to no additional padding in your case with dilation=1, kernel_size=3, padding=2.¹
- As a consequence of your signal effectively not being padded, your output will contain a single element for each channel, as there is only one single position at which to convolve your signal of length 3 with your kernel of size 3.
- Since no striding or dilation is applied in your case either, the corresponding values can be directly calculated as the weighted sums of the elements at corresponding positions in your signal x and in the kernel weights from the weight tensor (note that we need to reverse the order of elements in the kernel, since this is a convolution and not a correlation, thus [1, 1, 2] from weight becomes [2, 1, 1] in the calculation; the second sketch after this list re-computes these values):
  - First two channels: 2 * dot([1, 1, 1], [2, 1, 1]) = 2 * (1*2 + 1*1 + 1*1) = 2 * (2 + 1 + 1) = 2 * 4 = 8.
  - Last two channels: 2 * dot([1, 1, 2], [2, 1, 1]) = 2 * (1*2 + 1*1 + 2*1) = 2 * (2 + 1 + 2) = 2 * 5 = 10.
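To connect this back to code, here is a sketch (my own construction, not something PyTorch exposes) that builds the “two conv layers side by side” from the groups quote above, copies the two halves of your weight tensor into them, and reproduces your output by concatenating the two results:
import torch
import torch.nn as nn

# Two side-by-side ConvTranspose1d layers, each handling half the channels,
# reproduce the grouped layer from the question.
weight = torch.tensor([1, 1, 2] * 8).reshape(4, 2, 3).float()
x = torch.tensor([1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2]).reshape(1, 4, 3).float()

half_a = nn.ConvTranspose1d(2, 2, 3, stride=1, padding=2, bias=False)
half_b = nn.ConvTranspose1d(2, 2, 3, stride=1, padding=2, bias=False)
with torch.no_grad():
    half_a.weight.copy_(weight[:2])   # first group: input channels 0 and 1
    half_b.weight.copy_(weight[2:])   # second group: input channels 2 and 3
    out = torch.cat([half_a(x[:, :2]), half_b(x[:, 2:])], dim=1)
print(out)  # tensor([[[ 8.], [ 8.], [10.], [10.]]])
A second sketch re-computes each output value by hand as flipped-kernel dot products over the input channels of its group, reusing the x and weight tensors defined just above (the loop structure and variable names are mine, not PyTorch internals; they simply mirror the bookkeeping described in the list):
# Manual re-computation: with effective padding 0 and stride 1, each output
# channel is a sum of flipped-kernel dot products over its group's input channels.
groups, in_per_group, out_per_group = 2, 2, 2
result = torch.zeros(1, 4, 1)
for g in range(groups):                           # the two groups
    for j in range(out_per_group):                # output channels within the group
        out_ch = g * out_per_group + j
        for i in range(in_per_group):             # input channels within the group
            in_ch = g * in_per_group + i
            flipped = weight[in_ch, j].flip(0)    # [1, 1, 2] -> [2, 1, 1]
            result[0, out_ch, 0] += torch.dot(x[0, in_ch], flipped)
print(result)  # tensor([[[ 8.], [ 8.], [10.], [10.]]])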
An implementation detail?
I mentioned above that, since we work with a convolution, the order of weights in the kernel needs to be reversed. This is based on the mathematical definition of a discrete convolution as can be found, for example, on Wikipedia. However, only later did I realize that this is not always followed in PyTorch; compare the following:
import torch
import torch.nn.functional as f
signal = torch.tensor([2, 1, 1, 3]).reshape(1, 1, -1)
kernel = torch.tensor([1, 1, 5]).reshape(1, 1, -1)
print(f.conv1d(signal, kernel, padding=0).ravel().tolist())
# Prints [8, 17] == [dot([2,1,1],[1,1,5]), dot([1,1,3],[1,1,5])]
print(f.conv_transpose1d(signal, kernel, padding=2).ravel().tolist())
# Prints [12, 9] == [dot([2,1,1],[5,1,1]), dot([1,1,3],[5,1,1])]
Thus, while the kernel gets reversed for transposed convolutions, it does not get reversed for (regular) convolutions.
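As a sanity check of this observation, here is a small sketch (using float tensors and a manual flip; the flip is my addition, not part of either API call) showing that reversing the kernel by hand makes conv1d reproduce the conv_transpose1d numbers from above:
import torch
import torch.nn.functional as f

# Flipping the kernel by hand makes the regular convolution reproduce the
# transposed-convolution result from the snippet above.
signal = torch.tensor([2., 1., 1., 3.]).reshape(1, 1, -1)
kernel = torch.tensor([1., 1., 5.]).reshape(1, 1, -1)
print(f.conv1d(signal, kernel.flip(-1), padding=0).ravel().tolist())
# Prints [12.0, 9.0], matching conv_transpose1d(signal, kernel, padding=2)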
I am not entirely sure whether this algorithmic difference is intentional or an implementation detail. It could be an implementation detail, since usually, when you learn your weights rather than predefining them, it does not really matter whether you learn them for the kernel or for the reversed kernel. I guess it is intentional though, since (quoting ConvTranspose1d’s documentation) [transposed convolution] “can be seen as the gradient of” [(regular) convolution], so I guess it follows as a mathematical result that only one of the two operations needs to work with the reversed kernel. I did not fully think this through, but in that situation I would have expected the (regular) convolution kernel to be reversed rather than that of the transposed convolution, following the mathematical definition of a discrete convolution mentioned above. In any case, I agree that this makes the observed result less obvious.
¹) The padding parameter definition for transposed convolutions might not be really intuitive, but it is defined for ease of use in connection with “regular” convolutions; again, see the note on padding from the documentation: “This is set so that when a Conv1d and a ConvTranspose1d are initialized with same parameters, they are inverses of each other in regard to the input and output shapes.”
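To illustrate this shape-inverse property, here is a small sketch (the concrete input length 10 and the channel count are arbitrary choices of mine; kernel size, stride, and padding match your layer):
import torch
import torch.nn as nn

# A Conv1d and a ConvTranspose1d created with the same parameters undo each
# other's effect on the length dimension.
conv = nn.Conv1d(4, 4, kernel_size=3, stride=1, padding=2, bias=False)
deconv = nn.ConvTranspose1d(4, 4, kernel_size=3, stride=1, padding=2, bias=False)

x = torch.randn(1, 4, 10)
y = conv(x)
print(y.shape)           # torch.Size([1, 4, 12])
print(deconv(y).shape)   # torch.Size([1, 4, 10]) -- back to the input length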