I am working with time series data and have noticed a discrepancy in the input tensor dimension order required by LSTM and Conv1d/BatchNorm1d/Dropout1d layers in PyTorch. For example, say I have an input tensor with the shape (Batch Size, Sequence Length, Features).
When using an LSTM layer, I can simply pass this tensor through as-is:
import torch
import torch.nn as nn
batch_size, seq_length, features = 32, 10, 8
input_tensor = torch.randn(batch_size, seq_length, features)
lstm = nn.LSTM(input_size=features, hidden_size=16, batch_first=True)
output, _ = lstm(input_tensor)
However, for a Conv1d layer, I need to permute the tensor dimensions before passing it through the layer:
input_tensor_permuted = input_tensor.permute(0, 2, 1)
conv1d = nn.Conv1d(in_channels=features, out_channels=16, kernel_size=3)
output = conv1d(input_tensor_permuted)
Similarly, if the input data were organized as (Batch Size, Features, Sequence Length), I would need to permute it before passing it to the LSTM:
input_tensor_alt = torch.randn(batch_size, features, seq_length)
input_tensor_alt_permuted = input_tensor_alt.permute(0, 2, 1)
output, _ = lstm(input_tensor_alt_permuted)
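In practice this means that stacking the two layer types forces me to permute back and forth. A minimal sketch, reusing the layers defined above plus a hypothetical second LSTM sized to match the convolution's output channels:

conv_out = conv1d(input_tensor.permute(0, 2, 1))          # Conv1d emits (32, 16, 8): channels-first
lstm_after_conv = nn.LSTM(input_size=16, hidden_size=16, batch_first=True)
seq_out, _ = lstm_after_conv(conv_out.permute(0, 2, 1))   # permute back to (32, 8, 16) for the LSTM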
I understand that Conv1d and friends operate across the sequence dimension due to the nature of convolutional operations. But why do RNN layers like LSTM break the mold and expect a different input shape?
Could someone explain the reason behind this design choice?
As you noticed, the (bs, num_channels, length) layout is natural for the convolution operation. Convolution generalizes easily to any number of dimensions; it is just another summation over that dimension. So this is the perspective of the convolution. If you compare the documentation of Conv1d and Conv2d, the formula is the same; only the shape of the weight and the convolution operator change, silently.
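To make that concrete, a quick check (channel sizes are just illustrative) shows that only the weight tensor gains a dimension, while the (batch, channels, ...) layout convention stays the same:

import torch.nn as nn

conv1d = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
conv2d = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)
print(conv1d.weight.shape)   # torch.Size([16, 8, 3])    -> (out, in, k)
print(conv2d.weight.shape)   # torch.Size([16, 8, 3, 3]) -> (out, in, kH, kW)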
LSTM and other sequential networks do not share this point of view. Their goal is to process a sequence of features. One illustration of this is the Vision Transformer [1], an image (2D) sequential model, which flattens its inputs during preprocessing. A sequential network only knows one dimension and treats the input as a sequence of elements.
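As a rough sketch of what that flattening looks like (patch size and image size are assumptions for illustration, not the exact ViT code), the two spatial dimensions are collapsed into a single sequence axis before the model ever sees the data:

import torch

images = torch.randn(32, 3, 224, 224)                   # (B, C, H, W)
patches = images.unfold(2, 16, 16).unfold(3, 16, 16)    # cut into 16x16 patches: (B, C, 14, 14, 16, 16)
tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(32, 14 * 14, 3 * 16 * 16)
print(tokens.shape)                                      # torch.Size([32, 196, 768]) = (B, seq_len, features)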
So, in PyTorch, they kept the convention specific to each kind of operation: sequential networks and CNNs. I do think this is a solid and clear choice. Note that if you work with more domain-specific deep learning packages, their dimension orders can be fairly arbitrary.
[1]: ViT paper, https://arxiv.org/abs/2010.11929. Obviously, the Vision Transformer literature has many ways to add "2D" positional information; the easiest is the positional embedding.