Difference between the expected input tensor order for LSTM and Conv1d?
I am working with time series data and have noticed a discrepancy in the input tensor layout expected by LSTM versus Conv1d/BatchNorm1d/Dropout1d layers in PyTorch. For example, say I have an input tensor of shape (Batch Size, Sequence Length, Features). Why do these layers expect different dimension orders, and what is the idiomatic way to convert between them?
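To make the discrepancy concrete, here is a minimal sketch (dimension sizes chosen arbitrarily): an LSTM constructed with `batch_first=True` consumes the (Batch, Seq, Features) layout directly, while Conv1d and BatchNorm1d treat the second dimension as channels, i.e. they expect (Batch, Features, Seq), so a transpose is needed in between.

```python
import torch
import torch.nn as nn

batch, seq_len, feats = 8, 50, 16
x = torch.randn(batch, seq_len, feats)  # (Batch, Seq, Features)

# LSTM with batch_first=True accepts (Batch, Seq, Features) as-is
lstm = nn.LSTM(input_size=feats, hidden_size=32, batch_first=True)
out, _ = lstm(x)
print(out.shape)  # torch.Size([8, 50, 32])

# Conv1d expects (Batch, Channels, Seq), so swap the last two dims first
conv = nn.Conv1d(in_channels=feats, out_channels=32, kernel_size=3, padding=1)
y = conv(x.transpose(1, 2))   # (8, 16, 50) -> (8, 32, 50)
print(y.shape)  # torch.Size([8, 32, 50])

# Transpose back if a subsequent layer needs (Batch, Seq, Channels) again
y = y.transpose(1, 2)         # (8, 50, 32)
```

The same transpose applies before BatchNorm1d and Dropout1d, since both operate over a channels dimension in position 1.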