I am trying to build an LSTM model that detects anomalies in time-series data. It takes 5 inputs per timestep and produces 1 boolean output (True/False, depending on whether an anomaly is detected). The anomaly pattern will usually span 3–4 consecutive timesteps. Unlike most LSTM examples, which forecast future data or classify a whole sequence, I want a True/False detection flag at every timestep (True at the last timestep of the pattern if it is detected).
Unfortunately it seems like CrossEntropyLoss doesn't accept anything more than a 1D target tensor, and in this case the target will be 2D: [num sequences, sequence length], with boolean data.
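To make the shape issue concrete, here is a minimal illustration of what I mean (the two-class logit layout is my assumption about how CrossEntropyLoss wants the data, so I may be holding it wrong):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Whole-sequence classification works: 1D class-index targets
logits = torch.randn(100, 2)            # [batch, num_classes]
targets = torch.randint(0, 2, (100,))   # [batch] - one label per sequence
print(criterion(logits, targets))       # fine

# Per-timestep labels are 2D, and this layout fails
seq_logits = torch.randn(100, 10, 2)          # [batch, seq_len, num_classes]
seq_targets = torch.randint(0, 2, (100, 10))  # [batch, seq_len]
# criterion(seq_logits, seq_targets)  # RuntimeError: target size mismatch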
Here is some example code of what I am trying to produce:
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
# Define LSTM classifier model
class LSTMClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(LSTMClassifier, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialise hidden and cell states with zeros
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])  # note: this keeps only the last timestep
        return out
# Input - 100 examples containing 5 data points per timestep (where there are 10 timesteps)
X_train = np.random.rand(100, 10, 5)
# Output - 100 examples containing 1 True/False output per timestep to match the input
y_train = np.random.choice(a=[True, False], size=(100, 10)) # Binary labels (True or False)
# Convert data to PyTorch tensors
X_train_tensor = torch.tensor(X_train, dtype=torch.float32)
y_train_tensor = torch.tensor(y_train, dtype=torch.bool)
# Define model parameters
input_size = X_train.shape[2] # 5 inputs per timestep
hidden_size = 4 # Pattern we are trying to detect is usually 4 timesteps long
num_layers = 1
output_size = 1 # True/False
# Instantiate the model
model = LSTMClassifier(input_size, hidden_size, num_layers, output_size)
# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(X_train_tensor)
    loss = criterion(outputs, y_train_tensor)
    loss.backward()
    optimizer.step()
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item()}')
# Test the model
X_test = np.random.rand(10, 10, 5) # Generate some test data - same dimensions as input
X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
with torch.no_grad():
    predictions = model(X_test_tensor)
predicted_outputs = torch.argmax(predictions, dim=1)
print("Predicted Outputs:", predicted_outputs)
Do I need to reshape the output, use a different loss function, or use a model other than an LSTM?
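For reference, here is a minimal sketch of the per-timestep variant I have been considering. The switch to BCEWithLogitsLoss, returning the full sequence from forward, and the cast of the boolean targets to float are all my assumptions rather than something I know to be correct:

# Hypothetical per-timestep variant: one logit per timestep,
# trained with BCEWithLogitsLoss instead of CrossEntropyLoss
class LSTMPerStepClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)  # one logit per timestep

    def forward(self, x):
        out, _ = self.lstm(x)            # [batch, seq_len, hidden_size]; states default to zeros
        return self.fc(out).squeeze(-1)  # [batch, seq_len]

model2 = LSTMPerStepClassifier(input_size, hidden_size, num_layers)
criterion2 = nn.BCEWithLogitsLoss()      # expects float targets with the same shape as the logits
optimizer2 = optim.Adam(model2.parameters(), lr=0.001)

optimizer2.zero_grad()
logits = model2(X_train_tensor)                    # [100, 10]
loss = criterion2(logits, y_train_tensor.float())  # bool labels cast to float
loss.backward()
optimizer2.step()

predicted = torch.sigmoid(logits) > 0.5            # boolean flag at every timestep

If this layout is reasonable then the reshaping question mostly goes away, but I am not sure the loss choice is right.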