I am trying to build an FCNN to solve a signal deconvolution problem. I have two measured signals, Cobs and a kernel, and I am using a simple FCNN to try to recover Ctrue. My input is the measured Cobs.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class DeconvolutionNN(nn.Module):
    def __init__(self):
        super(DeconvolutionNN, self).__init__()
        self.layer1 = nn.Linear(149, 256)
        self.layer2 = nn.Linear(256, 128)
        self.layer3 = nn.Linear(128, 149)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.layer1(x))
        x = self.relu(self.layer2(x))
        x = self.layer3(x)
        return x
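For reference, here is a minimal shape check of the network (just a sketch; the batch size of 4 and the random input are placeholders for my real Cobs batch):

check_model = DeconvolutionNN()
dummy = torch.randn(4, 149)   # pretend batch: 4 signals, 149 samples each
out = check_model(dummy)
print(out.shape)              # torch.Size([4, 149]), same length as the input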
My idea is that DeconvolutionNN starts from an essentially random guess of Ctrue, and I want to minimize an L2 loss. Since the relationship between Cobs, Ctrue and the kernel is Cobs = Ctrue conv kernel, this is how I computed the loss:
def compute_loss(model, y_true, y_pred, kernel, device):
    # Re-apply the forward model: convolve the predicted Ctrue with the kernel
    # and compare the result against the measured Cobs (y_true).
    kernel_tensor = torch.tensor(kernel, dtype=torch.float32).view(1, 1, -1).to(device)
    y_pred = y_pred.view(y_pred.shape[0], 1, -1)
    y_conv = F.conv1d(y_pred, kernel_tensor, padding='same').view(y_pred.shape[0], -1)
    return torch.mean((y_true - y_conv) ** 2)
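As a quick sanity check that the loss runs end to end, this is the kind of thing I would try (a sketch with made-up data; kernel_np and its length of 31 are hypothetical, my real kernel is the measured one):

import numpy as np

check_model = DeconvolutionNN().to(device)
Cobs_dummy = torch.randn(4, 149, device=device)     # stand-in for a batch of measured Cobs
kernel_np = np.random.rand(31).astype(np.float32)   # hypothetical kernel of length 31
Ctrue_guess = check_model(Cobs_dummy)
loss = compute_loss(check_model, Cobs_dummy, Ctrue_guess, kernel_np, device)
print(loss.item())                                  # should print a finite scalar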
After defining everything I need, I try to optimize the model:
model = DeconvolutionNN().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.01)

Ctruehat = []
train_loss = []
epochs = 3000

for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    Ctrue_hat = model(Cobs)
    loss = compute_loss(model, Cobs, Ctrue_hat, kernel, device)
    loss.backward()
    optimizer.step()
    Ctruehat.append(Ctrue_hat.detach().cpu().numpy())
    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}')
        train_loss.append(loss.item())
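In case it matters, before the loop the measured data is moved to the device roughly like this (a sketch; Cobs_np here stands for my raw NumPy measurement):

Cobs = torch.tensor(Cobs_np, dtype=torch.float32).view(1, -1).to(device)   # hypothetical: one measured signal of 149 samples as a (1, 149) batch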
My loss is nearly 47000 and does not change. I have tried multiple things, such as increasing the training epochs and increasing or decreasing the learning rate, but it doesn't help. I am wondering whether my logic behind this is correct, or if there is any possible explanation for why my model does not seem to be learning anything. Thanks.