Hello,
I am training an MLP in PyTorch, with a hidden layer size of 200 and 150 training epochs. Below is the code for my model:
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLP, self).__init__()
        # five hidden layers of equal width, one output layer
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.leaky_relu = nn.LeakyReLU()  # shared activation for the hidden layers
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, hidden_size)
        self.fc4 = nn.Linear(hidden_size, hidden_size)
        self.fc5 = nn.Linear(hidden_size, hidden_size)
        self.fc6 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.leaky_relu(self.fc1(x))
        out = self.leaky_relu(self.fc2(out))
        out = self.leaky_relu(self.fc3(out))
        out = self.leaky_relu(self.fc4(out))
        out = self.leaky_relu(self.fc5(out))
        out = self.fc6(out)  # no activation on the output layer
        return out
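To sanity-check the forward pass, I call the model on a random batch; the input and output sizes here (10 and 1) are placeholders, since the real values depend on my dataset:

    model = MLP(input_size=10, hidden_size=200, output_size=1)
    x = torch.randn(32, 10)  # batch of 32 placeholder samples
    print(model(x).shape)    # expected: torch.Size([32, 1])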
I suspect that my model is underfitting. To understand why this might be happening and how to prevent it in the future, I would like to dig deeper into the issue. I am also considering increasing the number of epochs to improve the model's performance. Any assistance and insights would be greatly appreciated.
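In case it helps with diagnosis: my understanding is that comparing training and validation loss per epoch distinguishes underfitting (both losses stay high) from overfitting (training loss keeps falling while validation loss rises). Below is a minimal sketch of such a loop; the random tensors, the MSE loss, the Adam optimizer, and the learning rate are all placeholder assumptions, not my actual setup:

    import torch
    import torch.nn as nn

    # placeholder splits; substitute the real training and validation data
    X_train, y_train = torch.randn(1000, 10), torch.randn(1000, 1)
    X_val, y_val = torch.randn(200, 10), torch.randn(200, 1)

    model = MLP(input_size=10, hidden_size=200, output_size=1)
    criterion = nn.MSELoss()  # assumed regression objective
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer

    for epoch in range(150):
        model.train()
        optimizer.zero_grad()
        train_loss = criterion(model(X_train), y_train)
        train_loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():  # no gradients needed for validation
            val_loss = criterion(model(X_val), y_val)

        # both losses staying high suggests underfitting; a widening
        # gap (train down, val up) suggests overfitting instead
        print(f"epoch {epoch:3d}  train {train_loss.item():.4f}  val {val_loss.item():.4f}")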
Thank you for your help.