I created a diagram for my neural network. This is the network, implemented in PyTorch:
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2830, 1024),
            nn.ReLU(),
            nn.BatchNorm1d(1024),
            nn.Dropout(0.2),
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.BatchNorm1d(512),
            nn.Dropout(0.2),
            nn.Linear(512, 1),
        )

    def forward(self, x):
        return self.layers(x)
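For reference, here is a quick sanity check of the shapes, rebuilding the same architecture as a plain Sequential (the variable names and dummy batch are just for illustration):

```python
import torch
import torch.nn as nn

# Same architecture as the MLP above, as a plain Sequential for a shape check.
model = nn.Sequential(
    nn.Linear(2830, 1024), nn.ReLU(), nn.BatchNorm1d(1024), nn.Dropout(0.2),
    nn.Linear(1024, 512), nn.ReLU(), nn.BatchNorm1d(512), nn.Dropout(0.2),
    nn.Linear(512, 1),
)
model.eval()  # disable dropout and use running stats in batch norm
x = torch.randn(4, 2830)  # dummy batch of 4 samples
print(model(x).shape)  # torch.Size([4, 1])
```

So the diagram should show 2830 input nodes mapped down through 1024 and 512 hidden units to a single output.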
And this is the diagram I created using this script, where gray stands for hidden nodes and black for input/output nodes.
I am afraid this representation is partly unclear because of the multiple layers of the same size, which in reality are just ReLU, BatchNorm, and Dropout layers. I have also noticed that these types of layers are often omitted from diagrams, but since my model is simple and I think they are relevant to its correct functioning, I would like to show them.
Do you think this diagram is correct? Can you understand the structure of the network from the image alone? What would you do differently?