Why does the training of a fully-connected neural network (FNN) change after an interruption?
I am training plain fully-connected neural nets and I noticed that the character of training changes after an interruption. I interrupt training to check the results and then continue the process, but sometimes the loss reaches a plateau immediately after the interruption.
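One common explanation (an assumption here, since the post doesn't say how training is resumed) is that only the model weights survive the interruption, while the optimizer's moment estimates, the learning-rate schedule, and the RNG state are reset. A minimal PyTorch checkpointing sketch that preserves all of these, with hypothetical `model`, `optimizer`, and `scheduler` objects:

```python
import torch

# Assumed objects: `model`, `optimizer` (e.g., Adam), and an LR `scheduler`.
# Restoring only the model weights resets Adam's moment estimates, the
# learning-rate schedule, and data-shuffling randomness, any of which can
# change the character of training after a restart.
def save_checkpoint(path, model, optimizer, scheduler, step):
    torch.save({
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),  # Adam moments m, v
        "scheduler": scheduler.state_dict(),  # current learning rate
        "step": step,
        "rng": torch.get_rng_state(),         # CPU RNG used for shuffling
    }, path)

def load_checkpoint(path, model, optimizer, scheduler):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    scheduler.load_state_dict(ckpt["scheduler"])
    torch.set_rng_state(ckpt["rng"])
    return ckpt["step"]
```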
Implementing GradNorm for a Physics-Informed Neural Network in PyTorch
I am currently writing a physics-informed neural network that learns the velocity, temperature, and pressure fields from the evolution of a bubble in a pool-boiling scenario. I have tried to implement the GradNorm regularization scheme to help the model converge, as outlined in https://arxiv.org/abs/2308.08468, but instead of helping convergence it prevents it.
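For reference, a minimal sketch of the gradient-norm balancing scheme described in the cited paper: each loss weight is nudged toward the ratio of the total gradient norm to that term's gradient norm, via a moving average. The function name, `alpha` default, and `eps` guard are assumptions, not the paper's exact code:

```python
import torch

def update_gradnorm_weights(losses, params, weights, alpha=0.9, eps=1e-8):
    """Rebalance loss weights so each term's gradient norm is comparable.

    losses  : list of scalar loss tensors (e.g., PDE residual, BC, IC terms)
    params  : list of model parameters
    weights : current list of float weights, one per loss term
    alpha   : moving-average factor (assumed value; tune as needed)
    """
    grad_norms = []
    for loss in losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    allow_unused=True)
        flat = torch.cat([g.reshape(-1) for g in grads if g is not None])
        grad_norms.append(flat.norm())
    total = sum(grad_norms)
    # lambda_hat_i = (sum_j ||grad L_j||) / ||grad L_i||, blended with the
    # previous weight by the moving average
    return [alpha * w + (1 - alpha) * (total / (gn + eps)).item()
            for w, gn in zip(weights, grad_norms)]
```

The weighted training loss would then be formed as `sum(w * l for w, l in zip(weights, losses))`, with the weights held fixed between updates; rebalancing is typically done only every few hundred or thousand steps rather than every iteration, which is one common source of instability when the scheme "prevents" convergence.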
How do you generate the corresponding hypernetwork for an already-trained neural network?
Hypernetworks are implemented as an extra block attached to an original neural network (e.g., a ResNet). The embeddings and the parameters (the weights and biases of a linear layer) of the hypernetwork are optimized via backpropagation.
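For concreteness, a minimal PyTorch sketch of such a block for a single linear layer; the name `HyperLinear` and all layer sizes are hypothetical. Note that the target layer's weights are never trained directly: only the embedding `z` and the generator's parameters receive gradients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """Linear layer whose weight and bias are emitted by a small hypernetwork.

    Both the embedding `z` and the generator's own parameters are trained
    by backpropagation, as described above; all sizes are hypothetical.
    """
    def __init__(self, in_features, out_features, embed_dim=16, hidden=64):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.z = nn.Parameter(torch.randn(embed_dim))   # learned embedding
        self.generator = nn.Sequential(                 # the hypernetwork
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features * in_features + out_features),
        )

    def forward(self, x):
        theta = self.generator(self.z)                  # emit all parameters
        n = self.out_features * self.in_features
        w = theta[:n].view(self.out_features, self.in_features)
        b = theta[n:]
        return F.linear(x, w, b)
```

Given an already-trained network, one plausible (assumed, not sourced) route is to freeze the trained weights and fit the generator to reproduce them under a weight-reconstruction loss before any end-to-end fine-tuning.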