I'm working on a Physics-Informed Neural Network (PINN) and I need to take the derivatives of the outputs with respect to the inputs and use them in the loss function.
The issue is related to the network having multiple outputs. I tried to use 'autograd.grad' to calculate the derivatives of the outputs, but it sums the contributions of all the outputs.
For example, if my output 'u' has shape [batch_size, n_output], the derivative 'dudx' has shape [batch_size, 1] instead of [batch_size, n_output].
Because of this sum, I can't use the derivatives in the loss function. I tried a for loop that computes each derivative separately, but then training takes forever. Do you have any idea how to solve this problem?
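Here is a minimal example that reproduces the shape issue (the toy network and the sizes are just for illustration):

import torch

B, N = 8, 3                                    # batch size and number of outputs
x = torch.rand(B, 1, requires_grad=True)
y = torch.rand(B, 1, requires_grad=True)
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, N))
u = net(torch.cat([x, y], dim=1))              # shape [B, N]

dudx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
print(u.shape, dudx.shape)                     # torch.Size([8, 3]) torch.Size([8, 1])

Because grad_outputs is a tensor of ones, autograd.grad returns a vector-Jacobian product, so the contributions of all N output columns to dudx are summed into a single column.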
More details on what I attempted:
My loss function contains higher-order derivatives of the outputs with respect to the inputs x and y. I tried to compute them with a for loop over the output columns (using the helper functions below), but it takes forever.
import torch

def gradient(y, x, grad_outputs=None):
    # dy/dx, with create_graph=True so higher-order derivatives can be taken afterwards.
    if grad_outputs is None:
        grad_outputs = torch.ones_like(y)
    grad = torch.autograd.grad(y, [x], grad_outputs=grad_outputs, create_graph=True)[0]
    return grad

def compute_derivatives(x, y, u):
    # first order
    dudx = gradient(u, x)
    dudy = gradient(u, y)
    # second order
    dudxx = gradient(dudx, x)
    dudyy = gradient(dudy, y)
    # third order
    dudxxx = gradient(dudxx, x)
    dudxxy = gradient(dudxx, y)
    dudyyy = gradient(dudyy, y)
    # fourth order
    dudxxxx = gradient(dudxxx, x)
    dudxxyy = gradient(dudxxy, y)
    dudyyyy = gradient(dudyyy, y)
    return dudxx, dudyy, dudxxxx, dudyyyy, dudxxyy
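Concretely, the per-column loop I mean is roughly the following (a sketch of my workaround, built on the gradient helper above). It produces the [batch_size, n_output] shape I want, but it needs one backward pass per output column for every derivative, which is what makes training so slow:

def gradient_per_column(u, x):
    # Slow workaround: one backward pass per output column of u.
    cols = []
    for i in range(u.shape[1]):
        cols.append(gradient(u[:, i], x))      # each has the shape of x, i.e. [B, 1]
    return torch.cat(cols, dim=1)              # [B, N], same shape as u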
For N outputs, the shape of u is [batch_size, N], and I need to compute the derivatives for each column of u in parallel, so that the shape of each derivative (e.g. dudx, dudy, …) matches the shape of u.
I also tried vmap to do the computation in parallel, but it does not work because vmap does not allow calls to 'autograd.grad', and it is essential for me to keep the graph of all operations on the inputs.
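The only vmap-style formulation I can think of goes through torch.func.jacrev on a per-sample function instead of 'autograd.grad' (sketch below with a toy network; the names and sizes are just for illustration). I am not sure whether this is a viable route for my case, in particular whether gradients still flow back to the network parameters the way I need during training:

import torch
from torch.func import vmap, jacrev

N = 3                                          # number of outputs
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, N))

def u_single(xy):                              # one collocation point: [2] -> [N]
    return net(xy)

xy = torch.rand(8, 2)                          # batch of (x, y) points
jac = vmap(jacrev(u_single))(xy)               # per-sample Jacobians, shape [8, N, 2]
dudx, dudy = jac[..., 0], jac[..., 1]          # each [8, N]
hess = vmap(jacrev(jacrev(u_single)))(xy)      # per-sample Hessians, shape [8, N, 2, 2]
dudxx, dudyy = hess[..., 0, 0], hess[..., 1, 1]  # each [8, N]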
Ideally I would still use 'autograd.grad', but get back a derivative that is not summed over the outputs, i.e. one with shape [batch_size, n_output].
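The closest thing I have found is the is_grads_batched argument of torch.autograd.grad, which is documented as experimental and vectorizes the batch of backward passes with vmap internally. A sketch of what I have in mind is below (the helper name and the one-hot grad_outputs construction are mine); I am not sure whether it is safe to combine with create_graph=True up to fourth-order derivatives, or whether it is actually faster than the loop:

def gradient_per_output(u, x):
    # d(u[:, i])/dx for every output column i, without a Python loop.
    # u: [B, N] network output, x: [B, 1] input with requires_grad=True.
    B, N = u.shape
    # One one-hot grad_outputs per output column, stacked along a leading
    # "batch of backward passes" dimension: shape [N, B, N].
    selectors = torch.eye(N, dtype=u.dtype, device=u.device).unsqueeze(1).repeat(1, B, 1)
    grads = torch.autograd.grad(
        u, [x],
        grad_outputs=selectors,
        create_graph=True,        # keep the graph for higher-order derivatives
        is_grads_batched=True,    # vectorize over the leading dim instead of looping
    )[0]                          # shape [N, B, 1]
    return grads.squeeze(-1).transpose(0, 1)   # [B, N], same shape as u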
Any help or suggestion would be much appreciated. Thanks in advance.