I have the following code:
import torch
from torch import autograd
# Model parameters (rate r, dividend yield q, volatility v, spot S) with requires_grad=True
r = torch.tensor(0.03, requires_grad=True)
q = torch.tensor(0.02, requires_grad=True)
v = torch.tensor(0.14, requires_grad=True)
S = torch.tensor(1001.0, requires_grad=True)
# Simulation inputs: 10,000 Monte Carlo paths over 5 annual observation dates
Z = torch.randn(10000, 5)
t = torch.arange(1.0, 6.0)  # observation times in years (1..5)
c = torch.tensor([0.2, 0.3, 0.4, 0.5, 0.6])  # coupon paid at each observation date
# Simulate GBM paths under the risk-neutral drift; the volatility must also
# scale the Brownian increments (v * Z.cumsum), not just the drift correction
mc_S = S * torch.exp((r - q - 0.5 * v * v) * t + v * Z.cumsum(dim=1))
# Walk through the observation dates, paying a coupon on paths that are
# above the barrier and still alive, then knocking those paths out
res = []
mask = 1.0
for col, coup in zip(mc_S.T, c):
    payoff = mask * torch.where(col > S, coup, torch.tensor(0.0))
    res.append(payoff)
    # Paths that just received a coupon are knocked out from here on
    mask = mask * (payoff == 0)
payoffs = torch.stack(res).T  # renamed: assigning this to v shadowed the volatility parameter
result = payoffs.sum(dim=1).mean()
# Compute gradients - breaks here
grads = autograd.grad(result, [r, q, v, S], allow_unused=True, retain_graph=True)
print(grads)
I’m trying to price an autocallable option with early knockout and need the sensitivities of the price with respect to the input parameters (r, q, v, S).
However, the way the coupons are paid out breaks the computational graph: the boolean condition col > S is not differentiable, and the values actually paid come from the constant c tensor, so no differentiable path connects result back to the parameters and result ends up without a grad_fn. Is there a way to get this code to calculate the derivatives?
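To narrow it down, here is a minimal sketch of what I think is happening (toy numbers, unrelated to the pricer above): the condition in torch.where carries no gradient, and when both branches are constants the output is disconnected from the input altogether.

import torch

x = torch.tensor(2.0, requires_grad=True)
# The comparison x > 1.0 is piecewise constant, so it carries no gradient;
# with both branches being constants, y has no path back to x at all.
y = torch.where(x > 1.0, torch.tensor(0.5), torch.tensor(0.0))
print(y.requires_grad)  # False - so autograd.grad(y, x) raises

My full code above seems to hit exactly this, since the coupons come from the constant c tensor.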
Thanks