I got this error when trying to use maskedtensor in backpropagation. The same code works fine without maskedtensor, so the problem seems to be in maskedtensor itself, but I don't know how to fix it.
torch 1.12.0
maskedtensor 0.10.0
from maskedtensor import masked_tensor
import torch
torch.manual_seed(22)
# Sum needs custom autograd, since the mask of the input should be maintained
data = torch.randn(2, 2, 3).mul(5).float()
mask = torch.randint(2, (2, 2, 3), dtype=torch.bool)
m = masked_tensor(data, mask, requires_grad=True)
# print(m)
s = torch.sum(m)
print("s: ", s)
s.backward()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
c:\Users\WINTER\Anaconda3\lib\site-packages\torch\_ops.py in __getattr__(self, op_name)
197 try:
--> 198 op, overload_names = torch._C._jit_get_operation(qualified_op_name)
199 except RuntimeError as e:
RuntimeError: No such operator aten::_s_where
The above exception was the direct cause of the following exception:
AttributeError Traceback (most recent call last)
<ipython-input-34-71fbc27a2af4> in <module>()
9 s = torch.sum(m)
10 print("s: ", s)
---> 11 s.backward()
12 # print("m.grad: ", m.grad)
c:\Users\WINTER\Anaconda3\lib\site-packages\torch\_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
393 retain_graph=retain_graph,
394 create_graph=create_graph,
--> 395 inputs=inputs)
396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
397
...
--> 202 raise AttributeError(f"'_OpNamespace' object has no attribute '{op_name}'") from e
203
204 # let the script frontend know that op is identical to the builtin op
AttributeError: '_OpNamespace' object has no attribute '_s_where'
I checked the versions of torch and torchvision, and they seemed compatible with maskedtensor. I also tried other versions of maskedtensor.
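For reference, here is a sketch of how the same reduction behaves on a plain tensor, without maskedtensor, by zeroing out the masked entries by hand. This is only a workaround sketch, not equivalent to maskedtensor's semantics (maskedtensor propagates the mask itself through autograd, while here the gradient is simply `mask.float()`):

```python
import torch

torch.manual_seed(22)
data = torch.randn(2, 2, 3).mul(5).float()
mask = torch.randint(2, (2, 2, 3), dtype=torch.bool)

# Plain-tensor equivalent: masked-out entries contribute zero to the sum.
d = data.clone().requires_grad_(True)
s = (d * mask).sum()
s.backward()

# Gradient flows only through unmasked entries: d.grad == mask.float().
print(s)
print(d.grad)
```

This runs on torch 1.12.0 without touching the removed `aten::_s_where` operator, since no maskedtensor machinery is involved.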