Captum failing: “Target not provided when necessary, cannot take gradient with respect to multiple outputs”?
I’m using the overruling dataset, which is a small (n=2400, evenly split) binary classification task. I’m also using a tiny BERT model (4.4M parameters).
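For context, Captum raises this error when the model returns more than one value per example and no target index is given. Below is a minimal sketch of the situation with a hypothetical stand-in classifier (not the poster’s BERT); the usual fix is passing target= to the attribution call.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TinyClassifier(nn.Module):      # hypothetical stand-in for the 4.4M-param BERT
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 2)    # two logits per example -> multiple outputs
    def forward(self, x):
        return self.fc(x)             # shape (batch, 2)

model = TinyClassifier().eval()
inputs = torch.randn(4, 16)

ig = IntegratedGradients(model)
# ig.attribute(inputs) would raise "Target not provided when necessary ..."
# because attribution needs a single scalar output per example to differentiate.
attributions = ig.attribute(inputs, target=1)   # attribute w.r.t. class index 1
print(attributions.shape)                       # torch.Size([4, 16])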
Getting unusual tensor strides in the backward pass when using PyTorch’s custom autograd feature
I’m getting strides like (0, 0) when writing custom PyTorch autograd functions. Here’s a minimal, reproducible example:
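The poster’s example isn’t included in this excerpt. As an illustration only, here is one common way a zero-strided grad_output can reach a custom torch.autograd.Function: the backward of sum() expands a scalar gradient to the input shape, and the expanded tensor typically has strides (0, 0).

import torch

class Double(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2
    @staticmethod
    def backward(ctx, grad_output):
        # grad_output arrives here from sum()'s backward as an expanded tensor,
        # so its strides are typically (0, 0) even though its shape is (3, 3)
        print(grad_output.shape, grad_output.stride())
        return grad_output * 2

x = torch.randn(3, 3, requires_grad=True)
Double.apply(x).sum().backward()

If the zero strides cause problems downstream, calling grad_output.contiguous() inside backward materializes the tensor.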
Multilabel classification and BCEWithLogitsLoss
I’m trying to classify multiple labels for sentiment analysis.
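A minimal sketch of the usual setup, with made-up shapes: the model emits one raw logit per label (no sigmoid), the targets are multi-hot float vectors, and nn.BCEWithLogitsLoss applies the sigmoid internally.

import torch
import torch.nn as nn

num_labels = 5                                           # hypothetical number of sentiment tags
logits = torch.randn(8, num_labels)                      # raw model outputs, no sigmoid applied
targets = torch.randint(0, 2, (8, num_labels)).float()   # multi-hot ground truth

criterion = nn.BCEWithLogitsLoss()                       # sigmoid + binary cross entropy per label
loss = criterion(logits, targets)

probs = torch.sigmoid(logits)                            # only needed at inference time
preds = (probs > 0.5).int()                              # independent threshold per label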
A problem related to torch.nn.functional.log_softmax
I’m trying to calculate the negative log-likelihood for evaluation. ‘pred’ is already a probability vector, and for some reason I’m trying to get log(pred) with
log_probs = F.log_softmax(pred)
but the numerical result is very weird.
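That is expected: log_softmax applies another softmax to its input before taking the log, so feeding it values that are already probabilities does not give log(pred). A small sketch of the difference:

import torch
import torch.nn.functional as F

pred = torch.tensor([[0.7, 0.2, 0.1]])      # already a probability vector
target = torch.tensor([0])

wrong = F.log_softmax(pred, dim=-1)         # softmax is applied again first
right = torch.log(pred)                     # the intended log-probabilities

print(wrong[0, 0].item())                   # ≈ -0.768, not log(0.7)
print(right[0, 0].item())                   # ≈ -0.357 = log(0.7)
print(F.nll_loss(right, target).item())     # ≈ 0.357, the negative log-likelihood

log_softmax is meant for raw logits; when the input is already a probability vector, torch.log (after clamping away exact zeros if needed) gives the intended values.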
In PyTorch, the output of nn.Linear depends on the batch size. What’s wrong?
test = nn.Linear(1440, 1440, bias=False)
hidden1 = torch.randn(100, 1440)
hidden2 = torch.randn(400, 1440)
output1 = test(hidden1)
output2 = test(hidden2)
If I test it as above, shouldn’t output1 and output2[:100, :] be exactly the same? There are slightly different parts; do you know why? It should be the same as a simple matmul calculation, but it […]
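A sketch of the underlying effect with assumed shapes: running the same rows through nn.Linear in different batch sizes can produce results that differ by tiny floating-point amounts, because the matmul may pick different kernels and reduction orders per shape. The differences should vanish under torch.allclose.

import torch
import torch.nn as nn

torch.manual_seed(0)
linear = nn.Linear(1440, 1440, bias=False)
big = torch.randn(400, 1440)
small = big[:100].clone()                               # identical values, smaller batch

out_small = linear(small)
out_big = linear(big)[:100]

print(torch.equal(out_small, out_big))                  # may be False bitwise
print(torch.allclose(out_small, out_big, atol=1e-5))    # True within tolerance
print((out_small - out_big).abs().max().item())         # size of the discrepancy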
How does the interpolate method in PyTorch work?
I’ve been using torch.nn.functional.interpolate recently, but I don’t know how it works.
For example, I give an input y as follows:
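The poster’s y isn’t shown in this excerpt; here is an illustrative 1-D input instead. For linear interpolation, interpolate expects a (batch, channels, length) tensor and resamples along the last dimension.

import torch
import torch.nn.functional as F

y = torch.tensor([[[1.0, 2.0, 3.0, 4.0]]])               # illustrative input, shape (1, 1, 4)

up = F.interpolate(y, size=8, mode='linear', align_corners=False)
print(up.shape)                                           # torch.Size([1, 1, 8])
print(up)                                                 # new values interpolated between neighbours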
torch.cuda.is_available() returns False even after installing PyTorch with CUDA
I have recently installed PyTorch with CUDA support on my machine, but when I run torch.cuda.is_available(), it returns False. I verified my GPU setup using nvidia-smi, and it seems that my system recognizes the GPU correctly.
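nvidia-smi only proves that the driver sees the GPU; it says nothing about the installed PyTorch wheel. A quick diagnostic sketch is to check whether the wheel was built with CUDA at all:

import torch

print(torch.__version__)            # a '+cpu' suffix means a CPU-only build was installed
print(torch.version.cuda)           # None for CPU-only wheels, otherwise the CUDA version of the build
print(torch.cuda.is_available())
print(torch.cuda.device_count())

If torch.version.cuda is None, the usual cause is that a CPU-only wheel was installed and a CUDA-enabled build needs to be installed instead.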
CUDA not available in PyTorch after having the toolkit installed
This problem started a while ago when I uninstalled my CUDA toolkit version 12.4 and installed 11.2, since PyTorch had some issues with 12.4. But even with the new version installed, when I check, it shows CUDA as unavailable, even after I checked the availability in the cmd.
PyTorch: matrices are equal but give different results
How is this possible?