Expected all tensors to be on the same device
I know it’s because the tensors are on different devices, but I don’t know why.
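A minimal sketch of the usual cause and fix (the model and batch names here are placeholders, not from the question): the model's parameters end up on the GPU while the input batch stays on the CPU, so both must be moved to one explicitly chosen device.

```python
import torch

# Pick one device up front and move both the model and the inputs to it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # parameters now live on `device`
x = torch.randn(8, 4).to(device)           # inputs moved to the same device

out = model(x)                             # no device-mismatch error
print(out.device)                          # same device as model.weight
```

Forgetting either `.to(device)` call reproduces the "Expected all tensors to be on the same device" error whenever a GPU is present.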
"ValueError: too many values to unpack (expected 3)" in PyTorch
While coding LeNet-5 with PyTorch, a ValueError occurs.
This is the code for the dataset.
Frustrated with PyTorch data types in basic tensor operations, how can I make this easier?
I am new to PyTorch. I am quite frustrated with basic operations involving different data types,
torch.cuda.OutOfMemoryError when training model on GPU, but not for larger batch sizes on CPU
I am working on training a multimodal model in PyTorch. I have a training loop which runs just fine (albeit slowly) on my CPU (I tested up to batch size = 32). However, when I try to run it on a GPU (Tesla P40), it only works up to batch size = 2. With larger batch sizes it throws a torch.cuda.OutOfMemoryError. I am working with pre-embedded video and audio, and pre-tokenized text. Is it possible that the GPU really cannot handle batch sizes larger than 2, or could there be something wrong in my code? Do you have any advice on how I might go about troubleshooting? I apologize for this simple question; it is my first time working with a GPU cluster. I am running this code on my university’s GPU cluster and have double-checked that the GPU I am using is not being used by anyone else.
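One common workaround when a GPU fits only tiny batches is gradient accumulation: emulate a large batch by summing gradients over several micro-batches before stepping. This is a generic sketch with a stand-in model, not the questioner's training loop:

```python
import torch

model = torch.nn.Linear(16, 4)                  # stand-in for the real model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
data = torch.randn(32, 16)                      # one "large" batch of 32
accum_steps = 16                                # 16 micro-batches of size 2

opt.zero_grad()
for micro in data.chunk(accum_steps):           # micro-batches of size 2 each
    # Scale the loss so the accumulated gradient matches a batch-of-32 average.
    loss = model(micro).pow(2).mean() / accum_steps
    loss.backward()                             # gradients accumulate in .grad
opt.step()                                      # one update for the full batch
```

For troubleshooting the OOM itself, `torch.cuda.max_memory_allocated()` after one forward/backward pass at batch size 2 shows whether memory usage scales in a way that makes larger batches plausibly fit.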
I encountered a tensor dimension mismatch problem in textual inversion
I am trying to reproduce this project: https://github.com/feizc/Gradient-Free-Textual-Inversion, but I now have a problem:
Correct way to swap PyTorch tensors without copying
I have two PyTorch tensors x, y with the same dimensions. I would like to swap the data behind the two tensors, ideally without having to copy. The purpose of this is to have code elsewhere that holds onto the tensor x to now read & write the data y and vice-versa.
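One approach that matches the stated requirement (a sketch, with caveats): `Tensor.set_` repoints a tensor at another tensor's storage without copying any element data, so code elsewhere that holds a reference to `x` will afterwards read and write `y`'s data, and vice versa.

```python
import torch

x = torch.arange(4.0)           # [0., 1., 2., 3.]
y = torch.arange(4.0) + 10      # [10., 11., 12., 13.]

# Swap the underlying storages via a zero-element temporary; no data is copied.
# Caveat: set_ on leaf tensors that require grad will upset autograd, so this
# is only safe for plain data tensors.
tmp = torch.empty(0)
tmp.set_(x)                     # tmp now shares x's original storage
x.set_(y)                       # x now views y's storage
y.set_(tmp)                     # y now views x's original storage
```

After the swap, any pre-existing alias of `x` sees the values that used to be in `y`, because the storage itself was exchanged rather than the Python names.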
How to make the DataLoader return a tv_tensor instead of a normal tensor?
in a dataset I have target like this:
Memory leak when passing sequential image data into encoder
I have a model implemented like below to classify a sequence of images:
More efficient implementation of tensor indexing
I currently have a tensor X of shape (2,3,4,10), and an indexing vector Y of shape (4):
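The index expression the question is after is not shown, so the following is an assumed reading of the task: for each position i along dim 2, select entry Y[i] along the last dim, turning (2, 3, 4, 10) into (2, 3, 4). Advanced indexing vectorizes the obvious loop:

```python
import torch

X = torch.randn(2, 3, 4, 10)
Y = torch.tensor([0, 3, 7, 9])   # one column index per position along dim 2

# Loop version: clear, but creates a Python-level loop over dim 2.
loop = torch.stack([X[:, :, i, Y[i]] for i in range(4)], dim=2)

# Vectorized version: the two integer index tensors broadcast together, and
# because they sit in adjacent dims the result keeps shape (2, 3, 4).
vec = X[:, :, torch.arange(4), Y]

print(torch.equal(loop, vec))  # True
```

If the intended selection differs (e.g. indexing dim 2 rather than dim 3), the same arange-plus-index-tensor pattern applies with the tensors moved to the relevant positions.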
How to mask 3D tensor efficiently?
Say I have a tensor
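The tensor from the question is not shown, so here is a generic sketch of efficient 3D masking: zeroing out padded time steps of a (batch, time, dim) tensor given per-row valid lengths, using a broadcasted boolean mask instead of loops.

```python
import torch

x = torch.ones(2, 4, 3)              # (batch, time, dim)
lengths = torch.tensor([2, 3])       # valid steps per batch row

# (1, time) < (batch, 1) broadcasts to a (batch, time) boolean mask.
mask = torch.arange(4).unsqueeze(0) < lengths.unsqueeze(1)

# Unsqueeze to (batch, time, 1) so the mask broadcasts over the feature dim.
masked = x * mask.unsqueeze(-1)      # padded steps are now zero
```

The same `mask` also works with `x.masked_fill(~mask.unsqueeze(-1), 0.0)`, or with `x[mask]` to gather only the valid steps as a flat (n_valid, dim) tensor.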