One of the steps in my implementation is to resize each image to (448, 448). But even with the resize transform applied, the DataLoader still throws an exception about elements in the dataset having different sizes.
The exact error message: "RuntimeError: each element in list of batch should be of equal size"
from torchvision import datasets
from torchvision.transforms import v2, ToTensor
from torch.utils.data import DataLoader

# Resize every image to 448x448 before converting it to a tensor
validation_data = datasets.voc.VOCDetection(
    root='.DATA/',
    download=False,
    image_set="val",
    transform=v2.Compose([v2.Resize(size=(448, 448)), ToTensor()])
)

batch_size = 64
validation_dataloader = DataLoader(validation_data, batch_size=batch_size)

for X, y in validation_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    break
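My suspicion is that the images themselves are fine after the resize, and it's the annotation targets (nested dicts whose contents vary per image) that the default collate function can't merge. Would a custom collate_fn along these lines be the right fix? This is just a sketch, assuming the targets can stay as a plain Python list instead of being batched:

import torch

def collate_fn(batch):
    # Each dataset item is (image, target); stack the equally-sized
    # image tensors, but keep the variable-sized target dicts in a list
    images, targets = zip(*batch)
    return torch.stack(images), list(targets)

validation_dataloader = DataLoader(
    validation_data,
    batch_size=batch_size,
    collate_fn=collate_fn,
)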