I wrote a function to compute the validation accuracy:
import torch

def evaluation(loader, model, device):
    model.eval()
    model.to(device)
    correct = 0
    total = len(loader.dataset)
    for data in loader:
        with torch.no_grad():
            inputs, labels = data[0].to(device), data[1].to(device)
            pred = model(inputs)
            pred = pred.argmax(dim=1)
            correct += pred.eq(labels).sum().item()
    acc = correct / total
    return acc
But when I tested it, I found that the for loop is being skipped. Why?
My loader and dataset are shown in this picture:
The dataloader code:

import random
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

data_list = list(range(0, len(dataset)))
val_list = random.sample(data_list, int(len(dataset) * val_split))
trainset = [dataset[i] for i in data_list if i not in val_list]
valset = [dataset[i] for i in data_list if i in val_list]

# Weighted sampling
if weighted_sampling:
    label_count = Counter([int(data[1]) for data in dataset])
    weights = [100 / label_count[int(data[1])] for data in trainset]
    sampler = WeightedRandomSampler(weights, num_samples=len(trainset), replacement=True)
    train_loader = DataLoader(trainset, batch_size=batch_size, sampler=sampler, drop_last=True)
else:
    train_loader = DataLoader(trainset, batch_size=batch_size, shuffle=False, drop_last=True)
val_loader = DataLoader(valset, batch_size=batch_size, shuffle=True, drop_last=True)
I have tried to check my dataset and dataloader, but I can't find anything wrong.
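One check worth sketching here: with drop_last=True, a DataLoader yields len(dataset) // batch_size batches, so a dataset smaller than one batch yields no batches at all and the loop body never runs. The arithmetic can be reproduced without torch (the sizes below are made-up examples, not my actual ones):

```python
def num_batches(dataset_size, batch_size, drop_last=True):
    """Number of batches a DataLoader yields for a given dataset size.

    With drop_last=True the trailing partial batch is discarded,
    so the count is the integer quotient; otherwise the remainder
    forms one extra (smaller) batch.
    """
    full, remainder = divmod(dataset_size, batch_size)
    if drop_last or remainder == 0:
        return full
    return full + 1

# A validation set smaller than batch_size produces zero batches
# when drop_last=True, i.e. an empty loader:
print(num_batches(20, 32))                    # drop_last=True  -> 0
print(num_batches(20, 32, drop_last=False))   # -> 1
print(num_batches(64, 32))                    # -> 2
```

So if len(valset) happened to be smaller than batch_size, the evaluation loop would be skipped silently.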
I'd appreciate an explanation of the reason, thanks!