I’m currently working with PyTorch and NumPy, and I ran into a weird issue. When I run the program, which is simply a PyTorch autoencoder for the MNIST digit dataset, on my Windows PC (with CUDA), it works just fine. However, when I try to run it on my MacBook with either CPU or MPS, the program freezes whenever I convert two particular tensors to NumPy arrays and check whether they are equal.
The exact code snippet is:
with torch.no_grad():
    for inputs, labels in tqdm(
        test_loader,
        total=len(test_loader),
        desc=f"Epoch #{epoch_id + 1}/{total_epochs}",
        leave=False,
        position=1
    ):
        inputs, labels = inputs.to(DEVICE), labels.to(DEVICE)
        # Compute the model's prediction
        outputs = model(inputs)
        # Compute the loss
        loss = criterion(outputs, labels)
        # Update the running loss
        running_loss += loss.item() * labels.size(0)
        all_predictions.extend(np.argmax(outputs.cpu().numpy(), axis=1))
        all_labels.extend(labels.cpu().numpy())
# noinspection PyTypeChecker
running_loss /= len(test_loader.dataset)
accuracy = np.average(np.array(all_predictions) == np.array(all_labels))
Basically, when I try to compute the accuracy after an epoch on the test set finishes, the program crashes. Looking at Activity Monitor, the memory usage skyrockets (up to ~20 GB for Python alone), even though the arrays contain only ~100k elements each, which should be trivial to compare.
When I create two arrays of the same length (100k) in a Python console and compare them, everything works just fine.
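For reference, this is essentially the check I ran in the console (a minimal sketch; the values are random placeholders, only the length matches my real data):

import numpy as np

# Two arrays of the same length as my real predictions/labels (~100k)
a = np.random.randint(0, 10, size=100_000)
b = np.random.randint(0, 10, size=100_000)

# Element-wise comparison plus mean, same as my accuracy computation
accuracy = np.average(a == b)
print(accuracy)  # finishes instantly, no memory spike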
Any assistance would be appreciated!