I am training a model on a Kaggle GPU, using the code and dataset from the book Deep Learning with Python, Second Edition. Kaggle's TensorFlow version is 2.15.0 and its Keras version is 3.4.1.
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
callbacks = [
    keras.callbacks.ModelCheckpoint("/kaggle/working/checkpoint.model.keras",
                                    save_best_only=True)
]
history = model.fit(train_input_imgs, train_targets,
                    epochs=50,
                    callbacks=callbacks,
                    batch_size=64,
                    validation_data=(val_input_imgs, val_targets))
I want to save the best model with ModelCheckpoint, download it to a directory on my computer, and then use it in a local Jupyter notebook. In my local conda environment, the tensorflow-gpu version is 2.6.0, and Keras is 2.6.0 as well.
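Given that version gap, I suspect the two versions write different on-disk formats. As far as I understand (this is an assumption on my part), Keras 3 saves ".keras" files as a zip archive, while the load_model in Keras 2.6 only reads HDF5 files or SavedModel directories. A small stdlib-only check I used on the downloaded checkpoint:

```python
import os
import zipfile

def is_keras3_zip(path):
    # Keras 3 writes ".keras" checkpoints as a zip archive; Keras 2.6's
    # load_model expects HDF5 (or a SavedModel directory), so a
    # zip-format checkpoint cannot be opened by the older version.
    return os.path.isfile(path) and zipfile.is_zipfile(path)
```

If this returns True for the downloaded file, the checkpoint is in the newer zip-based format.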
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.utils import array_to_img

# Use a raw string so the backslashes in the Windows path are not
# interpreted as escape sequences.
model = keras.models.load_model(r"F:\deeplearning\model_segmentation\model\checkpoint.model.keras")

i = 4
test_image = val_input_imgs[i]
plt.axis("off")
plt.imshow(array_to_img(test_image))

mask = model.predict(np.expand_dims(test_image, 0))[0]

def display_mask(pred):
    mask = np.argmax(pred, axis=-1)
    mask *= 127
    plt.axis("off")
    plt.imshow(mask)

display_mask(mask)
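For reference, here is what the argmax-and-scale step in display_mask does, as a NumPy-only sketch with a toy prediction (the three-class setup matches the book's Oxford-IIIT Pets segmentation example; the toy array is my own):

```python
import numpy as np

def mask_to_display(pred):
    # Collapse a (height, width, num_classes) probability map into a
    # grayscale mask: argmax picks the winning class per pixel, and
    # multiplying by 127 spreads the labels 0, 1, 2 over 0, 127, 254
    # so the three classes are visually distinguishable.
    mask = np.argmax(pred, axis=-1)
    return mask * 127

# Toy 1x2 "prediction" over 3 classes: the first pixel favors class 2,
# the second favors class 0.
pred = np.array([[[0.1, 0.2, 0.7],
                  [0.8, 0.1, 0.1]]])
print(mask_to_display(pred))  # classes [2, 0] scaled to [254, 0]
```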
That is where the problem comes up. I have tried downgrading the TensorFlow version on Kaggle, but the GPU cannot be used with the lower version, which is obviously not what I want. This is my first time asking a question on Stack Overflow; I would be grateful if anyone could help me.