I am trying to load a Keras model into my Colab environment using the Kaggle API, but after downloading and extracting the model I keep getting errors saying the model cannot be read because the format is wrong.
# Move my Kaggle API key to the required directory
!mkdir -p ~/.kaggle
!mv /content/kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle models instances versions download tensorflow/resnet-50/tensorFlow2/feature-vector/1
import tarfile

# Extract the downloaded archive into the current directory
with tarfile.open('/content/resnet-50.tar.gz', 'r:gz') as tar:
    tar.extractall()
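As a side note, listing the archive's members before extracting makes it obvious where saved_model.pb will land. This is a minimal sketch of that idea; the demo archive built here is a throwaway stand-in for resnet-50.tar.gz, and `inspect_and_extract` is a helper of my own, not part of any library:

```python
import os
import tarfile
import tempfile

def inspect_and_extract(archive_path, dest_dir):
    # List the archive's members before extracting, so the
    # SavedModel's directory layout is known up front
    with tarfile.open(archive_path, 'r:gz') as tar:
        names = tar.getnames()
        tar.extractall(dest_dir)
    return names

# Throwaway demo archive standing in for resnet-50.tar.gz
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'model')
os.makedirs(os.path.join(src, 'variables'))
open(os.path.join(src, 'saved_model.pb'), 'w').close()
archive = os.path.join(tmp, 'demo.tar.gz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(src, arcname='model')

members = inspect_and_extract(archive, os.path.join(tmp, 'out'))
print(sorted(members))
```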
After downloading and extracting the tar.gz file in the Colab environment, the model file is saved_model.pb.
import tensorflow as tf

dir_path = '/content/variables'
!ls /content/variables
model = tf.keras.models.load_model(dir_path)
But I keep getting this error:
saved_model.pb variables.data-00000-of-00001 variables.index
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-f8210f38ec52> in <cell line: 3>()
1 dir_path ='/content/variables'
2 get_ipython().system('ls /content/variables')
----> 3 model = tf.keras.models.load_model(dir_path)
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_api.py in load_model(filepath, custom_objects, compile, safe_mode)
197 )
198 else:
--> 199 raise ValueError(
200 f"File format not supported: filepath={filepath}. "
201 "Keras 3 only supports V3 `.keras` files and "
ValueError: File format not supported: filepath=/content/variables. Keras 3 only supports V3 `.keras` files and legacy H5 format files (`.h5` extension). Note that the legacy SavedModel format is not supported by `load_model()` in Keras 3. In order to reload a TensorFlow SavedModel as an inference-only layer in Keras 3, use `keras.layers.TFSMLayer(/content/variables, call_endpoint='serving_default')` (note that your `call_endpoint` might have a different name).
I tried `TFSMLayer` as suggested by the ValueError:
tf.keras.layers.TFSMLayer(dir_path, call_endpoint='serve', trainable=False)
And I got this error:
'''
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
91 try:
--> 92 return CheckpointReader(compat.as_bytes(filepattern))
93 # TODO(b/143319754): Remove the RuntimeError casting logic once we resolve the
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /content/variables/variables/variables
During handling of the above exception, another exception occurred:
NotFoundError Traceback (most recent call last)
9 frames
NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /content/variables/variables/variables
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/tensorflow/python/saved_model/load.py in load_partial(export_dir, filters, tags, options)
1043 ckpt_options, options, filters)
1044 except errors.NotFoundError as err:
-> 1045 raise FileNotFoundError(
1046 str(err) + "\n You may be trying to load on a different device "
1047 "from the computational device. Consider setting the "
FileNotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /content/variables/variables/variables
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.
'''
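For completeness, since both errors seem to hinge on which directory I point the loader at, here is a small helper of my own (not part of Keras or the Kaggle CLI) that walks the extraction directory and returns whichever folder directly contains saved_model.pb, the path that both `load_model` and `TFSMLayer` expect. It is shown here against a throwaway demo layout:

```python
import os
import tempfile

def find_saved_model_root(base_dir):
    # Return the first directory under base_dir that directly
    # contains saved_model.pb -- the path the loader expects
    for root, _dirs, files in os.walk(base_dir):
        if 'saved_model.pb' in files:
            return root
    raise FileNotFoundError(f'no saved_model.pb under {base_dir}')

# Demo layout standing in for the extracted archive
tmp = tempfile.mkdtemp()
nested = os.path.join(tmp, 'content', 'variables')
os.makedirs(nested)
open(os.path.join(nested, 'saved_model.pb'), 'w').close()

root = find_saved_model_root(tmp)
print(root)
```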