I have the following training setup using TensorFlow v1's Estimator API:
<code>config = tf.estimator.RunConfig(
    tf_random_seed=seed,
    log_step_count_steps=log_every_n_iter,
    save_checkpoints_steps=save_checkpoints_steps,
    session_config=gpu_config,
)
</code>
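For reference, this config is consumed when constructing the estimator, roughly as follows (model_fn here is a placeholder for my actual model function):
<code># placeholder sketch: model_fn stands in for my real model function
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir=model_dir,
    config=config,
)
</code>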
I followed some guides, like:
- Migrate from estimator to Keras API
- Migrate checkpoint saving
- Migrate canned estimators
and tried to migrate some of the config options to Keras, resulting in the following code:
<code># use the TF2 tf.keras namespace; the migration targets Keras, not compat.v1
tf.keras.utils.set_random_seed(seed)
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=model_dir,
    # an integer save_freq counts batches, mirroring save_checkpoints_steps
    save_freq=save_checkpoints_steps,
    save_best_only=False,
)
</code>
Note: I don't know yet whether this works, as I still have to migrate the models and attach the callback during training (sketched below).
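The plan is to pass the callback to fit, roughly like this (model, train_dataset, and num_epochs are placeholders):
<code># placeholder sketch: model and train_dataset stand in for my migrated model and input pipeline
model.fit(
    train_dataset,
    epochs=num_epochs,
    callbacks=[model_checkpoint_callback],
)
</code>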
The questions are: is set_random_seed the correct way of doing it? And what about log_every_n_iter? How can I replicate that behaviour in Keras?