When fitting my TensorFlow (2.15) model, I get the following log output:
Epoch 2/10
32/32 - 0s - loss: 17.3287 - 99ms/epoch - 3ms/step
Epoch 3/10
32/32 - 0s - loss: 16.9345 - 123ms/epoch - 4ms/step
and so on… I want to add a timestamp to these "logging" messages (I don't think this is actual logging, since changing the logger format did not change anything in the messages for me). Anyway, I thought I could just use an on_epoch_end callback and create my own logging message, but with that I cannot access runtime information like the ms/step metric.

What I am asking: can I add a timestamp to the messages of fit? If not, where would I find the training time per epoch in an on_epoch_end callback? Here is my current attempt:
import logging
import sys
import keras
import numpy as np
import tensorflow as tf
# logging setup
logger = tf.get_logger()
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)  # make sure INFO messages from the callback are emitted
class LoggerTensorflow(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        epoch_string = f"Epoch {epoch + 1} / {self.params['epochs']}"
        logs_formatted = " - ".join([f"{k}: {v:.4f}" for k, v in logs.items()])
        logger.info(f"{epoch_string} - {logs_formatted}")
        return super().on_epoch_end(epoch, logs)
# Generate some fake data: 1000 samples with 1 feature.
np.random.seed(42) # for reproducible results
x_data = np.random.rand(1000, 1) # features
y_data = 3 * x_data + 2 + np.random.normal(0, 0.05, (1000, 1)) # targets with noise
# Build a simple linear regression model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
# Compile the model with a mean squared error loss (optimizer left at its default)
model.compile(loss="mse")
# Train the model, registering the custom callback so it actually runs
model.fit(x_data, y_data, epochs=10, verbose=2, callbacks=[LoggerTensorflow()])
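For reference, this is the rough fallback I had in mind if there is no built-in way: timing each epoch myself in the callback with time.perf_counter() and deriving an approximate ms/step from self.params. TimedLogger is just a placeholder name I made up, and it reuses the logger set up above; I would still prefer to reuse the numbers fit itself reports.

import time

class TimedLogger(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Remember when the epoch started.
        self._epoch_start = time.perf_counter()

    def on_epoch_end(self, epoch, logs=None):
        elapsed = time.perf_counter() - self._epoch_start  # seconds spent in this epoch
        steps = self.params.get("steps")  # may be None depending on how fit was called
        per_step = f" - {1000 * elapsed / steps:.0f}ms/step" if steps else ""
        logs_formatted = " - ".join(f"{k}: {v:.4f}" for k, v in (logs or {}).items())
        logger.info(
            f"Epoch {epoch + 1}/{self.params['epochs']} - {logs_formatted}"
            f" - {1000 * elapsed:.0f}ms/epoch{per_step}"
        )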