Below is my code, in which I have defined a custom loss function. The problem is that I am getting a NaN loss.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
def custom_loss(y_true, y_pred, X_train):
    cust_loss = tf.reduce_mean(tf.square(y_true - (tf.math.pow(X_train, y_pred))))
    tf.print(cust_loss)
    return cust_loss

if __name__ == '__main__':
    X_train = tf.random.normal(shape=(100, 1))
    y_train = tf.math.square(X_train)

    input_features = Input(shape=(1,))
    dense1 = Dense(10, activation='relu')(input_features)
    output = Dense(1)(dense1)

    model = Model(inputs=input_features, outputs=output)
    model.compile(optimizer=Adam(), loss=lambda y_true, y_pred: custom_loss(y_true, y_pred, X_train))
    model.fit(X_train, y_train, epochs=10, batch_size=100)
This is an example of the output that I am seeing.
Epoch 1/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 578ms/step - loss: nan
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 621ms/step - loss: nan
I am expecting to get a non-NaN loss. Where am I going wrong?
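One thing I checked on my own, though I have not confirmed it is actually what happens inside model.fit: tf.math.pow with a negative base and a non-integer exponent evaluates to NaN in floating point, and my X_train is drawn from a normal distribution, so it contains negative values. A minimal standalone sketch of that check (this snippet is my own diagnostic, not part of the model code above):

import tensorflow as tf

# pow with a negative base and a non-integer exponent is NaN in floating point
print(tf.math.pow(tf.constant(-2.0), tf.constant(0.5)))  # tf.Tensor(nan, shape=(), dtype=float32)

# X_train comes from a normal distribution, so roughly half of its values are
# negative; any such element raised to a non-integer y_pred would make the
# reduce_mean in custom_loss NaN
X_train = tf.random.normal(shape=(100, 1))
print(tf.reduce_any(X_train < 0.0))  # almost always True

If this is indeed the cause, I am not sure what the right way is to express the loss I want.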