Using TensorFlow, I have trained a multi-output regression model, and when I run model.evaluate(X, Y) I get an accuracy of 1.0. Suspicious of perfection, I tried model.predict(X), expecting the output to closely match Y. Instead, it is only ~55% similar to Y.
My questions:
- How does model.evaluate() determine accuracy for regression models?
- Since I am trying to train a multi-output regression model, is there anything special I need to do to prevent TensorFlow from interpreting this as a classification problem?
Below is a snippet from my code:
```python
from datetime import datetime

import tensorflow as tf

model = simple_model()
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

logdir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)

history = model.fit(X, Y, epochs=3, callbacks=[tensorboard_callback])
test_loss, test_acc = model.evaluate(X, Y, verbose=2)
print(test_acc)
```
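To make the behavior easy to reproduce without my full code, here is a self-contained toy version of the setup above. The random data and the one-layer model are hypothetical stand-ins for my real X, Y, and simple_model(), but the compile/fit/evaluate calls mirror the snippet:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for my real data: 4 input features, 2 continuous outputs
X = np.random.rand(8, 4).astype("float32")
Y = np.random.rand(8, 2).astype("float32")

# One-layer stand-in for simple_model()
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Same compile settings as in my snippet above
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

model.fit(X, Y, epochs=3, verbose=0)
loss, acc = model.evaluate(X, Y, verbose=0)
print(acc)  # an "accuracy" is reported even though the targets are continuous
```

Even on this toy regression data, evaluate() happily reports an accuracy value, which is the part I find confusing.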
Thank you for taking the time to read this.
Best Regards!
Things that I have tried:
Because the accuracy is reported as 1.0 (i.e., 100%), I suspected that model.evaluate() is somehow treating my model as a classification problem when I intended it to be a regression problem. Running model.predict() on the same data that was allegedly 100% accurate did not give good results.
I believe there may be some subtle aspect of how my model is structured that forces it to behave as a classification model when I really want a regression model.
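For comparison, this is roughly what I would have expected a regression-style setup to look like: a minimal sketch, assuming MSE loss and MAE as the metric (the standard Keras choices for regression), with hypothetical random data standing in for my real X and Y:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for my real multi-output regression data
X = np.random.rand(8, 4).astype("float32")
Y = np.random.rand(8, 2).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Regression-style compile: MSE loss and MAE metric
# instead of crossentropy and accuracy
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="mse",
              metrics=["mae"])

model.fit(X, Y, epochs=3, verbose=0)
loss, mae = model.evaluate(X, Y, verbose=0)
print(mae)  # mean absolute error, in the units of Y
```

Is switching the loss and metric like this the right way to keep the model a regression model, or is there something else in my setup that matters?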