I notice a huge performance difference when running model.predict(x) on a large dataset (~10 million rows).
Code 1:
x = tf.convert_to_tensor(df)
y = model.predict(x)
Code 2:
y = model.predict(df)
The second way is a lot slower than the first. In fact, with the second way there is a long delay before the model even starts predicting, so I assumed the conversion itself was slow. However, tf.convert_to_tensor on its own runs very fast, so I am not sure what causes the difference.
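To see where the time goes, the two steps can be timed separately with a small helper like this (a sketch; `model` and `df` are assumed to be the model and DataFrame from above, and `timed` is just an illustrative wrapper):

```python
import time

def timed(label, fn):
    """Run fn(), print how long it took, and return its result."""
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.3f}s")
    return out

# Hypothetical usage with the code from the question:
# x = timed("convert_to_tensor", lambda: tf.convert_to_tensor(df))
# y1 = timed("predict(tensor)", lambda: model.predict(x))
# y2 = timed("predict(df)", lambda: model.predict(df))
```

Timing each call in isolation like this shows whether the extra time is spent before prediction starts (input handling) or during the prediction loop itself.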