I have a model that is performing really well; I'm using XGBoost for it. But when I look at the feature importances, they don't make sense to me. As I understand it, feature importance reflects how the model is understanding the problem in order to make its predictions.
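To be concrete, this is roughly how I'm inspecting the importances. The data, column names, and parameters below are just placeholders standing in for my real setup, and I'm not sure which importance metric the sklearn wrapper reports by default in my version:

```python
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_classification

# Placeholder data so the snippet runs; my real dataset is different
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(X.shape[1])])

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, random_state=0)
model.fit(X, y)

# The importances exposed by the sklearn wrapper
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))

# The ranking can also differ depending on which importance metric you ask for
booster = model.get_booster()
print(booster.get_score(importance_type="weight"))
print(booster.get_score(importance_type="gain"))
```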
So, if the model's understanding of the problem doesn't have a logical explanation, can I trust the model?
I'm thinking that maybe I need more data to confirm whether the feature importances keep the same order or whether they change. Am I right?
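What I had in mind for that check is something like refitting the same model on different resamples of the data I already have and seeing how stable the importance ranking is. This bootstrap-style sketch is just my assumption of how to do it, not something I've validated:

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_classification

# Placeholder data again; my real dataset is different
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(X.shape[1])])

rankings = []
rng = np.random.default_rng(0)
for i in range(20):
    # Bootstrap resample of the rows, then refit and record the importance ranking
    idx = rng.choice(len(X), size=len(X), replace=True)
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4, random_state=i)
    model.fit(X.iloc[idx], y[idx])
    rankings.append(
        pd.Series(model.feature_importances_, index=X.columns).rank(ascending=False)
    )

ranks = pd.concat(rankings, axis=1)
print(ranks.mean(axis=1).sort_values())  # average rank per feature
print(ranks.std(axis=1).sort_values())   # how much each feature's rank moves around
```

Would something like this tell me anything useful, or do I really need genuinely new data for the ordering to be trustworthy?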