I am just asking this out of genuine curiosity, and also to get a grasp on how LLMs actually work. I am fascinated and at the same time overwhelmed by the pace of AI development, and I try to stay up to date as much as possible. One thought I had recently is that if LLMs keep getting better every day, they will be able to tackle more difficult problems over time. What I would expect from a system like this is to at least be aware of its own current limitations, but that does not seem to be the case. I am not talking about asking it for mere facts that it may or may not have in its training data. I am talking about really specific and complex questions in mathematics, coding, or physics, which ChatGPT will likely be able to solve in the future but cannot yet. Why does it prefer to give a wrong answer instead of saying ‘I DON'T KNOW’?