I’ve found it surprisingly hard to get input and output token counts from LangChain after invoking an LLM. Every LLM-specific library I’ve worked with exposes these with a simple method call on the response object.
Can anyone share a code example of what I hope is an equally simple method call to get input and output token counts from an LLM response in LangChain?
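For reference, here is the kind of thing I’m hoping exists. This is a sketch, not a confirmed answer: it assumes langchain-core ≥ 0.2, where chat models attach a `usage_metadata` dict to the `AIMessage` returned by `.invoke()`. The model name, the prompt, and the `token_counts` helper are all placeholders of my own, and the real-model call only runs when an `OPENAI_API_KEY` is set:

```python
import os

def token_counts(message):
    """Return (input_tokens, output_tokens) from a LangChain chat response.

    langchain-core >= 0.2 chat models attach a `usage_metadata` dict
    (keys: input_tokens, output_tokens, total_tokens) to the AIMessage
    they return from .invoke(). Falls back to (0, 0) if it is missing.
    """
    usage = getattr(message, "usage_metadata", None) or {}
    return usage.get("input_tokens", 0), usage.get("output_tokens", 0)

# Only hit a real model when credentials are present; the model name and
# prompt below are placeholders, not something from a specific setup.
if os.environ.get("OPENAI_API_KEY"):
    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

    response = ChatOpenAI(model="gpt-4o-mini").invoke("Tell me a joke")
    input_tokens, output_tokens = token_counts(response)
    print(f"input={input_tokens} output={output_tokens}")
```

If I understand the docs correctly, OpenAI-backed models also report counts under `response.response_metadata["token_usage"]`, and there is a `get_openai_callback()` context manager in `langchain_community.callbacks` that accumulates token usage across calls, but I’d welcome confirmation of the idiomatic approach.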