Getting Input and Output token counts with LangChain
I haven't found a straightforward way to get input and output token counts from LangChain when calling an LLM. Every LLM-specific library I've worked with exposes this with a simple call on the response after invoking the model.
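One option, sketched below under the assumption of an OpenAI chat model (the model name is just an example), is LangChain's `get_openai_callback` context manager, which tallies usage for OpenAI calls made inside the `with` block; recent versions also attach a `usage_metadata` dict to the response message itself:

```python
# Minimal sketch, assuming an OpenAI chat model behind ChatOpenAI.
from langchain_openai import ChatOpenAI
from langchain_community.callbacks import get_openai_callback

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name, an assumption

# Tally token usage for every OpenAI call made inside the context manager
with get_openai_callback() as cb:
    response = llm.invoke("Explain tokenization in one sentence.")

print(cb.prompt_tokens)      # input tokens
print(cb.completion_tokens)  # output tokens
print(cb.total_tokens)

# Newer langchain-core versions also report usage on the message itself
print(response.usage_metadata)  # e.g. {'input_tokens': ..., 'output_tokens': ..., ...}
```

The callback approach has the advantage of aggregating usage across multiple calls (for example, every call inside a chain run), while `usage_metadata` gives per-response counts.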
Use batched output as input of another chain
I have a set of questions I want a language model to answer:
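A hedged sketch of one way to wire this up: run the questions through a first chain with `.batch()`, then join the batched outputs into the single input a second chain expects. The prompts, model name, and sample questions here are all assumptions for illustration.

```python
# Hypothetical sketch: batch a question-answering chain, then feed its
# collected outputs into a second (summarizing) chain.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name, an assumption

# First chain: answer a single question
answer_chain = (
    ChatPromptTemplate.from_template("Answer concisely: {question}")
    | llm
    | StrOutputParser()
)

# Second chain: consume the combined answers as one input
summary_chain = (
    ChatPromptTemplate.from_template("Summarize these answers:\n{answers}")
    | llm
    | StrOutputParser()
)

questions = ["What is a token?", "What is a chain?"]  # example questions

# .batch() runs the first chain over all inputs (in parallel where possible)
answers = answer_chain.batch([{"question": q} for q in questions])

# Join the batched outputs into the single input of the second chain
summary = summary_chain.invoke({"answers": "\n".join(answers)})
print(summary)
```

Joining with `"\n".join(...)` is just one choice; if the second chain should instead run once per answer, you could call `summary_chain.batch(...)` on the list directly.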