I’m new to LangChain and currently experimenting with the SemanticSimilarityExampleSelector to avoid adding unnecessary examples to the prompt. However, I’ve run into an issue when trying to integrate an ExampleSelector with a ChatPromptTemplate.
Here’s what I’m trying to achieve:
I want to use an ExampleSelector within a ChatPromptTemplate to dynamically select relevant examples based on the input.
The implementation works fine without the ExampleSelector, but as soon as I add the selector, I get the following error:
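For context, the idea behind semantic-similarity example selection can be sketched in plain Python. This is a toy stand-in, not LangChain's API: a bag-of-words `Counter` plays the role of a real embedding model, and `select_examples` mimics what `SemanticSimilarityExampleSelector` does with a vector store (all names here are illustrative):

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(examples, query, k=2):
    # Rank stored examples by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(
        examples,
        key=lambda ex: cosine(embed(ex["input"]), q),
        reverse=True,
    )
    return ranked[:k]

examples = [
    {"input": "extract the date from the invoice", "output": "2024-01-05"},
    {"input": "translate hello to French", "output": "bonjour"},
    {"input": "extract the total amount from the invoice", "output": "$42"},
]
print(select_examples(examples, "extract the invoice total", k=2))
```

Only the top-`k` most similar examples end up in the prompt, which is the whole point of using a selector instead of hard-coding every example.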
Error Message:
`Error in prompt: 'input_text'`. The exception is apparently raised before the final prompt is built, since the `final prompt=` line never gets printed.
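An error message that quotes a template variable name like `'input_text'` usually means the template was formatted without that variable being supplied, so a `KeyError` bubbles up. Plain `str.format` shows the same failure mode (a minimal stand-in for illustration, not LangChain code):

```python
template = "Summarize: {input_text}"

# Formatting with the variable supplied works fine.
print(template.format(input_text="some document"))

# Formatting with no arguments raises KeyError: 'input_text'.
# Caught and printed, it reproduces the quoted error string above.
try:
    template.format()
except KeyError as e:
    print(f"Error in prompt: {e}")
```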
Here is the code:
```python
def extract_information(input_text, examples, model):
    try:
        active_model = LLMManager.get_instance(model=model)
        print(f"Examples={examples}")
        embedding = FastEmbedEmbeddings()
        example_selector = SemanticSimilarityExampleSelector.from_examples(
            examples,
            embedding,
            FAISS,
            k=4,
        )
        few_shot_prompt = FewShotChatMessagePromptTemplate(
            example_selector=example_selector,
            example_prompt=ChatPromptTemplate.from_messages(
                [("human", "{input}"), ("ai", "{output}")]
            ),
        )
        print("loaded model")
        base_prompt = f"""Role: You are a helpful assistant
Input Text: "{input_text}"\n
output: """
        final_prompt = (
            SystemMessagePromptTemplate.from_template(base_prompt)
            + few_shot_prompt
            + HumanMessagePromptTemplate.from_template("{input_text}")
        )
        print(f"final prompt={final_prompt.format()}")
        chain = final_prompt | active_model
        result = chain.invoke({"input_text": input_text})
        print(f"result={result}")
    except Exception as e:
        print(f"Error in prompt: {e}")
        return None
```
System Info
- OS: Linux
- Python, latest `langchain` package

Resources consulted so far: LangChain documentation, GitHub page, code review.