I’m trying to set up a PydanticOutputParser
instance at the end of a RAG LCEL chain, but am receiving the error
TypeError: argument 'text': 'dict' object cannot be converted to 'PyString'
This is the associated code:
from typing import List

from langchain_core.runnables import (
    RunnableParallel,
    RunnablePassthrough
)
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import (
    BaseModel,
    Field
)
from langchain_core.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings
class Fee(BaseModel):
    fee_subject: str = Field(description="The subject to which the fee relates.")
    fee_amount: float = Field(description="The dollar cost of the fee.")

class Fees(BaseModel):
    fees: List[Fee] = Field(description="List of fees.")
vectorstore = Milvus.from_texts(
    texts=all_texts,
    embedding=OpenAIEmbeddings(),
    connection_args={"uri": URI},
    drop_old=True
)
retriever = vectorstore.as_retriever()
pydantic_output_parser = PydanticOutputParser(pydantic_object=Fees)
test_prompt = """
You are a fee-finding support assistant. Your job is to find any applicable fees relating to a person's query.
Return the fee and fee amount related to each part of a person's query.
If you don't find anything, then return $0. Do not make up fees. You are given supporting context to pull information from along with the original question.
\n{format_instructions}\n
Question: {question}
Context: {context}
Answer: """
test_prompt_template = PromptTemplate(
    template=test_prompt,
    input_variables=['question', 'context'],
    partial_variables={"format_instructions": pydantic_output_parser.get_format_instructions()}
)
retrieval = RunnableParallel(
    {'context': retriever, 'question': RunnablePassthrough()}
)
model = Ollama(
    model="llama3",
    temperature=0
)
str_output_parser = StrOutputParser()
chain = retrieval | test_prompt_template | model | pydantic_output_parser
question = "I have a shipment being delivered to an airport. What amount in fees can I expect from shipping with XPO?"
output = chain.invoke({"question": question})
The error is happening when I invoke the chain. What am I missing here?
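For reference, this is roughly the structure I am hoping to get back from the chain, based on the Fees model above (the values are made up for illustration):

# Hypothetical result I am hoping the chain produces (illustrative values only)
Fees(fees=[Fee(fee_subject="airport delivery", fee_amount=25.0)])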
FYI, I have {format_instructions} in the prompt because that is what I did in a previous piece of code, but I’m not sure whether that is correct in this context.
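In case it is relevant, my understanding is that the parser itself expects a plain JSON string matching the schema, so something like this minimal check (with a hand-written, made-up JSON string) is the behaviour I have in mind:

# Standalone sanity check of the parser on a hand-written JSON string
# (the fee values here are made up; only the shape matters)
sample = '{"fees": [{"fee_subject": "airport delivery", "fee_amount": 25.0}]}'
parsed = pydantic_output_parser.parse(sample)
# parsed -> Fees(fees=[Fee(fee_subject='airport delivery', fee_amount=25.0)])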