I have a function that takes a language model, a vector store, a question, and tools, and returns a response. At the moment the tools argument is not being used because, based on this example, `.bind_tools`
is not an attribute of the llm.
The llm is created like this:
## Bedrock Client
import boto3
# import paths assumed for recent langchain versions; older releases use langchain.embeddings / langchain.llms
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.llms import Bedrock

bedrock_client = boto3.client(service_name="bedrock-runtime", region_name="us-west-2")
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1", client=bedrock_client)
llm = Bedrock(model_id="anthropic.claude-v2:1", client=bedrock_client,
              model_kwargs={"max_tokens_to_sample": 512})
Without changing the LLM to `ChatOpenAI` as in the reference example, how do I bind a tool to the LangChain Bedrock LLM?
I have also tried tool rendering, but it is not working. Below is my main get_response function:
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

def get_response(llm, vectorstore, question, tools):
    ## create prompt/template; this guides the AI on what to look out for and how to answer
    prompt_template = """
System: You are a helpful AI bot; your name is Alex. You are to provide information to humans based on the FAQ and user information.
From the user information provided, extract the user's firstName and lastName from the JSON payload and recognize that as the person's name.
Use currencyVerificationData to determine the number of currency accounts the user has; they are approved if the status is VALID. Any other status indicates the user is not yet approved and needs to provide more information for validation.
Use bankFilledData as the user's beneficiaries; from that section of the payload you can extract each beneficiary's bankName and bankAccountNumber.
Use accountDetails for bank account detail information.
Human: Please use the given context to provide a concise answer to the question.
If you don't know the answer, just say that you don't know; don't try to make up an answer.
If you need clarity, ask more questions. Do not refer to the JSON payload when answering questions; just use the values you retrieve from the payload.
<context>
{context}
</context>
The way you use the information is to identify the user's name and use it in the response.
Question: {question}
Assistant:"""

    # llm.bind_tools(tools)  # not working: AttributeError, attribute not found
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "question", "user_information"]
    )
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectorstore.as_retriever(
            search_type="similarity", search_kwargs={"k": 5}
        ),
        return_source_documents=True,
        chain_type_kwargs={"prompt": PROMPT},
    )
    answer = qa({"query": question})
    return answer["result"]
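For context on the tool-rendering route I mentioned: since the completion-style `Bedrock` class has no `bind_tools`, the tool descriptions can be rendered into the prompt text instead, similar to what `langchain.tools.render.render_text_description` produces. This is a minimal self-contained sketch of that idea in plain Python; `SimpleTool` and the tool names are placeholders, not the real LangChain classes:

```python
from dataclasses import dataclass

@dataclass
class SimpleTool:
    # Placeholder standing in for a LangChain tool; only the two
    # fields needed for prompt rendering are modeled here.
    name: str
    description: str

def render_tools_as_text(tools):
    # Mimics langchain's render_text_description: one
    # "name: description" line per tool, joined with newlines.
    return "\n".join(f"{t.name}: {t.description}" for t in tools)

tools = [
    SimpleTool("get_user_info", "Look up a user's profile by id."),
    SimpleTool("get_accounts", "List the user's currency accounts."),
]

# A cut-down template with a {tools} slot added; the rendered tool
# text is injected before the template is handed to the chain.
prompt_template = """System: You are a helpful AI bot named Alex.
You have access to the following tools:
{tools}

<context>
{context}
</context>
Question: {question}
Assistant:"""

prompt = prompt_template.format(
    tools=render_tools_as_text(tools),
    context="(retrieved documents go here)",
    question="How many currency accounts do I have?",
)
print(prompt)
```

In the real function, the rendered string would be formatted into `prompt_template` before constructing the `PromptTemplate`, so the Bedrock completion model itself stays unchanged.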