I am using LangChain 0.2 to create a conversational pipeline with an OpenAI LLM. Following is the code that I have written:
from langchain_openai import ChatOpenAI
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain

# Chat model pointed at the platform's OpenAI-compatible endpoint
chat_llm = ChatOpenAI(
    model_name="gpt-4",
    api_key=config["api_key"],
    openai_api_base=platform_url,
    default_headers=llm_headers,
)

# Condenses the chat history and the new question into a standalone query
history_aware_retriever = create_history_aware_retriever(
    llm=chat_llm,
    retriever=vector_store.as_retriever(),
    prompt=condense_default_system_question_prompt,
)

# Stuffs the retrieved documents into the QA prompt and generates the answer
qa_chain = create_stuff_documents_chain(chat_llm, qa_prompt)
convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)

result = convo_qa_chain.invoke({
    "input": user_query,
    "chat_history": chat_memory.chat_memory.messages,
})
On convo_qa_chain.invoke, the OpenAI API returns HTTP status 429 (Too Many Requests).
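To surface the actual error body instead of just the status code, I can wrap the invoke call and inspect the underlying openai exception. A minimal sketch, reusing the chain and variables from the code above (RateLimitError and its response attribute come from the openai package that langchain_openai wraps):

import openai

try:
    result = convo_qa_chain.invoke({
        "input": user_query,
        "chat_history": chat_memory.chat_memory.messages,
    })
except openai.RateLimitError as e:
    # Raised after ChatOpenAI's built-in retries are exhausted;
    # e.response is the raw httpx.Response returned by the server
    print(e.status_code)
    print(dict(e.response.headers))  # rate-limit / Retry-After headers, if any
    print(e.response.text)           # the server's error body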
NOTE: The curl command with the same payload works and gives me a valid output.
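For comparison, the same request can also be reproduced from Python with the bare openai client, bypassing LangChain entirely. A rough sketch, reusing config, platform_url and llm_headers from above (the message content is just a placeholder):

from openai import OpenAI

client = OpenAI(
    api_key=config["api_key"],
    base_url=platform_url,
    default_headers=llm_headers,
)

# Same endpoint, key and headers that ChatOpenAI is configured with;
# this mirrors the working curl request
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_query}],
)
print(resp.choices[0].message.content)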
What I have tried so far:

- I added logs.
- I extracted the payload and created a curl command, which works.
- I tried debugging, but the abstraction of the OpenAI integration provided in langchain_openai doesn't let me see the underlying error (see the sketch after this list).
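For reference, this is roughly how I am trying to debug it: turning on LangChain's global debug flag plus DEBUG logging for the openai and httpx clients should print each request the chain sends (URL, headers, payload), which can then be diffed against the working curl command. A sketch, assuming the convo_qa_chain built above:

import logging

from langchain.globals import set_debug

# Print every chain / LLM invocation that LangChain performs
set_debug(True)

# The openai SDK and httpx log the outgoing HTTP requests and responses
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)

result = convo_qa_chain.invoke({
    "input": user_query,
    "chat_history": chat_memory.chat_memory.messages,
})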