I have this LangChain code for answering questions by retrieving similar docs from the vector store and using an LLM to answer the query:
llm_4 = AzureOpenAI(
    # temperature=0,
    api_version=os.environ['OPENAI_API_VERSION_4'],
    openai_api_key=os.environ['AZURE_OPENAI_API_KEY_4'],
    deployment_name="gpt4-deploy",
    # model_name="gpt4-o",
    azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT_4'],
)

llm_3 = AzureOpenAI(
    # temperature=0,
    api_version=os.environ['OPENAI_API_VERSION_3'],
    openai_api_key=os.environ['AZURE_OPENAI_API_KEY_3'],
    deployment_name="test-rtb-deployment",
    # deployment_name="gpt-16k-deployment",
    # model_name="gpt-3.5-turbo-16k",
    azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT_3'],
)
response = get_answer(relavant_docs, user_input, llm_4)
…
# Create embeddings instance
def create_embeddings():
    # embeddings = OpenAIEmbeddings()
    embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # embeddings = SentenceTransformerEmbeddings(model_name="text-davinci-003")
    return embeddings

def get_answer(docs, user_input, llm=None):
    if llm:
        chain = load_qa_chain(llm, chain_type="stuff")
    else:
        chain = load_qa_chain(OpenAI(), chain_type="stuff")
    with get_openai_callback() as cb:
        response = chain.run(input_documents=docs, question=user_input)
    return response
It works with gpt-3.5, but with gpt-4 I get:
BadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}
I tried what was suggested by these similar issues:
How to use the new gpt-3.5-16k model with langchain?
I am trying to make a docs question answering program with AzureOpenAI and Langchain
But I still haven't figured out how to solve it.
In our Azure AI account we have two services:
- OpenAI 3 deployments
- OpenAI 4 deployments
What should I use to make it work with gpt-4?
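My current guess (untested) is that gpt-4 deployments only support the chat completions operation, so `AzureOpenAI` (which calls the legacy completion endpoint) would need to be swapped for the chat wrapper. A minimal sketch of what I mean, assuming `AzureChatOpenAI` from the `langchain_openai` package is the right class and reusing my existing deployment name and environment variables:

```python
# Sketch only: replace the completion wrapper with the chat wrapper,
# since the 400 error says the completion operation is not supported
# for gpt-4. Deployment name and env vars are the same as above.
import os

from langchain_openai import AzureChatOpenAI

llm_4 = AzureChatOpenAI(
    api_version=os.environ['OPENAI_API_VERSION_4'],
    openai_api_key=os.environ['AZURE_OPENAI_API_KEY_4'],
    deployment_name="gpt4-deploy",
    azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT_4'],
)

# Then pass it to get_answer() exactly as before:
# response = get_answer(relavant_docs, user_input, llm_4)
```

Is this the right approach, or is there a way to keep using `AzureOpenAI` with a gpt-4 deployment?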