Hi Stackoverflow Community,
I am currently working on a project that integrates GPT-3.5 models into my application. However, I have run into an issue where only gpt-3.5-turbo-instruct seems to work with the /v1/completions endpoint. When I try to use other GPT-3.5 and GPT-4 models the same way, I receive the following error:
Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}
Does this mean that the /v1/completions endpoint is deprecated for other GPT-3.5 and GPT-4 models? Or am I missing something in my setup?
For context, I have already added $5 to my OpenAI account, so I believe I should have access to the models. I would appreciate any guidance on whether this issue is due to endpoint deprecation or whether there is a different recommended approach to using other GPT-3.5 and GPT-4 models with the /v1/chat/completions endpoint.
What I Tried
Model Initialization:
from langchain_openai import OpenAI

# langchain_openai's OpenAI class targets the legacy /v1/completions endpoint
llm = OpenAI(model='gpt-4-turbo')
Attempted to initialize other GPT-3.5 and GPT-4 models the same way.
Received the aforementioned 404 error.
I expected that other GPT-3.5 and GPT-4 models would work without encountering a 404 error. Specifically, I expected to be able to use different GPT-3.5 models interchangeably for generating completions.
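To make sure I am comparing the right things, here is my understanding of the two request-body shapes the error message seems to distinguish. This is a sketch based on my reading of the API docs, not code from my failing setup, and the helper names are mine:

```python
def completions_payload(model: str, prompt: str) -> dict:
    # Legacy /v1/completions: takes a plain "prompt" string.
    return {"model": model, "prompt": prompt}

def chat_completions_payload(model: str, content: str) -> dict:
    # /v1/chat/completions: takes a list of role-tagged "messages".
    return {"model": model, "messages": [{"role": "user", "content": content}]}

legacy = completions_payload("gpt-3.5-turbo-instruct", "Hello")
chat = chat_completions_payload("gpt-4-turbo", "Hello")
print(sorted(legacy))  # ['model', 'prompt']
print(sorted(chat))    # ['messages', 'model']
```

If that reading is right, the 404 would mean my code is building the first shape for models that only accept the second.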
Thank you!