Apparently, the new OpenAI o1-preview model doesn't support the system role:
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"}
]

response = client.chat.completions.create(
    model='o1-preview',
    messages=messages,
    temperature=0.0,
    max_tokens=500,
    stream=False
)
response_message = response.choices[0].message.content
print(response_message)
Output:
BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'messages[0].role' does not support 'system' with this model.", 'type': 'invalid_request_error', 'param': 'messages[0].role', 'code': 'unsupported_value'}}
Moreover, the temperature and max_tokens parameters aren't supported either!
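As a workaround, one option is to fold the system instructions into the first user message before sending the request. This is my own sketch, not an official OpenAI utility; the helper name to_o1_messages is hypothetical:

```python
def to_o1_messages(messages):
    """Merge any system messages into the first user message and keep
    only 'user' and 'assistant' roles, which o1-preview accepts."""
    # Collect the text of all system messages (usually just one).
    system_text = " ".join(
        m["content"] for m in messages if m["role"] == "system"
    )
    # Drop roles the model rejects.
    converted = [m for m in messages if m["role"] in ("user", "assistant")]
    # Prepend the system instructions to the first user message.
    if system_text and converted and converted[0]["role"] == "user":
        converted[0] = {
            "role": "user",
            "content": f"{system_text}\n\n{converted[0]['content']}",
        }
    return converted

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]
print(to_o1_messages(messages))
# → [{'role': 'user', 'content': 'You are a helpful assistant.\n\nHello, how are you?'}]
```

The converted list can then be passed to client.chat.completions.create with model='o1-preview' and no temperature, max_tokens, or stream arguments.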
Currently, o1 is still in beta and has some limitations. Here is the official description: https://platform.openai.com/docs/guides/reasoning/beta-limitations
During the beta phase, many chat completion API parameters are not yet available. Most notably:
- Modalities: text only, images are not supported.
- Message types: user and assistant messages only, system messages are not supported.
- Streaming: not supported.
- Tools: tools, function calling, and response format parameters are not supported.
- Logprobs: not supported.
- Other: temperature, top_p and n are fixed at 1, while presence_penalty and frequency_penalty are fixed at 0.
- Assistants and Batch: these models are not supported in the Assistants API or Batch API.
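Given the list above, one defensive pattern is to strip the rejected parameters from your request before sending it. The set below is transcribed from the limitations list, and strip_unsupported is my own hypothetical helper, not part of the openai SDK:

```python
# Parameters the o1 beta rejects or pins to fixed values,
# per the limitations list above.
UNSUPPORTED_O1_BETA_PARAMS = {
    "temperature", "top_p", "n", "presence_penalty",
    "frequency_penalty", "stream", "logprobs", "max_tokens",
}

def strip_unsupported(params):
    """Return a copy of the request kwargs without the parameters
    that o1-preview rejects during the beta."""
    return {k: v for k, v in params.items()
            if k not in UNSUPPORTED_O1_BETA_PARAMS}

print(strip_unsupported({
    "model": "o1-preview",
    "temperature": 0.0,
    "max_tokens": 500,
    "stream": False,
}))
# → {'model': 'o1-preview'}
```

The surviving kwargs can then be splatted into client.chat.completions.create(**params, messages=messages) without triggering a 400 error.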
There's been a lot of speculation about how the o1 model family works. As stated in the official OpenAI documentation:

The o1 series of large language models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.
Until the release of the o1 model family, the system message was the way to set the model's behavior. But OpenAI says the o1 models think and produce a long internal chain of thought before they answer. This makes me suspect that the o1 model family uses the system message internally, perhaps to test different approaches to finding the best answer. OpenAI is extremely secretive about the o1 model family, so this is just my speculation. It could be that the system message will be supported by the o1 family in the near future, but then again, I can hardly imagine a model being that creative when searching for an answer if we constrain its behavior through the system message.