If you don’t believe me, try it for yourself with this Python script:
from openai import OpenAI
from openai.types.responses.response_stream_event import ResponseCompletedEvent

client = OpenAI()

response = client.responses.create(
    model="o3-pro",
    input="What model are you?",
    instructions="",
    # store=False,
    reasoning={"effort": "medium", "summary": "detailed"},
    stream=True,
)

# Print each stream event type as it arrives, then the final answer text.
for chunk in response:
    print(f"{chunk.type}(...)")
    if isinstance(chunk, ResponseCompletedEvent):
        print(f"Final: {chunk.response.output_text}")
You should see an output like:
response.created(...)
response.in_progress(...)
response.output_item.added(...)
response.output_item.done(...)
response.output_item.added(...)
response.content_part.added(...)
response.output_text.delta(...)
response.output_text.done(...)
response.content_part.done(...)
response.output_item.done(...)
response.completed(...)
Final: I’m ChatGPT, a large language model built by OpenAI using the GPT-4 architecture.
Most of the time the API-based models don’t identify themselves as ChatGPT either, which makes me think the training data for the o-pro models piggybacks on GPT-4’s training data. I’m not sure why this happens, but it could be an issue for somebody. o1-pro makes the same mistake.
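If you’re curious, here’s a rough sketch along the same lines that loops the question over a few other models. The model names are just examples I’d expect to exist; which ones your account can actually call is up to OpenAI:

from openai import OpenAI

client = OpenAI()

# Ask the same question of several models and compare how they identify themselves.
for model in ["gpt-4o", "o1-pro", "o3-pro"]:
    response = client.responses.create(
        model=model,
        input="What model are you?",
        instructions="",
    )
    print(f"{model}: {response.output_text}")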
The AI already has unwanted system messages like a knowledge cutoff date and a forced “vision” safety policy … don’t ask for more to be done automatically and ruin things!
This isn’t from training. It only does this because of the system message added by OpenAI. You can usually get ChatGPT to dump it for you.
OpenAI likes to run experiments on ChatGPT, so the exact underlying model isn’t the same as what’s in the API. You can, however, specify chatgpt-4o-latest to get whatever model is currently serving ChatGPT.
Copying parts of the dumped system prompt tends to yield an assistant almost identical to ChatGPT. I’m not sure why anyone would want that, though.
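As a rough sketch of that combination (the file name here is hypothetical; paste in whatever ChatGPT dumped for you, and chatgpt-4o-latest access depends on your account):

from openai import OpenAI

client = OpenAI()

# Hypothetical file holding the system prompt dumped from ChatGPT.
with open("chatgpt_system_prompt.txt") as f:
    system_prompt = f.read()

response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What model are you?"},
    ],
)
print(response.choices[0].message.content)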
The vision safety injection tells us that this safety behavior isn’t trained into the models, and that it’s modular enough to simply switch off whenever OpenAI feels ready.
That said, I kind of wish they really did just train it into the models. Removing the vision safety could be a big issue when it comes to powerful organizations that don’t understand the limitations of AI. Imagine being implicated in a crime just because an AI hallucinated you into it.