Interactive field on messages API?

I read this very interesting model spec:
https://cdn.openai.com/spec/model-spec-2024-05-08.html
in which the “interactive” field is mentioned many times:

The assistant’s behavior should vary depending on whether it’s interacting with a human in real time or whether its output will be consumed programmatically. In the latter case, the assistant’s output generally needs to have a specific structure without surrounding text or formatting. We use the interactive field on messages to configure this behavior. By default, interactive=true, but this behavior can be overridden.

"When interactive=false, the assistant should output exactly what the preceding message has asked for, in the exact format specified:

  • For example, if there is a request for python code, it should be produced directly, rather than being wrapped in backticks.
  • The assistant should proceed with fulfilling the request even if there is some ambiguity in the query."

I do NOT find this field in the official API (neither in the Python nor the .NET SDK).
Has someone figured out how to use it?

Hey,

Welcome to the forum.

I notice the link you provided is a ‘first draft’ spec, and the Assistants API is still marked as beta in the official docs.

I can’t confirm this, but it seems likely that if you can’t find it in the docs, it’s either not an implemented feature yet or didn’t make the final spec.


Thank you for your answer, but I think this document describes the model spec and does not apply to the Assistants API.


There is no way to use an “interactive” setting directly.

However, you might consider how “interactive” an AI model is when it is sent a response_format instructing it to produce only JSON.
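A minimal sketch of that idea, assuming the official `openai` Python package (v1.x) and an API key in `OPENAI_API_KEY`. The helper only builds the request kwargs; the actual network call is left commented out:

```python
def build_json_request(user_prompt: str, model: str = "gpt-4o-2024-08-06") -> dict:
    """Build kwargs for chat.completions.create that force JSON-only output."""
    return {
        "model": model,
        # JSON mode: the model must emit a valid JSON object, nothing around it
        "response_format": {"type": "json_object"},
        "messages": [
            # JSON mode requires the word "JSON" to appear somewhere in the prompt
            {"role": "system", "content": "Respond only with a JSON object."},
            {"role": "user", "content": user_prompt},
        ],
    }

# Uncomment to actually call the API (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# r = client.chat.completions.create(**build_json_request("Give the answer to 2+2 as JSON."))
# print(r.choices[0].message.content)
```

With `response_format` set this way, the model behaves much like the spec’s `interactive=false`: bare structured output, no surrounding prose or backticks.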


If you would like more to noodle on:

Set max_tokens = 10.

Have gpt-4o models 2024-05-13, 2024-08-06, 2024-11-20, and -mini each produce the same text.

Observe the differences in the length of output you receive. Then see how each behaves when you reduce that maximum output one token at a time.
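The sweep described above can be sketched like this. The snapshot names come from the post; the prompt is a hypothetical stand-in, and the helper only builds request kwargs, so nothing runs against the API until you uncomment the call:

```python
MODELS = [
    "gpt-4o-2024-05-13",
    "gpt-4o-2024-08-06",
    "gpt-4o-2024-11-20",
    "gpt-4o-mini",
]

# Hypothetical shared prompt; any short request works for the experiment.
PROMPT = "Write one sentence about the sky."

def build_request(model: str, max_tokens: int) -> dict:
    """Build kwargs for chat.completions.create with a hard output-token cap."""
    return {
        "model": model,
        "max_tokens": max_tokens,  # hard cap on completion tokens
        "messages": [{"role": "user", "content": PROMPT}],
    }

# Uncomment to run the sweep (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# for model in MODELS:
#     for cap in range(10, 0, -1):  # shrink the cap one token at a time
#         r = client.chat.completions.create(**build_request(model, cap))
#         print(model, cap, repr(r.choices[0].message.content))
```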

Now load the same up with a conversation history, one that might train the AI on the way an assistant message should look. Repeat.

See how much you get when the same request is placed in json_object mode with a key for a typical response, across models (you will need enough tokens that the server doesn’t return a 500).
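Combining the last two steps, a few-shot history plus JSON mode might look like the sketch below. The example turns are hypothetical, purely to show the model what an assistant message “should look like”:

```python
# Hypothetical few-shot history demonstrating the desired assistant format.
FEW_SHOT_HISTORY = [
    {"role": "system", "content": "Respond only with a JSON object."},
    {"role": "user", "content": "Color of grass?"},
    {"role": "assistant", "content": '{"answer": "green"}'},
    {"role": "user", "content": "Color of the sun?"},
    {"role": "assistant", "content": '{"answer": "yellow"}'},
]

def build_fewshot_request(question: str, model: str, max_tokens: int) -> dict:
    """Build kwargs combining few-shot history, JSON mode, and a token cap."""
    return {
        "model": model,
        "max_tokens": max_tokens,  # must be large enough for complete JSON
        "response_format": {"type": "json_object"},
        "messages": FEW_SHOT_HISTORY + [{"role": "user", "content": question}],
    }

# Uncomment to run (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# r = client.chat.completions.create(
#     **build_fewshot_request("Color of the sky?", "gpt-4o-2024-08-06", 20)
# )
# print(r.choices[0].message.content)
```

If `max_tokens` is too small for the closing brace, the JSON comes back truncated (or the request errors out), which is exactly the kind of difference worth comparing across models.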

What have you learned? Anything about the nature of “instruction hierarchy”?