Passing variables to Realtime API saved prompt?

Hi,

I am using the Realtime API for speech-to-speech. According to the docs, you can pass variables to a saved prompt by updating the session like this:

# Use a server-stored prompt by ID. Optionally pin a version and pass variables.
prompt: {
  id: "pmpt_123",          // your stored prompt ID
  version: "89",           // optional: pin a specific version
  variables: {
    city: "Paris"          // example variable used by your prompt
  }
},

[taken from https://platform.openai.com/docs/guides/realtime-models-prompting?lang=python]

However, when I try that, I am getting API errors (my variable is called native_language):

[error] ❌ OpenAI Realtime API error:
[error]    Type: invalid_request_error
[error]    Code: invalid_type
[error]    Message: Invalid type for 'session.prompt.variables.native_language': expected an object, but got a string instead.
[error] ⏱️ Error at: 2025-09-18 14:20:49.255621Z

What’s the proper syntax to pass the variables? And what is then the proper syntax to use those variables in the saved prompt? I cannot find documentation for this, and my trial and error has not been successful yet.

EDIT: this seems to pass through the API:

prompt = {
    "id": "pmpt_xxxxxxxxxxxx",
    "variables": {
        "native_language": {"type": "input_text", "text": "English"},
        "target_language": {"type": "input_text", "text": "Italian"},
    },
}
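For anyone else hitting this, here is that shape wrapped in a full `session.update` event, as one would send it over the Realtime websocket. This is a sketch only: the event wrapping and the `ws.send` call are my reading of the Realtime docs, and the prompt ID is a placeholder.

```python
import json

# Prompt reference in the shape that passed API validation above:
# each variable value is an input_text content object, not a bare string.
prompt = {
    "id": "pmpt_xxxxxxxxxxxx",   # placeholder prompt ID
    "variables": {
        "native_language": {"type": "input_text", "text": "English"},
        "target_language": {"type": "input_text", "text": "Italian"},
    },
}

# Wrap it in a session.update event for the Realtime websocket.
event = {
    "type": "session.update",
    "session": {"prompt": prompt},
}

payload = json.dumps(event)
# In a live client you would send this over the open websocket, e.g.:
#   ws.send(payload)
print(payload)
```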

But I still don’t know how to pick these variables up in the server-stored prompt.

Thank you very much.


(edit) This walk-through is only applicable to the Responses API and its “chat” UI. Realtime uses its own prompt type created through “audio”, which does not offer variables.

You must first create the insertion points with variable names in the stored prompt.

You can only edit “prompts” on the platform.openai.com site, which is where you construct the prompt language with the special placeholder containers required.

The variables in the UI site are written in this form:

{{variable_name}} (two curly brackets)

Keep the names alphanumeric, starting with a letter, using only underscores or hyphens; these are simply best practices for code and keys.
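Those naming rules can be checked mechanically. A minimal sketch (the exact character set the platform enforces is my assumption; this just encodes the best practices above):

```python
import re

# Starts with a letter, then letters, digits, underscores, or hyphens.
VARIABLE_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_-]*$")

def is_valid_variable_name(name: str) -> bool:
    """Check a prompt variable name against the naming best practices above."""
    return bool(VARIABLE_NAME.fullmatch(name))

print(is_valid_variable_name("native_language"))  # True
print(is_valid_variable_name("2nd_language"))     # False: starts with a digit
```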

Ensure that the chat playground is in “Responses” mode in the dots (kebab) menu.

Then declare your variable names in the prompt editor.

Then add your variable-name placeholders to the message text.

Enclosed variables that have not been declared are highlighted in red.

Test in the UI by giving the variables values.

The playground only simulates using the prompt. Use the variable keys in the API call as documented.
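For completeness, the documented Responses API call shape with variables looks roughly like this. A sketch: the model name and prompt ID are placeholders, and the actual network call is left commented out so nothing here hits the API.

```python
# For the Responses API, variables are plain strings keyed by the
# {{placeholder}} names declared in the stored prompt.
request_kwargs = {
    "model": "gpt-4.1",                  # placeholder model name
    "input": "Plan a day trip.",
    "prompt": {
        "id": "pmpt_123",                # placeholder stored-prompt ID
        "version": "89",                 # optional: pin a version
        "variables": {"city": "Paris"},
    },
}

# With the official Python SDK this would be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**request_kwargs)
print(sorted(request_kwargs["prompt"]))
```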

(If you have a system to store a prompt ID, a system to store the per-user variables, and a lookup system to match these to a session, then you probably have all the components needed to bypass the entire prompt system altogether and just provide “instructions”.)
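That bypass can be sketched in a few lines: render the {{placeholders}} yourself and send the result as plain `instructions` in `session.update`. The template text and variable store below are hypothetical.

```python
import re

def render_instructions(template: str, variables: dict[str, str]) -> str:
    """Replace {{name}} placeholders with per-user values, leaving unknown
    placeholders intact so missing variables are easy to spot."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        return variables.get(name, match.group(0))
    return re.sub(r"\{\{\s*([A-Za-z][A-Za-z0-9_-]*)\s*\}\}", sub, template)

# Hypothetical stored template and per-user variable lookup.
template = "You are a tutor. Reply in {{native_language}}; teach {{target_language}}."
user_vars = {"native_language": "English", "target_language": "Italian"}

instructions = render_instructions(template, user_vars)
print(instructions)
# -> You are a tutor. Reply in English; teach Italian.
# Then send: {"type": "session.update", "session": {"instructions": instructions}}
```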


I think it’s wrong because when using the Realtime API with audio, the prompt you have to use is a realtime prompt, which is totally different from a normal prompt. It seems to me that a normal prompt can’t even be used in a realtime call. The accept-call function strangely has a parameter for variables, but on the dashboard for realtime prompts there is no way to actually use them in a prompt.
Am I missing something here? Or is this whole Realtime API documentation and usage a total mess right now?

Did you manage to solve this issue?
I’m currently doing something similar to you, but the API seems to do nothing with my variables.

Nope; my prior “variable creation” guidance was not informed by this topic’s Realtime API, and I merely gave a walkthrough of the Responses API.

Realtime prompt creation is here: https://platform.openai.com/audio/realtime/edit

No variables to be seen there, although in a UI one would expect the same pattern of both declaring them and inserting them as placeholders.

As for the parameter being present in the documentation and accepted by API input validation: it looks like, in the OpenAPI specification, OpenAI simply reused the same prompt schema that they employ for the Responses API endpoint, complete with a link to “Responses”.

Perhaps the lack of the facility is because any early dynamic context alteration would break the 90% discount cache that makes conversational use feasible, rather than costing $0.50 per “hello” or interruption.

Or because the AI model is even worse at heeding “developer” instructions, and at tracking who wrote them, than the demotion and confusion you already get:

“only from what you provided”.

OpenAI action needed

  • Whether prompts with variables are planned or will never be offered on Realtime, the API specification and generated API reference must currently be updated with a new “Prompt” subschema, “RealtimePrompt”, that removes “variables” as an accepted field and returns an error due to the lack of support;
  • If it is supported, implemented, and “live”, it cannot currently be produced in the platform UI, which will need the variable-creation feature and a parameter preview.

The return object of realtime prompt creation is highly indicative of no support, with no field for variables when prompt_type is “realtime”.

Sorry I didn’t see this sooner!

Let me check with the rest of the team, and I will give you an update on when this will land.
