I am using Realtime API for speech-to-speech. According to docs, you can pass variables to a saved prompt by updating the session like this:
    // Use a server-stored prompt by ID. Optionally pin a version and pass variables.
    prompt: {
      id: "pmpt_123",     // your stored prompt ID
      version: "89",      // optional: pin a specific version
      variables: {
        city: "Paris"     // example variable used by your prompt
      }
    },
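For reference, the full `session.update` event I'm sending looks like this (a sketch; the prompt ID and variable value are placeholders, and `ws` stands in for my Realtime WebSocket connection):

```javascript
// Full session.update client event carrying the stored-prompt reference.
// "pmpt_123" and the variable value are placeholders.
const sessionUpdate = {
  type: "session.update",
  session: {
    prompt: {
      id: "pmpt_123",
      version: "89",
      variables: {
        native_language: "French", // sent as a plain string
      },
    },
  },
};

// ws.send(JSON.stringify(sessionUpdate)); // sent over the Realtime WebSocket
console.log(sessionUpdate.session.prompt.variables.native_language);
```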
However, when I try that, I am getting API errors (my variable is called native_language):
[error] ❌ OpenAI Realtime API error:
[error] Type: invalid_request_error
[error] Code: invalid_type
[error] Message: Invalid type for 'session.prompt.variables.native_language': expected an object, but got a string instead.
[error] ⏱️ Error at: 2025-09-18 14:20:49.255621Z
What's the proper syntax to pass the variables? And what is then the proper syntax to use those variables in the saved prompt? I cannot find documentation for this, and my trial and error has not been successful yet.
(edit) This walk-through only applies to the Responses API and its 'chat' prompt type. Realtime uses its own prompt type, created through 'audio', which does not offer variables.
You must first have created the insertion points with variable names.
Prompts can only be edited on the platform.openai.com site, which constructs the prompt text with the special placeholder containers required.
The variables in the UI site are written in this form:
{{variable_name}} (two curly brackets)
Keep the names alphanumeric, starting with a letter, with only underscores or hyphens otherwise: simply best practice for code and keys.
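A quick way to sanity-check names against those conventions (a helper of my own, not an official rule; the regex just encodes "starts with a letter, then letters, digits, underscores, or hyphens"):

```javascript
// Hypothetical helper: validates a prompt variable name against the
// naming conventions above (letter first, then letters/digits/_/-).
function isValidVariableName(name) {
  return /^[A-Za-z][A-Za-z0-9_-]*$/.test(name);
}

console.log(isValidVariableName("native_language")); // true
console.log(isValidVariableName("2nd-language"));    // false (starts with a digit)
```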
Ensure that the chat playground is in 'Responses' mode via the dots (kebab) menu.
Then add your variable names in the internal messages.
The playground only simulates using the prompt. Use the variable keys in the API call as documented.
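For comparison, this is roughly what the documented Responses API flow looks like (a sketch only; no network call is made here, we just build the request body; "pmpt_123" and the model name are placeholders):

```javascript
// Sketch of the documented Responses API usage with a stored prompt.
// "pmpt_123" is a placeholder prompt ID; pick any Responses-capable model.
const requestBody = {
  model: "gpt-4.1",
  prompt: {
    id: "pmpt_123",
    variables: {
      city: "Paris", // fills {{city}} in the stored prompt text
    },
  },
};

// With the official Node SDK this would then be:
//   const response = await client.responses.create(requestBody);
console.log(requestBody.prompt.variables.city);
```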
(If you have a system to store a prompt ID, a system to store the per-user variables, and a lookup system to match these to a session, then you probably have all the components needed to bypass the entire mechanism altogether and just provide 'instructions'.)
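A minimal sketch of that bypass: render the `{{variable}}` placeholders yourself, then send the result as plain `instructions` in a `session.update` event (the template text and per-user variable store below are illustrative stand-ins):

```javascript
// Hypothetical bypass: substitute {{variables}} client-side, then send
// the rendered text as plain instructions instead of a stored prompt.
const promptTemplate =
  "You are a language tutor. Always reply in {{native_language}}.";
const userVariables = { native_language: "French" }; // your per-user store

// Replace each {{name}} with its value; unknown names are left intact.
function renderTemplate(template, variables) {
  return template.replace(/\{\{([A-Za-z][\w-]*)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

const sessionUpdate = {
  type: "session.update",
  session: { instructions: renderTemplate(promptTemplate, userVariables) },
};
console.log(sessionUpdate.session.instructions);
// → "You are a language tutor. Always reply in French."
```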
I think it's wrong because, when using the Realtime API with audio, the prompt you have to use is a realtime prompt, which is totally different from a normal prompt. It seems to me that a normal prompt can't even be used in a realtime call. The accept-call function strangely has a parameter for variables, but on the dashboard for a realtime prompt there is no way to actually use them in the prompt.
Am I missing something here? Or is this whole Realtime API documentation and usage a total mess right now?
As far as the parameter appearing in the documentation and passing API input validation: it looks like, in the OpenAPI specification, OpenAI simply reused the same prompt schema they employ for the Responses API endpoint, even with a link to 'Responses'.
Perhaps the facility is lacking because any early dynamic context alteration would break the 90% cached-input discount that makes conversational use feasible, instead of $0.50 per 'hello' or interruption.
Or perhaps the AI model is even worse at heeding 'developer' instructions, and at tracking who wrote them, than the demotion and confusion you already get.
Whether prompts with variables are planned or will never be offered on Realtime, the API specification and the generated API reference should currently be updated with a new 'Prompt' subschema, 'RealtimePrompt', that removes 'variables' as an accepted field, since it returns an error due to lack of support;
If it is supported, implemented, and 'live', it still cannot be produced in the platform UI, which would need the variable-creation feature and a parameter preview.
The return object of realtime prompt creation is highly indicative of no support: there is no field for variables when prompt_type is 'realtime'.