Open Responses for the Open Source Community

Open Responses is an open-source specification and ecosystem inspired by the OpenAI Responses API. It is designed to make it easier to build multi-provider, interoperable LLM interfaces. The project defines a shared schema, client libraries, and tooling that let you call language models, stream outputs, and build agentic workflows without being locked into a single provider.

Motivation and overview

Most modern LLM platforms now rely on similar building blocks such as messages, tool calls, function calling, and multimodal inputs. The problem is that each provider represents these concepts slightly differently. Open Responses standardizes these primitives so you can focus on building instead of translating APIs.

With Open Responses, you get:

  • One spec, many providers: Define inputs and outputs once and run them across OpenAI, Anthropic, Gemini, or local models.

  • Composable agentic workflows: A unified approach to streaming, tool invocation, and message orchestration.

  • Simpler evaluation and routing: Compare providers, route requests, and log results using a shared schema.

  • A blueprint for providers: Labs and model vendors can expose their APIs in a common, well-defined format with minimal effort.
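The "one spec, many providers" idea can be sketched as a single request shape reused against different base URLs. Everything below (the provider URLs, the model name) is illustrative, not taken from the spec; the assumption is only that each provider exposes an Open Responses-compatible `/v1/responses` endpoint:

```python
import json

# Hypothetical endpoints for illustration; each provider is assumed to
# expose an Open Responses-compatible /v1/responses resource.
PROVIDERS = {
    "openai": "https://api.openai.com/v1/responses",
    "local": "http://localhost:8000/v1/responses",
}

def build_request(provider: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Return (url, body) for the same request shape against any provider."""
    url = PROVIDERS[provider]
    # The minimal request: a model ID and a plain-text input.
    body = json.dumps({"model": model, "input": prompt}).encode("utf-8")
    return url, body

url, body = build_request("local", "llama-3.1-8b", "Say hello")
```

Only the base URL and model ID change per provider; the payload itself stays identical, which is the interoperability claim in a nutshell.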

Discover more and read the spec here:

Find the code on GitHub:

10 Likes

That will help some people for sure. I’ll dive into it later.

3 Likes

You will note, if exploring the “Open” version, that there is little variation from OpenAI’s API. In fact, it imposes, embraces, and extends OpenAI’s limitations to ensure an uncompetitive surface if employed. Will you find logit_bias and other sampling parameters? No. Any “truncation” other than OpenAI’s unbudgeted version? No. I guess they don’t want you to obtain “minimal” reasoning in this enum, and they assume someone is going to offer a “flex” service tier. Supporting only function tools makes this endpoint shape, whose internal iterator is its API “selling point”, redundant.
Give Chat Completions a thinking-whitespace “heartbeat” before the JSON for keepalive, or Google’s reasoning extension, and you are done.
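For what the whitespace-heartbeat idea amounts to on the client side: a server can emit whitespace bytes while the model is thinking, then the JSON body, and a tolerant client just strips them before decoding. This is a sketch of that keepalive pattern, not anything defined by the spec:

```python
import json

def parse_with_heartbeat(raw: str) -> dict:
    # A server may send whitespace "heartbeat" bytes to keep the HTTP
    # connection alive during long generations, followed by the real JSON
    # payload. JSON never starts with whitespace-significant content, so
    # stripping leading whitespace is safe and makes the client tolerant.
    return json.loads(raw.lstrip())
```

The same parser handles an ordinary response with no heartbeat, since `lstrip()` is a no-op on a body that starts with `{`.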

The only “benefit” is that this spec actually spells out blocking image and file inputs on anything other than a user or tool role, so you can’t deliver useful services either; a limitation beyond OpenAI’s own YAML. Streaming has been knocked down to 23 event types.

I’m just going to leave this here for your perusal; it comes directly from parsing the existing OpenAPI specification. The response object of Responses runs to massively more lines when documented, so if the “reference” can make a web version of that, more power to them.

POST https://api.openai.com/v1/responses

Parameters (JSON body):
model: string # required [model ID]
input: string | array # required [text, or a list of input items]
├▸ (string): string # [equivalent to a single user text message]
└▸ (array): InputItem[] # [multi-part / multi-turn inputs]
├▸ message (EasyInputMessage): object
│ ├▸ type: “message” # required
│ ├▸ role: “user” | “assistant” | “system” | “developer” # required
│ └▸ content: string | InputContent[] # required
│ ├▸ (string): string # [plain text]
│ └▸ (array): InputContent[]
│ ├▸ input_text: object
│ │ ├▸ type: “input_text” # required
│ │ └▸ text: string # required
│ ├▸ input_image: object
│ │ ├▸ type: “input_image” # required
│ │ ├▸ detail: “low” | “high” | “auto” # required (default: auto)
│ │ ├▸ image_url: string | null # optional [URL or data URL (base64)]
│ │ └▸ file_id: string | null # optional [uploaded file ID]
│ └▸ input_file: object
│ ├▸ type: “input_file” # required
│ ├▸ file_id: string | null # optional
│ ├▸ filename: string # optional
│ ├▸ file_url: string # optional
│ └▸ file_data: string # optional [file content (encoded)]
├▸ item (Item): object # [conversation-state items; may include prior outputs/tool results]
│ └▸ type: (discriminator; varies) # required
│ ├▸ message (InputMessage): object
│ │ ├▸ type: “message” # required
│ │ ├▸ role: “user” | “system” | “developer” # required
│ │ └▸ content: InputContent[] # required (same InputContent shapes as above)
│ ├▸ item_reference: object
│ │ ├▸ type: “item_reference” | null # optional (default: item_reference)
│ │ └▸ id: string # required
│ ├▸ (output shapes that can appear in conversation state)
│ │ ├▸ output_message (OutputMessage): object
│ │ │ ├▸ type: “message” # required
│ │ │ ├▸ role: “assistant” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ status: “in_progress” | “completed” | “incomplete” # required
│ │ │ └▸ content: OutputMessageContent[]
│ │ │ ├▸ output_text: object
│ │ │ │ ├▸ type: “output_text” # required
│ │ │ │ ├▸ text: string # required
│ │ │ │ ├▸ annotations: Annotation[] # required
│ │ │ │ └▸ logprobs: LogProb[] # optional (only if included via include)
│ │ │ └▸ refusal: object
│ │ │ ├▸ type: “refusal” # required
│ │ │ └▸ refusal: string # required
│ │ ├▸ file_search_call (FileSearchToolCall): object
│ │ │ ├▸ type: “file_search_call” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ status: “in_progress” | “searching” | “completed” | “incomplete” | “failed” # required
│ │ │ ├▸ queries: string[] # required
│ │ │ └▸ results: array | null # optional (only if included via include)
│ │ ├▸ web_search_call (WebSearchToolCall): object
│ │ │ ├▸ type: “web_search_call” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ status: “in_progress” | “searching” | “completed” | “failed” # required
│ │ │ └▸ action: object # required [search/open_page/find]
│ │ ├▸ computer_call (ComputerToolCall): object
│ │ │ ├▸ type: “computer_call” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ call_id: string # required
│ │ │ ├▸ action: object # required [click/double_click/drag/keypress/move/screenshot/scroll/type/wait]
│ │ │ ├▸ pending_safety_checks: ComputerCallSafetyCheckParam[] # required
│ │ │ └▸ status: “in_progress” | “completed” | “incomplete” # required
│ │ ├▸ function_call (FunctionToolCall): object
│ │ │ ├▸ type: “function_call” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ call_id: string # required
│ │ │ ├▸ name: string # required
│ │ │ ├▸ arguments: string # required [JSON string]
│ │ │ └▸ status: “in_progress” | “completed” | “incomplete” # required
│ │ ├▸ function_call_output: object
│ │ │ ├▸ type: “function_call_output” # required
│ │ │ ├▸ call_id: string # required
│ │ │ ├▸ output: string | array # required [JSON string, or content list]
│ │ │ └▸ status: “in_progress” | “completed” | “incomplete” | null # optional
│ │ ├▸ reasoning (ReasoningItem): object
│ │ │ ├▸ type: “reasoning” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ encrypted_content: string | null # optional (only if included via include)
│ │ │ ├▸ summary: Summary[] # required
│ │ │ ├▸ content: ReasoningTextContent[] # optional
│ │ │ └▸ status: “in_progress” | “completed” | “incomplete” # required
│ │ ├▸ image_generation_call (ImageGenToolCall): object
│ │ │ ├▸ type: “image_generation_call” # required
│ │ │ ├▸ id: string # required
│ │ │ ├▸ status: “in_progress” | “completed” | “generating” | “failed” # required
│ │ │ └▸ result: string | null # required [base64 image]
│ │ └▸ code_interpreter_call (CodeInterpreterToolCall): object
│ │ ├▸ type: “code_interpreter_call” # required
│ │ ├▸ id: string # required
│ │ ├▸ status: “in_progress” | “completed” | “incomplete” | “interpreting” | “failed” # required
│ │ ├▸ container_id: string # required
│ │ ├▸ code: string | null # required
│ │ └▸ outputs: array | null # optional (only if included via include)
└▸ item_reference: object
├▸ type: “item_reference” | null # optional (default: item_reference)
└▸ id: string # required [ID of an item to reference]

instructions: string | null # optional [system/developer message inserted into context]
previous_response_id: string | null # optional [multi-turn linkage; cannot be used with conversation]
conversation: string | object | null # optional [conversation container]
├▸ (string): string # [conversation ID]
└▸ (object): object
└▸ id: string # required

metadata: object | null # optional [up to 16 string→string pairs]
temperature: number | null # optional (default: 1) [0..2]
top_p: number | null # optional (default: 1) [0..1]
top_logprobs: integer | null # optional [0..20]

user: string # optional (deprecated)
safety_identifier: string # optional
prompt_cache_key: string # optional

service_tier: “auto” | “default” | “flex” | “scale” | “priority” | null # optional (default: auto)
prompt_cache_retention: “in-memory” | “24h” | null # optional

reasoning: object | null # optional [reasoning configuration]
├▸ effort: “none” | “minimal” | “low” | “medium” | “high” | “xhigh” | null # optional (default: medium)
├▸ summary: “auto” | “concise” | “detailed” | null # optional
└▸ generate_summary: “auto” | “concise” | “detailed” | null # optional (deprecated)

background: boolean | null # optional (default: false)
max_output_tokens: integer | null # optional
max_tool_calls: integer | null # optional

text: object # optional [text output configuration]
├▸ format: object # optional (default: {“type”:“text”})
│ ├▸ text: object
│ │ └▸ type: “text” # required
│ ├▸ json_schema: object # [Structured Outputs]
│ │ ├▸ type: “json_schema” # required
│ │ ├▸ name: string # required
│ │ ├▸ schema: object # required [JSON Schema]
│ │ ├▸ description: string # optional
│ │ └▸ strict: boolean | null # optional (default: false)
│ └▸ json_object: object # [older JSON mode]
│ └▸ type: “json_object” # required
└▸ verbosity: “low” | “medium” | “high” | null # optional (default: medium)

tools: Tool[] # optional [tools the model may call]
├▸ function: object
│ ├▸ type: “function” # required
│ ├▸ name: string # required
│ ├▸ description: string | null # optional
│ ├▸ parameters: object | null # required [JSON Schema]
│ └▸ strict: boolean | null # required
├▸ file_search: object
│ ├▸ type: “file_search” # required
│ ├▸ vector_store_ids: string[] # required
│ ├▸ max_num_results: integer # optional [1..50]
│ ├▸ ranking_options: object # optional
│ │ ├▸ ranker: “auto” | “default-2024-11-15” # optional
│ │ ├▸ score_threshold: number # optional [0..1]
│ │ └▸ hybrid_search: object # optional
│ │ ├▸ embedding_weight: number # required
│ │ └▸ text_weight: number # required
│ └▸ filters: object | null # optional
│ ├▸ ComparisonFilter: object
│ │ ├▸ type: “eq” | “ne” | “gt” | “gte” | “lt” | “lte” # required
│ │ ├▸ key: string # required
│ │ └▸ value: string | number | boolean | array # required
│ └▸ CompoundFilter: object
│ ├▸ type: “and” | “or” # required
│ └▸ filters: array # required
├▸ computer_use_preview: object
│ ├▸ type: “computer_use_preview” # required
│ ├▸ environment: “windows” | “mac” | “linux” | “ubuntu” | “browser” # required
│ ├▸ display_width: integer # required
│ └▸ display_height: integer # required
├▸ web_search: object
│ ├▸ type: “web_search” | “web_search_2025_08_26” # required
│ ├▸ filters: object | null # optional
│ │ └▸ allowed_domains: string[] | null # optional
│ ├▸ user_location: object | null # optional
│ │ ├▸ type: “approximate” # required (default: approximate)
│ │ ├▸ country: string | null # optional
│ │ ├▸ region: string | null # optional
│ │ ├▸ city: string | null # optional
│ │ └▸ timezone: string | null # optional
│ └▸ search_context_size: “low” | “medium” | “high” # required (default: medium)
├▸ mcp: object
│ ├▸ type: “mcp” # required
│ ├▸ server_label: string # required
│ ├▸ server_url: string # optional
│ ├▸ connector_id: enum # optional
│ ├▸ authorization: string # optional
│ ├▸ server_description: string # optional
│ ├▸ headers: object | null # optional
│ ├▸ allowed_tools: string[] | object | null # optional
│ │ ├▸ (array): string[] # [allowed tool names]
│ │ └▸ (filter object): object
│ │ ├▸ tool_names: string[] # optional
│ │ └▸ read_only: boolean # optional
│ └▸ require_approval: “always” | “never” | object | null # optional (default: always)
├▸ code_interpreter: object
│ ├▸ type: “code_interpreter” # required
│ └▸ container: string | object # required
│ ├▸ (string): string # [container ID]
│ └▸ (auto object): object
│ ├▸ type: “auto” # required
│ ├▸ file_ids: string[] # optional
│ └▸ memory_limit: “1g” | “4g” | “16g” | “64g” | null # optional
├▸ image_generation: object
│ ├▸ type: “image_generation” # required
│ ├▸ model: string # optional (default: gpt-image-1)
│ ├▸ quality: “low” | “medium” | “high” | “auto” # optional (default: auto)
│ ├▸ size: “1024x1024” | “1024x1536” | “1536x1024” | “auto” # optional (default: auto)
│ ├▸ output_format: “png” | “webp” | “jpeg” # optional (default: png)
│ ├▸ output_compression: integer # optional (default: 100) [0..100]
│ ├▸ moderation: “auto” | “low” # optional (default: auto)
│ ├▸ background: “transparent” | “opaque” | “auto” # optional (default: auto)
│ ├▸ input_fidelity: “high” | “low” | null # optional
│ ├▸ input_image_mask: object # optional
│ │ ├▸ image_url: string # optional
│ │ └▸ file_id: string # optional
│ └▸ partial_images: integer # optional (default: 0) [0..3]
├▸ local_shell: object
│ └▸ type: “local_shell” # required
├▸ shell: object
│ └▸ type: “shell” # required
├▸ custom: object
│ ├▸ type: “custom” # required
│ ├▸ name: string # required
│ ├▸ description: string # optional
│ └▸ format: object # optional
│ ├▸ text: object
│ │ └▸ type: “text” # required
│ └▸ grammar: object
│ ├▸ type: “grammar” # required
│ ├▸ syntax: “lark” | “regex” # required
│ └▸ definition: string # required
├▸ web_search_preview: object
│ ├▸ type: “web_search_preview” | “web_search_preview_2025_03_11” # required
│ ├▸ user_location: object | null # optional
│ │ ├▸ type: “approximate” # required (default: approximate)
│ │ ├▸ country: string | null # optional
│ │ ├▸ region: string | null # optional
│ │ ├▸ city: string | null # optional
│ │ └▸ timezone: string | null # optional
│ └▸ search_context_size: “low” | “medium” | “high” # required (default: medium)
└▸ apply_patch: object
└▸ type: “apply_patch” # required

tool_choice: “none” | “auto” | “required” | object # optional
├▸ (mode string): “none” | “auto” | “required”
└▸ (object): object
├▸ allowed_tools: object
│ ├▸ type: “allowed_tools” # required
│ ├▸ mode: “auto” | “required” # required
│ └▸ tools: array # required
├▸ hosted_tool: object
│ └▸ type: “file_search” | “web_search_preview” | “web_search_preview_2025_03_11” | “computer_use_preview” | “image_generation” | “code_interpreter” # required
├▸ function: object
│ ├▸ type: “function” # required
│ └▸ name: string # required
├▸ mcp: object
│ ├▸ type: “mcp” # required
│ ├▸ server_label: string # required
│ └▸ name: string | null # required
├▸ custom: object
│ ├▸ type: “custom” # required
│ └▸ name: string # required
├▸ apply_patch: object
│ └▸ type: “apply_patch” # required
└▸ shell: object
└▸ type: “shell” # required

prompt: object | null # optional [prompt template reference]
├▸ id: string # required
├▸ version: string | null # optional
└▸ variables: object | null # optional [map of substitutions]
└▸ <var_name>: string | InputContent # optional
├▸ (string): string
├▸ input_text: object
│ ├▸ type: “input_text” # required
│ └▸ text: string # required
├▸ input_image: object
│ ├▸ type: “input_image” # required
│ ├▸ detail: “low” | “high” | “auto” # required (default: auto)
│ ├▸ image_url: string | null # optional
│ └▸ file_id: string | null # optional
└▸ input_file: object
├▸ type: “input_file” # required
├▸ file_id: string | null # optional
├▸ filename: string # optional
├▸ file_url: string # optional
└▸ file_data: string # optional

truncation: “auto” | “disabled” | null # optional (default: disabled)
include: string[] | null # optional
├▸ “file_search_call.results”
├▸ “web_search_call.results”
├▸ “web_search_call.action.sources”
├▸ “message.input_image.image_url”
├▸ “computer_call_output.output.image_url”
├▸ “code_interpreter_call.outputs”
├▸ “reasoning.encrypted_content”
└▸ “message.output_text.logprobs”

parallel_tool_calls: boolean | null # optional (default: true)
store: boolean | null # optional (default: true)

stream: boolean | null # optional (default: false)
stream_options: object | null # optional
└▸ include_obfuscation: boolean # optional
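To make the listing above concrete, here is a request body exercising a few of the documented parameters together: a message-style `input`, a function tool, a forced `tool_choice`, and sampling settings. Field names follow the parameter tree; the model ID and the `get_weather` tool are illustrative placeholders, not part of the schema:

```python
import json

# Field names follow the parameter listing above; the model ID and the
# get_weather tool definition are placeholders for illustration.
payload = {
    "model": "gpt-5",
    "input": [
        {
            "type": "message",
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is the weather in Paris?"}
            ],
        }
    ],
    "tools": [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
            "strict": True,
        }
    ],
    # Force the model to call the function tool rather than answer directly.
    "tool_choice": {"type": "function", "name": "get_weather"},
    "temperature": 0.2,
    "stream": False,
}
body = json.dumps(payload)
```

Sending `body` as the JSON body of `POST /v1/responses` would, per the tree above, yield a `function_call` output item whose `arguments` is a JSON string to parse and execute on your side.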


Unvalidated constraints, captured from field descriptions:

  • previous_response_id cannot be used together with conversation.
  • stream_options should only be set when stream=true.
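Since the schema itself doesn’t enforce these two constraints, a client could check them before sending; a minimal sketch of such a pre-flight check (the function and its error strings are mine, not from the spec):

```python
def validate(payload: dict) -> list[str]:
    """Client-side checks for the two constraints the schema leaves unvalidated."""
    errors = []
    # previous_response_id and conversation are mutually exclusive.
    if payload.get("previous_response_id") and payload.get("conversation"):
        errors.append("previous_response_id cannot be used with conversation")
    # stream_options only makes sense when streaming is enabled.
    if payload.get("stream_options") is not None and not payload.get("stream"):
        errors.append("stream_options should only be set when stream=true")
    return errors
```

Returning a list (rather than raising on the first problem) lets a caller report every violation at once.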

Known model IDs in schema (model is typed as “string”, but the schema also enumerates these IDs, and an SDK built from it blocks anything else):

  • gpt-5.2, gpt-5.2-2025-12-11, gpt-5.2-chat-latest, gpt-5.2-pro, gpt-5.2-pro-2025-12-11
  • gpt-5.1, gpt-5.1-2025-11-13, gpt-5.1-codex, gpt-5.1-mini, gpt-5.1-chat-latest
  • gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07, gpt-5-chat-latest
  • gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4.1-2025-04-14, gpt-4.1-mini-2025-04-14, gpt-4.1-nano-2025-04-14
  • o4-mini, o4-mini-2025-04-16
  • o3, o3-2025-04-16, o3-mini, o3-mini-2025-01-31
  • o1, o1-2024-12-17, o1-preview, o1-preview-2024-09-12, o1-mini, o1-mini-2024-09-12
  • gpt-4o, gpt-4o-2024-11-20, gpt-4o-2024-08-06, gpt-4o-2024-05-13
  • gpt-4o-audio-preview, gpt-4o-audio-preview-2024-10-01, gpt-4o-audio-preview-2024-12-17, gpt-4o-audio-preview-2025-06-03
  • gpt-4o-mini-audio-preview, gpt-4o-mini-audio-preview-2024-12-17
  • gpt-4o-search-preview, gpt-4o-mini-search-preview, gpt-4o-search-preview-2025-03-11, gpt-4o-mini-search-preview-2025-03-11
  • chatgpt-4o-latest, codex-mini-latest, gpt-4o-mini, gpt-4o-mini-2024-07-18
  • gpt-4-turbo, gpt-4-turbo-2024-04-09, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-1106-preview, gpt-4-vision-preview
  • gpt-4, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613
  • gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-1106, gpt-3.5-turbo-0125, gpt-3.5-turbo-16k-0613
  • (Responses-only enum in schema): o1-pro, o1-pro-2025-03-19, o3-pro, o3-pro-2025-06-10, o3-deep-research, o3-deep-research-2025-06-26, o4-mini-deep-research, o4-mini-deep-research-2025-06-26, computer-use-preview, computer-use-preview-2025-03-11, gpt-5-codex, gpt-5-pro, gpt-5-pro-2025-10-06, gpt-5.1-codex-max

How is Open Responses different from existing multi-provider libraries like LangChain?

Hello :waving_hand:

Open Responses differs from LangChain in how it handles model providers. LangChain offers a unified interface to interact with multiple models, but you still need to write the logic for routing and formatting yourself. Open Responses focuses on simplifying responses through a direct API interface, automatically managing coordination between models. This makes it easier to integrate into open-source applications without much complexity :white_heart:

1 Like

Better answer, not an AI-powered guess:

Open Responses is an API specification for directly consuming an endpoint resource of an AI model provider. It closely mirrors OpenAI’s platform implementation.

LangChain is a multi-turn agent-builder library that uses and consumes such APIs, with multi-provider shims for endpoint API interfaces such as Chat Completions, Responses, Google GenAI, and other proprietary providers.

1 Like