Open Responses for the Open Source Community

Open Responses is an open-source specification and ecosystem inspired by the OpenAI Responses API. It is designed to make it easier to build multi-provider, interoperable LLM interfaces. The project defines a shared schema, client libraries, and tooling that let you call language models, stream outputs, and build agentic workflows without being locked into a single provider.

Motivation and overview

Most modern LLM platforms rely on similar building blocks: messages, tool and function calls, streaming, and multimodal inputs. The problem is that each provider represents these concepts slightly differently, so every multi-provider application ends up maintaining its own translation layer. Open Responses standardizes these primitives so you can focus on building instead of translating APIs.

With Open Responses, you get:

  • One spec, many providers: Define inputs and outputs once and run them across OpenAI, Anthropic, Gemini, or local models.

  • Composable agentic workflows: A unified approach to streaming, tool invocation, and message orchestration.

  • Simpler evaluation and routing: Compare providers, route requests, and log results using a shared schema.

  • A blueprint for providers: Labs and model vendors can expose their APIs in a common, well-defined format with minimal effort.
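As a rough sketch of the "one spec, many providers" idea, the snippet below builds a single request payload and reuses it across model names. The field names here (`model`, `input`, `tools`) mirror the shape of the OpenAI Responses API that inspired the project; the actual Open Responses schema may differ, so treat this as illustrative, not normative.

```python
# Hypothetical sketch of a provider-agnostic, Responses-style request.
# Field names mirror the OpenAI Responses API shape; the real Open
# Responses schema may differ.

def build_request(model: str, prompt: str) -> dict:
    """Build one request payload that any conforming provider could accept."""
    return {
        "model": model,  # e.g. an OpenAI, Anthropic, Gemini, or local model id
        "input": [
            {"role": "user", "content": [{"type": "input_text", "text": prompt}]}
        ],
        "tools": [],     # shared tool/function definitions would go here
        "stream": False,
    }

# The same payload works for different providers; only the model id changes.
hosted_req = build_request("gpt-4.1", "Summarize this document.")
local_req = build_request("llama-3.1-8b", "Summarize this document.")

# Inputs and tools are identical across providers under a shared schema.
assert hosted_req["input"] == local_req["input"]
assert hosted_req["tools"] == local_req["tools"]
```

Because every request shares one shape, routing and evaluation reduce to swapping the `model` field and comparing logged outputs against the same schema.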

Discover more and read the spec here:

Find the code on GitHub:


That will help some people for sure. I’ll dive into it later.
