The Model Context Protocol specification defines a mechanism for discovering the authorization server an MCP server uses: the WWW-Authenticate response header combined with OAuth 2.0 Protected Resource Metadata (RFC 9728).
See the sequence diagram in the MCP authorization spec. Unfortunately I can't add links to this entry.
However, ChatGPT connectors send GET /.well-known/oauth-authorization-server and /.well-known/openid-configuration requests directly to the MCP server.
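For context, the 401-based discovery path the spec describes hinges on the client parsing a resource_metadata parameter out of the WWW-Authenticate header. A minimal sketch of that parsing step (the server URL is hypothetical):

```python
import re

# Hypothetical 401 challenge, as described in the MCP auth spec / RFC 9728:
# the server points the client at its protected-resource metadata document.
www_authenticate = (
    'Bearer resource_metadata='
    '"https://mcp.example.com/.well-known/oauth-protected-resource"'
)

# Extract the resource_metadata URL the client should fetch next.
match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
metadata_url = match.group(1) if match else None
print(metadata_url)
```

The point of the thread below is that a client is not obliged to take this route at all.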
What follows comes from a thorough investigation of my own successfully implemented setup; I hope it helps. It was not easy to get here.
TL;DR - You are treating the optional discovery convenience as mandatory. The spec says a resource must publish its OAuth metadata and point to the authorization server. It does not require the initial 401 WWW-Authenticate round-trip; RFC 9728 explicitly allows clients to fetch the well-known metadata “by any means.” ChatGPT’s connector skips straight to those published URLs, which is perfectly compliant.
ChatGPT MCP OAuth flow — spec compliance notes
Summary
The ChatGPT connector does follow the MCP auth spec. It obtains the OAuth protected-resource metadata (RFC 9728) and the authorization-server metadata exactly as required—it simply fetches the published well-known documents directly instead of forcing an extra 401 round-trip. That route is explicitly permitted by the spec.
Relevant requirements
MCP auth extension: the resource must publish OAuth protected-resource metadata, and that metadata must identify the authorization server metadata URL.
RFC 9728 §2.1: clients may discover that metadata “by any means.” Supplying the URL via WWW-Authenticate is a convenience, not a requirement.
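Concretely, "by any means" lets a client construct the well-known URL itself from the server origin and fetch it directly. A minimal sketch under that assumption (hypothetical server URL; this ignores RFC 8414's rule for inserting the resource's path component into the well-known URI):

```python
from urllib.parse import urlsplit, urlunsplit

def well_known_url(server_url: str, suffix: str) -> str:
    """Build a well-known metadata URL from the server's origin.
    RFC 9728 permits clients to obtain the metadata "by any means",
    so fetching this URL directly, with no prior 401, is compliant."""
    parts = urlsplit(server_url)
    return urlunsplit((parts.scheme, parts.netloc,
                       "/.well-known/" + suffix, "", ""))

# A client that already knows (or guesses) the well-known URI can skip
# the 401 probe entirely, which is exactly what the connector does.
print(well_known_url("https://mcp.example.com/mcp",
                     "oauth-authorization-server"))
```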
What the connector actually does
Capturing the network traffic during a reconnect shows the canonical sequence:
HEAD https://<server>/.well-known/oauth-authorization-server
GET https://<server>/.well-known/oauth-authorization-server → authorization-server metadata
Standard OAuth 2.1 authorization-code + PKCE flow (/mcp/authorize)
POST https://<server>/mcp/token (code exchange, then refresh)
Subsequent MCP requests present the issued bearer token
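The PKCE portion of step 3 can be sketched as follows. The /mcp/authorize path matches the captured traffic above; the client_id and base URL are hypothetical placeholders:

```python
import base64
import hashlib
import secrets

# PKCE (RFC 7636): derive a one-time verifier and its S256 challenge.
code_verifier = base64.urlsafe_b64encode(
    secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()).rstrip(b"=").decode()

# Authorization request carrying the challenge (client_id is hypothetical).
authorize_url = (
    "https://mcp.example.com/mcp/authorize"
    "?response_type=code"
    "&client_id=CLIENT_ID"
    "&code_challenge=" + code_challenge +
    "&code_challenge_method=S256"
)
# After the redirect returns ?code=..., the client exchanges it at
# /mcp/token, sending the same code_verifier so the server can verify
# it hashes to the challenge presented earlier.
```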
This is the textbook flow from the spec’s “Authorization Code Grant” section. No steps are skipped—the connector simply bypasses the optional “probe WWW-Authenticate and parse resource_metadata” detour.
Why the objection doesn’t stick
The sequence diagram in the spec is illustrative. Nothing obliges a client to hit an endpoint without a token first if it already knows (or guesses) the well-known URI.
OpenAI’s connector retrieves the mandated metadata, performs the authorization-code exchange, and authenticates MCP calls exactly as described in the spec.
Our MCP server does return WWW-Authenticate: Bearer … resource_metadata="…" for clients that prefer that discovery path—but consuming that header is optional.
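For servers that want to offer that optional discovery path, emitting the hint is a one-liner. A minimal sketch (the metadata URL is hypothetical; a real response would also carry a 401 status):

```python
def unauthorized_headers(resource_metadata_url: str) -> dict:
    """Headers for a 401 response that advertises the protected-resource
    metadata URL via WWW-Authenticate, as RFC 9728 describes. Clients
    may consume this hint, but they are free to ignore it."""
    return {
        "WWW-Authenticate":
            'Bearer resource_metadata="%s"' % resource_metadata_url
    }

headers = unauthorized_headers(
    "https://mcp.example.com/.well-known/oauth-protected-resource")
print(headers["WWW-Authenticate"])
```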
Bottom line: the connector complies with the spec; it just takes the direct discovery route that RFC 9728 explicitly allows.
At the risk of asking a dumb question: why wouldn't (or shouldn't) a proxy bridge such a gap in most scenarios?
I ask because my setup as described is a real-life scenario, at least insofar as my app's few daily users (~10-25) have been logging in with our OAuth, dynamically discovering tools, and making authenticated tool calls for almost two weeks now.
Is it because most implementations use third-party OAuth providers rather than serving their own? Or is there a security concern I'm not aware of?
Hey everyone, our engineering team took a look, and they shared that by default our client doesn't request any scopes during the initial connection. We've recently made an update so that it now automatically discovers the required scopes from the WWW-Authenticate header and requests them during the initial handshake instead.
This helps ensure the correct permissions are picked up automatically without manual intervention. For more details on how this works, you can refer to Section 4.2 (Protected Resource Metadata Discovery) in the specification here:
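The scope-discovery behavior described above can be sketched as a small parse of the challenge header. The header value here is hypothetical, and real challenges may order or quote parameters differently:

```python
import re

# Hypothetical WWW-Authenticate challenge advertising both the metadata
# URL and the scopes the resource requires.
challenge = (
    'Bearer resource_metadata='
    '"https://mcp.example.com/.well-known/oauth-protected-resource", '
    'scope="mcp.read mcp.tools"'
)

# Pull out the space-delimited scope list the client should request
# during the initial handshake.
m = re.search(r'scope="([^"]+)"', challenge)
scopes = m.group(1).split() if m else []
print(scopes)
```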