New MCP Connector does not follow MCP Authorization Spec

The Model Context Protocol specification defines a mechanism for discovering the authorization server an MCP server uses, via the WWW-Authenticate response header and OAuth 2.0 Protected Resource Metadata (RFC 9728).

See the sequence diagram in the MCP authorization spec. Unfortunately I cannot add links to this entry.

However, ChatGPT connectors send the GET /.well-known/oauth-authorization-server and /.well-known/openid-configuration requests directly to the MCP server.

Could you change the ChatGPT MCP connector's OAuth implementation to follow the MCP authorization specification?


I’m experiencing the exact same issue. Have you discovered any workarounds or found any updates on this?

What follows comes from a thorough investigation of my own successfully implemented setup; I hope it helps. It was not easy for me to get here.

TL;DR - You are treating the optional discovery convenience as mandatory. The spec says a resource must publish its OAuth metadata and point to the authorization server. It does not require the initial 401 WWW-Authenticate round-trip; RFC 9728 explicitly allows clients to fetch the well-known metadata “by any means.” ChatGPT’s connector skips straight to those published URLs, which is perfectly compliant.

ChatGPT MCP OAuth flow — spec compliance notes

Summary

The ChatGPT connector does follow the MCP auth spec. It obtains the OAuth protected-resource metadata (RFC 9728) and the authorization-server metadata exactly as required—it simply fetches the published well-known documents directly instead of forcing an extra 401 round-trip. That route is explicitly permitted by the spec.

Relevant requirements

  • MCP auth extension: the resource must publish OAuth protected-resource metadata, and that metadata must identify the authorization server metadata URL.
  • RFC 9728 §2.1: clients may discover that metadata “by any means.” Supplying the URL via WWW-Authenticate is a convenience, not a requirement.
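To illustrate the two requirements above, here is a hedged sketch of what a protected-resource metadata document might look like, and how a client could derive the authorization-server metadata URL from it. The hostnames and scope names are invented examples; real documents will differ.

```python
# Hypothetical RFC 9728 protected-resource metadata, as an MCP server
# might serve it at /.well-known/oauth-protected-resource
protected_resource_metadata = {
    "resource": "https://mcp.example.com/mcp",
    "authorization_servers": ["https://auth.example.com"],
    "scopes_supported": ["openid", "profile", "mcp:tools"],
}

def authorization_server_metadata_url(prm: dict) -> str:
    # RFC 8414 convention: authorization-server metadata lives at
    # <issuer>/.well-known/oauth-authorization-server
    issuer = prm["authorization_servers"][0]
    return issuer.rstrip("/") + "/.well-known/oauth-authorization-server"
```

A client that already knows (or guesses) these well-known URIs can fetch them directly, which is the "by any means" discovery path RFC 9728 permits.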

What the connector actually does

Capturing the network traffic during a reconnect shows the canonical sequence:

  1. HEAD https://<server>/.well-known/oauth-authorization-server
  2. GET https://<server>/.well-known/oauth-authorization-server → authorization-server metadata
  3. Standard OAuth 2.1 authorization-code + PKCE flow (/mcp/authorize)
  4. POST https://<server>/mcp/token (code exchange, then refresh)
  5. Subsequent MCP requests present the issued bearer token

This is the textbook flow from the spec’s “Authorization Code Grant” section. No steps are skipped—the connector simply bypasses the optional “probe WWW-Authenticate and parse resource_metadata” detour.
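Steps 2 and 3 of the captured sequence can be sketched as follows: generate an RFC 7636 PKCE pair and build the authorization URL from the (already fetched) authorization-server metadata. The endpoint, client ID, and redirect URI below are placeholder assumptions, not values ChatGPT actually uses.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def pkce_pair() -> tuple[str, str]:
    # RFC 7636 S256: verifier is random, challenge is its base64url SHA-256,
    # both without '=' padding
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_authorize_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, challenge: str) -> str:
    # Standard OAuth 2.1 authorization-code request parameters
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return authorize_endpoint + "?" + urlencode(params)
```

The verifier is held back until the token exchange at /mcp/token, where it proves the same client that started the flow is finishing it.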

Why the objection doesn’t stick

  • The sequence diagram in the spec is illustrative. Nothing obliges a client to hit an endpoint without a token first if it already knows (or guesses) the well-known URI.
  • OpenAI’s connector retrieves the mandated metadata, performs the authorized code exchange, and authenticates MCP calls exactly as described in the spec.
  • Our MCP server does return WWW-Authenticate: Bearer … resource_metadata="…" for clients that prefer that discovery path—but consuming that header is optional.
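For servers that want to support the optional header-based discovery path mentioned above, the 401 challenge is a one-liner. This is a minimal sketch; the metadata URL is a hypothetical example.

```python
def unauthorized_response_headers(metadata_url: str) -> dict:
    # 401 challenge pointing header-driven clients at the
    # protected-resource metadata (RFC 9728 §5)
    return {"WWW-Authenticate": f'Bearer resource_metadata="{metadata_url}"'}
```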

Bottom line: the connector complies with the spec; it just takes the direct discovery route that RFC 9728 explicitly allows.


My workaround was to proxy those .well-known requests to the auth server from my MCP Server.

However, I then hit an issue (also posted here) where ChatGPT does not request the openid and profile scopes.

I cannot post links

Most real-life scenarios will separate the auth server from the MCP server.

@sergio.delamo I have adjusted your account so that you can post links to the forum. Looking forward to what you have to share.

I’ll check in with OpenAI to learn if separating the auth server from the MCP server is on the roadmap.


My current blocker is that the MCP OAuth authentication does not send the `openid` and `profile` scopes.


As a quick follow-up: the team is aware of this and will look into it, but I don’t have a timeline to share.


@vb I think this is also preventing the official GitHub MCP server from connecting (api.githubcopilot.com/mcp).

Hi @sergio.delamo

Thanks for this post. Did you manage to fix it? I have spent a few nights on this with no resolution. I went with the proxy model as well, still no luck: the MCP connector still reports "Failed to resolve OAuth client" after a full proxy setup.

The MCP server is receiving the token and authorize requests at its root URL, and they are returning a 404 error.

At the risk of asking a dumb question, may I ask why a proxy wouldn't or shouldn't bridge such a gap in most scenarios?

I ask because my setup as described is a "real-life scenario," at least insofar as my app's few daily users (~10-25) have been logging in with our OAuth, dynamically discovering tools, and making authenticated tool calls for almost two weeks now.

Is it because most implementations use third-party OAuth providers rather than serving their own? Or is it a security concern I'm not aware of?

> Is it because most implementations use third-party OAuth providers rather than serving their own?

Yes, I don't want to have to write an OAuth server just to implement an MCP server.

Hey everyone, We are looking into this issue with our team. We will get back with an update soon. Thank you!

Hey everyone, our engineering team took a look and shared that, by default, our client doesn't request any scopes during the initial connection. We've recently made an update so that it now automatically discovers the required scopes from the WWW-Authenticate header and requests them during the initial handshake instead.


This helps ensure the correct permissions are picked up automatically without manual intervention. For more details on how this works, you can refer to Section 4.2 (Protected Resource Metadata Discovery) in the specification here:

https://modelcontextprotocol.io/specification/draft/basic/authorization#protected-resource-metadata-discovery-requirements
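Under that discovery flow, the client would follow the `resource_metadata` URL from the WWW-Authenticate header, fetch the protected-resource metadata, and then request the advertised scopes. A hedged sketch of the last step, assuming a metadata document with a `scopes_supported` array (the scope names below are just examples):

```python
def scopes_to_request(prm: dict) -> str:
    # Build the space-delimited OAuth "scope" parameter from the
    # scopes_supported array in the protected-resource metadata
    return " ".join(prm.get("scopes_supported", []))
```

A server that advertises `["openid", "profile"]` in `scopes_supported` should therefore see those scopes in the authorization request after this change.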


Please let us know if you continue to see any issues after this change—we’re happy to help.
