What protections does OpenAI provide for MCP Server endpoints called by ChatGPT Apps?

I’m building an MCP Server with the ChatGPT Apps SDK that runs behind AWS infrastructure (e.g. API Gateway + Lambda).

All traffic to the server comes from ChatGPT Apps via MCP tool calls; there is no direct public access and no browser-side client where I could run a sensor or a JS challenge.
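For context, here is a simplified sketch of how the endpoint is wired today. The handler and routing names are placeholders rather than the real implementation; API Gateway proxies every request straight to the Lambda, and the only expected caller is the ChatGPT platform:

```typescript
// Simplified sketch of the current setup (names are placeholders).
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  // All MCP traffic arrives as JSON-RPC over HTTP POST; there is no
  // browser client in front of this, so nowhere to run a JS challenge.
  if (event.requestContext.http.method !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" };
  }

  const rpc = JSON.parse(event.body ?? "{}");

  // handleMcpRequest stands in for the Apps SDK / MCP server logic that
  // dispatches tools/list, tools/call, etc.
  const result = await handleMcpRequest(rpc);

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify(result),
  };
};

// Placeholder so the sketch is self-contained.
async function handleMcpRequest(rpc: unknown): Promise<unknown> {
  return { jsonrpc: "2.0", id: (rpc as { id?: number }).id ?? null, result: {} };
}
```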

Before I deploy this in production, I want to understand the security model for tool-call requests originating from the ChatGPT platform.

Specifically:

  1. Does OpenAI provide any built-in protections (rate limiting, abuse detection, IP restrictions, etc.) for the outbound MCP tool-call traffic sent to my server?
  2. If best practice is to secure the MCP server independently, is there an officially recommended pattern for authenticating incoming MCP tool-call requests from OpenAI? And how should unauthenticated users be handled, given the "no authentication" option visible in the apps & connectors configuration? (See the sketch after this list for what I'm currently considering.)
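To make question 2 concrete, the only approach I have come up with so far is a shared-secret check at the edge, roughly like the sketch below. `MCP_SHARED_SECRET` is a hypothetical environment variable I would configure myself, not anything OpenAI documents; this is what I would swap out for whatever pattern is officially recommended:

```typescript
// Hedged sketch: shared bearer token checked before any MCP dispatch.
// MCP_SHARED_SECRET is a hypothetical env var set on my side only.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

const EXPECTED_TOKEN = process.env.MCP_SHARED_SECRET ?? "";

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  const auth = event.headers?.authorization ?? "";

  // Reject anything that doesn't carry the expected bearer token.
  if (!EXPECTED_TOKEN || auth !== `Bearer ${EXPECTED_TOKEN}`) {
    return { statusCode: 401, body: "Unauthorized" };
  }

  // ...forward to the MCP server logic as in the earlier sketch...
  return { statusCode: 200, body: "{}" };
};
```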

If there is an official document or reference architecture for securing MCP servers used by ChatGPT Apps, I would appreciate a link.

Thanks,
Jake
