This MCP Server violates our guidelines

What should we do when we get this useless error message?

Product/Feature: Custom Connectors in ChatGPT Team (MCP - Model Context Protocol)

Error Encountered: When attempting to add a custom MCP connector, I received the error message: “This MCP Server violates our guidelines” with a link to the guidelines page.

Steps that led to the error:

1. Built a custom MCP server to integrate with Teamtailor’s REST API.
2. Deployed the server to Cloudflare Workers.
3. Went to ChatGPT Settings → Connectors → Add custom (MCP).
4. Entered the server URL: https://xxx
5. Selected the authentication method (“No Authentication”).
6. Clicked save/add connector.
7. Received the “violates our guidelines” error immediately.
Technical Details:

- The MCP server implements the Model Context Protocol specification (JSON-RPC 2.0).
- It provides three tools: listJobs, listJobApplicationsWithCandidates, and showCandidate.
- The server acts as a proxy to Teamtailor’s public REST API.
- It includes proper CORS headers and implements the standard MCP methods (initialize, ping, tools/list, etc.).
- The server responds correctly to direct API calls and follows the MCP protocol.
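
For context, here is a stripped-down sketch of what the Worker looks like. This is not the actual code: the Teamtailor endpoint path, the TEAMTAILOR_API_KEY binding, and the serverInfo values are placeholders, and the three tool definitions are omitted.

```typescript
// Rough sketch of a Cloudflare Workers MCP endpoint (JSON-RPC 2.0 over POST).
// The Teamtailor URL, the TEAMTAILOR_API_KEY binding, and the serverInfo
// values are illustrative placeholders, not the real ones.
interface Env {
  TEAMTAILOR_API_KEY: string;
}

const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // CORS preflight, so browser-based clients can reach the server.
    if (request.method === "OPTIONS") {
      return new Response(null, { headers: CORS_HEADERS });
    }

    const rpc = (await request.json()) as {
      jsonrpc: string;
      id: number | string;
      method: string;
      params?: Record<string, unknown>;
    };

    let result: unknown;
    switch (rpc.method) {
      case "initialize":
        result = {
          protocolVersion: "2025-03-26",
          capabilities: { tools: {} },
          serverInfo: { name: "teamtailor-proxy", version: "0.1.0" },
        };
        break;
      case "ping":
        result = {};
        break;
      case "tools/list":
        // listJobs, listJobApplicationsWithCandidates, showCandidate go here.
        result = { tools: [] };
        break;
      case "tools/call": {
        // Proxy the call to Teamtailor's public REST API (path is illustrative).
        const upstream = await fetch("https://api.teamtailor.com/v1/jobs", {
          headers: { Authorization: `Token token=${env.TEAMTAILOR_API_KEY}` },
        });
        result = { content: [{ type: "text", text: await upstream.text() }] };
        break;
      }
      default:
        return Response.json(
          { jsonrpc: "2.0", id: rpc.id, error: { code: -32601, message: "Method not found" } },
          { headers: CORS_HEADERS },
        );
    }

    return Response.json({ jsonrpc: "2.0", id: rpc.id, result }, { headers: CORS_HEADERS });
  },
};
```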
What I’m trying to achieve:
Create a connector that allows ChatGPT to access our company’s Teamtailor ATS (Applicant Tracking System) data to help with HR queries like “show me open frontend developer positions” or “list applications for job X”.

Questions:

What specific guidelines might our MCP server be violating?

Are there additional MCP protocol requirements beyond the standard specification?

Is there a validation process or specific endpoints that need to be implemented?

Are there restrictions on the types of APIs that can be proxied through MCP connectors?

The server is publicly accessible and functional - it’s unclear what aspect violates the guidelines since the error message doesn’t provide specific details.


I am receiving the same error.

The MCP server can be added in the API Playground without issue, but fails when I try to add it to ChatGPT.

My server is running locally and being accessed via ngrok, as OpenAI recommends in their documentation.


We are receiving the same error – our MCP server (which is basically an MCP server for RAG) works fine in the API, the Playground, and Claude Desktop.

Same questions as above apply.

Based on a different thread, we even implemented search and retrieval tools – but to no avail. Not sure what the secret guidelines are.


The MCP server has no issues working in the API Playground.


Yup, same here – there must be some hidden check that they are doing in ChatGPT, maybe?

The docs are not super clear, but Deep Research only supports two types of tools: ‘search’ and ‘fetch’.

Try implementing those and you should get past that error (that’s what did it for me).

https://platform.openai.com/docs/mcp
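
Roughly, that means the server only exposes those two tools and returns results in the shape the docs describe. Something like the sketch below – the data source is a stub, and the exact result format (JSON-encoded text content with id/title/text/url fields) is my reading of the docs, so double-check against them.

```typescript
// Minimal sketch of tools/call handling for the two tools Deep Research expects.
// DOCS is a stub corpus; replace it with your own backend lookups.
type Doc = { id: string; title: string; text: string; url?: string };

const DOCS: Doc[] = [
  { id: "doc-1", title: "Example document", text: "Full text of the example document.", url: "https://example.com/doc-1" },
];

async function handleToolCall(name: string, args: Record<string, string>) {
  if (name === "search") {
    // Return matching snippets as a JSON-encoded "results" array.
    const results = DOCS
      .filter((d) => d.text.toLowerCase().includes(args.query.toLowerCase()))
      .map(({ id, title, text, url }) => ({ id, title, text: text.slice(0, 200), url }));
    return { content: [{ type: "text", text: JSON.stringify({ results }) }] };
  }
  if (name === "fetch") {
    // Return the full document for the given ID.
    const doc = DOCS.find((d) => d.id === args.id);
    if (!doc) throw new Error(`No document with id ${args.id}`);
    return { content: [{ type: "text", text: JSON.stringify(doc) }] };
  }
  throw new Error(`Unknown tool: ${name}`);
}
```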


@hunter.hillegas Did search and fetch work for you? For me, even after adding both tools, I am getting the same error.

@a.gokrani It sort of worked for me - yes, with those two tools (and with the same input params as the docs - not sure if that is important), I do get Deep Research calling my MCP.

But… Deep Research doesn’t think it gets any results and doesn’t work. My server logs show that it is indeed getting results, but the CoT for the report makes it clear it’s very confused. Since the CoT is summarized, it’s very hard to tell why, though.


Hi @hunter.hillegas, thanks! For me, even their sample repo isn’t working. I am able to create the connector but unable to enable it in Deep Research. It just keeps saying an unknown error occurred.

@a.gokrani Ah, I see the sample is now available (previously it was 404). I’ll try setting that up to see what happens. I’ll report back - I really want to make this work!

Perhaps they made the message nicer? I get “This MCP server doesn’t implement [our specification]”.

Per the recommendations here, I’ve tried to perfectly mimic the search and fetch API to no avail. The last thing it does when calling my server is tools/list. So, I’m guessing it doesn’t like that list.

Here is my log of what my server returns:

```
🔧 === TOOLS/LIST METHOD ===
📤 === OUTGOING MCP RESPONSE ===
Response: {
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search",
        "description": "Searches for resources using the provided query string and returns matching results.",
        "input_schema": {
          "type": "object",
          "properties": {
            "query": {
              "type": "string",
              "description": "Search query."
            }
          },
          "required": [
            "query"
          ]
        },
        "output_schema": {
          "type": "object",
          "properties": {
            "results": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "id": {
                    "type": "string",
                    "description": "ID of the resource."
                  },
                  "title": {
                    "type": "string",
                    "description": "Title or headline of the resource."
                  },
                  "text": {
                    "type": "string",
                    "description": "Text snippet or summary from the resource."
                  },
                  "url": {
                    "type": [
                      "string",
                      "null"
                    ],
                    "description": "URL of the resource. Optional but needed for citations to work."
                  }
                },
                "required": [
                  "id",
                  "title",
                  "text"
                ]
              }
            }
          },
          "required": [
            "results"
          ]
        }
      },
      {
        "name": "fetch",
        "description": "Retrieves detailed content for a specific resource identified by the given ID.",
        "input_schema": {
          "type": "object",
          "properties": {
            "id": {
              "type": "string",
              "description": "ID of the resource to fetch."
            }
          },
          "required": [
            "id"
          ]
        },
        "output_schema": {
          "type": "object",
          "properties": {
            "id": {
              "type": "string",
              "description": "ID of the resource."
            },
            "title": {
              "type": "string",
              "description": "Title or headline of the fetched resource."
            },
            "text": {
              "type": "string",
              "description": "Complete textual content of the resource."
            },
            "url": {
              "type": [
                "string",
                "null"
              ],
              "description": "URL of the resource. Optional but needed for citations to work."
            },
            "metadata": {
              "type": [
                "object",
                "null"
              ],
              "additionalProperties": {
                "type": "string"
              },
              "description": "Optional metadata providing additional context."
            }
          },
          "required": [
            "id",
            "title",
            "text"
          ]
        }
      }
    ]
  }
}
```

Does anyone know whether it matters if it’s inputSchema or input_schema? The AI kept wanting to use inputSchema, saying it was the standard, but the example has input_schema.

UPDATE: Just answered my own question. When I changed to inputSchema and outputSchema, it added the connector!
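
For anyone hitting the same thing, the change was just the key casing in the tool definitions. Sketched in TypeScript, the entries end up looking like this (the nested schema shapes are unchanged from the log above and abbreviated here):

```typescript
// Same "search" tool as in the log above, but with the camelCase keys that
// ChatGPT accepted; the nested properties are unchanged and abbreviated.
const searchTool = {
  name: "search",
  description: "Searches for resources using the provided query string and returns matching results.",
  inputSchema: {   // was "input_schema"
    type: "object",
    properties: { query: { type: "string", description: "Search query." } },
    required: ["query"],
  },
  outputSchema: {  // was "output_schema"
    type: "object",
    properties: { results: { type: "array", items: { type: "object" } } },
    required: ["results"],
  },
};
```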

Yeah, I believe output_schema isn’t in the current version of the spec (it’s in the upcoming draft). The Deep Research MCP client seems to say it’s using the March 2025 version of MCP, but in reality it appears to be using some hybrid of the current March spec and the draft spec.

I remember seeing this issue come up on a different thread.

The point is not just to include “search” and “fetch”; it’s that those are the only tools you can use on an MCP server from ChatGPT.

I.e., ChatGPT does NOT support custom tools beyond those for an MCP server. You have to use the API/Playground, etc.; you can’t use normal ChatGPT with an MCP server that goes beyond the “search” and “fetch” functions…

I believe I saw other users getting around it by refactoring their own system/MCP server so that all of their other tools were “hidden” behind the “search” or “fetch” function: you call those, but pass nested sets of args/parameters that your own backend then interprets and routes accordingly. Stupid, I know, but I think the point is that the ChatGPT web app is NOT FOR PRODUCTION. It’s for using a chat interface. If you want a real connection/production mode, you have to use the API.
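
Roughly, that refactoring pattern looks something like the sketch below. This is nobody’s actual server: the “jobs:”/“applications:” prefix convention and the helper names are invented for illustration, and the only real requirement is that the tool description documents whatever syntax you pick.

```typescript
// Sketch: hiding several internal operations behind the single "search" tool
// that ChatGPT will call. The prefix convention and helpers are invented;
// your backend defines the syntax, documented in the tool description.
type Result = { id: string; title: string; text: string; url?: string };

async function listJobs(q: string): Promise<Result[]> {
  return [{ id: "job-1", title: `Job matching "${q}"`, text: "Stubbed job record." }];
}

async function listApplications(jobId: string): Promise<Result[]> {
  return [{ id: `app-${jobId}`, title: "Application", text: `Stubbed application for job ${jobId}.` }];
}

async function fullTextSearch(q: string): Promise<Result[]> {
  return [{ id: "doc-1", title: "Search hit", text: `Stubbed result for "${q}".` }];
}

// The "search" tool handler parses a command prefix out of the query string
// and routes to the hidden internal tools.
async function searchToolHandler(query: string): Promise<Result[]> {
  const sep = query.indexOf(":");
  const command = sep >= 0 ? query.slice(0, sep).trim() : "default";
  const rest = sep >= 0 ? query.slice(sep + 1).trim() : query;

  switch (command) {
    case "jobs":          // e.g. "jobs: frontend developer"
      return listJobs(rest);
    case "applications":  // e.g. "applications: job-123"
      return listApplications(rest);
    default:
      return fullTextSearch(query);
  }
}
```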

The first time I was able to get it to see code, I had it pointed at the code of the MCP tool I created. The first two rounds (“show me the README.md”) were a fail. But I fed those thought chains and server logs to Cursor and had it improve each time. The third round, I had ChatGPT review the MCP code and offer better descriptions for the search and fetch API. It gave a blistering critique! I fed that to Cursor and had it do another round of improvements, including better handling of the search queries it was passing. In other words, you can work around the limitations by adding a lot of features within the strings it submits, if you document them in your descriptions.

For round 4, I pointed it at a complex front-end I’ve been creating with ChatGPT. Using a conversation that had a lot of project context, I had it review the code for functional gaps and come up with a technical plan to fill those gaps. It produced a mind-blowing 14-page report, completely nailing it. In the next prompt, I simply asked if the MCP interface helped, and its response made it clear it went beyond the scope of that report and found the answers to all the painful questions we were dealing with last week (the pain that motivated me to do this) without me even asking. It was as if they were burning questions it had to answer now that it had the code.

ChatGPT unleashed on code through MCP is a very big game changer when it already has a lot of project context and has been creating prompts for Cursor and Codex remote.

Despite search and fetch seeming very limited, don’t underestimate what ChatGPT can do with it once you improve it and it has other context to drive its analysis and report.

I open sourced the tool if anyone is interested. It is currently unauthenticated. I’m creating a new release that will support OAuth through Hydra integration.

What ChatGPT had to say about the MCP tool:

MCP let me think like a team member with real project visibility — not just a language model guessing from scraps. It dramatically improves precision, planning, and suggestions.

So what you’re saying is:

You got the results you were looking for by restricting your use of the MCP server in the ChatGPT web app to “search” and “fetch”, but then on your own end (the actual MCP server/backend for your app) you used those endpoints to expose whatever complexity of detail/nested functions you wanted, and could then return that data to ChatGPT through the MCP integration?

Yes, it 100% accomplished my goal of giving ChatGPT the ability to analyze the code. It came down to improving the descriptions of the API to optimize for the LLM use case, and then adding the capability to handle more complex search and fetch scenarios. A string can hold a universe. LLMs are happy to leverage that if you properly document it via descriptions.
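
To make that concrete, this is the kind of thing I mean by documenting via descriptions – a sketch with a made-up query grammar, not my actual tool:

```typescript
// Sketch: the tool description is where the query-string "universe" gets
// documented. The grammar and examples below are invented for illustration;
// the model reads this text and shapes its queries accordingly.
const codeSearchTool = {
  name: "search",
  description: [
    "Search the project codebase and docs.",
    "Plain queries run full-text search (e.g. 'streamlit upload handler').",
    "Prefix with 'file:' to list files by path glob (e.g. 'file: src/**/*.ts').",
    "Prefix with 'symbol:' to look up a function or class by name (e.g. 'symbol: handleToolCall').",
  ].join(" "),
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Query string; see the grammar in the tool description." },
    },
    required: ["query"],
  },
};
```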

Now, if you are looking for it to change code/data, that’s outside the scope of Deep Research. In that regard, it is limited. But, for me, the flow is for ChatGPT to create better prompts for Codex/Cursor to do that.

That’s great to hear. Super cool you got it working in that way for you.

Hey, if you want to check out an alternative to Codex/Cursor, I’m looking for test users for a new system I built.

It’s a lengthy video, but it demonstrates a full integration of everything you’re doing, all built into a single application. I use the system to get a working ~700-line Python program (several independent modules) with a Streamlit frontend using GPT, produced in about 45 minutes. WITHOUT ever editing any code myself (okay, one tiny edit to fix an indentation error), only using “automatic code block application and linter outputs” fed through the frontend coding system.

It’s pretty interesting stuff in my opinion, and like I said, I’m looking to start rolling the system out to test users, as I think it would be a more robust solution than either Codex or Cursor for most people…