I built an external MCP server and want to connect it to LLM clients like ChatGPT and Claude Desktop. The server works in Claude Desktop, but ChatGPT gives me an error: "All tools (including name, description, and input schema) must be less than 5000 tokens." 5000 tokens seems ridiculously low, and this error shows for both Free and Pro plans (personal, not enterprise, account). Are there any workarounds?
This limit is usually due to the TPM (tokens per minute) limit for "tier 1" users. If the TPM limit is lower than your typical request, you can't send even a single request as a tier 1 user. Fortunately, to become a tier 2 user and receive a much larger limit, which will allow you to send multiple requests to any and all models, you only need to: (1) spend at least $50 in API credits, and (2) have an API account that is at least seven days old, counted from the first payment. Basically, pay $50 and wait one week, then you can use your $50 of credit for as many requests as you like.
Thanks for the details! I received the opposite answer from the OpenAI AI support agent:
I’m an AI support agent.
The error about tools needing to be under 5000 tokens is a current, hard limit for the overall tool schema size (name, description, input schema, etc.) when connecting MCP servers to ChatGPT—even for Pro and Free plans. This limit exists to ensure performance and security, and there are no official workarounds to bypass it.
If your tool schema exceeds 5000 tokens, you’ll need to reduce the length by shortening field names, descriptions, or splitting complex tools into multiple simpler ones. Currently, higher limits are not available for personal accounts; only Business and Enterprise plans support more advanced MCP capabilities, but the same 5000 token limit generally applies for individual tools. There are no user-configurable settings to raise this cap.
Let me know if you want tips on reducing schema size or further details!
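Either way, before trimming anything it helps to measure how large the tool schema actually is. Below is a rough sketch of how one might estimate it, using a simple ~4-characters-per-token heuristic (for exact counts you would use a real tokenizer such as OpenAI's tiktoken). The example tool definition is hypothetical, just to illustrate the shape an MCP client sees:

```python
import json


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English/JSON text.
    # A real tokenizer (e.g. tiktoken) would give exact counts.
    return len(text) // 4


def schema_token_estimate(tools: list) -> int:
    # Serialize the full tool list (names, descriptions, input schemas)
    # compactly, the way a client would transmit it, then estimate size.
    return estimate_tokens(json.dumps(tools, separators=(",", ":")))


# Hypothetical example tool definition for illustration only.
tools = [
    {
        "name": "search_docs",
        "description": "Search the documentation index for a query string.",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]

print(schema_token_estimate(tools), "estimated tokens (limit: 5000)")
```

Running this over your real tool list shows which tools dominate the budget, which makes it easier to decide what to shorten or split first.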
So, before I go ahead and spend $50, I just want to confirm: is the AI support response incorrect?