~1770 character limit when calling plugin commands?

Hi all!

I am playing around with plugin development and noticed that ChatGPT seems to stop generating the request parameters for a command call at around 1770 characters, which causes the call to fail. When generating standard responses to the user, ChatGPT can produce well over 1770 characters, so there may be an artificial limit on how much it can generate for a command.

This prevents use cases like having ChatGPT compose a longish email and save it to your Gmail drafts, having ChatGPT call your command with a large block of generated code, and so on.

Has anyone else run into this?


I encountered this as well. Longer requests were getting truncated and failing. ChatGPT was aware that the messages were being truncated, believed it was not due to its own token limits or to my API code, and concluded it had something to do with the plugin interface itself.

Interesting. Have you tried using a POST? Isn’t there an HTTP limit on the size of a GET?

The endpoints are already POST. The error isn’t an HTTP or server error; it happens on the ChatGPT client side. ChatGPT tries to generate a JSON request body, but once it hits the character limit it stops mid-generation, which results in a malformed JSON string. This limit is far below the LLM’s token limit, and ChatGPT can even continue responding normally afterwards; sometimes it auto-retries generating the request (only to fail again). I wonder if there is some artificial limit OpenAI places on request body generation (whether on processing time, characters, or some other resource use).
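One thing that made this easier to diagnose on my end was checking server-side whether the JSON body looks cut off mid-generation rather than merely malformed. A rough sketch; the helper name and the truncation heuristics are my own, not part of any plugin SDK:

```python
import json


def parse_plugin_body(raw_body: str):
    """Parse a plugin request body, distinguishing likely truncation
    from other JSON errors.

    Hypothetical helper: returns (payload, error_message), where
    exactly one of the two is None.
    """
    try:
        return json.loads(raw_body), None
    except json.JSONDecodeError as exc:
        likely_truncated = (
            exc.pos >= len(raw_body)             # parser ran off the end of input
            or "Unterminated string" in exc.msg  # a string literal never closed
        )
        if likely_truncated:
            return None, (
                f"Request body looks truncated after {len(raw_body)} characters; "
                "the model may have hit its generation limit."
            )
        return None, f"Malformed JSON at position {exc.pos}: {exc.msg}"
```

Returning that message in the error response at least tells you (and the model) which failure mode you hit.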

Would love some input from someone at OpenAI on this topic to help get some clarity. It would be great to get this limit increased a lot more to enable more plug-in use cases.
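Until the limit is raised, one workaround is to design the API so long content arrives in several smaller calls and gets stitched together server-side. A minimal sketch of that idea; the class and method names are hypothetical, not from any existing SDK:

```python
class DraftAssembler:
    """Collects pieces of a long document (e.g. an email draft) that
    arrive across separate plugin calls, then assembles them in order."""

    def __init__(self):
        # draft_id -> {chunk_index -> chunk_text}
        self._chunks: dict[str, dict[int, str]] = {}

    def add_chunk(self, draft_id: str, index: int, text: str) -> None:
        # Each individual call can stay well under the observed limit.
        self._chunks.setdefault(draft_id, {})[index] = text

    def assemble(self, draft_id: str) -> str:
        # Join chunks by index and clear the stored state.
        parts = self._chunks.pop(draft_id, {})
        return "".join(parts[i] for i in sorted(parts))
```

The OpenAPI description would then instruct the model to split long content across repeated `add_chunk`-style calls before a final call that assembles and saves the draft.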


Mine is also using POST. I’ve done some experiments, trying to pass a long ‘lorem ipsum’ message. With that string, the truncation errors happen between 2000 and 2025 characters. I then tried strings of “ABCD…” repeated as a single unbroken run, and those failed at 2000 characters, and even at 1000 characters. That makes me think it is a token limit on GPT’s side rather than a character limit in the middleware.
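For anyone else probing the boundary, a binary search narrows it down much faster than trying sizes by hand. A sketch, where `send_test_payload` is a hypothetical probe you’d supply yourself (e.g. ask ChatGPT to pass an n-character test string to your endpoint and record whether the call succeeds):

```python
def find_size_limit(send_test_payload, lo: int = 0, hi: int = 4096) -> int:
    """Binary-search the largest payload size (in characters) that still
    succeeds. `send_test_payload(n)` must return True iff a call with an
    n-character test string goes through (hypothetical probe function)."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if send_test_payload(mid):
            lo = mid       # mid characters still work; search higher
        else:
            hi = mid - 1   # mid fails; the limit is below mid
    return lo
```

With a limit around 2000 characters, this converges in roughly a dozen probe calls instead of dozens of manual trials.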

Noticing a similar thing. The error that ChatGPT shoots back is confusing. It would be nice for it to tell us explicitly “token limit reached” or something like that, so we can handle the error deterministically.

I also noticed that it always cuts off at exactly (or just below) 760 tokens. Seems to be some sort of upper limit.