I am playing around with plugin development and noticed that ChatGPT seems to stop generating the request parameters for a command at around 1770 characters, which leads to an error when the command is called. When generating standard responses to the user, ChatGPT can produce well over 1770 characters, so there seems to be an artificial limit on how much it can generate for a command.
This blocks use cases like having ChatGPT compose a longish email and save it to your Gmail drafts, or having it call your command with a large block of generated code.
I encountered this as well. Longer requests were getting truncated and failing. ChatGPT was aware the messages were being truncated, believed the cause was neither its own token limits nor my API code, and concluded it had something to do with the plugin interface itself.
The endpoints are currently POST. The error isn't an HTTP or server error; it happens on the ChatGPT client side. ChatGPT tries to create a JSON request body, but once it hits the character limit it stops generating, which results in a malformed JSON string. This limit is far below the LLM token limit, and ChatGPT can even continue responding normally; sometimes it automatically retries generating the request (only to fail again). I wonder if there is some artificial limit OpenAI places on request body generation (processing time, characters, or some other resource use).
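For anyone who wants to reproduce the symptom, here is a minimal sketch (Python, standard library only) of what a mid-string cutoff does to the request body. The 1770-character cutoff is just the figure from the first post, not an exact limit:

```python
import json

# A well-formed request body the model should have produced.
full_body = json.dumps({"subject": "Quarterly update", "body": "x" * 3000})

# Simulate the client-side cutoff: generation stops mid-string,
# so the closing quote and brace are never emitted.
truncated_body = full_body[:1770]

try:
    json.loads(truncated_body)
except json.JSONDecodeError as e:
    # The body is already malformed before it could ever be sent,
    # so the plugin endpoint never sees a valid request.
    print(f"Malformed JSON after truncation: {e}")
```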
Would love some input from someone at OpenAI to help clarify this. It would be great to get this limit raised substantially to enable more plugin use cases.
Mine is also using POST. I've done some experiments with this by passing a long "lorem ipsum" message. With that string, the truncation errors happen between 2000 and 2025 characters. I then tried strings of "ABCD…" repeated as a single unbroken stretch of characters, and it fails at 2000 characters; it also fails at 1000 characters. That makes me think it is a token limit on GPT's side rather than a character limit in the middleware.
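To sanity-check the token-limit hypothesis, a quick harness like this shows that strings of equal character length can have very different token counts (this assumes the tiktoken package and its cl100k_base encoding; whether that is the exact encoding in play here is an assumption):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def payload(pattern: str, n_chars: int) -> str:
    """Repeat `pattern` until the string is exactly n_chars long."""
    return (pattern * n_chars)[:n_chars]

for pattern in ("lorem ipsum dolor sit amet ", "ABCD"):
    for n in (1000, 2000, 2025):
        s = payload(pattern, n)
        print(f"pattern={pattern[:11]!r} chars={n:>5} tokens={len(enc.encode(s))}")
```

If the failure point tracks tokens rather than characters, a pattern that packs more tokens per character should fail at a shorter character length, which would be consistent with what you're seeing.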
Noticing a similar thing. The error that ChatGPT shoots back is confusing. It would be nice for it to explicitly tell us "token limit reached" or something like that so we can handle the error deterministically.
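Until the error message improves, one partial mitigation is to parse the raw body yourself and return an explicit, deterministic error for truncated JSON. A sketch with FastAPI, using a hypothetical /create_draft endpoint (with the caveat that, per the earlier post, ChatGPT may abort client-side before the malformed body ever reaches your server):

```python
import json

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.post("/create_draft")  # hypothetical plugin endpoint
async def create_draft(request: Request):
    raw = await request.body()
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        # Surface the likely cause explicitly so it can be logged
        # and handled deterministically.
        return JSONResponse(
            status_code=400,
            content={"error": "Request body is not valid JSON; it may "
                              "have been truncated by a generation limit."},
        )
    return {"status": "ok", "chars_received": len(raw)}
```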