Plugin kwargs maximum length

I’m writing a plugin that retrieves a page or two of text from an API, makes some revisions, and submits the result to another endpoint via a POST request. I’ve noticed that whenever the revised version exceeds a certain length, ChatGPT halts in the middle of forming the kwargs JSON, and after some time I see the following error:

ApiSyntaxError: Could not parse API call kwargs as JSON: exception=Unterminated string starting at: line 5 column 13 (char 291) url=http://localhost:5000/projects/{project_id}/drafts

I noticed that the generated kwargs are always exactly 760 tokens long before being terminated. Is that a documented or undocumented limitation, or a bug?
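For reference, here is a rough sketch of the kind of endpoint being hit. The route comes from the error message above; the framework and body fields are just stand-ins, not my actual implementation:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Route taken from the error message above (OpenAPI-style {project_id}
# becomes <int:project_id> in Flask); the payload fields are assumptions.
@app.post("/projects/<int:project_id>/drafts")
def create_draft(project_id):
    data = request.get_json()
    # ChatGPT generates this JSON body as the plugin kwargs; when the revised
    # text is long, the kwargs JSON gets cut off and the request never arrives.
    body = data.get("body", "")
    return jsonify({"project_id": project_id, "characters": len(body)}), 201

if __name__ == "__main__":
    app.run(port=5000)
```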


Update: now it tries to compose the request but fails, deletes the request from the UI, and says:

I apologize for the inconvenience. There seems to be a technical issue with the plugin that is preventing the submission of the draft. I will report this to the technical team for resolution.

Which is worse in its own way, since it isn’t actually reporting anything to anyone, and no trace of the problem is left for users or developers to identify or report the issue.


Is there any solution to the issue mentioned? I’m also facing the same issue.

Just hit the same problem. It makes my plugin idea unworkable.

@arashbm how exactly did you measure those 760 tokens, and what exactly was 760 tokens long? I tried copying the truncated JSON from the UI and measuring that, but got widely varying counts, from 500 to 780.

One of the annoying things is that I can’t even tell the model how long the request is allowed to be. And that error message is seriously broken: the cutoff is a very specific condition they could describe to ChatGPT so that it could work around it, but instead they give an unspecific error that could just as well mean the request to the plugin failed for some other reason, so ChatGPT often keeps retrying with exactly the same request.

Does anybody know where to report this bug to OpenAI? I tried on Discord, but I’m not sure anybody actually reads that. It’s somehow funny: these days you not only have to think about the usability of error messages from the user’s side, but also from the machine’s (ChatGPT’s) side. :rofl: The only defense I’ve found against this bug is to tell ChatGPT in the “description_for_model” that this horrible error message usually means “request too long”, which is an annoying waste of tokens.
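One way to wire that hint in (just a sketch; the wording is only an example, and I’m assuming the standard `.well-known/ai-plugin.json` manifest with its `description_for_model` field):

```python
import json

MANIFEST = ".well-known/ai-plugin.json"
# Example wording only; the point is to translate the cryptic ApiSyntaxError
# into something ChatGPT can act on.
HINT = (
    " If a request fails with an ApiSyntaxError about unterminated JSON, "
    "the body was probably too long; retry with a shorter body or split it."
)

with open(MANIFEST) as f:
    manifest = json.load(f)

manifest["description_for_model"] += HINT

with open(MANIFEST, "w") as f:
    json.dump(manifest, f, indent=2)
```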

Hans-Peter

I just copy-pasted the broken JSON into the Tiktokenizer web application and selected GPT-4. It almost always returned 760 or 759. I haven’t tried it again for a while, though.
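If you want to check locally instead of pasting into the web app, here is a small sketch using the tiktoken package (cl100k_base is the encoding the GPT-4 setting in Tiktokenizer corresponds to; the sample string is just a placeholder for the cut-off kwargs):

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4 / GPT-3.5-turbo models.
enc = tiktoken.get_encoding("cl100k_base")

truncated_kwargs = '{"project_id": 42, "body": "..."}'  # paste the cut-off JSON here
print(len(enc.encode(truncated_kwargs)))
```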

I don’t think the new function-calling-enabled models available through the API suffer from the same problem, so it should be some issue with the ChatGPT backend. I’ll try longer bodies and update this post.
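Roughly what I plan to run, only as a sketch; the function schema, model name, and prompt are just examples:

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

# Ask for a long draft and check whether the function-call arguments get cut off.
functions = [{
    "name": "submit_draft",
    "description": "Submit a revised draft",
    "parameters": {
        "type": "object",
        "properties": {"body": {"type": "string"}},
        "required": ["body"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "Write and submit a draft of about 2000 words."}],
    functions=functions,
)

call = response["choices"][0]["message"].get("function_call")
if call is not None:
    args = call["arguments"]  # the generated kwargs as a JSON string
    print(len(args), "characters")
    json.loads(args)  # raises if the arguments were truncated mid-string
```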

Oops, you’re right, the Tiktokenizer reports 760 or 759 for the cut-off requests I collected, too. (I tried reimplementing tokenization for fun, but it seems it works quite a bit differently than I thought. :rofl:)

Unfortunately, that doesn’t really help, since ChatGPT doesn’t seem to have a verbal concept of its own tokens - or at least I haven’t managed to find the right words for it. Otherwise you could add something like “Your request must be smaller than 760 tokens.” to the “description_for_model”, but it seems we have to wait for OpenAI to improve this.

I’d bet the function calls are a byproduct of their work on plugins and are used internally for plugin calls, so I’d rather expect them to have the same restrictions. But I’m curious what you find out, @arashbm.