Issues with 'Stopped talking to' due to Excessive POST Request Length in GPTs Action

Recently, I’ve been experimenting with a super cool idea: integrating Gemini Pro Vision into GPTs via Actions. :rofl:
While testing the schema, I ran into a problem. The API requires the image to be supplied in the ‘data’ parameter as Base64-encoded data, which produces very large POST requests and in turn triggers the ‘Stopped talking to’ error.

If I reduce the image resolution and size to about 64x64, the data becomes small enough to send the request successfully.
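For context, Base64 inflates binary data by roughly a third (every 3 bytes become 4 characters), which is why even a modest image balloons the request body. A quick Python sketch of the math (the function name here is just illustrative):

```python
import base64

# Base64 maps every 3 raw bytes to 4 ASCII characters, so the encoded
# payload is ~1.33x the raw size before it even enters the JSON body.
def base64_payload_size(raw_bytes: bytes) -> int:
    return len(base64.b64encode(raw_bytes))

raw = bytes(1024 * 1024)            # stand-in for a 1 MB image
print(base64_payload_size(raw))     # 1398104 characters, ~1.33x the raw size
```

Shrinking the image to 64x64 works because it cuts the raw byte count, and therefore the encoded length, by orders of magnitude.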

I would like to seek help to understand:

  1. What is the current maximum length limit for HTTP requests in GPTs Action, or where can I find related documentation?
  2. Is there support for a segmented transmission mechanism to accommodate scenarios with a large volume of requests?

Getting ChatGPT to output long string dumps, similar to how image data is structured, is one of the methods folks have been using to get OpenAI to reveal some of its training data. OpenAI has been locking things down behind the scenes, but has been vague about limits while they tweak things. I have not been able to find any officially published maximum limits, only people’s speculations.

Well, consider yourself lucky. You got an error message.

For me, the message is “Talked to…”, then it hangs there forever. That is definitely a showstopper for the upcoming GPT Store launch.

I do hope they thoroughly review that area and offer some guidelines.

I might as well point out that this problem has nothing to do with AI expertise. It is in the realm of traditional web service/JavaScript programming.

I suggest OpenAI hire more traditional DevOps people to strengthen the service delivery side of their platform.


I have the same issue. I can’t POST long lists of data - maybe at about 100 lines I run into issues. My API works fine from Postman.

Originally it was the “Stopped talking to” error.

However, more recently, it’s giving me:

{
  "response_data": "ApiSyntaxError: Could not parse API call kwargs as JSON: exception=Unterminated string starting at: line 1 column 1717 (char 1716) url=my/api/location"
}

The thing is, it’s never actually hitting my API when the POST value is too long. My logs confirm this.

The entire process, even when it works, is also unbearably slow.

Your problem is of a different nature. There is an easy fix for that.

ChatGPT is clearly telling you the problem is the “string starting at: line 1 column 1717…”

ChatGPT packs your payload together with system info into one big JSON object. Certain characters in your payload can make that final package invalid JSON.

Several solutions are available. For example, you can JSON.stringify your payload; in the worst case, you can Base64-encode it.
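To illustrate both options, here is a minimal Python sketch (using json.dumps as the Python equivalent of JSON.stringify; the payload string is made up):

```python
import base64
import json

# A payload full of quotes, backslashes and newlines -- the kind of
# content that can break the JSON body assembled for an Action call.
payload = 'echo "hello"\npath\\to\\file'

# Option 1: serialize the payload so every special character is escaped.
escaped = json.dumps(payload)

# Option 2 (worst case): Base64 the payload so the request body only
# contains characters that can never collide with JSON syntax.
b64 = base64.b64encode(payload.encode("utf-8")).decode("ascii")

# Either form survives a round trip intact.
assert json.loads(escaped) == payload
assert base64.b64decode(b64).decode("utf-8") == payload
```

The trade-off: escaping keeps the payload human-readable, while Base64 is bulletproof but adds ~33% size, which matters when you are already close to the request limit.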


I don’t think so.

The only difference in my payloads is the amount of data sent to the API.

I send 10 lines, each line has a single word, all alphanumeric. The API is called, data is returned. It works.

I send 100 lines, it fails. The API is never triggered.

There’s no obvious way for me to see what’s happening behind the scenes.

Most likely it is truncated.

If you are the developer, you can run it in Preview mode and watch the request and response in detail.

I bet you will see your payload is empty in the request. That is the sign of a crash caused by the payload.

Anyway, what is the total length of those 100 lines? There is an upper limit, which is not documented.

I think it is a ChatGPT bug.

There is a hard limit on the size of payload. The arcane error message is very misleading.

The direct cause is the “Unterminated string…”, but the root cause is that ChatGPT itself truncated the payload!

I hope OpenAI people take a look at the problem. The current hard limit won’t allow a lot of serious GPTs.

For example, if you ask ChatGPT to generate some code, you cannot send it out to your GPT if that code is not trivial.

Yeah, this is going to be a blocker for any real data.

Can’t you use a link to the image instead? As in, your GPT user would drop in the image URL instead of the actual image.

I don’t think so. His case is similar to mine.

The data is still inside ChatGPT and cannot get out yet.

ChatGPT is unable to populate the payload into “params”: {}, which arrives completely empty, as you can see in the endpoint request. Nobody but ChatGPT can write data into “params”.

This helped me. I had to ask GPT to escape the characters that would interfere with passing the data (an sh script) in a JSON request. Then it worked like a charm!
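For anyone hitting the same wall: that escaping step is essentially what a JSON serializer does for you. A small Python sketch, with a made-up sh script standing in for the real payload:

```python
import json

# A small sh script with quotes and newlines -- exactly the characters
# that terminate a JSON string early if left unescaped.
script = '#!/bin/sh\necho "done" > /tmp/out.txt'

# json.dumps escapes the quotes and newlines so the script can travel
# inside a JSON request body as a single valid string value.
body = json.dumps({"data": script})

# The receiving side recovers the script byte-for-byte.
assert json.loads(body)["data"] == script
```

Having the model hand-escape characters is fragile; if you control the Action schema, letting a serializer do it is the more reliable route.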