Bug Report - Issue with Custom GPT Action Handling Long Code Responses in BASE64 Encoding

Description: I’ve encountered a recurring issue with Custom GPT actions, specifically when generating and handling longer code responses encoded in BASE64 for JSON inclusion and subsequent HTTP endpoint transmission.

Steps to Reproduce:

  1. Request Custom GPT to generate a piece of code.
  2. Encode this code in BASE64 format for JSON compatibility.
  3. Configure the action to send this encoded data to a specified HTTP endpoint.
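The steps above can be sketched roughly as follows. This is a minimal illustration, not the actual action implementation: the `params` property name comes from the debug log quoted below, while the sample code snippet is a made-up stand-in for GPT output.

```python
import base64
import json

# Hypothetical generated code snippet (stands in for the GPT output in step 1)
generated_code = "def greet(name):\n    return f'Hello, {name}!'\n"

# Step 2: BASE64-encode so the source survives JSON string escaping
encoded = base64.b64encode(generated_code.encode("utf-8")).decode("ascii")

# Step 3: the request body the action sends, with the payload under
# the "params" property seen in the 'Calling HTTP endpoint' debug log
body = json.dumps({"params": encoded})

# Round-trip check: the receiving endpoint can recover the original source
decoded = base64.b64decode(json.loads(body)["params"]).decode("utf-8")
```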

Expected Behavior: In the preview mode of Custom GPT, the debug log ‘Calling HTTP endpoint’ shows a correctly formatted JSON with the property “params” containing the full BASE64 encoded payload. This functions correctly for shorter code lengths.

Observed Bug: When the generated code exceeds a certain length, the “params” field in the JSON becomes empty. The debug log displays an error message: “response_data”: “ApiSyntaxError: Could not parse API call kwargs as JSON: exception=Unterminated string starting at: line 1 column ..."

Analysis: It appears that ChatGPT truncates the JSON output internally before making the HTTP endpoint call. The truncation produces an incomplete JSON object, which causes the observed parse error. The issue becomes evident when asking ChatGPT to display the JSON it attempts to send: for longer payloads, ChatGPT shows only part of the “params” content and prompts to “Continue generating”. Upon agreeing, the full JSON with the complete payload is displayed. However, that manual step is infeasible at runtime, since the programmatic call to the endpoint presumably proceeds with the truncated, and therefore invalid, JSON.
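The truncation hypothesis is easy to reproduce locally: cutting a JSON body mid-string and parsing it with Python's `json` module yields essentially the same error text as the debug log. The payload below is synthetic; the truncation point is arbitrary.

```python
import base64
import json

# A payload long enough to stand in for a large encoded code response
payload = json.dumps(
    {"params": base64.b64encode(b"print('hi')\n" * 5000).decode("ascii")}
)

# Simulate the suspected internal truncation before the endpoint call
truncated = payload[:200]

try:
    json.loads(truncated)
    error_message = None
except json.JSONDecodeError as e:
    error_message = e.msg

print(error_message)  # "Unterminated string starting at"
```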

Possible Cause: The issue likely stems from a limit or bug in the web service layer rather than in the model itself, as the AI engine (GPT-4) successfully generates the longer source code. A deliberate hard cap on the HTTP action payload seems unlikely, since the same content can be rendered in full in the chat.

Suggested Fix: Adjusting the handling of longer JSON payloads within ChatGPT’s web service layer should resolve the issue. This adjustment would not tax the core AI engine, which is already capable of generating and handling long strings of code. The fix likely involves ensuring complete transmission of the JSON object, irrespective of its length, during the HTTP endpoint call.
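Until the platform side is fixed, one client-side workaround is to split the BASE64 payload across several smaller action calls and reassemble it at the endpoint. This is only a sketch under assumptions: the function names are hypothetical, and the `chunk_size` is a guess, since the real truncation threshold is undocumented.

```python
import base64

def chunk_payload(source: str, chunk_size: int = 4000) -> list[dict]:
    """Split a BASE64-encoded payload into numbered parts, each small
    enough to stay under the (undocumented) limit that appears to
    truncate the action's JSON body."""
    encoded = base64.b64encode(source.encode("utf-8")).decode("ascii")
    parts = [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]
    return [
        {"part": i + 1, "total": len(parts), "params": p}
        for i, p in enumerate(parts)
    ]

def reassemble(chunks: list[dict]) -> str:
    """Endpoint side: concatenate the parts in order and decode."""
    encoded = "".join(
        c["params"] for c in sorted(chunks, key=lambda c: c["part"])
    )
    return base64.b64decode(encoded).decode("utf-8")
```

Each dict in the returned list would be sent as one action call; the endpoint buffers parts until `part == total` and then decodes. The trade-off is more round trips, but each request body stays well-formed.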

Hey, I’m encountering the same error now. Can you please elaborate on “adjusting the handling of longer JSON payloads within the web service layer”?