Creating Gmail Drafts with GPT Actions (truncated payload)

Hi everyone,

I am building a custom GPT to create draft emails in a user's Gmail account.
The issue I am facing is that the payload my GPT sends is truncated most of the time, even though the content is not especially large.
Here is the payload sent by it:

[Screenshot: "Test GPT needs to send this info to gmail.googleapis.com", with the payload below it cut off mid-content]
As you can see, the payload is incomplete. As a result, I get this response:
"response_data": "ApiSyntaxError: Could not parse API call kwargs as JSON: exception=Unterminated string starting at: line 1 column 19 (char 18) url=gmail API here"
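For reference, that error is just what Python's `json` module reports when a string literal is cut off before its closing quote. A minimal reproduction, using a made-up truncated payload (not the actual one my GPT sent):

```python
import json

# A payload cut off mid-string, similar in shape to what the Action received.
# The closing quote and brace never arrived.
truncated = '{"raw": "VGhpcyBpcyBh'

try:
    json.loads(truncated)
except json.JSONDecodeError as e:
    # e.g. "Unterminated string starting at: line 1 column 9 (char 8)"
    print(e)
```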

I cannot pinpoint why this is happening, and I would appreciate any suggestions.

Thank you!

Answer from

Based on the information provided in the extracts, it seems like you’re encountering an issue with the payload being truncated when using GPT Actions. While the extracts don’t provide a direct solution to your problem, they do offer some insights that might be helpful.

In one of the forum posts (GPT API (turbo-3.5) return output truncated with ellipsis (...) when token limit is not reached), a user faced a similar issue where the output from the GPT API was truncated even though the token limit was not reached. The user solved it by changing the output format to JSON Lines (one JSON object per line) and reducing the amount of keyword input.

Another forum post (GTPs action argument limit 1000 symbols (utf) as schema doesn't allow to pass arguments in POST body - so no large text parameter can be sent) discusses the action argument limit of 1000 symbols (utf) in GPT Actions. The user managed to solve the issue by using the requestBody syntax. While these solutions might not directly apply to your case, they suggest that the issue could be related to the format of your payload or the size of the data you’re trying to send.
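To illustrate the `requestBody` approach from that thread, here is a sketch of the relevant fragment of an Action's OpenAPI schema, expressed as a Python dict. The `createDraft` operation and the `to`/`subject`/`body` fields are hypothetical, not Gmail's actual API surface:

```python
import json

# Illustrative fragment of an OpenAPI 3 operation that accepts a JSON body
# instead of query parameters, so large text is not squeezed into the URL.
# Operation and field names are made up for this example.
operation = {
    "post": {
        "operationId": "createDraft",
        "requestBody": {
            "required": True,
            "content": {
                "application/json": {
                    "schema": {
                        "type": "object",
                        "properties": {
                            "to": {"type": "string"},
                            "subject": {"type": "string"},
                            "body": {"type": "string"},
                        },
                    }
                }
            },
        },
    }
}

print(json.dumps(operation, indent=2))
```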

Here are a few suggestions:

  1. Check the format of your payload: Make sure it’s properly formatted as JSON and doesn’t contain any syntax errors.

  2. Check the size of your payload: If your payload is too large, it might be getting truncated. Try reducing the size of the data you’re sending.

  3. Check your GPT Action settings: Make sure your GPT Action is correctly configured to call your external API.
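As a quick way to apply suggestions 1 and 2 on your own end before the Action fires, you could round-trip the payload through JSON and check its serialized size. The 1000-character limit below is only an assumption borrowed from the forum post mentioned above, not a documented Actions constant:

```python
import json

def validate_payload(payload: dict, limit: int = 1000) -> str:
    """Serialize a payload, confirm it is valid JSON, and flag oversize bodies.

    The default 1000-character limit is an assumption taken from the forum
    discussion, not an official GPT Actions limit.
    """
    body = json.dumps(payload)
    json.loads(body)  # round-trip to confirm the serialized form parses cleanly
    if len(body) > limit:
        raise ValueError(f"payload is {len(body)} chars, over the {limit}-char limit")
    return body

print(validate_payload({"subject": "Hello", "body": "Short test"}))
```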

If you’re still having trouble, I recommend reaching out to OpenAI support or posting your question on the OpenAI community forum for more specific help.

Thank you for the answer.

The issue seems to be with the Custom GPT itself. My theory is that the base64-encoded message consumes too many tokens. I used [Tiktoken Calculator - a Hugging Face Space by JacobLinCool] to count the tokens for the text I am sending: the plain text comes to around 400 tokens, but once converted to base64 it comes to around 1700 tokens.
Is this a known limitation, or do you have an idea how to work around this?
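That blow-up is consistent with how base64 behaves: it inflates the byte count by a factor of 4/3, and the resulting alphabet-soup string also tokenizes into much shorter tokens than English prose (which is why the token count grows ~4x rather than ~1.3x). A quick stdlib check of the byte inflation, using a made-up sample text:

```python
import base64

# Stand-in for the email body the GPT base64-encodes before sending.
text = "This paragraph stands in for the email body the GPT base64-encodes. " * 10
encoded = base64.urlsafe_b64encode(text.encode("utf-8")).decode("ascii")

# base64 emits 4 output characters for every 3 input bytes (~1.33x growth).
print(len(text), len(encoded), round(len(encoded) / len(text), 2))
```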