GPT-4 not using max tokens when API called from Zapier

Hi - I’m trying to figure out whether this is a problem on the Zapier or ChatGPT side.

When I call the API from Zapier, it reports that not even half of the 8192 tokens are being used and that the output is not being trimmed. However, the output is still being trimmed. See below.

Is there something I'm missing on the OpenAI side here? Is there a setting that would let the output actually use the full 8192-token maximum? I measured the input and output manually with the tokenizer, and it generally aligns with the numbers Zapier is showing (i.e., it's nowhere close to the max).

The models do not tend to use the maximum token count. You can instruct them to produce a more verbose response, but they will typically end their output well before the limit; the context size is there more for input than for output.
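To make the budgeting concrete: the only hard constraint is that prompt tokens plus completion tokens must fit inside the context window, so the largest `max_tokens` you can request shrinks as your prompt grows. Here's a minimal sketch of that arithmetic (the function name and the 1500-token example prompt are just illustrative assumptions):

```python
# Illustrative only: the one hard rule is prompt + completion <= context window.
CONTEXT_WINDOW = 8192  # gpt-4's context window, in tokens

def max_completion_tokens(prompt_tokens: int,
                          context_window: int = CONTEXT_WINDOW) -> int:
    """Largest value you could pass as max_tokens for a given prompt size."""
    remaining = context_window - prompt_tokens
    if remaining <= 0:
        raise ValueError("Prompt alone fills or exceeds the context window")
    return remaining

# Example: a hypothetical 1500-token prompt leaves room for up to 6692
# completion tokens -- but the model usually stops well before that on its own.
print(max_completion_tokens(1500))  # → 6692
```

Note that `max_tokens` is a ceiling, not a target: setting it high only prevents premature cut-offs, it doesn't make the model write more. If the output still looks trimmed well under the budget, check whether Zapier itself sets a lower `max_tokens` default on the request.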