Overnight, I get a "too many tokens sent" error message for no reason. What's happening?

I use my OpenAI API key/credits with Copilot every day. It’s been working fine for many weeks now. I haven’t changed a thing.

Worked yesterday. I tried it this morning and it won’t work at all.

I have credits available, and I created a new API key (my old one no longer appeared, and the message saying we need project keys instead of user keys directed me to create one).

I hoped the new key would solve the problem, but with both my former key and the new key I keep getting the same response:

“too many tokens sent, change Maximum tokens according to the model ‘gpt-4-1106-preview’ limits in Tools->Options->Copilot”

I’ve been using GPT-4 Turbo.

I’ve been using my OpenAI key with Copilot from my coding IDE. My queries are no longer than normal – but no matter how short I make them, I keep getting the same message back.

The fact that this happened overnight with nothing changed on this end leads me to suspect that the problem has nothing to do with how many tokens I’m sending.

Does anyone have a clue what’s going on and how to fix this? My work is being impacted. GPT-4 regularly gets things wrong and I have to requery to get it to focus, but even so it helps me get things done faster and attempt algorithms I’d never be able to get started on otherwise.

Are you using ChatCompletions or Assistants?

How are you handling conversation history?

Direct answers: I’ve never heard of ChatCompletions or Assistants, and I don’t handle conversation history; I don’t know what it is.

I’m using GPT-4 as a feature of Metatrader 5’s development IDE. All I know is that you enter an API key (one with credits on its account) into the IDE options, select GPT-4, max out the token limit to the “model maximum” (shown as 16384), and it had been working fine until today. I’ve had no trouble with queries 3x–4x longer than the ones I’ve tried today, and that was with the token limit set lower than it is now. GPT-4 reports token usage for my queries in the hundreds, not the ten-thousands.

You should be asking the developer(s) of Metatrader 5, as they are the ones actually handling the request.

If you have a conversation history, you should be deleting it. It seems like they are not performing truncation correctly, or have recently pushed a broken update.
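
For what it’s worth, “truncation” here just means trimming old messages before each request. A minimal sketch, assuming the client keeps Chat Completions-style message dicts (a hypothetical helper, not anything MetaTrader actually exposes):

    def truncate_history(messages, keep_last=10):
        """Keep the system prompt plus only the most recent exchanges,
        so each request stays well inside the model's context window."""
        system = [m for m in messages if m["role"] == "system"]
        rest = [m for m in messages if m["role"] != "system"]
        return system + rest[-keep_last:]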

You should also check your usage page and see how much these requests are costing you.

It is likely you are misinterpreting a parameter.

On the OpenAI API, max_tokens is a setting for the maximum RESPONSE you will receive before truncation, and with gpt-4-turbo models, it cannot be set higher than 4096.

Try again with this set to 1500, which is similar to the response length ChatGPT itself produces.
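
For reference, a direct request with a sane max_tokens looks roughly like this (a sketch using the openai Python package; the IDE presumably builds an equivalent request behind the scenes). Asking for more than the model’s 4096-token completion cap gets the request rejected, which would explain why even very short prompts fail:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Write a short MQL5 snippet."}],
        max_tokens=1500,  # caps the RESPONSE only; values above 4096 are rejected for this model
    )
    print(resp.choices[0].message.content)
    print(resp.usage)  # prompt vs. completion token counts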

Pennies. I’ve spent all of $2 over the course of 4-5 weeks, and I use it liberally. I’m personally chatting with the AI (Copilot), not querying it programmatically, so it’s not like I’m running up hundreds of queries per day; more like 5-15.

Yeah, it’s been frustrating: 3 companies involved and none of them has very good customer support. All I know is what I see in the IDE; I don’t know the internals or who does what.

Are you saying to turn down the max setting here?

I’ll try it. All I can say is that I’ve had it generate up to 100 - 150 lines of code at a time and never broken 1000 tokens used.

The developer’s careless use of terminology makes one leery of putting authentication credentials into the application:

Payment settings:

  • Use your MQL5 account: this option is currently available for free. Later, you will be able to pay for the subscription directly from your MQL5 account balance.
  • Use an OpenAI key, if you have purchased a subscription and have the relevant key.

Prompt settings:

  • Model — a neural network which will process your requests. text-davinci-003 and gpt-3.5-turbo are currently available. Support for gpt-4 will be added soon.
  • Maximum tokens — the number of text units which the model can return in response to a prompt.
  • Variability — affects how strictly the neural network will follow the prompt. The bigger the value, the greater the result randomness. This option corresponds to the temperature parameter in OpenAI models.

From the middle of this page.
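
Going by those descriptions, the fields presumably map onto standard OpenAI API parameters roughly as follows (my reading of the documentation quoted above; MetaQuotes doesn’t document the actual request it sends):

    # Rough correspondence between the IDE's Copilot settings and OpenAI API parameters
    # (an interpretation of the quoted documentation, not confirmed by MetaQuotes)
    IDE_TO_API = {
        "Model":          "model",        # e.g. "gpt-4-1106-preview"
        "Maximum tokens": "max_tokens",   # length cap on the reply, not on your prompt
        "Variability":    "temperature",  # higher values mean more random output
    }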

A question about Metatrader 5’s development IDE isn’t really an OpenAI forum question.

I’d recommend being more specific in your question; that would enable you to get more focused help.

Or highlight the issue to other Metatrader users who may be facing the same problem.

Cheers!

I shouldn’t be surprised, because pretty much everything is this way with MetaQuotes stuff. I lowered the “maximum” and it worked again. I don’t remember raising it, but that’s nothing new either, lol. To me, “maximum” is a limiter, not a directive, but apparently a value above the model’s limit gets the whole request refused. I guess this was OpenAI telling me, “No, I can’t agree”?

Thanks for the suggestion!