Can anyone explain why I randomly get the "Enqueued token limit reached for text-embedding-3-large" message when I upload batches of 1000 texts, each consisting of about 200 ancient Greek words? Such a batch should be around 500K tokens, well below the 3M token limit. According to the logs, all previous jobs are either completed or failed, so nothing is still queued. The very same JSONL file can be rejected a few times and then later processed successfully, so it seems rather random to me.
I have written a simple Python program that uploads one batch at a time, waits for it to complete, downloads the results, waits a few seconds, and then moves on to the next batch. I get roughly 70% failures and 30% completions, randomly distributed.
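For reference, the loop looks roughly like the sketch below. This is a minimal reconstruction assuming the official `openai` Python SDK and its Batch API; names such as `batch_files` and `wait_seconds` are placeholders, not the exact code.

```python
# Minimal sketch of the upload loop, assuming the official `openai` Python SDK.
# `batch_files` and `wait_seconds` are placeholder names for illustration.
import time
from openai import OpenAI

client = OpenAI()
batch_files = ["batch_001.jsonl", "batch_002.jsonl"]  # hypothetical input files
wait_seconds = 5

for path in batch_files:
    # Upload the JSONL file for use with the Batch API
    uploaded = client.files.create(file=open(path, "rb"), purpose="batch")

    # Create a batch job against the embeddings endpoint
    batch = client.batches.create(
        input_file_id=uploaded.id,
        endpoint="/v1/embeddings",
        completion_window="24h",
    )

    # Poll until the batch reaches a terminal state
    while True:
        batch = client.batches.retrieve(batch.id)
        if batch.status in ("completed", "failed", "expired", "cancelled"):
            break
        time.sleep(30)

    # Download the results if the batch completed
    if batch.status == "completed" and batch.output_file_id:
        content = client.files.content(batch.output_file_id)
        with open(path.replace(".jsonl", "_out.jsonl"), "wb") as f:
            f.write(content.read())

    time.sleep(wait_seconds)  # brief pause before submitting the next batch
```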