I have the same problem highlighted by some other users.
It seems that the “enqueued tokens” limit is not reset when one or more batches reach the “completed” status.
Since there is no “input_token” parameter returned in the “batch_activation” object (and this is a SEVERE ISSUE in itself), we try to estimate the number of tokens we’re using by dividing the character count of the batch file by 3. And we’re sure (absolutely sure, I mean) that we never exceed the infamous limit of 90,000 tokens in the queue, since this estimate is deliberately very conservative. So we wonder what to do, given that each batch activation should either be safely accepted, or the record in our database has to be flagged as “activation failed” and processed again later (but when?).
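For reference, here is a minimal sketch of the kind of conservative estimate we mean. The function names, the 3-characters-per-token divisor, and the way the running total is tracked are all our own assumptions, not anything documented by the API:

```python
# Conservative client-side token estimate for a batch file.
# Assuming 3 characters per token over-counts for typical English
# text (usually closer to 4 chars/token), so this is an upper bound.
QUEUE_LIMIT = 90_000  # the enqueued-tokens limit we keep hitting

def estimate_tokens(batch_file_text: str) -> int:
    """Upper-bound token estimate: ceil(len / 3)."""
    return -(-len(batch_file_text) // 3)  # ceiling division

def can_activate(batch_file_text: str, tokens_already_enqueued: int) -> bool:
    """True if activating this batch should stay within the queue limit."""
    return tokens_already_enqueued + estimate_tokens(batch_file_text) <= QUEUE_LIMIT
```

Even with this over-counting in place, activations are still rejected, which is why we suspect the server-side counter is never decremented.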
We noticed the same problem in recent days, when all the uploaded and activated batches had reached “completed” status: as soon as we activated another batch, the same error message appeared again.
If there are daily limits, please tell us (and the whole Community, via the documentation) and suggest how to work around them.
The severity of this bug should be set to SEVERE, because it breaks the very workflow the batch API is intended to serve.
Please fix the issue. I suggest updating the “enqueued_tokens” value in the “company record” of your database every time one of the activated batches reaches the “completed” status.
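In illustrative Python (purely a sketch of the suggested fix; `company`, `batch`, and the field names are hypothetical, since we obviously cannot see your schema):

```python
# Hypothetical sketch: when a batch completes, return its tokens
# to the company's enqueued-tokens budget.
def on_batch_completed(company: dict, batch: dict) -> None:
    # Subtract the finished batch's tokens from the running total,
    # clamping at zero to guard against duplicate completion events.
    company["enqueued_tokens"] = max(
        0, company["enqueued_tokens"] - batch["input_tokens"]
    )
```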
Thanks.
P.S.
Since you already calculate the tokens for each activation, why don’t you put this data into the “batch_activation” object? That would make it easy for us to keep track of the consumption too, don’t you think?
Thanks for the attention.