Well, found it: the hard cap is 300,000 tokens per request. (My 2000-tokens-per-item estimate was apparently about 4x high: the API counted 500 tokens per item, so 600 items landed exactly on the 300,000 limit and 601 tipped over to 300,500.)
try 1
sending 1200000 tokens — 600 items * 2000 tokens
Done in 2.66 seconds
try 2
sending 1202000 tokens — 601 items * 2000 tokens
Traceback (most recent call last):
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Requested 300500 tokens, max 300000 tokens per request', 'type': 'max_tokens_per_request', 'param': None, 'code': 'max_tokens_per_request'}}
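
For anyone who wants to reproduce this, here's a minimal sketch of the probe, assuming the embeddings endpoint and the openai-python v1 client (the traceback matches that client; the model name, the filler text, and the probe helper are my stand-ins, not the original script):

import time

from openai import OpenAI, BadRequestError

client = OpenAI()

# Stand-in payload: the real item text isn't in the log, so this is a
# hypothetical filler string sized to roughly 2000 tokens per item.
ITEM = "lorem ipsum " * 700
MODEL = "text-embedding-3-small"  # assumed; the log doesn't name the endpoint

def probe(n_items: int) -> bool:
    """Send n_items in one request; return True if the API accepts it."""
    print(f"sending {n_items * 2000} tokens — {n_items} items * 2000 tokens")
    start = time.monotonic()
    try:
        client.embeddings.create(model=MODEL, input=[ITEM] * n_items)
    except BadRequestError as err:
        # 400 with code 'max_tokens_per_request' once the cap is crossed
        print(err)
        return False
    print(f"Done in {time.monotonic() - start:.2f} seconds")
    return True

for i, n in enumerate((600, 601), start=1):  # the two tries from the log
    print(f"try {i}")
    if not probe(n):
        break

Stepping the item count one past the last success surfaces the cap directly, and conveniently the 400 body states the limit outright.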