aitest
Hi.
According to the documentation, the model gpt-4-1106-preview has a max context length of 128,000 tokens.
However, when I use the API it returns this error:
{
  "error": {
    "message": "Rate limit reached for gpt-4-1106-preview in organization org-caigTai6iXXJrP5PXEM04Hd0 on tokens per min. Limit: 40000 / min. Please try again in 1ms. Visit OpenAI Platform to learn more.",
    "type": "tokens",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}
So either the documentation is wrong, or the endpoint is wrong.
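For reference, here is a minimal sketch of the kind of call that hits this, with a retry guard for the rate_limit_exceeded error shown above (assuming the openai Python SDK v1.x and an API key in OPENAI_API_KEY; the chat_with_backoff helper name is just for illustration):

# Minimal sketch: retry the chat completions call with exponential backoff
# when the tokens-per-minute rate limit is hit (code "rate_limit_exceeded").
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_backoff(messages, model="gpt-4-1106-preview", max_retries=5):
    """Call the chat completions endpoint, backing off on rate-limit errors."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            # Tokens-per-minute budget exhausted; wait and try again.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still rate limited after retries")

response = chat_with_backoff([{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)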
Thanks