We requested a tokens-per-minute (TPM) rate limit increase more than a month ago and have heard nothing back from the OpenAI team. Is there someone I can talk to about moving this forward? We’re a venture-funded startup on the cusp of closing an enterprise deal, and there is no way we can handle the volume of traffic we’ll be getting soon without a higher TPM limit.
Might be worth investigating Azure if you expect your usage to be high: they offer the same OpenAI endpoints on separate hardware, which seems to give an inference speed and latency improvement for larger users.
Pricing is identical.
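If it helps, here's a minimal sketch of pointing the official Python SDK at an Azure OpenAI deployment. The endpoint, deployment name, and API version are placeholders you'd swap for the values from your own Azure resource:

```python
import os

from openai import AzureOpenAI

# Placeholder endpoint, deployment name, and API version -- replace with
# the values from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # Azure uses the deployment name, not the model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The rest of your calling code can usually stay the same, since the client exposes the same chat completions interface as the standard OpenAI client.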
Thanks for the tip, we’ll try them out