My issue is resolved.
The error for me exists between the keyboard and the chair.
I was getting 429 errors with my free usage; my completions never worked.
In an attempt to fix it, I signed up for a paid account. The problem was that I signed up for the Plus account, which gives better access through the website, thinking it also applied to the API. Tonight I decided to sign up for pay-as-you-go, and I am now receiving completions.
I still don’t understand why my free usage was at 0 and I was receiving 429s, but once I signed up for pay-as-you-go, I started to receive responses.
So in my case, free usage aside, I’ve corrected the problem with pay-as-you-go. I realize that doesn’t help people who are on the free tier, but I thought I would fall on my sword for possibly someone else’s benefit.
1 Like
@crgmonroe you mentioned you were using ChatGPT Pro.
If you are using the API with that, it’s not surprising that you are having problems.
There is no API for ChatGPT, and OpenAI is actively blocking unofficial API access to it, as it is against their terms of service.
Hahaha.
Yes, I have noticed that a lot here, but I never say anything in reply. You, @crgmonroe, have made me smile a bit bigger this morning.
We have no shortage of people here throwing rocks and chucking spears at OpenAI who fit exactly what you said in your post, @crgmonroe.
Thank you for that reply!
You made my day and restored a bit of my faith in humanity with that single line!

1 Like
Thanks Ruby.
My motto in life is “no pride.” Raise the ship.
I’m glad I was able to bring a smile to your face. 
Thanks for your engagement in this thread.
1 Like
Thanks @crgmonroe
I guess my life motto these days is:
“Be happy, smile and maintain a sense of humor in all situations, especially the bad ones.”

PS: It took me many years to learn to live this motto every day and in all situations! It is not easy, but it is doable with a lot of practice.
1 Like
Eventually I found this article: Rate Limit Advice | OpenAI Help Center
Rate limits can be quantized, meaning they are enforced over shorter periods of time (e.g. 60,000 requests/minute may be enforced as 1,000 requests/second). Sending short bursts of requests or contexts (prompts+max_tokens) that are too long can lead to rate limit errors, even when you are technically below the rate limit per minute.
Thus, when it states that the rate limit is 3000/1 min, it appears that the actual enforcement may be closer to 1 request per 20 ms; or perhaps a sliding window of 1/20 ms, 2/40 ms, 3/60 ms, …, up to 3000/1 min.
Given latency fluctuations, it would be impossible to actually make a request exactly every 20 ms, and it would be difficult for multiple servers in an organization to coordinate their requests at the millisecond level…
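One client-side workaround for the quantization described above is to spread requests evenly over the minute instead of sending bursts. This is just a sketch under the assumptions in this thread (a 3000/min limit, so roughly one request per 20 ms on average); `send_request` is a hypothetical stand-in for whatever API call you make, and this pacer only works within a single process:

```python
import time

MIN_INTERVAL = 60.0 / 3000  # 3000 requests/min averages to one per 20 ms

class Pacer:
    """Enforce a minimum interval between calls (single process only)."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough that at least min_interval has elapsed
        # since the previous call, then record the new send time.
        now = time.monotonic()
        sleep_for = self._last + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()

pacer = Pacer(MIN_INTERVAL)
# for prompt in prompts:
#     pacer.wait()
#     send_request(prompt)  # hypothetical API call
```

As noted above, this cannot coordinate across multiple servers; for that you would need a shared rate limiter (e.g. backed by a central store).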
ps: The Codex API is 20/min, which is good for testing rate limits (if you ignore that it costs money)
edit: Even if I set the wait times generously, it still returns 429; I’m not sure why…
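When a fixed wait still produces 429s, the usual advice is to retry with exponential backoff plus jitter, so retries don’t land in lockstep with the quantized window. A minimal sketch, assuming a hypothetical `call_api` that raises some 429-style exception (here a stand-in `RateLimitError`):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever exception your client raises on HTTP 429."""

def with_backoff(fn, max_retries=5, base=1.0, cap=30.0):
    """Call fn(), retrying on RateLimitError with full-jitter backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Full jitter: sleep a random amount up to the exponential bound.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# result = with_backoff(lambda: call_api(prompt))  # hypothetical usage
```

The randomness matters when several workers hit the limit at once: with deterministic waits they all retry at the same instant and collide again.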
1 Like