Just constant random failures for the past hour; by now it feels like roughly two out of three attempts fail, sometimes several in a row.
My project lets me switch models, and I get the same error on both gpt-3.5 and gpt-4, as well as on both of their prior checkpoints.
500 {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID…
(Fun fact: this error is emitted from the internal endpoint-to-model connection, not from any network problem on your side, and it can be seen in the API's Swagger specification.)
This is with the latest openai python library and its new "robust" connections.
Anyone else up to verify and commiserate?
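Until the 500s clear up, the only real workaround is retrying with backoff. A minimal sketch of what I mean, as a generic wrapper rather than anything tied to a specific client (the openai library's own `max_retries` client option does something along these lines internally):

```python
import random
import time

def with_retries(call, max_retries=5, base_delay=0.5):
    """Retry `call` on any exception, with exponential backoff and jitter.

    `call` is a zero-argument callable, e.g. a lambda wrapping an API request.
    """
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the original error
            # sleep base_delay * 2^attempt, plus up to 100 ms of jitter
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

You would call it with something like `with_retries(lambda: client.chat.completions.create(...))`; the lambda and argument names there are just placeholders for whatever request is failing.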
Thanks; I have an even more extensive usage log covering that period.
I figured out that my errors are triggered by the endpoint rejecting certain character sequences in the input or output, though oddly only sometimes. I'm running at the edge of tokenization experimentation…
Basically the API runs a parser over the language the AI outputs. Anything the parser doesn't like, for example a function-start token that isn't followed by a complete function, gets you a 500 error instead of returning what the AI actually produced so you could fix it.
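To narrow down which character sequence trips the endpoint, I've been shrinking the failing input. A rough sketch of that idea, where `fails(text)` stands in for a probe that sends `text` to the endpoint and reports whether it 500s (here it's just a hypothetical predicate, and the approach assumes the failure is caused by one contiguous substring, which won't hold for every tokenization edge case):

```python
def minimize_failing_input(text, fails):
    """Greedily trim characters from both ends of `text` while the
    remainder still triggers the failure, returning a (locally)
    minimal failing substring."""
    assert fails(text), "full input must reproduce the failure"
    lo, hi = 0, len(text)
    # trim from the left while the failure still reproduces
    while lo < hi - 1 and fails(text[lo + 1:hi]):
        lo += 1
    # then trim from the right the same way
    while hi - lo > 1 and fails(text[lo:hi - 1]):
        hi -= 1
    return text[lo:hi]
```

Each probe costs a real request, so in practice you'd want to combine this with caching and rate limiting, but it does isolate the offending span when the cause is a single sequence.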