Calling text-davinci-003, even with the cURL timeout set to 0 (i.e. no limit), often returns no response at all, and I have to kill the process.
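For what it's worth, a client-side deadline is one way to keep a hung call from blocking forever, since a cURL timeout of 0 means "wait indefinitely." Here's a minimal Python sketch; the `call_with_timeout` helper is mine (not part of any OpenAI SDK), and it just runs whatever callable you hand it in a worker thread and gives up after a fixed number of seconds:

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread; give up after timeout_s seconds.

    Returns (ok, result): ok is False and result is None if the call
    did not finish in time. Note the abandoned call may keep running
    in the background thread; this only unblocks the caller.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        result = future.result(timeout=timeout_s)
        ok = True
    except concurrent.futures.TimeoutError:
        ok, result = False, None
    pool.shutdown(wait=False)
    return ok, result

# Simulate a request that hangs past the deadline.
ok, result = call_with_timeout(lambda: time.sleep(1), 0.2)
# ok is False, result is None

# A fast call completes normally.
ok, result = call_with_timeout(lambda: 42, 2)
# ok is True, result is 42
```

In plain cURL the equivalent guard is `--max-time 30` (or any finite value) instead of 0, which tells curl to abort the whole transfer after that many seconds.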
Other engines seem to be operational: I get consistent outputs from Curie, davinci-002, and everything else. But davinci-003 is, for all intents and purposes, not functional at the moment. It’s too risky to even start an API call, since more than half of them never finish.
Perhaps this is because of ChatGPT, but it seems strange that only this one particular engine is affected. As a test, I’ve been waiting for an API call to davinci-003 to finish without killing the process. I’m now at 30 minutes! At this point, I think it’s safe to assume it’s never going to finish.
My main reason for posting is that OpenAI’s API status monitoring doesn’t seem to reflect what’s happening in the real world.
Update: the issue seems to be getting better; I am getting responses now.
I thought ChatGPT was just called “3.5”: not really separate from GPT-3, but built to run on top of it, hence the quotation marks around “3.5”.
That’s what I think I remember reading last year. Maybe I remember wrong.
I don’t think that is correct, nor is it how deep-learning ANNs work.
If you can find a reference (not from ChatGPT, please, haha) which says ChatGPT was built “on top of” GPT-3, it would be very cool for you to share it. It does not make sense, at least to me, that ChatGPT “runs on top of” text-davinci-003. They are totally different models, I think, trained on different data as well (from what I have read).
Would be great to read a reference otherwise (not from ChatGPT please). Thanks!