I’m getting the exact same thing. My app stopped working because of this. When I looked at the logs, the ChatGPT API was responding with “Sorry, I am an AI language model and I do not have access to the transcript you are referring to.” But when I ran the exact same prompt (with all the same settings) in the Playground, it worked perfectly. Something is wrong with the API, even though the status page says everything is fine.
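One way to rule out a settings mismatch between the API and the Playground is to dump the exact request body the app sends and compare it field by field against the Playground sliders. A minimal sketch (the helper name and the sample prompt are my own; the endpoint and body shape are the standard chat-completions format):

```python
import json

# Standard chat-completions endpoint (for reference when comparing).
CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-3.5-turbo", temperature=0):
    """Build the JSON body the app would send, so it can be diffed
    against the values set in the Playground."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

# Print the body, then set the same model/temperature in the Playground;
# if the settings truly match, any difference in output is on the API side.
body = build_chat_request("Summarize the transcript below: ...")
print(json.dumps(body, indent=2))
```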
The edits model should not have been removed. I wonder whether they removed it by accident, or simply forgot to mention that it was also going away. I hope it’s the former and they bring it back — I needed it for my company’s app.
Also, I noticed they made some sort of change to 3.5-turbo: a prompt that has always worked for me suddenly stopped producing the expected results, at the same time the edits model went offline. I only diagnosed the turbo problem five minutes ago.
I hate to add a “me too!”, but it really is distressing when something just disappears for days and OpenAI doesn’t say anything. If it has been deprecated, does anyone have a link explaining this?
The docs still list these models as valid for their endpoints. I was also surprised they were removed, since the edit endpoint itself still appears in the API docs and was never expressly deprecated.
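A quick way to confirm the removal is to send a request matching what the docs still describe for the edits endpoint and see whether it comes back with a “model does not exist” error. A minimal sketch (the helper and sample text are my own; the endpoint path and `text-davinci-edit-001` model id are the ones the docs listed):

```python
import json

# Edits endpoint as still documented at the time of this thread.
EDITS_URL = "https://api.openai.com/v1/edits"

def build_edit_request(input_text, instruction, model="text-davinci-edit-001"):
    """Body for the documented edits endpoint; a 'model does not exist'
    error on this request would confirm the silent removal."""
    return {
        "model": model,
        "input": input_text,
        "instruction": instruction,
    }

body = build_edit_request("teh quick brown fox", "Fix the spelling")
print(json.dumps(body, indent=2))
```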
This is Karl (EinstAI/EinsteinDB), and unfortunately this looks like a problem throughout the model-listing layer. Either the Codex entries were dropped by accident, or during the reorganization of models the merge of the newer models incorrectly superseded the code that exposes the Codex model ids.
The model: code-davinci-002 does not exist
Even though the examples are still available in the Playground, try calling the API with your secret key and see whether you can still fetch all 50+ models (the exact list depends on your subscription/access level).
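The suggestion above can be sketched as a short script against the standard model-listing endpoint. The helper names are my own; run it with a real key to see which of the models discussed in this thread your key can still see:

```python
import json
import urllib.request

# Standard endpoint that returns every model visible to the key.
MODELS_URL = "https://api.openai.com/v1/models"

def fetch_model_ids(api_key):
    """Fetch the ids of all models this API key can access."""
    req = urllib.request.Request(
        MODELS_URL, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload["data"]]

def missing_models(available, expected):
    """Return the expected model ids the key can no longer see."""
    return sorted(set(expected) - set(available))

# Example (needs a real key; the ids are the ones discussed here):
# available = fetch_model_ids("sk-...")
# print(missing_models(available, ["code-davinci-002", "text-davinci-edit-001"]))
```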
Hey bud, the problem is in the merging of the DaVinci models. Many of the characteristics and programmatic traits that distinguish Codex seem to have been folded together — not at the endpoint, but within the code shared between Codex and DaVinci — which raises the question of whether code-cushman remains untouched. I’m sure DaVinci will give you much the same functionality, but for production APIs you want Codex so that it explicitly rejects non-code queries and reliably produces the code completion you request. It’s only a matter of time until OpenAI gets to this; limited time, scope, and staff are all factors in how quickly our problems get fixed. Patience is a virtue.