Issue with text-davinci-003 engine when using API

I was using ChatGPT 3.5 to translate code written in Solidity into the ZoKrates DSL (a .zok file), and I managed to do it. But when I tried to do the same through the API within my Python code, I noticed that the answers were very different and very wrong, as if the model had not even processed the input.

Looking into it further, I discovered this community and learnt that the ‘text-davinci-003’ API engine is not as intelligent as the ChatGPT 3.5 I was using (correct me if I am wrong about this). Is there a reason that ‘text-davinci-003’ falls this far behind?

I am asking because I wanted to integrate ChatGPT into a project I am working on, but this is a limitation, and I need solid reasoning behind it to mention in my thesis.

The model was released in November 2022, which was also the release date of the initial ChatGPT.

It must be prompted in a particular way, the last era of real “prompt engineering”: operating instructions plus example Human:/AI: exchanges that act as its tuning. After doing so, you get a look back in time, and could write your own “original ChatGPT”.


(this chatbot software is actually using Human: and AI: behind the scenes for completions.)
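To make the prompt format concrete, here is a minimal sketch of assembling that kind of Human:/AI: transcript for the legacy completions endpoint. The instruction line and the example exchange are illustrative, not an official template; with text-davinci-003 you would pass the resulting string as the `prompt` and typically set `"Human:"` as a stop sequence:

```python
def build_davinci_prompt(history, user_message):
    """Assemble operating instructions plus Human:/AI: turns into
    a single completion prompt, in the style described above."""
    lines = [
        "The following is a conversation between a human and a helpful AI assistant.",
        "",
    ]
    # Prior exchanges act as in-context "tuning" examples.
    for human_turn, ai_turn in history:
        lines.append(f"Human: {human_turn}")
        lines.append(f"AI: {ai_turn}")
    # The new question, ending with "AI:" so the model continues from there.
    lines.append(f"Human: {user_message}")
    lines.append("AI:")
    return "\n".join(lines)

prompt = build_davinci_prompt(
    [("What is ZoKrates?", "A toolbox for zkSNARKs on Ethereum.")],
    "Can you translate a Solidity function into a .zok file?",
)
print(prompt)
```

The network call itself is omitted so the snippet runs offline; the point is only the shape of the prompt the completion model sees.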

It was the first model to use reinforcement learning from human feedback (RLHF) datasets, rather than supervised fine-tuning alone, to create its instruction-following behavior across varied tasks.

And you can enjoy some more 2022 forum chat about ChatGPT.

If you want chat like today’s ChatGPT, use gpt-3.5-turbo or gpt-4 and the chat-completions endpoint.
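By contrast, the chat-completions endpoint takes structured role-tagged messages instead of a raw Human:/AI: transcript. A minimal sketch of the request body (the actual API call is omitted so this runs offline; the system-prompt wording is just an example for the Solidity-to-ZoKrates use case):

```python
# Chat-completions style: a list of {"role", "content"} messages,
# sent to a chat model such as gpt-3.5-turbo.
messages = [
    {
        "role": "system",
        "content": "You translate Solidity contracts into ZoKrates (.zok) code.",
    },
    {
        "role": "user",
        "content": "Translate this Solidity function into a .zok file: ...",
    },
]

request_body = {
    "model": "gpt-3.5-turbo",
    "messages": messages,
}
print(request_body["model"])
```

The conversation framing that text-davinci-003 needed spelled out in the prompt text is handled here by the roles, which is why the chat models behave much closer to the ChatGPT web interface out of the box.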