I was using ChatGPT 3.5 to translate code written in Solidity into the ZoKrates DSL (a .zok file), and I managed to do it. But when I tried the same thing through the API within my Python code, I noticed that the answers were very different and very wrong, as if it did not even process the request.
Looking more into it, I discovered this community and learned that the ‘text-davinci-003’ API model is not as intelligent as the ChatGPT 3.5 I was using (correct me if I am wrong about this). Is there a reason that ‘text-davinci-003’ falls this far behind?
I am asking because I wanted to integrate ChatGPT into a project I am working on, but this is a limitation, and I need solid reasoning behind it to mention in my thesis.
The model was released in November 2022, which is also the release date of the initial ChatGPT.
It must be prompted in a particular way, in the style of the last era of real “prompt engineering”: you give it operating instructions and example AI/Human exchanges that act like its tuning. Once you do, you get a look back in time, and you could write your own “original ChatGPT”.
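To make that concrete, here is a minimal Python sketch of the kind of prompt I mean. The instruction text and the Human/AI turn format are my own illustrative assumptions, not an official template; the commented-out call shows how it would be sent with the legacy completions endpoint (the pre-1.0 `openai` library).

```python
# Sketch: flattening a chat into a single completion prompt for
# text-davinci-003. The "operating instructions" and turn labels
# below are assumptions for illustration.

def build_prompt(history, user_message):
    """Join instructions, prior Human/AI turns, and the new message."""
    system = (
        "The following is a conversation with an AI assistant. "
        "The assistant is helpful, creative, and very knowledgeable.\n\n"
    )
    turns = "".join(f"Human: {h}\nAI: {a}\n" for h, a in history)
    # End with "AI:" so the model completes the assistant's next turn.
    return system + turns + f"Human: {user_message}\nAI:"

prompt = build_prompt(
    [("Hello, who are you?", "I am an AI assistant. How can I help?")],
    "Translate this Solidity function to ZoKrates.",
)

# With the legacy completions API (openai<1.0) this would be sent as:
# import openai
# response = openai.Completion.create(
#     model="text-davinci-003",
#     prompt=prompt,
#     stop=["Human:"],  # stop before the model invents the next Human turn
#     max_tokens=512,
# )
```

The `stop` sequence matters: without it, a completion model will happily keep writing both sides of the conversation.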