I am amazed by the difference in completion quality between Da Vinci 002 and the other models. I am a real fan of Neo4j, so I submitted a simple node-creation prompt to all the available models:
With a one-shot prompt:
First, Da Vinci understood the properties list. Second, it was able to write the correct Cypher query (Codex could not). All the other text models were off-topic: Curie likes SQL, Babbage likes serialization, and Ada is lost in translation.
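For context, the completion I was hoping for is a plain Cypher CREATE statement built from the properties list. A minimal Python sketch of what a correct answer looks like (the label, property names, and values here are made up for illustration):

```python
import json

def cypher_create_node(label, properties):
    """Build a Cypher CREATE statement for a single node.

    Each property is rendered as key: value; string values are quoted
    via json.dumps so embedded quotes stay valid Cypher.
    """
    props = ", ".join(f"{k}: {json.dumps(v)}" for k, v in properties.items())
    return f"CREATE (n:{label} {{{props}}}) RETURN n"

# Hypothetical node: one label plus a small properties list.
query = cypher_create_node("Person", {"name": "Alice", "born": 1984})
print(query)
# → CREATE (n:Person {name: "Alice", born: 1984}) RETURN n
```

This is roughly the shape Da Vinci produced from the prompt; the other models wandered off into SQL or serialization formats instead.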
@zepef Try Da Vinci 003, it's even better. I have gotten much more consistent results with it than with Da Vinci 002, without even prompting it to generate Cypher queries. Also, it feels a bit odd not to use Codex for this job, if I'm being honest.
For sure, GPT-3.5 is great; it was not available when I wrote my post. In fact, you can easily do whatever is possible with ChatGPT using Da Vinci 003. Just ask ChatGPT to write the prompt for its own completion so it can be reused on Da Vinci 003. Unbelievable!
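For illustration, the round trip is just two steps: ask ChatGPT conversationally for a standalone prompt, then feed that prompt to the completions API with the text-davinci-003 model. A sketch of the meta-prompt (the task and wording here are my own example, not the exact prompt I used):

```python
# Step 1: ask ChatGPT something like the meta-prompt below.
# Step 2: paste its answer into a text-davinci-003 completion call.
# The task description is an illustrative assumption.

task = "extract company names and dates from a news paragraph"

meta_prompt = (
    "Write a standalone prompt for the text-davinci-003 completion model "
    f"that makes it {task}. The prompt must work without any conversation "
    "history, so include full instructions and one worked example."
)

print(meta_prompt)
```

The key point is the "standalone" requirement: Da Vinci 003 has no chat memory, so ChatGPT has to fold all the context it would normally carry across turns into a single prompt.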
May I ask how you use ChatGPT with Da Vinci 003?
I mean, isn't it the default?
I am pretty new to OpenAI and just getting started, but I like it so far.
I mostly use it to help myself with Python and SQL tasks.
Is ChatGPT enough for that purpose, or do I need to go to Codex or other models via the API?
I was recently confronted with a very down-to-earth business problem. One of my clients wanted to use an LLM to perform information extraction from unstructured text, so I created a prompt for that purpose with ChatGPT. The client objected that the API I was using was not official and that, at the time, there was no announcement from OpenAI of a pro version. He was right: an industrial production launch cannot be built on such uncertainties.
So I just asked ChatGPT what Da Vinci 003 prompt it would write to get the same completion as it did, and that worked well. I think the differences between the two models come down mostly to ChatGPT's conversational style and its memory of earlier prompts and completions within a session.
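To make the extraction setup concrete, here is a minimal sketch of that kind of prompt. The template, the sample text, and the stubbed completion are all made up for illustration; in a real run the completion string would come back from the API instead of being hard-coded:

```python
import json

# Illustrative extraction prompt: ask for a strict JSON answer so the
# completion can be parsed mechanically downstream.
PROMPT = (
    "Extract the person, company, and date from the text below. "
    "Answer only with a JSON object with keys person, company, date.\n\n"
    "Text: {text}\n\nJSON:"
)

text = "Jane Doe joined Acme Corp on 3 January 2023."
prompt = PROMPT.format(text=text)

# Stand-in for the model's reply, so the parsing step can be shown
# without an API call.
completion = '{"person": "Jane Doe", "company": "Acme Corp", "date": "2023-01-03"}'

record = json.loads(completion)
print(record["company"])
# → Acme Corp
```

Because the prompt is fully self-contained, the same string works in a single ChatGPT turn or as a one-shot Da Vinci 003 completion, which was exactly the portability the client needed.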