I’ve noticed that sending the same prompt to text-davinci-003 and to ChatGPT produces responses that differ in quality and content. For example, I’m building an app that uses the OpenAI APIs to provide educational video recommendations. Here is what happens when using the text-davinci-003 API:
Prompt:
EduRec is a highly intelligent chatbot that can provide students with educational material recommendations. More specifically, students will converse with EduRec about a subject and understand their needs. This is the main task: when users ONLY type “show videos”, EduRec will infer a prompt that can be injected into the YouTube search API and show that inference.
Student: i want to learn web frontend development
EduRec: EduRec: Great! Web frontend development is all about creating the visuals of a website. To get started, I recommend watching a few tutorials or informational videos to grasp the basics. Is this something you would be interested in?
Student: yeah, i’m more interested in the React framework.
EduRec: EduRec: Great! React is a popular Javascript library used in a lot of websites to manage the data. I recommend watching a few tutorials and informational videos about React to get started. Do you want me to show you some videos on the React framework?
Student: show videos.
Response:
EduRec: EduRec: Sure thing! Here are some videos on the React framework that should help you get started. Check them out: [Video Recommendations]
However, when I inject the same prompt into ChatGPT directly, I get this more solid response:
EduRec: Sure! Here are some videos on the React framework that you can watch to get started:
I hope these videos help you kickstart your learning journey in React frontend development. Enjoy watching and feel free to ask any further questions you might have!
My question is: why does the OpenAI API give weaker, less detailed responses than regular ChatGPT?
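For reference, the call behind the first example would look roughly like this — a minimal sketch, assuming the pre-1.0 Python openai package; the API key, sampling parameters, and the truncated prompt are placeholders:

```python
import openai  # pre-1.0 SDK, which exposes the completions endpoint used by text-davinci-003

openai.api_key = "sk-..."  # placeholder

# The EduRec instructions plus the conversation so far, ending where the model
# should continue after the "show videos" turn.
prompt = """EduRec is a highly intelligent chatbot that can provide students with educational material recommendations. [...]

Student: show videos.
EduRec:"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
    stop=["Student:"],  # keep the model from writing the student's next turn
)
print(response["choices"][0]["text"].strip())
```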
It is actually ChatGPT that has misbehaved here: its answer doesn’t look like your example responses at all.
You’ve used multi-shot prompting (either an actual conversation or example exchanges written up to the point you’re at) to shape the output. It shows the model the type of responses that are suitable, but not the “video search” case that you are specifically trying to get it to complete.
That said, this is more a demonstration of how these legacy completion models and few-shot prompting work, useful if you move to other few-shot models you can host yourself (like RedPajama-Instruct). For your app, you’ll likely want to use a chat completion endpoint model along with the new function calling feature to give a plugin-like interface to your API chatbot. On gpt-3.5-turbo you can simply use detailed instructions without the examples, at a lower price.
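Here is a rough sketch of that approach, assuming the pre-1.0 Python openai package and a -0613 model that supports function calling; the search_youtube function and its parameters are made up purely for illustration:

```python
import json
import openai  # pre-1.0 SDK

openai.api_key = "sk-..."  # placeholder

# Hypothetical tool definition: the model fills in a YouTube search query for you.
functions = [{
    "name": "search_youtube",
    "description": "Search YouTube for educational videos on a topic",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Query to send to the YouTube search API"}
        },
        "required": ["query"],
    },
}]

messages = [
    {"role": "system", "content": "You are EduRec, an educational material recommender. "
     "When the student asks to see videos, call search_youtube with a suitable query."},
    {"role": "user", "content": "i want to learn web frontend development, mostly the React framework"},
    {"role": "user", "content": "show videos"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)

msg = response["choices"][0]["message"]
if msg.get("function_call"):
    args = json.loads(msg["function_call"]["arguments"])
    print("Query for the YouTube search API:", args["query"])
```

Your code would then pass that query to the actual YouTube API, append the results as a "function" role message, and let the model phrase the recommendation for the student.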
I’ve just upgraded openai. For some reason, openai.ChatCompletion.create() is not defined as a function; the only option I see is openai.createCompletion().
It is also important that, if Python was installed for all users (as an administrator), the upgrade (and any other pip invocations) is run from an administrator command prompt (Windows) or from an account with the same rights.
Check the installed version in the context of the app:
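A minimal check, assuming the Python openai package; run it with the same interpreter or virtual environment your app uses:

```python
import openai

print(openai.__version__)  # openai.ChatCompletion.create() exists from 0.27.0 onward

# If the reported version is older, upgrade from a prompt with the same rights
# as the original install:
#   pip install --upgrade openai
```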