I’ve noticed that sending the same prompt to text-davinci-003 and to ChatGPT produces responses that differ in quality and content. For example, I’m trying to build an app that uses the OpenAI APIs to provide educational video recommendations. Here is what I get with the text-davinci-003 API:
EduRec is a highly intelligent chatbot that can provide students with educational material recommendations. More specifically, students will converse with EduRec about a subject and understand their needs. This is the main task: when users ONLY type “show videos”, EduRec will infer a prompt that can be injected into the YouTube search API and show that inference.
Student: i want to learn web frontend development
EduRec: Great! Web frontend development is all about creating the visuals of a website. To get started, I recommend watching a few tutorials or informational videos to grasp the basics. Is this something you would be interested in?
Student: yeah, i’m more interested in the React framework.
Student: show videos.
EduRec: Sure thing! Here are some videos on the React framework that should help you get started. Check them out: [Video Recommendations]
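For reference, this is roughly how a few-shot prompt like the one above gets sent to the legacy completions endpoint. This is only a sketch: the abbreviated preamble, the conversation turns, and the stop sequence are assumptions based on the example transcript, and it uses the pre-1.0 `openai` Python interface.

```python
# Sketch: building the few-shot EduRec prompt for the legacy
# completions endpoint (openai python library < 1.0).
PREAMBLE = (
    "EduRec is a highly intelligent chatbot that can provide students "
    "with educational material recommendations."
)

history = [
    ("Student", "i want to learn web frontend development"),
    ("EduRec", "Great! Web frontend development is all about creating "
               "the visuals of a website."),
    ("Student", "yeah, i'm more interested in the React framework."),
    ("Student", "show videos."),
]

def build_prompt(preamble, turns):
    """Join the preamble and conversation turns, ending with the
    bare 'EduRec:' cue so the model continues as the assistant."""
    lines = [preamble, ""]
    lines += [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("EduRec:")
    return "\n".join(lines)

prompt = build_prompt(PREAMBLE, history)
print(prompt)

# The actual call (requires OPENAI_API_KEY to be set):
# import openai
# completion = openai.Completion.create(
#     model="text-davinci-003",
#     prompt=prompt,
#     max_tokens=200,
#     stop=["Student:"],  # stop before the model invents the next user turn
# )
# print(completion["choices"][0]["text"])
```

The `stop=["Student:"]` sequence matters with completion models: without it, the model tends to keep generating both sides of the dialogue.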
However, when I paste the same prompt into ChatGPT directly, I get this more solid response:
EduRec: Sure! Here are some videos on the React framework that you can watch to get started:
“React JS Crash Course for Beginners” by Traversy Media
Link: React JS Crash Course (2019) - YouTube
“React Tutorial for Beginners” by The Net Ninja
Link: - YouTube
“ReactJS Basics” by Programming with Mosh
Link: React JS - React Tutorial for Beginners - YouTube
I hope these videos help you kickstart your learning journey in React frontend development. Enjoy watching and feel free to ask any further questions you might have!
My question is: why does the OpenAI API provide weaker, less detailed responses than ChatGPT itself?
ChatGPT uses the model gpt-3.5-turbo, which is a newer model than text-davinci-003 (a GPT-3 model).
I don’t think anyone should be surprised there is a difference in the quality of responses.
Is there a way to access a newer model than text-davinci-003?
Yes: gpt-3.5-turbo, or (if you have access) a newer model.
Arguably it is ChatGPT that misbehaved here: its response doesn’t look like your example responses at all.
You’ve used few-shot prompting (the example conversation turns) to shape the output, and it shows the model what kind of responses are suitable, but not the specific “show videos” case that you actually want it to complete.
Davinci prompting, 99.91% logprob on the first try:
(Responses generated by guanaco-33b, a LLaMA-based model, not by OpenAI; a conversation manager is simulated to tease out longer answers)
Playground preset link: OpenAI Platform
That was more a demonstration of how these legacy models work with few-shot prompting, which is useful if you use other few-shot-capable models you can run yourself (like RedPajama instruct). For your app, you’ll likely want a chat-completion model along with the new function-calling feature, which gives a plugin-like interface between your chatbot and your API. You can also just give gpt-3.5-turbo detailed instructions without the examples, at a lower price.
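A minimal sketch of that function-calling setup, assuming the pre-1.0 `openai` Python interface. The function name `search_youtube` and its parameter schema are hypothetical; you would adapt them to whatever wrapper you put around the YouTube search API.

```python
import json

# Hypothetical function schema the model can choose to "call" instead
# of answering in prose. The name and parameters are illustrative only.
functions = [
    {
        "name": "search_youtube",
        "description": "Search YouTube for educational videos",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query inferred from the conversation",
                },
            },
            "required": ["query"],
        },
    }
]

messages = [
    {"role": "system", "content": "You are EduRec, an educational video recommender."},
    {"role": "user", "content": "i want to learn web frontend development"},
    {"role": "assistant", "content": "Great! Are you interested in tutorials?"},
    {"role": "user", "content": "show videos"},
]

print(json.dumps(functions[0], indent=2))

# The actual call (requires OPENAI_API_KEY; openai library < 1.0):
# import openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0613",  # an early model with function calling
#     messages=messages,
#     functions=functions,
#     function_call="auto",
# )
# call = response["choices"][0]["message"].get("function_call")
# if call and call["name"] == "search_youtube":
#     query = json.loads(call["arguments"])["query"]
#     # ...pass `query` to the YouTube search API and return the results...
```

The point of the design: instead of hoping the model emits a search query in free text (as the few-shot prompt tried to do), the model returns structured JSON arguments you can feed straight into your API call.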
It gives me a 404 status code when I use gpt-3.5-turbo as the model in the create-completion function.
gpt-3.5-turbo requires use of a different “chat” endpoint, and rigidly enforces the dictionary format of messages.
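To illustrate the difference, here is a sketch of the wrong call (which 404s, because chat models are not served by the legacy completions endpoint) next to the correct one, assuming the pre-1.0 `openai` Python interface:

```python
# WRONG - gpt-3.5-turbo is not served by the completions endpoint,
# so this returns a 404:
# import openai
# openai.Completion.create(model="gpt-3.5-turbo", prompt="Hello")

# RIGHT - the chat endpoint, with the strict list-of-dicts message format:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=messages,
# )
# print(response["choices"][0]["message"]["content"])

# Each message must be a dict with exactly the "role" and "content" keys:
assert all(set(m) == {"role", "content"} for m in messages)
```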
Ensure you’ve installed a recent update of the openai module if using python.
I’ve just upgraded openai. For some reason, openai.ChatCompletion.create() is not defined as a function; the only option is
pip install --upgrade openai
It is also important that, if Python was installed for all users (as an administrator), the upgrade (and any other pip invocations) be run from an administrator command prompt (Windows) or from an account with the same rights.
Check installed version in the context of the app:
python -c "import openai; print(openai.__version__)"