Text-davinci-003 API is WAY weaker than normal ChatGPT

I’ve noticed that sending the same prompt to text-davinci-003 and to ChatGPT produces responses that differ in both quality and content. For example, I’m trying to build an app that uses the OpenAI APIs to provide educational video recommendations. Here is what happens with the text-davinci-003 API:
Prompt:

EduRec is a highly intelligent chatbot that can provide students with educational material recommendations. More specifically, students will converse with EduRec about a subject so it can understand their needs. This is the main task: when users type ONLY “show videos”, EduRec will infer a prompt that can be injected into the YouTube search API and show that inference.

Student: i want to learn web frontend development

EduRec: EduRec: Great! Web frontend development is all about creating the visuals of a website. To get started, I recommend watching a few tutorials or informational videos to grasp the basics. Is this something you would be interested in?

Student: yeah, i’m more interested in the React framework.

EduRec: EduRec: Great! React is a popular Javascript library used in a lot of websites to manage the data. I recommend watching a few tutorials and informational videos about React to get started. Do you want me to show you some videos on the React framework?

Student: show videos.

Response:

EduRec: EduRec: Sure thing! Here are some videos on the React framework that should help you get started. Check them out: [Video Recommendations]
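For context, this is roughly the kind of call my app makes to get the response above. It is only a sketch using the Python openai module for brevity (my project actually uses the Node library), and the prompt variable and parameter values are illustrative:

import openai

openai.api_key = "sk-..."  # placeholder API key

# Full EduRec instructions plus the conversation so far, ending with "show videos."
edurec_prompt = "EduRec is a highly intelligent chatbot ...\n\nStudent: show videos.\nEduRec:"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=edurec_prompt,
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"])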

However, when I inject the same prompt into ChatGPT directly, I get this more solid response:

EduRec: Sure! Here are some videos on the React framework that you can watch to get started:

“React JS Crash Course for Beginners” by Traversy Media
Link: https://www.youtube.com/watch?v=sBws8MSXN7A

“React Tutorial for Beginners” by The Net Ninja
Link: https://www.youtube.com/watch?v=ORuQB-dKfXs

“ReactJS Basics” by Programming with Mosh
Link: https://www.youtube.com/watch?v=Ke90Tje7VS0

I hope these videos help you kickstart your learning journey in React frontend development. Enjoy watching and feel free to ask any further questions you might have!

My question is: why does the OpenAI API provide weaker, less detailed responses than regular ChatGPT?

ChatGPT uses the gpt-3.5-turbo model, which is newer than text-davinci-003, a GPT-3 model.

I don’t think anyone should be surprised there is a difference in the quality of responses.


Is there a way to access a newer model than text-davinci-003?

Yes, use gpt-3.5-turbo or (if you have access) gpt-4.

It gives me a 404 status code when I use gpt-3.5-turbo as the model in the create completion function.

gpt-3.5-turbo requires use of a different “chat” endpoint, and rigidly enforces the dictionary format of messages.

Ensure you’ve installed a recent update of the openai module if using python.

Documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
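For example, here is a minimal sketch of a chat call with the Python module; the API key and the system/user messages are placeholders:

import openai

openai.api_key = "sk-..."  # placeholder API key

# The chat endpoint takes a list of role/content dictionaries, not a single prompt string.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "EduRec is a highly intelligent chatbot that recommends educational material."},
        {"role": "user", "content": "i want to learn web frontend development"},
    ],
)
print(response["choices"][0]["message"]["content"])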

I’ve just upgraded openai. For some reason, openai.ChatCompletion.create() is not defined as a function; the only option is openai.createCompletion().

pip install --upgrade openai

It is also important, if Python was installed for all users (as an administrator), that the update (and any other pip invocations) be run from an administrator command prompt (Windows) or from an account with the same rights.

Check the installed version in the context of the app:

python -c "import openai; print(openai.__version__)"
0.27.8

That is a JavaScript project, and I’m using npm upgrade openai, but that function isn’t in the module.