When I ask the "gpt-3.5-turbo" model via the API, it says:
user : Are you gpt-3 or gpt-3.5?
assistant : I am using the GPT-3 model.
user : Are you the latest model?
assistant : As an AI language model, I am currently using the latest and most advanced version of GPT-3 developed by OpenAI. However, new models may be developed in the future that are even more advanced than GPT-3.
But when I ask GPT-3.5 on the OpenAI website, it says that it is 3.5.
Why is that?
It’s trained on content that existed before it was trained, so it has no way of knowing about itself. ChatGPT likely includes details about the model in its system prompt, and you can do the same with the API if it is important to you.
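For example, something like this (a minimal sketch using the Python openai client; the exact wording of the system message is an assumption here, and the model will simply repeat whatever identity you give it):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message is where you can state the model identity yourself,
        # similar to what ChatGPT presumably does on the website.
        {"role": "system", "content": "You are ChatGPT, based on the GPT-3.5 model gpt-3.5-turbo."},
        {"role": "user", "content": "Are you gpt-3 or gpt-3.5?"},
    ],
)
print(response.choices[0].message.content)
```

With a system message like that, the model will answer the identity question the way you told it to, rather than guessing from its training data.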
Ok then. But why is the quality of the API model’s responses much lower than on the site?