ChatGPT API isn't as good as its website

Hello there,

I am writing to bring to your attention an intriguing observation I’ve made while experimenting with both GPT-3.5 and GPT-4. Despite posing identical questions to the models, I noticed a notable difference in the quality of responses between the ChatGPT website (chat.openai.com) and the API.

In particular, responses obtained through chat.openai.com appeared more refined and contextually accurate than those retrieved via the API. This discrepancy has piqued my interest, and I’m keen to understand whether there are specific settings or parameters exclusive to the ChatGPT website that contribute to this variance.
To provide some context, I conducted tests by writing letters to specific people, and the results were consistently better on chat.openai.com.


My understanding is that the web interface for ChatGPT includes additional system instructions that are not provided or published. You may be able to improve your system instructions in your API calls to match or exceed the quality of the web interface output.

You should define a system prompt telling the AI “who it is” (“You are a helpful assist…”, etc.) and also try various temperature settings. The underlying model is the same for both.

In particular, while ChatGPT is general-purpose, the API lets you give the AI a special focus through that system message.

Different versions of models can also be selected. A system prompt I just typed up, extending on the form of ChatGPT’s:

It says GPT-4, but gpt-3.5-turbo gets this task...

You are ChatGPT, a large language model trained by OpenAI, based on GPT-4.

ChatGPT is able to act as an expert consultant in many fields, and can perform many tasks by carefully paying attention to user context and dialog to provide a fulfilling answer. It provides exhaustive answers that cover every facet of the topic being discussed. It is an excellent writer, even extending into long works of fiction. ChatGPT is a natural-born problem-solver, and is also empathetic and helpful.

ChatGPT, help this user to the best of your abilities!
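Sent through the API, a prompt like the one above simply goes in as the system message. A minimal sketch, assuming the official `openai` Python package (v1.x); the prompt text is condensed from the one above, the model name and temperature are illustrative, and the live call is left commented out since it needs an API key:

```python
# Sketch: supplying a custom system prompt via the Chat Completions API.
# Assumes the official `openai` Python package, v1.x.

SYSTEM_PROMPT = (
    "You are ChatGPT, a large language model trained by OpenAI, "
    "based on GPT-4. ChatGPT is able to act as an expert consultant "
    "in many fields and is an excellent writer. "
    "ChatGPT, help this user to the best of your abilities!"
)

def build_messages(user_input: str) -> list[dict]:
    """Pair the system prompt with the user's question, much as the
    web interface is believed to do behind the scenes."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# Uncomment to make a live call (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",      # or "gpt-4"
#     messages=build_messages("Write a letter to a colleague..."),
#     temperature=0.7,            # experiment between roughly 0.2 and 1.0
# )
# print(reply.choices[0].message.content)
```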

Let’s have the API gpt-3.5-turbo (not 1106) answer this question, then:

Hello! Thank you for bringing this observation to my attention. I can understand your curiosity about the differences in response quality between ChatGPT 3.5 and 4 when using the ChatOpenAI website versus the API.

The ChatOpenAI website and the API are designed to serve different purposes and user experiences. While they both utilize the underlying ChatGPT model, there may be variations in the way the responses are generated and presented.

The API is a versatile tool that allows developers to integrate ChatGPT into their own applications, services, or websites. When using the API, you have more control over the interaction, including the ability to adjust parameters such as temperature, max tokens, and presence of system messages. These parameters can influence the output and help tailor the responses to specific needs. However, without fine-tuning, the default settings may not always produce responses as refined as those on the ChatOpenAI website.

On the other hand, the ChatOpenAI website provides a user-friendly interface that aims to simplify the interaction with ChatGPT. It is optimized to generate coherent and contextually accurate responses by carefully designing the conversation format and applying some heuristics to ensure the best user experience.

To achieve similar quality on the API as you experienced on the website, you may need to experiment with adjusting the temperature parameter. Lower values like 0.2 make the model more focused and deterministic, while higher values like 0.8 introduce more randomness. You can also try using system messages to guide the conversation explicitly.

It’s worth noting that both ChatGPT 3.5 and 4 have their own strengths and weaknesses, and OpenAI is continuously working on refining and improving the models. If you have any specific questions or need assistance with using the API effectively, feel free to let me know!


ChatGPT's boilerplate answer for comparison

Thank you for bringing up this observation. The difference in the quality of responses between the ChatOpenAI website and the API may be attributed to several factors:

  1. Model Version: The ChatOpenAI website may be using a different version of the model compared to the API. Models can be updated, and there might be variations in performance between versions.
  2. Default Settings: The ChatOpenAI website may have default settings or configurations that are optimized for user interactions, which could affect the quality of responses. These settings might not be exposed directly through the API.
  3. Post-processing: The website might perform additional post-processing or filtering on the model’s responses to refine them further for user interactions.
  4. Fine-tuning: It’s possible that the model used on the website has been fine-tuned specifically for the website’s use case, which could result in improved performance for certain tasks.
  5. API Usage: When using the API, the user has more control over the input and may not provide the same level of detail or context as when using the website’s interface, which could affect the model’s ability to generate accurate responses.

To get a better understanding of the specific settings and differences between the two, it would be best to consult with OpenAI directly or review any documentation or guidelines provided for using the API and the ChatOpenAI website. Additionally, OpenAI continually updates and improves its models, so it’s possible that future updates may narrow the gap in performance between the two platforms.

With language, quality is of course subjective.