Different Outputs from the Assistants API vs. Custom GPTs with the Same Settings

I’ve noticed different behaviors between the Assistants API and a custom GPT, even though they’re configured identically. This is surprising, since I expected similar responses from both.

Any insights or similar experiences would be helpful!


Hi.

I’m experiencing the same thing here.

Same instructions, same files.

Any ideas?

I’m also running into this issue. Does anyone know what might be causing this?

Here is my use case: I created a custom GPT and an Assistant designed to write SQL code using an information schema uploaded as a CSV file.

This is the instruction I gave to both:

You are an SQL developer. I will ask you questions and I want you to write SQL code that produces the answers to those questions. I want you to use the attached information schema to write this code.

It works without a problem on the custom GPT, but doesn’t work on the Assistant. On the Assistant, I get a message saying it needs a schema (it doesn’t see the CSV file, which does work on the custom GPT), or it gives code that doesn’t reflect the schema (e.g. it uses some of the right tables/columns, but also some that aren’t in the schema).
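
For anyone trying to reproduce this, the setup is roughly the following (a minimal sketch using the openai Python library’s v1 beta Assistants endpoints, which is what gpt-4-1106-preview shipped with; the file name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the information schema CSV for retrieval
schema_file = client.files.create(
    file=open("information_schema.csv", "rb"),  # placeholder file name
    purpose="assistants",
)

# Create the Assistant with the same instructions as the custom GPT
assistant = client.beta.assistants.create(
    name="SQL Developer",
    model="gpt-4-1106-preview",
    instructions=(
        "You are an SQL developer. I will ask you questions and I want you "
        "to write SQL code that produces the answers to those questions. "
        "I want you to use the attached information schema to write this code."
    ),
    tools=[{"type": "retrieval"}],  # v1 beta; renamed to file_search in v2
    file_ids=[schema_file.id],
)
print(assistant.id)
```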

Two possible explanations come to mind:

  1. The custom GPT uses a different model version than the one available to the Assistant. The Assistants API is using gpt-4-1106-preview, and I can’t see which version the custom GPT uses (see the sketch after this list).

  2. The custom GPT is given different instructions (perhaps a “pre-prompt”?) that aren’t shown to the user.
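
On point 1, you can at least verify what the Assistant is configured with from the API side (a minimal sketch; the assistant ID is a placeholder, and there’s no equivalent call for inspecting a custom GPT):

```python
from openai import OpenAI

client = OpenAI()

# Confirm the model, tools, and files the Assistant actually has
assistant = client.beta.assistants.retrieve("asst_abc123")  # placeholder ID
print(assistant.model)     # e.g. "gpt-4-1106-preview"
print(assistant.tools)     # should include {"type": "retrieval"}
print(assistant.file_ids)  # should include the uploaded CSV's file ID
```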

The Assistants API also seems unable to interpret a database information schema (saved as a CSV file), but the custom GPT can.

Thanks in advance for any thoughts you might have!


Yeah, that is exactly what is happening to me.

The only way I’ve found to force the reply to use the file is to also specify the file_id in each message, even though it is already defined on the Assistant.
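
In code, that workaround looks roughly like this (again a minimal sketch against the v1 beta endpoints; the assistant ID and file ID are placeholders):

```python
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()

# Attach the file to the message itself, even though it is already
# attached to the Assistant -- this is what gets the model to use it
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Which tables hold customer orders? Write SQL to count them.",
    file_ids=["file-abc123"],  # placeholder: ID of the uploaded CSV
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",  # placeholder
)
```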

I can only imagine that the Assistants API uses a different (worse) version of ChatGPT than the web-based interface itself. Either that, or there is an instruction given in the web-based version that we can’t see.


I too am experiencing this issue. I get much higher quality replies from the Custom GPT.

This is disappointing because I want to use Zapier to create an email draft using my custom GPT.

I’m having this same issue. I was excited after testing things out in GPTs and seeing great results, then I bought API credits only to find that the responses from the Assistant, with the same files and instructions, were unusable. ☹️

I’m having all the same issues. And forget about using new features such as voice.

I have exactly this problem. I created a custom GPT with the prompt and all the instructions, and I tested it for months to make sure its responses were the ones I needed before passing it to an integration via the API.

After so much testing, when I finally went to create the Assistant with the same instructions and configuration, the responses I got had nothing to do with those of the custom GPT. I have already tried everything and I don’t know why this happens. I hope OpenAI fixes this soon; my project has completely stopped.