Different Outputs from Assistant API vs. Custom GPTs with Same Settings

I’ve noticed different behavior between the Assistants API and a custom GPT, even though they’re configured identically. This is surprising, as I expected similar responses from both.

Any insights or similar experiences would be helpful!


I am experiencing the same here.

Same instructions, same files.

Any ideas?

I’m also running into this issue. Does anyone know what might be causing this?

Here is my use case: I created a custom GPT and an Assistant designed to write SQL code using an information schema uploaded as a CSV file.

This is the instruction I gave to both:

You are an SQL developer. I will ask you questions and I want you to write SQL code that produces the answers to those questions. I want you to use the attached information schema to write this code.

It works without problem in the custom GPT, but doesn’t work with the Assistant. On the Assistant, I either get a message saying it needs a schema (it doesn’t see the CSV file, which works fine in the custom GPT), or it produces code that doesn’t reflect the schema (e.g. it uses some of the right tables/columns, but also some that aren’t in the schema).
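For reference, here is a minimal sketch of how an Assistant like this is set up with the Python SDK, following the late-2023 Assistants beta (the `retrieval` tool and the `file_ids` parameter; the file name `schema.csv` and the function name are placeholders, not from the original post). If the API has changed since that beta, the parameter names may differ:

```python
# Parameters mirroring the custom GPT configuration described above.
# Tool/parameter names follow the 2023 Assistants API beta.
ASSISTANT_PARAMS = {
    "name": "SQL developer",
    "model": "gpt-4-1106-preview",
    "instructions": (
        "You are an SQL developer. I will ask you questions and I want you "
        "to write SQL code that produces the answers to those questions. "
        "I want you to use the attached information schema to write this code."
    ),
    # "retrieval" lets the Assistant search the uploaded file.
    "tools": [{"type": "retrieval"}],
}

def create_sql_assistant(schema_path="schema.csv"):
    """Upload the schema CSV and create an Assistant with it attached."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment

    client = OpenAI()
    schema_file = client.files.create(
        file=open(schema_path, "rb"),
        purpose="assistants",
    )
    return client.beta.assistants.create(
        file_ids=[schema_file.id],  # attach the schema at the Assistant level
        **ASSISTANT_PARAMS,
    )
```

Even with the file attached at the Assistant level like this, the retrieval behavior evidently differs from what the custom GPT does with the same file.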

Two possibilities come to mind:

  1. The custom GPT uses a different model version from the one available to the Assistant. The Assistants API is using gpt-4-1106-preview, and I can’t see which version the custom GPT uses.

  2. The custom GPT is given additional instructions (perhaps a hidden “pre-prompt”) that the user can’t see.

The Assistant API also seems unable to interpret a database information schema (saved as a CSV file), but the custom GPT can.

Thanks in advance for any thoughts you might have!

Yeah, that is exactly what is happening to me.

The only way I could force it to use the file was to also specify the file_id in each message, even though it is already defined on the Assistant.

I can only imagine that the Assistants API uses a different (worse) model than the web-based ChatGPT interface. Either that, or the web-based version is given an instruction we can’t see.

I too am experiencing this issue. I get much higher-quality replies from the custom GPT.

This is disappointing because I want to use Zapier to create email drafts with my custom GPT.