Vastly Different Responses (Assistant Playground vs. API)

The assistant’s instructions are to accept a short phrase about business and to write a 250-500 word summary of that phrase. It is explicitly told in the instructions to respond only with written paragraphs, no bulleted or numbered lists. It is set to use the GPT-4 model.

In the Playground, it performs flawlessly every time. When I provide the identical input via the API, however, I receive a woefully inadequate response that blatantly ignores my instructions, every time: it almost always includes a numbered list, and often it isn’t even a summary at all but something completely different.

I’m using Make to send the JSON for each step of the process (create thread, create and run message, retrieve message). There are no errors; every request is accepted and returns a response. The issue is the quality of the response.

I’m hoping that there is a setting or something I can do to get responses via the API that are in alignment with those in Playground. Thank you in advance.

You must not provide an additional “instructions” field when creating the run, as that will override the assistant’s own instructions.


Thanks for the response, but I’m not providing instructions with any messages or runs. The only instructions are those configured for the Assistant itself within the OpenAI browser control panel.

I might have a solution for you. I was having a similar problem, found this thread, then later solved my problem. Here goes:

Using the Python API, I assumed a “run” was an iterative message-passing object: I made an assistant, made a thread, made a run, and then added messages to the thread in a loop. This did not work.

Instead, I made the assistant, made a thread, and then in a loop I (a) added a message, and then (b) created a run to get the assistant’s output for that one message. This works. Runs, as I now understand them, are a way to say “hey OpenAI, go!”
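A minimal sketch of that working order, using a stand-in client so the call sequence is visible without hitting the API (`RecordingClient` and `converse` are illustrative names; the real code would use the SDK's `client.beta.threads.messages.create` and `client.beta.threads.runs.create`):

```python
# Demonstrates the working order: for each user turn, (a) add the message
# to the thread, then (b) create a run so the assistant produces a reply.
# RecordingClient is a stand-in that just logs calls instead of calling OpenAI.

class RecordingClient:
    def __init__(self):
        self.calls = []

    def add_message(self, thread_id, content):
        self.calls.append(("message", thread_id, content))

    def create_run(self, thread_id, assistant_id):
        self.calls.append(("run", thread_id, assistant_id))

def converse(client, assistant_id, thread_id, user_messages):
    """The loop from the post: message first, then run, for each turn."""
    for text in user_messages:
        client.add_message(thread_id, text)          # (a) add the message
        client.create_run(thread_id, assistant_id)   # (b) "hey OpenAI, go!"

client = RecordingClient()
converse(client, "asst_1", "thread_1", ["first turn", "second turn"])
# The recorded calls alternate: message, run, message, run.
```

Creating the run before any message exists inverts this order, which is exactly the broken case described above.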

So why did I get any output in the first place? I think that when I create a run without any messages, GPT goes “well, I need to say something”, and so outputs something. Of course, that something isn’t based on any messages, so it’s useless.


This fixed my problem, thanks!


Hi - this sounds great. Can you please explain how this works a bit more?

Yes, I can elaborate.

OpenAI has documentation for the Python API for Assistants at https://platform.openai.com/docs/assistants/overview

Notice how the steps are:

  1. Create an assistant: assistant = client.beta.assistants.create(...)
  2. Create a thread: thread = client.beta.threads.create()
  3. Add a message to the thread: message = client.beta.threads.messages.create(thread_id=thread.id, role="user", content=...)
  4. Create a “Run”: run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

The mistake that jack.isaak7 and I made was initially swapping steps 3 and 4: we thought a “Run” object could have messages dynamically added to it. Wrong! You need to create the messages first; the Run then bundles up the existing messages and fetches a reply.

The reason this error was hard to notice is that OpenAI still replies even if you create a Run on a thread with no messages. The reply is nonsense, since it isn’t based on any message; hence this thread’s title, “Vastly Different Responses”.