Assistant API is not aware of its instructions

I am using the API, for example:

const assistant = await openai.beta.assistants.create({
      name: "Test Assistant",
      instructions: "When I say 'red', write a short poem.",
      tools: [{ type: "code_interpreter" }],
      model: "gpt-4-1106-preview"
});

When I test it says “It seems you’ve mentioned the word “red,” but I’m not sure what context you’re referring to. Could you please provide more information or clarify your request so I can assist you appropriately?”

Why is the Assistant API not aware of its instructions? This should be like the system message in ChatCompletions, no?

4 Likes

Hi @hello44,

Have you been able to make the Assistant follow your instructions? I'm having the same problem: I've set an instruction for the assistant to always answer in JSON format, but it often ignores it.

Thanks.

1 Like

Same here. It doesn't seem to remember its own name. But in the Playground it remembers everything.

1 Like

I was having a similar problem. It turns out that if you provide instructions within the run creation, they will override your assistant instructions. Be sure not to provide instructions in the run, not even an empty instructions: "".

7 Likes

Exactly! Though the documentation says 'Additional Instructions for the Run', they are not additional but overriding. I wasted a lot of time because this wasn't clear: run-level instructions override the assistant-level instructions.
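One way to avoid the pitfall is to make sure the run payload never carries an `instructions` key at all. A minimal sketch (the `buildRunParams` helper and the `"asst_123"` ID are my own placeholders, not part of the SDK):

```javascript
// Build the params object for openai.beta.threads.runs.create().
// Deliberately omits `instructions`, which would REPLACE the
// assistant-level instructions; `additional_instructions` (if given)
// is appended to them instead.
function buildRunParams(assistantId, extra) {
  const params = { assistant_id: assistantId };
  if (extra) params.additional_instructions = extra;
  return params;
}

const params = buildRunParams("asst_123", "Answer in one sentence.");
console.log("instructions" in params); // false — assistant instructions stay in effect
```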

3 Likes

I was having a similar problem. No matter how much I edited the prompt, the assistant didn't get my name right and confused me with the author of the attached report.

The prompt had these extra instructions:

extra_instructions += `
---
STAFF MEMBER: ${avatar.title} (@${avatar_nick.textContent})
ISSUE:
${JSON.stringify(issue)}`;

But then I made a small change to it, and the assistant changed completely and became smarter. The change was:

extra_instructions += `
---
issue: ${JSON.stringify(issue)}

---
You are chatting with: ${avatar.title} (@${avatar_nick.textContent})`;

So, what I realized is:

  • What is at the end of the instructions is more respected.
  • Identify who’s who repeatedly.
  • Separate sections with ---.
  • Avoid line breaks?
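The layout above can be sketched as a small builder (the field names come from the post; the helper itself is my own):

```javascript
// Assemble extra instructions so the identity line comes LAST,
// with --- dividers between sections, as suggested above.
function buildExtraInstructions(issue, name, nick) {
  return [
    "---",
    `issue: ${JSON.stringify(issue)}`,
    "---",
    `You are chatting with: ${name} (@${nick})`,
  ].join("\n");
}

const text = buildExtraInstructions({ id: 1 }, "Jane Doe", "jane");
console.log(text.endsWith("(@jane)")); // true — identity is the final line
```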
1 Like

Also the same here. I ask for JSON only. When I do get JSON, which is not always, it is wrapped in markdown even though I've said not to do that, and there is often non-JSON text in the answer as well. But this always works correctly in the Playground (although the Playground may be smarter about the JSON markdown than my code). I am using gpt-4-1106-preview in both cases and do not have instructions built into the run.

1 Like

"markdown" is what you need to prohibit. Here are some AI language instructions to try, which are better placed in the system message or assistant instructions:

"AI output is not received directly by a user; the recipient of your response is a RESTful API that only accepts JSON.

  • markdown and code block formatting is prohibited, never use ```
  • output must always begin { for json
  • produce only valid json data types
  • json primary keys: ["user_input", "ai_response", "mood"]"
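On the receiving side, you can verify the model actually obeyed those rules before trusting the payload. A small validator (the key names come from the instruction above; the function is my own sketch):

```javascript
// Returns the parsed object if the reply is bare JSON containing the
// required primary keys, otherwise null.
function parseStrictJson(reply) {
  const text = reply.trim();
  if (!text.startsWith("{")) return null; // rule: output must begin {
  let obj;
  try { obj = JSON.parse(text); } catch { return null; }
  const required = ["user_input", "ai_response", "mood"];
  return required.every((k) => k in obj) ? obj : null;
}

const good = '{"user_input":"hi","ai_response":"hello","mood":"neutral"}';
console.log(parseStrictJson(good) !== null); // true
console.log(parseStrictJson('```json\n{}\n```')); // null — markdown fence violates the rules
```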


I just drop that on ChatGPT along with a typical input to see the obedience:

"AI output is not received directly by a user; the recipient of your response is a RESTful API that only accepts JSON.

  • markdown and code block formatting is prohibited, never use ```
  • output must always begin { for json
  • produce only valid json data types
  • json primary keys: ["user_input", "ai_response", "mood"]"

---

Is a foam head on a beer pour a good or bad thing?

{
"user_input": "Is a foam head on a beer pour a good or bad thing?",
"ai_response": "The foam head on a beer pour is generally considered a positive and desirable attribute. It contributes to the beer's aroma, appearance, and overall drinking experience. The foam, also known as the head, helps release carbonation and trap aromas, enhancing the beer's flavor profile.",
"mood": "neutral"
}

{
"user_input": "Is a foam head on a beer pour a good or bad thing?",
"ai_response": "The foam head on a beer pour is generally considered a positive and desirable attribute. It contributes to the beer's aroma, appearance, and overall drinking experience. The foam, also known as the head, helps release carbonation and trap aromas, enhancing the beer's flavor profile.",
"mood": "neutral"
}

The repetition due to not ending with the correct stop token is a common fault with new AI models.

3 Likes

That did it! Thank you. I was beginning to think I'd need to write a tool just to get the structured output.

3 Likes

I spoke too soon:

Running it again, I got markdown surrounding the JSON. Tried again and got markdown, so only 1 success in 3 attempts. It does seem to have stopped returning anything outside the markdown-enclosed JSON, which I could test for and strip, but it's annoying.
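If the fence keeps coming back, the test-and-strip approach mentioned above is straightforward. A sketch (my own helper, not an official API):

```javascript
// Strip an optional ``` or ```json fence around a JSON reply, then parse.
// Falls through to plain JSON.parse when there is no fence.
function extractJson(reply) {
  const match = reply.trim().match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  const body = match ? match[1] : reply.trim();
  return JSON.parse(body);
}

console.log(extractJson('```json\n{"ok": true}\n```').ok); // true
console.log(extractJson('{"ok": true}').ok);               // true for bare JSON too
```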

Yes, markdown is another overtraining of the new models: they even put a math calculation in a markdown code block, making the AI useless for anything but ChatGPT.

With chat completions, you can banish several ``` token sequences with the logit_bias parameter.
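Building such a bias map can be sketched like this. Note the actual token IDs for the ``` sequences depend on the model's tokenizer and must be looked up (e.g. with tiktoken); the helper and the sample IDs below are my own placeholders:

```javascript
// Build a logit_bias map that strongly suppresses the given token IDs.
// Pass the result as the logit_bias parameter of a chat completions request.
function buildLogitBias(tokenIds, strength = -100) {
  return Object.fromEntries(tokenIds.map((id) => [String(id), strength]));
}

// Placeholder IDs — look up the real ``` token IDs for your model.
const bias = buildLogitBias([11111, 22222]);
console.log(bias);
```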

1 Like

Exactly like me :frowning:
+1

You can pass additional instructions during a run, and they will be appended to the assistant instructions. For that, you have to pass an additional_instructions argument instead of instructions.

1 Like

Yep, same here. It's not consistent: it works sometimes and not other times. I am trying to get a Markdown response similar to how it shows in ChatGPT, but the Assistants API mostly gives a plain text response or bare-minimum markdown with '\n', not proper text highlighting, lists, etc.

Thanks! Solved my [expensive] headache!!