const assistant = await openai.beta.assistants.create({
name: "Test Assistant",
instructions: "When I say 'red', write a short poem.",
tools: [{ type: "code_interpreter" }],
model: "gpt-4-1106-preview"
});
When I test it, it says: “It seems you’ve mentioned the word ‘red,’ but I’m not sure what context you’re referring to. Could you please provide more information or clarify your request so I can assist you appropriately?”
Why is the Assistant API not aware of its instructions? This should be like the system message in ChatCompletions, no?
Have you been able to make the Assistant follow your instructions? I’m having the same problem: I’ve set an instruction for the assistant to always answer in JSON format, but it often ignores it.
I was having a similar problem. It turns out that if you provide instructions within the run creation, they will override your assistant instructions. Be sure not to provide instructions in the run, not even an empty instructions: "".
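For illustration, a minimal sketch of the pitfall (the helper below is mine, not part of any SDK; it only builds the run parameters and makes no API call):

```javascript
// Hypothetical helper illustrating the pitfall: any `instructions`
// value on the run, even an empty string, replaces the assistant's
// instructions entirely, so only include the key when you mean to override.
function buildRunParams(assistantId, overrideInstructions) {
  const params = { assistant_id: assistantId };
  if (overrideInstructions !== undefined) {
    params.instructions = overrideInstructions; // overrides, even when ""
  }
  return params;
}

// Safe: assistant-level instructions stay in effect.
const safe = buildRunParams("asst_123"); // placeholder id
// Pitfall: an empty string still wipes the assistant instructions.
const wiped = buildRunParams("asst_123", "");
console.log("instructions" in safe, "instructions" in wiped); // false true
```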
Exactly!! Though the documentation says ‘Additional Instructions for the Run’, they are not additional but overriding. I wasted a lot of time because this wasn’t clear: run-level instructions override the assistant-level instructions.
I was having a similar problem. No matter how much I edited the prompt, the assistant didn’t get my name right and confused me with the author of the report that was attached.
Also the same here. I ask for JSON only. When I do get JSON, which is not always, it is wrapped in markdown even though I’ve said not to do that. There is often also non-JSON text in the answer. But this always works right in the playground (although the playground may be smarter about the JSON markdown than my code). I am using gpt-4-1106-preview in both cases and do not have instructions built into the run.
“markdown” is what you need to prohibit. Some AI language instructions to try, which are better placed within the system message or assistant instructions:
"AI output is not received directly by a user; the recipient of your response is a RESTful API that only accepts JSON. Markdown and code block formatting are prohibited; never use ```."
Is a foam head on a beer pour a good or bad thing?
{
  "user_input": "Is a foam head on a beer pour a good or bad thing?",
  "ai_response": "The foam head on a beer pour is generally considered a positive and desirable attribute. It contributes to the beer’s aroma, appearance, and overall drinking experience. The foam, also known as the head, helps release carbonation and trap aromas, enhancing the beer’s flavor profile.",
  "mood": "neutral"
}
{
  "user_input": "Is a foam head on a beer pour a good or bad thing?",
  "ai_response": "The foam head on a beer pour is generally considered a positive and desirable attribute. It contributes to the beer’s aroma, appearance, and overall drinking experience. The foam, also known as the head, helps release carbonation and trap aromas, enhancing the beer’s flavor profile.",
  "mood": "neutral"
}
The repetition due to not ending with the correct stop token is a common fault with new AI models.
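The system-message placement described above can be sketched as a chat completions request shape (no network call here; the model name and wording match the thread, but treat this as an illustration, not the one true payload):

```javascript
// Sketch: put the JSON-only directive in the system message of a
// chat completions request, rather than in per-run instructions.
const systemDirective =
  "AI output is not received directly by a user; the recipient of your " +
  "response is a RESTful API that only accepts JSON. Markdown and code " +
  "block formatting are prohibited; never use ```.";

const request = {
  model: "gpt-4-1106-preview",
  messages: [
    { role: "system", content: systemDirective },
    { role: "user", content: "Is a foam head on a beer pour a good or bad thing?" },
  ],
};
```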
Running it again, I got markdown surrounding the JSON. Tried again and got markdown, so only 1 of 3 was clean. It does seem to have stopped returning anything outside the markdown-enclosed JSON, which I could test for and strip, but it’s annoying.
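The test-and-strip workaround can be sketched like this (the helper name is mine, not from any SDK):

```javascript
// Workaround sketch: strip an optional ```json ... ``` fence before
// parsing, so responses work whether or not the model adds markdown.
function parseMaybeFencedJson(text) {
  const match = text.trim().match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  const body = match ? match[1] : text.trim();
  return JSON.parse(body);
}

console.log(parseMaybeFencedJson('```json\n{"mood": "neutral"}\n```').mood); // prints: neutral
console.log(parseMaybeFencedJson('{"mood": "neutral"}').mood); // prints: neutral
```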
Yes, markdown is another overtraining of new models, even putting a math calculation in a markdown code block, making the AI useless for anything but ChatGPT.
With chat completions, you can banish several ``` token sequences with the logit_bias parameter.
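A sketch of that logit_bias approach, building the request body only. The token IDs below are PLACEHOLDERS, not real tokenizer IDs; look up the actual IDs for "```" and its variants with a tokenizer such as tiktoken for your model:

```javascript
// Sketch: bias code-fence tokens far negative so chat completions
// cannot emit them. Token IDs are PLACEHOLDERS; compute the real IDs
// for "```" and its variants with your model's tokenizer.
const fenceTokenIds = [111111, 222222, 333333]; // placeholders only

const logitBias = Object.fromEntries(
  fenceTokenIds.map((id) => [id, -100]) // -100 effectively bans a token
);

const request = {
  model: "gpt-4-1106-preview",
  messages: [{ role: "user", content: "Reply in bare JSON only." }],
  logit_bias: logitBias,
};
```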
You can pass additional instructions during a run, and they will be appended to the assistant instructions. To do that, pass an additional_instructions argument instead of instructions.
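A minimal sketch of such a run payload (request shape only; the assistant ID is a placeholder):

```javascript
// Sketch: additional_instructions is appended to the assistant's own
// instructions, whereas an `instructions` key on the run would replace them.
const runParams = {
  assistant_id: "asst_123", // placeholder id
  additional_instructions: "Always answer in JSON.",
  // no `instructions` key here, so assistant-level instructions survive
};
```

With the Node SDK this would typically be passed to something like openai.beta.threads.runs.create(threadId, runParams), though check your SDK version for the exact call shape.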
Yep, same here. It’s not consistent: it works sometimes and not other times. I am trying to get a Markdown response similar to how it shows in ChatGPT, but the Assistants API mostly gives a plain-text response, or bare-minimal markdown with ‘\n’ but no proper text highlighting, lists, etc.