Adding messages to the Assistants API

Hi, I’m kind of new to using the Assistants API, so bear with me.

My use case is having a specific, defined assistant (a brainstorming assistant) and asking it a series of questions. I’ve created the assistant and played around with it in the playground, and it’s functioning how I want it to. However, once I write the code in Node.js, I’m a bit confused about how I can chain messages to it. I’ve defined two different messages, like below, and copied the rest of the code from the math assistant example, but I only ever get a response to the second message I’ve chained. How do you get one separate answer per user message?

const thread = await openai.beta.threads.create();

const message = await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "Expand on: 'digital pizza pan'",
});

const message2 = await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "Surprise me.",
});

const run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistant.id,
});

const checkStatusAndPrintMessages = async (threadId, runId) => {
    let runStatus = await openai.beta.threads.runs.retrieve(threadId, runId);
    if (runStatus.status === "completed") {
        let messages = await openai.beta.threads.messages.list(threadId);
        messages.data.forEach((msg) => {
            const role = msg.role;
            const content = msg.content[0].text.value;
            console.log(`${role.charAt(0).toUpperCase() + role.slice(1)}: ${content}`);
        });
    } else {
        console.log("Run is not completed yet.");
    }
};

setTimeout(() => {
    checkStatusAndPrintMessages(thread.id, run.id);
}, 3000);

I will have a post soon on NamedMessages. The key takeaway from that post is to subclass Messages like so (in the betaassi framework):

class NamedMessage(BaseMessage):
    Tx_id: Optional[str] = Field(default="")

Behind the scenes, it uses up one slot (out of 16) in the metadata. Then, after you retrieve the latest message (if you use the run mechanism), you can update the metadata of the response message with the type and tx_id.

Then you can list the messages belonging to the specific tx_id.

The betaassi framework is written in python… so ymmv
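For the Node.js side of this thread, the same idea can be sketched roughly like below. This is a sketch, not betaassi itself: `tx_id` is just a metadata key we choose, `openai` is assumed to be your client instance, and the client-side filter is needed because the list endpoint doesn’t filter by metadata.

```javascript
// Tag a message with a transaction id via the messages.update endpoint.
// This consumes one of the 16 metadata slots on the message.
async function tagMessage(openai, threadId, messageId, txId) {
  return openai.beta.threads.messages.update(threadId, messageId, {
    metadata: { tx_id: txId },
  });
}

// The list endpoint has no metadata filter, so filter client-side
// over the `data` array returned by messages.list().
function messagesForTx(messages, txId) {
  return messages.filter((m) => m.metadata && m.metadata.tx_id === txId);
}
```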

You may want to store messages somewhere, either on your own backend or as @icdev2dev recommends in the metadata.

You can just keep an array of messages otherwise, and use it as a FIFO queue to push messages to the assistant thread and create a new run only after the assistant has already replied to the first. i.e.

  1. Add message to queue (create if empty)
  2. Create thread
  3. Add first message from queue to thread. Remove it from queue.
  4. Run Thread
  5. Wait for completion
  6. Render response message
  7. If queue is not empty, repeat from 3
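The steps above can be sketched as a small driver function. This is a rough sketch, assuming a hypothetical helper `runThreadAndWait(threadId, message)` that adds one message to the thread, creates a run, polls until completion, and resolves with the assistant’s reply:

```javascript
// Drains a FIFO queue of user messages, one run per message.
// `runThreadAndWait` is a hypothetical helper: it should add the message
// to the thread, create a run, poll until "completed", and return the reply.
async function processQueue(queue, threadId, runThreadAndWait) {
  const replies = [];
  while (queue.length > 0) {
    const message = queue.shift(); // take the first message (FIFO)
    const reply = await runThreadAndWait(threadId, message);
    replies.push(reply); // one assistant reply per user message
  }
  return replies;
}
```

Because each run only sees the messages added so far, running one message at a time gives you the one-answer-per-message behavior asked about in the original question.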

Thank you for your response. I understand the process you’ve laid out, and it seems doable. I just had a few more questions about the Assistants API: so the thread with assistants isn’t like a queue? The messages aren’t chained one after another? Can you only make one message call at a time? Would it be better for me to use Chat Completions for my context, then? I don’t see the advantage of assistants.

I don’t know what NamedMessages is.

The messages in a thread are more of an array than a queue, in the sense that you can add any number of events (messages, files, etc.) from as many “users” as you want before you ask a specific assistant to run everything that has accumulated.

It does not answer messages one by one, but rather takes into account all the events (messages) in the thread for its next response.

Let’s say the first message is “Did you know my name is Jorge?”
Second message is “What is my name?”

If you run this, the assistant will most likely reply with something like “Your name is Jorge”, and will not answer first “No” and then “Jorge”.

In other words, the thread acts on all information gathered once per run. The Assistant MAY decide to send you multiple messages, but that is independent of the number of messages received.

For your use case… unless you need Code Interpreter, file browsing, or Assistant functions, you will probably indeed be better off with Chat Completions.

The only other benefit of Assistants is that they do the conversation management for you, so theoretically you wouldn’t need to manage your own chat history.
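For comparison, here is a minimal sketch of the Chat Completions approach where you manage the history array yourself and get exactly one reply per call. The model name is an example, and `openai` is assumed to be your client instance:

```javascript
// One reply per call: push the user turn, call the API, and keep the
// assistant turn in `history` so the next call has the full context.
async function ask(openai, history, userText) {
  history.push({ role: "user", content: userText });
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini", // example model name
    messages: history,
  });
  const reply = res.choices[0].message;
  history.push(reply); // self-managed chat history
  return reply.content;
}
```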

Happy building!


Okay, that makes the Assistants API much clearer to me, thank you so much for the detailed response! Could I just ask you one more question: what would be the best way, in your opinion, to add context / a role (the way you do with an Assistant when creating it) to Chat Completions? Would it just be through the initial message, e.g. “Act as a …”?

I think what you need is to define the assistant’s instructions when you create it.

Take a look at the optional parameter “instructions” of the following endpoint:

As an example, here are the instructions for my IT assistant:

self.instruction = 'You are an IT administrator. You are responsible for managing user permissions. You have a list of users and their permissions. You can get the permissions of a user by their username, update the permissions of a user by their username, and get the user ID based on the username. You can use the following functions: getPermissionsByUsername, updatePermissionsByUsername, getUserIdByUsername.'

I would like to add that you can add additional instructions per run as mentioned here:
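Putting the two together, here is a sketch of creating an assistant with `instructions` and then supplementing them per run with `additional_instructions` (both are real parameters on those endpoints; the model name, wording, and `createAndRun` helper are examples, with `openai` as your client instance):

```javascript
async function createAndRun(openai, threadId) {
  // Base behavior lives on the assistant itself.
  const assistant = await openai.beta.assistants.create({
    name: "IT Admin Assistant",
    model: "gpt-4o-mini", // example model name
    instructions: "You are an IT administrator responsible for user permissions.",
  });

  // Per-run additions are appended on top of the assistant's instructions.
  return openai.beta.threads.runs.create(threadId, {
    assistant_id: assistant.id,
    additional_instructions: "Address the user as Jorge.",
  });
}
```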

More or less. That is what the system prompt is for. When you send a completion, you can assign messages to the user, the assistant, or the system. These system messages are usually where you condition the model to respond or behave in a certain way.

Still, the difference between using a system prompt and, as you say, just a user message is minimal. One thing to note is that system messages don’t have to be the first message in a conversation, so you can change/update the behavior mid-conversation if needed.
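For example, a Chat Completions history where one system message sets the role up front and a second one changes behavior mid-conversation (all content here is placeholder text):

```javascript
// System messages can appear anywhere in the messages array,
// not just at index 0.
const messages = [
  { role: "system", content: "You are a concise brainstorming assistant." },
  { role: "user", content: "Expand on: 'digital pizza pan'" },
  { role: "assistant", content: "(previous reply)" },
  // Update behavior mid-conversation with another system message:
  { role: "system", content: "From now on, answer in bullet points only." },
  { role: "user", content: "Surprise me." },
];
```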

Hope that helps, best of luck and happy building!
