Conversational Assistant API

Hi all. I have been working on building Assistants manually for a while now. I am new to coding, but GPT has helped greatly in showing me how to build things. One thing I am stuck on at the moment is trying to run an assistant, maintain memory, and carry a conversation with the data/potential files the way a user would.

Currently I am stuck at a spot where the data gets loaded and I am able to ask one question, but it quits after that. Would anyone know how to keep this going? I'd like to basically continue the conversation every time I run it, and eventually share the tool with others via Streamlit or something along those lines.

Any ideas are welcome! Thank you.

What do you mean by “it quits after that”?
Something to keep in mind is that the Assistants API is a bit different from the Chat Completions API, in that you create a run with a thread and an assistant. It's the thread object that keeps the messages, not the assistant. So if you want a multi-turn conversation, you simply grab the thread and create a new message on it:

# Add a new user message to the existing thread
message = client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Here is some new info"
)

Then you simply run the thread with an assistant:

# Execute our run
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
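
One thing worth noting: a run executes asynchronously, so you generally need to wait for it to finish before the assistant's reply appears on the thread. A minimal polling sketch, assuming the standard openai Python SDK (I believe newer SDK versions also offer a create_and_poll helper that does this for you):

import time

# Poll until the run leaves the queued/in_progress states
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id, run_id=run.id
    )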

And grab the messages again:

# List only the messages added after our last user message
messages = client.beta.threads.messages.list(
    thread_id=thread.id, order="asc", after=message.id
)

Something that confused me at first was the use of .create() as the method call; it made me think I was resetting everything, but you are creating a new message on the thread, not a new thread.
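
To address your original question about it quitting after one answer: you can wrap the whole thing in an input loop so the program keeps asking for the next message instead of exiting. This is just a rough sketch, assuming OPENAI_API_KEY is set in your environment and that assistant_id is a placeholder for your own assistant's ID:

import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
assistant_id = "asst_..."  # placeholder: replace with your assistant's ID

thread = client.beta.threads.create()  # one thread holds the whole conversation

while True:
    user_input = input("You: ")
    if user_input.lower() in ("quit", "exit"):
        break

    # Add the user's message to the existing thread
    message = client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_input
    )

    # Run the thread with the assistant
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )

    # Wait for the run to finish
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )

    # Print the messages added after our last user message
    messages = client.beta.threads.messages.list(
        thread_id=thread.id, order="asc", after=message.id
    )
    for m in messages:
        for part in m.content:
            if part.type == "text":
                print(f"Assistant: {part.text.value}")

The same loop body would drop into a Streamlit app later; you would just keep the thread ID in session state instead of looping over input().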


Quits as in the program would just end after answering my query. Either way, thank you for sharing some insights - this is definitely helpful and I will look to implement the recurring messages within my thread!
