Best way to set up assistant for company's internal use

I am looking to create an assistant for my company’s internal use, so employees can quickly get answers to FAQs about taking leave and company policies and such. Each employee should have their own thread.

However, each time a new thread is created the assistant needs to remember all the stuff I originally told it to “train” it (for example, “you are an assistant for our company, here are our policies…”). How can I start a thread from a given point like this? Or am I supposed to include all the instructions in the “instructions” section? It is a quite large amount of information.


Welcome @siddarth.calidas

Threads are just a sequence of stored messages. You can have multiple threads running on a single assistant.

Every time you create a run, you can specify the thread_id and the assistant you want to run it on.
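A minimal sketch of that setup with the official `openai` Python SDK: one shared assistant, one thread per employee, and a run that ties a specific thread to the assistant. Model name, instructions text, and the message content are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One assistant shared by the whole company, created once
assistant = client.beta.assistants.create(
    name="Company FAQ Assistant",
    instructions="You are an internal assistant for our company. ...",
    model="gpt-4o",  # placeholder model
)

# One thread per employee; the assistant's instructions apply to every run on it
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How many days of annual leave do I get?",
)

# A run executes the assistant against the thread's messages
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

You only create the assistant once; every new employee conversation is just a fresh thread run against the same `assistant_id`, so the instructions never need to be repeated per thread.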

Hi @sps . Thanks for the response. I am still not entirely clear on how to give the assistant the context it needs. In the playground, I have been giving it some instructions in the instructions box and the rest in the thread itself. Is all of the instructions/context meant to be given to the assistant in the instructions? I wasn’t sure how long that was meant to/allowed to be.

EDIT: I think I’m understanding better now - all the custom instructions must be provided in the instructions. Beyond that, messages are in a thread.

My worry is that there is a lot of information this assistant needs to know (thousands of tokens). Not sure if instructions are capable of handling so much information.

Hi @siddarth.calidas
As you’ve now understood, all the instructions should go in the instructions section, and the instructions text (tokens) becomes part of the whole prompt along with the messages, for which there is a 128k-token context window (in the latest models).
But if you have a large amount of information that the assistant has to check to give answers, then it would be much better to manage a knowledge base in a vector store using the File Search tool. The vector store is an independent object that you assign to the assistant (or to a thread).
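A sketch of that File Search setup with the `openai` Python SDK. The vector store name, file name, model, and instructions are placeholders, and note that newer SDK versions expose `client.vector_stores` directly rather than under `beta`.

```python
from openai import OpenAI

client = OpenAI()

# Upload policy documents and index them in a vector store
vector_store = client.beta.vector_stores.create(name="Company Policies")
with open("leave_policy.pdf", "rb") as f:  # placeholder document
    client.beta.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id,
        file=f,
    )

# Attach the vector store to the assistant via the File Search tool
assistant = client.beta.assistants.create(
    name="Company FAQ Assistant",
    instructions="Answer employee questions using the attached policy documents.",
    model="gpt-4o",  # placeholder model
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```

With this in place the instructions only need to describe the assistant's role and tone; the bulky policy text lives in the vector store and is retrieved per question instead of being stuffed into the prompt.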

Yes, the instructions are passed once when creating the assistant and can be overridden when creating a run with the instructions parameter; additional_instructions specific to the run can also be added during run creation.
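For illustration, a sketch of both run-time parameters (the IDs and the text are placeholders): `instructions` replaces the assistant-level instructions for that run only, while `additional_instructions` is appended to them.

```python
from openai import OpenAI

client = OpenAI()

# `additional_instructions` is appended to the assistant's base instructions
# for this run only; passing `instructions` instead would replace them entirely.
run = client.beta.threads.runs.create(
    thread_id="thread_abc123",     # placeholder thread ID
    assistant_id="asst_abc123",    # placeholder assistant ID
    additional_instructions="The employee asking is based in the Berlin office.",
)
```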

Regarding length, the instructions parameter accepts a fairly large amount of text, up to 256,000 characters.

If you want the assistant to have any additional “knowledge,” it would be better to use a retrieval tool with a suitable file type to store that knowledge in OpenAI’s storage and attach it when creating the assistant.

Understood. I think that using file storage would be better than forcing all the information down the instructions parameter. Thank you guys for the help!

OP’s requirement is rather simple and I don’t think he’ll need to go to any third-party tool for this.