For Assistants, should small data be added as tool response or in instructions?

I am setting up an assistant (with the API) and I have a small amount of data/context that I want almost every response to be aware of, and this data changes slightly with every new message.

I see a few different ways of adding the data:

  1. Append it to each user message
  2. Create a tool “get context” with the data as the response
  3. Update the assistants instructions on each message
  4. Use files

Each time the data updates, the previous data is invalid, so #1 and #2 feel like they might be stuffing the context window with unnecessary data. But #2 seems better than #1 for the conversation flow. #4 feels inefficient since it’s such a small amount of data.

So I was just considering #3, updating the assistant instructions to always have the context appended to it whenever I create a new assistant run… but that also seems a bit odd.
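For what it's worth, option #3 doesn't necessarily require mutating the assistant itself: the Assistants API lets you pass an `instructions` override when creating a run, which replaces the stored instructions for that run only. A minimal sketch of that idea follows; `build_run_params` and the IDs are made-up names, and in real usage the returned dict would be passed to `client.beta.threads.runs.create(**params)`:

```python
# Sketch of option #3: per-run instruction override.
# BASE_INSTRUCTIONS and build_run_params are hypothetical; the point is that
# the fresh context lives only in this run's instructions, not in the thread.

BASE_INSTRUCTIONS = "You are a helpful assistant."

def build_run_params(thread_id: str, assistant_id: str, context: str) -> dict:
    """Compose run parameters with the latest context appended to the instructions."""
    return {
        "thread_id": thread_id,
        "assistant_id": assistant_id,
        # 'instructions' overrides the assistant's stored instructions for this
        # run only, so stale context from earlier runs never accumulates.
        "instructions": f"{BASE_INSTRUCTIONS}\n\nCurrent context:\n{context}",
    }

params = build_run_params("thread_abc", "asst_xyz", "User timezone: UTC+2")
print(params["instructions"])
```

There's also an `additional_instructions` parameter on run creation that appends to (rather than replaces) the assistant's instructions, which may be a better fit if the base instructions are long.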

Does anyone know which approach would be preferred here? Or do all these approaches end up being similar anyway?

Let me ask you this: Is interacting with your assistant meant to be single-shot only or more conversational?

If you just append the context to each query, this could confuse the model in a conversation: stale copies of it would be left inside every prior message, and the longer the thread grows, the harder it becomes for the model to tell which appended context is current.

If you’re not feeding in prior data from the user’s previous queries/responses, then I would programmatically change the instructions on each call. That actually sounds pretty efficient to me, but again, that’s only if it’s not meant to be a chat conversation/thread.

Otherwise, yeah, you’d probably be looking at option 2, because you’d need to manually manage context and potentially your own data structure for the user’s thread. This option does have the most flexibility though, and is likely the only one where you could inject context before it’s fed to the model and strip the stale context back out before the history is fed to the model again.
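To make option 2 concrete, here's a rough sketch of answering a pending "get_context" tool call with freshly fetched data. `fetch_current_context` and the shape of `pending` are stand-ins for illustration; in real usage the resulting list would go to `client.beta.threads.runs.submit_tool_outputs(...)` when the run is in the `requires_action` state:

```python
import json

# Sketch of option #2: serving the small, frequently-changing data through a
# "get_context" tool. fetch_current_context is a hypothetical stand-in for
# wherever that data actually lives.

def fetch_current_context() -> dict:
    # Always return the latest data, so the model never sees a stale snapshot.
    return {"timezone": "UTC+2", "open_tickets": 3}

def build_tool_outputs(required_tool_calls: list[dict]) -> list[dict]:
    """Answer every pending get_context call with the current data."""
    outputs = []
    for call in required_tool_calls:
        if call["function"]["name"] == "get_context":
            outputs.append({
                "tool_call_id": call["id"],
                "output": json.dumps(fetch_current_context()),
            })
    return outputs

pending = [{"id": "call_1", "function": {"name": "get_context", "arguments": "{}"}}]
print(build_tool_outputs(pending))
```

The nice property here is that the context only enters the thread when the model explicitly asks for it, and you control the exact payload at the moment it's requested.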