"Proper" way to include initial context for assistants API when >32k?

My initial context – a prompt followed by an included file – is more than 32K. I’m using gpt-4-turbo-preview with the Assistants API, but the instructions argument is limited to 32K. Do I need to include the file as part of my initial question instead? That seems like a kludge. What is the “proper” way to do this?
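One option alluded to above is to pass the oversized context as the thread's first user message rather than in `instructions`. A minimal sketch of that, assuming a hypothetical file path and delimiter format (the commented SDK call is illustrative, not executed):

```python
# Sketch: combine the prompt and the file contents into a single user
# message payload, to be sent as the first message on the thread instead
# of cramming everything into `instructions`. File path is hypothetical.

def build_first_message(prompt: str, file_path: str) -> dict:
    """Return a user-message payload containing the prompt plus file text."""
    with open(file_path, "r", encoding="utf-8") as f:
        file_text = f.read()
    return {
        "role": "user",
        "content": f"{prompt}\n\n--- BEGIN FILE ---\n{file_text}\n--- END FILE ---",
    }

# Usage with the openai Python SDK (not executed here):
# from openai import OpenAI
# client = OpenAI()
# thread = client.beta.threads.create(
#     messages=[build_first_message(my_prompt, "context.txt")]
# )
```

The delimiters are just one convention for keeping the prompt and file visually separate in the message; any unambiguous framing works.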


File retrieval is unreliable. When retrieval is enabled, the first file uploaded usually has some of its text injected into the context regardless of the user's input, which increases your costs, but this is undocumented.

With a run, you have `additional_instructions`, which lets you place another system-level message. This is only as persistent as you are at ensuring the parameter is passed on every run.
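A sketch of that approach, assuming the `additional_instructions` parameter on run creation in the openai Python SDK; the splitting helper and the 32768-character cap are taken from this thread's discussion, not from any official guidance:

```python
# Sketch: split an oversized system prompt across `instructions` and
# `additional_instructions` so each field stays under the per-field cap.
# The helper is hypothetical; 32768 chars is the limit discussed above.

LIMIT = 32768

def split_system_prompt(text: str, limit: int = LIMIT) -> tuple[str, str]:
    """Return (instructions, additional_instructions), each <= limit chars."""
    if len(text) <= limit:
        return text, ""
    head, tail = text[:limit], text[limit:]
    if len(tail) > limit:
        raise ValueError("prompt exceeds both fields combined")
    return head, tail

# Usage with the openai Python SDK (not executed here):
# instructions, extra = split_system_prompt(big_prompt)
# run = client.beta.threads.runs.create(
#     thread_id=thread.id,
#     assistant_id=assistant.id,
#     instructions=instructions,
#     additional_instructions=extra,
# )
```

Note the naive character split can cut mid-word or mid-sentence; splitting at a paragraph boundary near the limit would be gentler on the model.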


For clarity, I’m not using file retrieval (or any other tools), as I found file retrieval to be unreliable, as you mentioned. (I’m guessing it puts the file in a vector database, etc., which is not what I want; I need literal, not semantic, matching.) Instead, I use a prompt plus the file contents as the instructions. In this particular case, the prompt plus file contents is >32K, which makes the API reject it. The instructions field in the Playground can handle it, just not the API.

I’ll try `additional_instructions` as you mentioned. Hopefully, when instructions + additional_instructions total over 100K but stay under the model’s context limit, it will still work. This seems like a workaround, but whatever works…
