Assistant instructions truncated on 1st call to only ~12k chars

Several devs have reported this and it’s still not addressed.

Simply put: if the instructions passed to an Assistant on the FIRST call exceed roughly 12,000 characters (not tokens), they appear to be truncated. I emphasize first call because we all understand the context management the Assistant does on later turns.

More detail: I have a prompt that's about 1.5k characters long, which refers to a markdown document that is appended to the string, adding maybe another 10k characters. The resulting ~11.5k-character string is passed as instructions in the Assistants API call, well under the model's token limit. I'm using the latest Python library (1.16.1) and no error is returned. However, the Assistant can only find data in the first half of the markdown (roughly the first 5k characters), not the latter half. So somehow the instructions are being truncated, even though the total is only about 12k characters, which is tiny.
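
For reference, here is a minimal sketch of the pattern I'm describing, using the Python SDK (the file names, model, and assistant name are placeholders of my own, not part of the actual setup):

```python
from openai import OpenAI  # Python SDK 1.16.1

client = OpenAI()

# Hypothetical example: a short prompt plus an appended markdown reference doc,
# combined into a single ~11.5k-character instructions string.
base_prompt = open("prompt.txt").read()        # ~1.5k characters
reference_md = open("reference.md").read()     # ~10k characters
instructions = base_prompt + "\n\n" + reference_md

print(len(instructions))  # ~11.5k characters, well under the model's token limit

assistant = client.beta.assistants.create(
    model="gpt-4-turbo",    # assumed model; any Assistants-capable model applies
    name="doc-assistant",   # hypothetical name
    instructions=instructions,
)
```

The call succeeds with no error, yet questions about content in the latter half of `reference.md` come back as if that text were never provided.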

Other devs, also frustrated, have fallen back to text completions or moved to Claude. Can you please fix this obvious problem? Thanks.


I’m having a similar issue. Did you figure out any sort of work-around?

My solution was to switch to Claude’s Opus and Sonnet models. They’re cheaper and have a much larger context window (200k tokens). However, the API is less full-featured; e.g., they don’t offer embeddings, so you have to handle that on your own. (I don’t currently use embeddings.)
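
For anyone considering the same workaround, here's a rough sketch of how that call might look with Anthropic's Python SDK; the model IDs, file names, and `max_tokens` value are my assumptions, not something from the post above:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The long reference markdown goes into the system prompt; Claude 3 models
# accept up to 200k tokens of context, so ~12k characters is nowhere near the limit.
instructions = open("prompt.txt").read() + "\n\n" + open("reference.md").read()

response = client.messages.create(
    model="claude-3-sonnet-20240229",  # or claude-3-opus-20240229
    max_tokens=1024,
    system=instructions,
    messages=[{"role": "user", "content": "Answer using the reference document above."}],
)
print(response.content[0].text)
```

Note there is no server-side thread/assistant abstraction here, so you manage conversation history and any retrieval/embeddings yourself.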