I am encountering problems with the Assistants API’s automatic optimization of the context window. I am sending about 7k tokens of code in the first message, and it gets truncated almost in half, so that only the first half is within the model’s awareness. When I ask anything about the code afterwards, it only knows what’s going on in the first part and is completely unaware of the rest. I know they said they’re using the same optimization for assistants that they use for ChatGPT, but this is not good at all; I expected the optimization to kick in when the context window is almost full, not when it’s nearly empty (I’m using the 128k-token context model, by the way).
I used chat completions instead of assistants up until now, and it seems that I need to go back to it. Unless there’s a way to explicitly mark a specific message in a thread so that it is never truncated or modified…
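With chat completions I pass the whole message list myself on every call, so nothing gets truncated behind my back. Roughly like this, sketched with the official openai Python client (my project is in Go; Python here is just for illustration, and the file path and prompts are placeholders):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

code_text = open("main.go").read()  # placeholder for the ~7k-token code blob

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You will be asked questions about the following Go code."},
        {"role": "user", "content": code_text},
        {"role": "user", "content": "What does the main() function do?"},
    ],
)
print(response.choices[0].message.content)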
Do people who use data retrieval, where they need to give the model big chunks of text in order for it to answer, face the same problem?
No, it’s about its awareness of the given message with the user role. When it responds, it simply says that it doesn’t know what piece of code I am talking about. But it does seem to know the code if I ask about something within the first 3-4k tokens.
By the way, you need to explicitly specify gpt-4-1106-preview throughout, as it is the only common AI model with that 128k context length and a reasonable chance of understanding that much input and producing useful output from it.
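For example, pinning the model when creating the assistant; a minimal sketch with the openai Python client, where the name and instructions are placeholders:

from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",  # explicitly pin the 128k-context model
    name="code-helper",
    instructions="Answer questions about the Go codebase the user provides.",
)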
If one wanted to reproduce this symptom: are you trying to use code interpreter, is the AI invoking code interpreter on its own, or are you just asking about the code in plain messages? Please describe exactly what “I am sending 7k tokens consisting of code” looks like.
Clarification: is the “code” all message text, or is it a file attachment to a user message?
Attachments are chunked and accessed by the demand-based retrieval function.
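If you wanted to test that path instead, the upload-and-attach flow looks roughly like this; a sketch assuming the Assistants retrieval tool as currently documented, with a placeholder file name:

from openai import OpenAI

client = OpenAI()

# Upload the code as a file instead of pasting it as message text.
uploaded = client.files.create(file=open("main.go", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # retrieval chunks the file and fetches passages on demand
    file_ids=[uploaded.id],
)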
package main

import (
    "os"
    "strings"

    "github.com/joho/godotenv"
    "github.com/sirupsen/logrus"

    "myproject/config" // project-local packages; import paths assumed
    "myproject/logger"
)

func main() {
    // Load environment variables from .env (absent in production).
    err := godotenv.Load()
    if err != nil {
        logger.Log.WithFields(logrus.Fields{
            "err": err,
        }).Error("Can't load config from .env. Problem with .env, or the server is in production environment.")
        return
    }

    // Build the API configuration from environment variables.
    config := config.ApiEnvConfig{
        Env:     strings.ToUpper(os.Getenv("ENV")),
        Port:    os.Getenv("PORT"),
        Version: os.Getenv("VERSION"),
    }
    ...
…except it is 7k tokens long. I create a message with the user role and instruct the assistant to remember the code, since I’ll ask questions about it afterwards (I also added a thorough instruction when creating the assistant, just like in the docs). Then the next message I add to the thread is something like: “What does the main() function do?”, and it answers it well. Then I ask another question about a function that is at the end of the code text, “What does the function httpResponse() do?”, and it answers with, literally: “The function httpFunction does not exist in the project codebase you’ve provided. …”. I have played with it and found out that it is not aware of anything in the second half of the code. Even stranger, if I ask about code that sits somewhere in the middle of the whole text, it starts to explain, gives me the first few lines of that piece of code, and then says: “The rest of the code has been cut off”, literally not knowing how that specific block of code ends.
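For anyone who wants to reproduce it, my flow is essentially the following; sketched with the openai Python client (my project itself is Go), assuming an assistant created as in the snippet above and with code_text standing in for the 7k-token code:

import time
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Remember this code; I will ask questions about it:\n\n" + code_text,
)

def ask(question: str) -> str:
    # Append the question, run the assistant, and poll until the run finishes.
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=question)
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    messages = client.beta.threads.messages.list(thread_id=thread.id)  # newest first
    return messages.data[0].content[0].text.value

print(ask("What does the main() function do?"))          # answered correctly
print(ask("What does the function httpResponse() do?"))  # claims the function does not exist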
My guess is that the optimization algorithm for the context window is triggered by such a long message and can’t help but truncate it. Also, I have not had this problem with chat completions.