Is the assistant actually opening the file and calling the code interpreter? If not, the model will hallucinate a response. You can try prompting it with something explicit like "Open the file and answer the question: where did he study?"
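For context, here is a minimal sketch of how that explicit instruction can be attached to the file, assuming you are using the OpenAI Assistants API with the code interpreter tool via the official Python SDK; the file name, model name, and question are placeholders, not anything from the original thread:

```python
from openai import OpenAI

client = OpenAI()

# Upload the document so the assistant can open it with the code interpreter.
uploaded = client.files.create(file=open("resume.pdf", "rb"), purpose="assistants")

# Assistant with the code interpreter tool enabled.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{"type": "code_interpreter"}],
)

# Attach the file to the user message and tell the model explicitly to open it,
# instead of letting it answer from memory.
thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "Open the file and answer the question: where did he study?",
            "attachments": [
                {"file_id": uploaded.id, "tools": [{"type": "code_interpreter"}]}
            ],
        }
    ]
)

# Run the assistant on the thread and wait for it to finish.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

# Print the latest assistant reply.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

If the run steps still show no `tool_calls`, the model never touched the file, and the answer is coming from its own guesses rather than the document.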