I loaded the same assistant personality file into the Assistants playground and into a custom GPT. The answers from the custom GPT were complete and correct, but the Assistant constantly answered incorrectly; half of its answers could have been made up entirely. The Assistant did have some understanding of the file's contents, but it still answered most questions about who the persona was incorrectly. I also tried changing the model the Assistant was based on, but that made no significant difference. The settings, prompts, and instructions were the same for both.
Does anyone know why the Assistant is so bad at understanding file data?
Custom GPTs may have more than just the instructions prompt you see in the GPT builder. There are likely more instructions being injected behind the scenes.
If you want an Assistant to run retrieval for every question and answer based on that, you're going to have to add that explicitly to the instructions. You can also add instructions on how to process what it has retrieved from the knowledge base.
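Something along these lines with the Python SDK, as a rough sketch (the instructions wording and the `file-abc123` ID are placeholders, not something you should copy verbatim):

```python
from openai import OpenAI

client = OpenAI()

# Create an Assistant that is explicitly told to rely on retrieval
# for every answer. Adjust the wording to your own use case.
assistant = client.beta.assistants.create(
    name="File QA Assistant",
    model="gpt-4-1106-preview",
    instructions=(
        "For every user question, search the attached knowledge files first. "
        "Answer only from the retrieved passages. If the files do not contain "
        "the answer, say you don't know instead of guessing."
    ),
    tools=[{"type": "retrieval"}],   # enable the retrieval tool
    file_ids=["file-abc123"],        # file previously uploaded with purpose="assistants"
)
```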
I had a similar issue in Spanish. The Assistants API was hallucinating a lot using the gpt-4-turbo-preview model. I changed it to gpt-4-1106-preview and that fixed the issue. While gpt-4-1106-preview is not intended for production, it works much better in Spanish with the Assistants API.
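If anyone wants to try the same swap on an existing Assistant, something like this should work with the Python SDK (the Assistant ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Switch an existing Assistant to gpt-4-1106-preview.
# "asst_abc123" is a placeholder; use your own Assistant's ID.
assistant = client.beta.assistants.update(
    "asst_abc123",
    model="gpt-4-1106-preview",
)
print(assistant.model)
```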
I see a similar problem in the Playground.
The Instructions no longer seem to be read (days or weeks ago, the Playground still gave feedback that the instructions had been read).
Moreover, files attached in the chat are overlooked. Again and again we have to repeat that they were uploaded to gpt-4-0125-preview, and then a message appears saying that GPT used a wrong path. Very crazy.
No way; I already specified that in the Instructions.
If the file is attached with a message on the thread, the user has to indicate that they want the question answered from the attached file.
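Roughly like this, using the v1 Assistants API that was current when this thread was written, and assuming the file was already uploaded with purpose="assistants" (the IDs and message wording are placeholders):

```python
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()

# Attach the file to the message and tell the Assistant explicitly
# to answer from it. "file-abc123" is a placeholder file ID.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Answer from the attached file: who is the persona described in it?",
    file_ids=["file-abc123"],
)

# Run the thread with your Assistant ("asst_abc123" is a placeholder ID).
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",
)
```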
I agree with the part about the Playground giving feedback that the file was referenced by showing something like [1], which reveals the file ID when hovered over; that this appears even when retrieval isn't run is likely a bug.
We have to argue with the Playground. Sometimes the Playground accepts Jupyter notebooks, today it does not.
The same goes for markdown files: not allowed, even though they are listed in the red box.
We have figured out that if a notebook contains more cells, then the markdown is not accepted.