Is GPT-4o dumber in the Assistants API than in normal chat?


I am wondering whether the new gpt-4o model works differently via the Assistants API than via the regular chat in the browser.

I would like to use the vision capabilities to transcribe handwritten notes. If I put a photo of my handwritten notes into ChatGPT with gpt-4o, it transcribes my ugly handwriting perfectly. But if I create an assistant in the playground (using the gpt-4o model as well), it gets a lot of things wrong and starts to hallucinate.

I tried it a bunch of times and the result is always the same: via the normal chat it works perfectly; via the Assistants API it works badly.

How can that be? Isn't it the same model, so shouldn't the results be the same?
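One thing worth ruling out is whether the two endpoints receive the image in the same shape at all. A minimal sketch of the two request payloads being compared, assuming the OpenAI Python SDK's message conventions (payloads only, no API call; the URL and `file_id` are placeholders, not real values):

```python
# Chat Completions (what ChatGPT-style calls use): the image travels inline
# with the prompt as an image_url content part.
chat_request = {
    "model": "gpt-4o",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe these handwritten notes."},
            # Placeholder URL; in practice a data URL or hosted image link.
            {"type": "image_url", "image_url": {"url": "https://example.com/notes.jpg"}},
        ],
    }],
}

# Assistants API: the image is uploaded as a file first, then referenced by
# file_id in a thread message; the assistant object holds its own instructions,
# which also shape the response.
assistants_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Transcribe these handwritten notes."},
        # Placeholder file_id from a prior files.create upload.
        {"type": "image_file", "image_file": {"file_id": "file-abc123"}},
    ],
}
```

If the playground run differs in how the image is attached, or in the assistant's system instructions, that alone could explain part of the quality gap even with the same underlying model.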



I got the same behavior on the same kinds of tasks.