Is GPT-4o dumber in the Assistants API than in normal chat?

Hello,

I am wondering whether the new GPT-4o model behaves differently via the Assistants API than via the normal chat in the browser.

I would like to use the vision capabilities to transcribe handwritten notes. If I put a photo of my handwritten notes into ChatGPT with GPT-4o, it transcribes my ugly handwriting perfectly. But if I create an assistant in the Playground (using the gpt-4o model there as well), it gets a lot of things wrong and starts to hallucinate.

I tried it a bunch of times and the result is always the same: via the normal chat it works perfectly, via the Assistants API it works badly.
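For anyone who wants to reproduce the comparison outside the Playground, here is a minimal sketch that sends the same image through the plain Chat Completions endpoint as a baseline (the file name and prompt are placeholders, and "detail": "high" simply requests full-resolution image processing):

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder file name; replace with your own photo of the notes.
with open("notes.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this handwritten note verbatim."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_b64}",
                        "detail": "high",  # ask for full-resolution processing
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

If this call transcribes the notes correctly while the assistant does not, that would at least narrow the problem down to the Assistants layer rather than the model itself.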

How can that be? Isn't it the same model, so shouldn't the results be the same?

Thanks!


I got the same behavior on the same tasks.

Hi, I am facing the same problem.
Did you find the reason or a workaround?
Thanks

Same here: the vision performance in the Assistants API is significantly worse compared to classic GPT-4o.