Since the default version of gpt-4o was updated (October 2024), I’ve been encountering some issues with its behavior compared to the previous version. Specifically, I’ve noticed the following:
- The model doesn’t seem to follow instructions as accurately as before.
- It often fails to call functions when it should.
- In some cases, it has returned the function-call payload as a regular message instead of an actual function call (the response contained the function-call content, but the finish reason was “stop” instead of “function_call”).
I use the Chat Completions API, and because of these inconsistencies I think I’ll revert to the older version, gpt-4o-2024-05-13, which in my experience behaves more predictably. It’s a shame, because the new version is considerably cheaper, but I can’t risk these inconsistencies.
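One workaround I’ve been experimenting with while staying on the new version: a small guard that also accepts the “function call serialized as plain text” case. This is just a sketch, not an official fix — `extract_function_call` is a hypothetical helper of mine, and the field names (`function_call`, `name`, `arguments`) follow the legacy function-calling response format of the Chat Completions API:

```python
import json

def extract_function_call(message: dict, finish_reason: str):
    """Return (name, arguments) if the reply is a function call, else None.

    Hypothetical guard: handles both the normal case (finish_reason ==
    "function_call") and the buggy case where the model emits the
    function-call payload as plain text with finish_reason == "stop".
    """
    # Normal path: the API flags the function call explicitly.
    if finish_reason == "function_call" and message.get("function_call"):
        fc = message["function_call"]
        return fc["name"], json.loads(fc["arguments"])

    # Fallback: finish_reason is "stop", but the content *looks* like a
    # serialized function call -- the inconsistency described above.
    content = message.get("content") or ""
    try:
        payload = json.loads(content)
    except json.JSONDecodeError:
        return None
    if isinstance(payload, dict) and "name" in payload and "arguments" in payload:
        args = payload["arguments"]
        # "arguments" may itself be a JSON-encoded string.
        if isinstance(args, str):
            args = json.loads(args)
        return payload["name"], args
    return None
```

It won’t catch every malformed reply, but it at least stops valid function calls from being silently treated as chat text.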
Has anyone else experienced similar issues with the new version? I’d appreciate any insights or workarounds you’ve found.
Thanks in advance!