GPT-4o often forgets to include code

The new GPT-4o model often forgets to include code; it will respond with something like "Here is the modified code:"
and nothing else. Prodding it usually helps.
This is especially true when the model is asked to respond with a JSON object and has a relatively large system message.
The gpt-4-turbo model very rarely behaved like this (apart from the preview version initially, back in Nov '23).

are you asking for mixed content, or do you just want the code/json?

It is usually mixed - the model replies with some text, with code in between fenced in triple backticks. Previous models dealt with it pretty well.
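For the mixed case, a small helper can pull the fenced code out of a reply. This is a generic sketch (not from the thread), assuming the model fences code with triple backticks, optionally preceded by a language tag:

```python
import re

def extract_code_blocks(reply: str) -> list[str]:
    """Return the contents of all triple-backtick fenced blocks in a reply.

    Assumes fences look like ```python ... ``` or plain ``` ... ```.
    """
    pattern = re.compile(r"```[\w+-]*\n(.*?)```", re.DOTALL)
    return [block.strip() for block in pattern.findall(reply)]

reply = (
    "Here is the modified code:\n"
    "```python\nprint('hello')\n```\n"
    "Let me know if you need anything else."
)
print(extract_code_blocks(reply))  # ["print('hello')"]
```

An empty list back from a helper like this is also a cheap way to detect the "Here is the modified code:" failure mode and trigger an automatic retry.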
I tried instructing the model with a different JSON schema where the code is separate. While it seems to produce better results in small, isolated tests with a limited set of instructions (system message), once I try it with our normal system message (just over 2k tokens) it misbehaves again. Enabling reflection helps to some degree. Switching to the turbo model makes the issue go away - this is definitely a regression for us.
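To illustrate the kind of setup described above, here is a minimal sketch of a request that uses JSON mode with a schema that keeps code in its own field. The field names and prompt wording are assumptions for illustration, not the poster's actual schema:

```python
import json

# Hypothetical schema: field names are illustrative only.
system_message = (
    "Respond only with a JSON object of the form "
    '{"explanation": "<prose>", "code": "<full source, no fences>"}. '
    "The code field must always contain the complete code."
)

# Shape of a Chat Completions request with JSON mode enabled.
request = {
    "model": "gpt-4o",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": system_message},
        {"role": "user", "content": "Add logging to my function."},
    ],
}

# The payload is plain JSON, so it can be inspected before sending.
print(json.dumps(request, indent=2))
```

Note that JSON mode still requires the prompt itself to mention JSON, and it does not guarantee the model fills every field, which matches the misbehaviour reported here with larger system messages.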

oh yeah 4o is definitely weaker than 4t which is subjectively weaker than 4


i added to my custom instructions to always output the entire code… sometimes I add “no additional dialogue”