I don’t know if anyone else is experiencing this, but I’ve noticed a substantial increase in cases where the reasoning models only refactor a small part of the code instead of returning the fully updated code when used for programming. This happens both in the API and in ChatGPT.
With o3-mini this wasn’t nearly as frequent as it is now.
When using canvas, the model is much more insistent on only applying its partial changes: I have to confirm that I actually want them applied, spend unnecessary extra turns or instructions in the API, and, in the case of ChatGPT, explicitly re-instruct it every time that I do want the full code.
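For reference, the only API-side workaround I’ve found is spelling this out on every request. Here is a minimal sketch with the OpenAI Python SDK; the model name, instruction wording, and file name are just placeholders for whatever you actually use, not a guaranteed fix:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Explicitly ask for complete files so the model doesn't elide code
# with placeholders like "# (rest of pipeline unchanged...)".
INSTRUCTION = (
    "When you modify code, always return the complete, runnable file. "
    "Never abbreviate with comments such as 'rest of the code is unchanged'."
)

response = client.chat.completions.create(
    model="o3-mini",  # substitute whichever reasoning model you're using
    messages=[
        {"role": "system", "content": INSTRUCTION},
        {
            "role": "user",
            # "pipeline.py" is a hypothetical file used only for illustration
            "content": "Refactor the parsing step in this script:\n" + open("pipeline.py").read(),
        },
    ],
)

print(response.choices[0].message.content)
```

Even with an instruction like this in every request, the newer models still fall back to partial snippets often enough that it doesn’t feel like a real solution.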
It doesn’t make sense, in canvas, to erase the previous code and put “[…rest of the code is unchanged]” in its place. Then when I ask again for the full code, the answer still ends with “# (rest of pipeline unchanged…)”.
While I appreciate the new models, I think this is something worth looking into, as having to struggle to stitch the pieces back together kind of defeats the purpose of making them practical.