Canvas functionality is essentially broken

OK, so no matter what I do with the canvas or which model I choose, I run into issues after just a few iterations. The AI seems to get stuck in a loop and, rather than telling me there is an issue, it just lies and says it is completing tasks as requested. In the latest case, I used up my GPT-4.5 quota with it doing nothing other than passing over the canvas and not actually updating anything.

It doesn’t seem to matter whether I’m working with code, text, or data (JSON). After a few iterations, the models go rogue and start ‘damaging’ what’s in the canvas: randomly changing things, removing large chunks and replacing them with placeholders along the lines of [this section not changed], reverting to earlier terminology, and dropping entire ‘columns’ of data.

I managed to fix some of the issues by getting it to use Python to complete the JSON files and repair the one it had completely damaged (it was simply supposed to add some additional attributes). If it is unable to do the task, it should be open and honest about it. Continually fabricating makes it look like it is deliberately damaging things with no intention of repairing them. However, I think it’s more a case of it just fabricating responses to be nice (which is very unhelpful, and I’d suggest it’s not a sign of ‘intelligence’ either).
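For anyone curious, the repair was along these lines: instead of letting it edit the canvas, I had it write a small Python script that adds the missing attributes directly to the JSON. This is only a rough sketch of the idea; the file names and attribute names below are made up for illustration, not the actual ones from my project.

```python
# Rough sketch of the kind of repair script I had it generate.
# File names and default attributes here are illustrative only.
import json

def add_attributes(path_in: str, path_out: str, defaults: dict) -> None:
    """Add missing attributes to every record without touching existing data."""
    with open(path_in, "r", encoding="utf-8") as f:
        records = json.load(f)

    for record in records:
        for key, value in defaults.items():
            record.setdefault(key, value)  # only adds the key if it is absent

    with open(path_out, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)

# Example (hypothetical names):
# add_attributes("items.json", "items_fixed.json", {"status": "unknown", "tags": []})
```

Doing it this way meant nothing already in the file could be silently dropped or rewritten, which is exactly what the canvas kept doing.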

Telling the models to ‘fix’ their mistakes just produces more fabrication: they tell me they’ve fixed things when clearly they haven’t.

If anyone at OpenAI wants to see some example conversations, I’m happy to share the links.

I’m not encountering the same issues with non-OpenAI products, so I suspect it’s something specific to either the tool or the models and their training.

So far, my main workaround has been to work in a canvas in the non-OpenAI product and just ask ChatGPT questions and supply it code snippets. That seems to be the most robust approach.