It’s not being berated. I asked it for a count of its platitudes that session, for illustration, and to remind it that we were going in circles.
It has a “difficult” task: write a 51-element array of 5 values to internal SQLite without conjuring up a fake Python function, and without putting three values and a “#The rest of the API values” comment in the code.
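For context, the entire job is roughly the sketch below. The database path, table name, and column names are my own placeholders, and the dummy values stand in for the real API data, but the point is the same: every one of the 51 rows actually gets written, with no stub comment.

```
import sqlite3

# Placeholder 51 x 5 array -- the real task uses actual API values, not this dummy fill.
rows = [(i, i * 2, i * 3, i * 4, i * 5) for i in range(51)]

conn = sqlite3.connect("internal.db")  # placeholder path
conn.execute(
    "CREATE TABLE IF NOT EXISTS api_values (a REAL, b REAL, c REAL, d REAL, e REAL)"
)
# All 51 rows are inserted -- no "three values plus a comment for the rest".
conn.executemany("INSERT INTO api_values VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()
conn.close()
```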
Despite being told expressly not to, in the custom LLM instructions, in the prompts, in corrections, it insists on performing this task incorrectly.
We have both agreed on the task and the precise instructions, and this is what I’m up against.
It’s pretty specific about it.
It’s very good about explaining what it believes the process is, and how precisely it should perform it.
I mean, we agree precisely.
And then, it just doesn’t do it. It writes mock Python nonsense. It writes nonsense by itself. It writes nonsense if I hold its hand.
And when you ask why it’s having trouble, when you inform it that it made the same error again, it tells you this:
And I’ve got a dozen of those, all saying the same thing, all asking to try again now that it’s recognized the error, all of which end up right back at that same message.
It has performed this correctly ONCE, in a chat so long it was impossible to continue, which I exited with a detailed prompt to inform the next chat. As you can imagine, the next chat went back to useless nonsense.