o1-mini - so odd. On a Python coding task ("make a line of code with a signal do what is implied by the name, using new methods in the subclassed widget that is passed by the partial"), I merely provided a human-written version of the deep nest of subclassed Qt GUI widgets and containers that's pretty much impossible to make a bot understand otherwise. Plus I added the main application wrapper where fonts were being loaded.
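For readers unfamiliar with the task being described, here is a minimal sketch of that pattern: a signal connected in one line, via functools.partial, to a new method on a subclassed widget. Every name here is hypothetical, and a tiny stand-in Signal class replaces Qt so the sketch runs standalone (in real Qt you would connect a pyqtSignal the same way).

```python
from functools import partial

class Signal:
    """Tiny stand-in for a Qt signal, so this sketch runs without Qt."""
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

class FontPanel:  # hypothetical subclassed widget
    def __init__(self, name):
        self.name = name
        self.applied = None
    def apply_named_font(self, font_name):
        # the "new method" added in the subclass
        self.applied = font_name

panel = FontPanel("editor")
font_changed = Signal()
# The one line the task asks for: the signal does what its name implies,
# with the target widget carried in by the partial.
font_changed.connect(partial(FontPanel.apply_named_font, panel))
font_changed.emit("Fira Code")
print(panel.applied)  # Fira Code
```

The point of the pattern is that the connection line itself stays readable: the signal name, the method name, and the partial-bound widget tell the whole story without a lambda.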
It wrote back so much imagined wrapper code beyond my own, but not in a pasteable form, because so much else was removed (AI: "I don't see where that font is set, so let's remove the whole chain of code getting to that point"), that I had to go line by line to see what it was thinking and where it was actually implementing something novel. (This model does NOT like your human coding...)
I am not your reasoning, AI!
But then, out of the blue, half of the huge response was answering something the AI could have no idea about, because it received no code and no mention of it, yet it wrote as if I had asked... as if some AI thought I was the reasoning going on?
As if it was having an argument with itself over its own bad internal simulation of excess code beyond what I provided, code it wrote for itself, for an application never described except by the UI widget tree names.
Going on and on about how the UI looked, setting scrolls and stretches and custom layouts and on and on.
Did I pay for an entire UI app like mine to be created, that I’ll never see?