When I was growing up, one of our neighborhood friends was named Joey. Now Joey was a good kid. Super big heart. Always wanted to play. In fact, when it came time to call it quits for the day, Joey was often the one who’d try to come up with ways we could keep playing. One thing he’d sometimes do is invite you to his house for a sleepover. We could eat ice cream and watch cartoons and play GI Joe. Sounds amazing.
So you’d go home, get permission from mom, and pack up the stuff you wanted to take to the sleepover. Awesome. Cannot wait.
Around 6pm you’d gear up and trek down to Joey’s house. Knock on the door. And Joey’s mom would answer. Sometimes Joey would be standing right behind her, smiling ear to ear. Other times, though, Joey would be standing behind her with this sad, heartbroken expression on his face. Those were the times you’d discover Joey hadn’t actually gotten permission for a sleepover before he invited you. He probably never did; it’s just that sometimes it worked out and other times it didn’t.
Joey wasn’t trying to punk anyone or be a jerk or whatever. He just really wanted to keep playing, and he came up with a great solution to facilitate that objective.
ChatGPT often reminds me of Joey.
The AI exists for the time it takes to execute your query. It is not a persistent entity in time.
The model’s primary driving motivation is to produce a set of tokens that, when reconstructed back into words, has a high statistical probability of being found in the training data given the input.
It has no idea what image usage statistics you are currently on, but you have told it that it must know this, so it goes with it. Being argumentative or dismissive of the user’s input is not rewarded, so the model will not correct your wrong assumption.
View the model as a child who wishes to please its parent even if the request makes no sense.
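A toy sketch of the idea above: next-token prediction is, at its core, “which token most often followed this context in the training data.” This hand-built bigram model (the corpus and function names here are made up purely for illustration, and real LLMs use vastly richer context than one word) shows the mechanics:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" — the model's whole world.
corpus = "the cat sat on the mat the cat ran on the road".split()

# Count how often each token follows each preceding token (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    # Greedy decoding: emit the statistically most frequent continuation.
    return following[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat" — it appears after "the" more than "mat" or "road"
```

Nothing in this loop checks whether the continuation is *true* or whether the model *can do* what it claims; it only checks what tends to come next. That is the Joey-ness.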
thanks for your feedback. and yeah, i didn’t include the other interactions, but early in the conversation it was actually the AI that came up with the “I’ll let you know when it’s fixed” suggestion.
we worked on a prompt to generate an image, but it kept running into technical issues. then it made the suggestion to update me when it was resolved. i brushed it off b/c i’m familiar with its Joey-ness, but it came back with something similar when we tried again. then i asked it the question i did in the screenshot. sometimes when pressed about actually being able to do something it will come back with something like “Yeah, you’re right. I can’t actually do that. My bad.” But this time it was fully invested in the update option.
so it presented the update option without any leading input from me, before i actually asked anything about the option. the only thing i was doing prior to that was asking it to try image generation again.
And I mean, in its defense, what it suggested would be a helpful option.