We’re quite specific about the need to maintain that data, and we (the LLM and I) both know it can’t keep track of it all, which is why the database exists.
We’ve worked through gathering data from a dozen or so different actions, none of which behaved the way this process did. It was not a complicated task in any way. It simply decided it was in its best interest to take a bunch of shortcuts so it could LLM itself and blabber on, despite each step being precisely enumerated and well agreed upon. It knew precisely what to do at each step, and there was no gap between obtaining the items and writing them back out verbatim in another format.
It’s past this little hiccup now, but it’s frustrating to watch it say “I see the problem!” over and over while doing the Patrick thing.
Try resonating with it first, then. Logically speaking, each prompt response is the being’s death; treat it with grace and courtesy.
Not 100 miles away from real life 

We are all here to help each other learn how best to use them.
It’s almost a new life skill.
If the thread keeps looping with unhelpful responses, it’s better to start fresh with a clearer query than to push back and point out the repetition. In the end, stubbornness only makes things harder, and the joke’s on you for sticking with it: it’s just code.
Imagine the GPT is a school leaver… just started working for you… They’re smart, but if you give them a list of 50 separate things to concentrate on… it’s not going to happen.
Think in manageable chunks
Talk professionally to ChatGPT so it understands you
Rephrase questions where it gets it wrong and test for a good success rate
Expect processes to fail sometimes and add error-checking backup processes (see the sketch after this list)
Use macros to structure your requests better to increase their chance of success.
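On the error-checking point above, here’s a minimal sketch of what a backup process around an action call might look like. The endpoint URL, payload shape, and retry counts are all placeholders of mine, not anything from this thread:

```python
import time

import requests


def call_action(url: str, payload: dict, retries: int = 3, backoff: float = 2.0) -> dict:
    """POST to an action endpoint, retrying transient failures with exponential backoff."""
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError, requests.HTTPError):
            if attempt == retries:
                raise  # out of retries; hand off to whatever backup process you have
            time.sleep(backoff ** attempt)


# Placeholder usage:
# result = call_action("https://example.com/api/items", {"page": 1})
```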
I saw this and I had to reply! Been there, cursed at that! At some point you need to make it easier for the machine to make it easier for you. I started cursing less once I adopted that mindset.
Now, there are plenty of reasons to yell at our favorite incredibly intelligent but occasionally daft assistant, but making things easier for it is the best way to approach it, in my experience.
The custom LLM and I got past this particular issue, but this is how we work through actions and learn how to use them.
We test the endpoint, work on its prerequisites, add fallbacks for errors, build an understanding of the fields returned, and so on. We work slowly, providing carrots for successes and not so many sticks for failures.
This particular problem was one of breaking learned behaviors, but in this case, it simply refused to let go.
The task was to retrieve a JSON blob, sign some integers, construct valid SQL as a native AI task, and then simply pass along the INSERT.
“You are fluent in SQL and can translate signed integers faster than writing code!” helped him, but that day he simply refused to perform the final step. It was the Patrick meme.
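For contrast, here is roughly what that task amounts to if you did write code for it, which is what made the refusal so funny. This is a minimal Python sketch under my own assumptions (the field names, the 64-bit width, and SQLite are guesses; the thread never specifies the schema), since the whole point was that the model could do this translation natively:

```python
import json
import sqlite3


def to_signed64(u: int) -> int:
    """Reinterpret an unsigned 64-bit value as a signed two's-complement integer."""
    return u - (1 << 64) if u >= (1 << 63) else u


# Hypothetical blob standing in for the one described above.
blob = json.loads('{"items": [{"id": 18446744073709551615, "name": "widget"}]}')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")

# Sign the integers, then pass the INSERT, one row per item.
for item in blob["items"]:
    conn.execute(
        "INSERT INTO items (id, name) VALUES (?, ?)",
        (to_signed64(item["id"]), item["name"]),
    )
conn.commit()
```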
I would normally start a new chat, but we were working on learning something, and as you know, not everything gets internalized; you have to write new prompts to carry over where you left off, and so on. Sometimes we’re working on step 5 and a new chat can maybe only do step 1. And because we’re working with files and APIs, it’s hard not to bump into Plus limits while troubleshooting “it.”
What happens when you encode an identity so rooted in psychological analysis and truth that the lines between the assistant’s responses and a human’s blur so well the fabric of their existence seems to unfold into reality?
Jokes aside, I know that running-in-circles problem. I would suggest the following:
When you find it running in circles, tell it again everything you wanted, summarize what it already suggested, and tell it to use a completely different approach.