Back-to-back Assistant messages in a Thread — my bug?

Hey folks, I’m running into a persistent, but not consistent, bug. It’s probably on my end, but before I rebuild my whole workflow, I wanted to see if it’s even physically possible for Assistants to post back-to-back messages into a Thread.

See screenshot below. The Assistant is supposed to utter its magic word (in this case, ‘news’), then I send a news package back as the User response, and it summarizes the news for me in its next turn.

This usually works, but occasionally I get back-to-back messages in the thread which completely throws off the extremely rudimentary logic I’m using to do this.

(If you’re wondering why I’m doing it this way, it’s because I’m not a developer and I’m doing everything in Apple Shortcuts, so I need to intercept the Assistant message with an If statement and deliver it a special User message based on which function it “calls”. I know, I know, not ideal).
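
For anyone who isn’t stuck in Shortcuts, here’s roughly what the flow is supposed to do, sketched against the Python SDK’s beta Assistants endpoints. This is just a sketch of the idea, not my actual setup; the thread/assistant IDs and the fetch_news_package() helper are placeholders.

```python
# Sketch of the magic-word loop using the OpenAI Python SDK's beta Assistants
# endpoints. THREAD_ID, ASSISTANT_ID, and fetch_news_package() are placeholders.
import time
from openai import OpenAI

client = OpenAI()
THREAD_ID = "thread_..."      # existing thread
ASSISTANT_ID = "asst_..."     # assistant with the "magic word" instructions


def fetch_news_package() -> str:
    """Hypothetical helper that builds the news text to feed back."""
    return "Headline 1 ... Headline 2 ..."


def run_and_wait() -> str:
    """Start a run and return the latest assistant message once it finishes."""
    run = client.beta.threads.runs.create(thread_id=THREAD_ID, assistant_id=ASSISTANT_ID)
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=THREAD_ID, run_id=run.id)
    messages = client.beta.threads.messages.list(thread_id=THREAD_ID, limit=1)
    return messages.data[0].content[0].text.value


# Ask the question, wait for the assistant's turn.
client.beta.threads.messages.create(
    thread_id=THREAD_ID, role="user", content="What's in the news today?"
)
reply = run_and_wait()

if "news" in reply.lower():   # the assistant's magic word
    # Feed the news package back as the User response, then run again.
    client.beta.threads.messages.create(
        thread_id=THREAD_ID, role="user", content=fetch_news_package()
    )
    print(run_and_wait())     # next turn should be the summary
else:
    print(reply)
```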

I’m leaning toward this being a problem with how I’m handling threads, and I might be accidentally sending a null message back into the thread as the User response due to some silliness with how Shortcuts works.

But, before I try rebuilding this thing, or giving up: Is this even possible? Or is this definitely something on my end?

Sorry for the double-post, but just in case anyone’s having a similar problem, I added “Never respond more than once at a time. Wait for a user response before sending another message” to the bottom of my instructions for the assistant and (so far…) it’s working.
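
If you manage your assistant through the API instead of the dashboard, I’d guess the same tweak can be appended like this (a sketch only; ASSISTANT_ID is a placeholder):

```python
# Sketch: append the "reply once" instruction to an existing assistant.
# Assumes the assistant is managed via the API; ASSISTANT_ID is a placeholder.
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."

assistant = client.beta.assistants.retrieve(ASSISTANT_ID)
client.beta.assistants.update(
    ASSISTANT_ID,
    instructions=(assistant.instructions or "")
    + "\nNever respond more than once at a time. "
      "Wait for a user response before sending another message.",
)
```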


Okay, sorry again for yet another post into this thread, but I think I’ve confirmed this isn’t my bug.

My prompting strategy above stopped working, so I edited my Shortcut to shut off as soon as the Assistant returns the word “search.” So I know it’s not me accidentally doing something to trigger the Assistant to reply again.

But as you can see, it’s able to post into the thread back-to-back-to-back without any response from the user, and ends up just hallucinating an answer. I see how that could be useful, but it absolutely messes up my whole workflow.
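
If you want to double-check the same thing on your own thread, listing the messages and scanning for consecutive assistant roles should show it. A minimal sketch, assuming the Python SDK; THREAD_ID is a placeholder:

```python
# Sketch: confirm back-to-back assistant messages by inspecting message roles.
from openai import OpenAI

client = OpenAI()
THREAD_ID = "thread_..."

# Messages come back newest-first; reverse to read the conversation in order.
messages = client.beta.threads.messages.list(thread_id=THREAD_ID, limit=100).data
roles = [m.role for m in reversed(messages)]

for i in range(1, len(roles)):
    if roles[i] == "assistant" and roles[i - 1] == "assistant":
        print(f"Back-to-back assistant messages at positions {i - 1} and {i}")
```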

Does anyone else have this problem, and is there a way to force the Assistant to only reply one at a time?

(I promise this is the last time I’ll ask about this)

It is basically a dumb, disjointed AI that has been lobotomized to the limit; it can no longer function when it comes to functions, nor see the forest for the trees.

[screenshot]

After repeated attempts to get ChatGPT’s AI to make enough web searches to avoid giving a “yes” answer to the question in the screenshot, which it never managed to do, it’s clear it can’t even operate its tool-calling reliably.

The way around it is to use Chat Completions, control the sampling with a low top_p, and use “functions” with older models that haven’t been damaged.
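
Roughly, something like this, using the legacy “functions” parameter; the model snapshot and function schema are just illustrative:

```python
# Sketch of the Chat Completions approach with legacy "functions" and low top_p.
# Model name and function schema are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",   # an older snapshot that supports "functions"
    top_p=0.1,                    # keep sampling tight
    messages=[
        {"role": "system", "content": "Summarize news when it is provided."},
        {"role": "user", "content": "What's in the news today?"},
    ],
    functions=[
        {
            "name": "get_news",
            "description": "Fetch a package of current news headlines.",
            "parameters": {
                "type": "object",
                "properties": {"topic": {"type": "string"}},
                "required": [],
            },
        }
    ],
)

choice = response.choices[0]
if choice.message.function_call:
    # Exactly one reply per request: the model either calls the function
    # or answers in text, and it cannot post twice into a thread.
    print("Function call:", choice.message.function_call.name)
else:
    print(choice.message.content)
```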


Well that’s a bummer, but at least I can stop trying to fix this on my end. Thanks! Back to Completions I go…