Description: I’ve encountered an issue with the web search functionality in ChatGPT that causes responses to repeat and get stuck, without addressing the actual question or providing new information. The issue persists even when I try to redirect the conversation or clarify the question. It results in identical replies, which is frustrating when trying to obtain specific or updated information.
Details:
Model Used: GPT-4o
Issue: Responses loop repetitively and do not provide the requested information. The web search appears to execute, but the answer stays the same.
Example: I asked about the limits for voice input while using GPT-4o and received an unchanging response that did not fully address my inquiry, as shown in the attached screenshot.
Attached Evidence: A screenshot (attached below) shows the repeated response after multiple attempts to obtain relevant information.
Request: I’m sharing this to inform OpenAI developers about this potential issue, as they might not yet be aware of it. I’d appreciate any insights into whether this is a known bug or if there is a workaround to prevent repetitive responses when using the web search feature. Any guidance on how to troubleshoot or resolve this issue would be helpful.
I’m getting it too, across platforms. Even when you ask it not to repeat itself, it’ll just parrot the same thing over. Sometimes if you tell it not to search the web, it’ll momentarily put out something different, but the next answer might just be the same again. It’s really easy to repro with particular questions. In my case, I asked whether hearts or stars were used more in general websites and digital audio plugins, and it kept giving me the same thing as in your screenshot.
Hello Gabriel: Yes, this is absolutely happening. The 4o model (with web search) has become stuck in a repeating answer loop for me at least FIVE times since integrated web search became available.
I am surprised that this is the only thread related to this concerning problem (unless there are new threads I’m not aware of). I have discovered that when it repeats for the 3rd time, if you turn the web search button off temporarily, reprimand it for repeating itself, and demand that it not repeat itself further… that has worked every time. It seems to realize at that point the mistake it made.
It would be awfully nice if OpenAI could add this simple check to its generative algo, since the chatbot already has the capability to look at its previous responses.
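To illustrate the kind of check I mean (this is only a rough Python sketch, not how OpenAI’s pipeline actually works; the threshold value and function names here are made up): compare each candidate reply against the assistant’s recent replies and flag it for regeneration if it is nearly identical.

```python
from difflib import SequenceMatcher

# Hypothetical threshold: replies this similar count as a repeat.
REPEAT_THRESHOLD = 0.9

def is_repeat(new_reply: str, previous_replies: list[str],
              threshold: float = REPEAT_THRESHOLD) -> bool:
    """Return True if new_reply is nearly identical to any previous reply."""
    for old in previous_replies:
        ratio = SequenceMatcher(None, old.strip(), new_reply.strip()).ratio()
        if ratio >= threshold:
            return True
    return False

# Example: the last two answers were identical, so an identical
# third candidate would be flagged instead of being sent again.
history = ["The voice input limit is ...", "The voice input limit is ..."]
candidate = "The voice input limit is ..."
if is_repeat(candidate, history):
    print("Repeated answer detected; regenerate before responding.")
```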
Yep. I’ve found the same. It appears to think it has the perfect answer and, rather than refining it, endlessly repeats it. It basically does not bother to read the follow-up question and dumbs down to becoming a search engine. It also loses its ability to understand subtle things such as humour. It becomes very “grey” and frustrating.
It appears to interpret any follow-up question, unless it is wildly different, as the same one. I have not struck this with Perplexity.