Inadequate Request Handling and Precompiled Responses Forced by the Search Function

I am writing to report a frustrating and unsatisfactory experience while interacting with your virtual assistant, ChatGPT. In addition to the persistent repetition of irrelevant answers and failure to follow explicit instructions, I have identified a specific issue with the integrated search function, which forces the model to provide precompiled responses akin to those from a search engine like Google.

Details of the Issue:

  1. Inadequate Understanding of Requests:
    I clearly and repeatedly requested specific information regarding an Alchemy endpoint without any extraneous details, generic procedures, or unnecessary Python scripts. However, the assistant continued to provide irrelevant, repetitive, and generic answers, failing to meet my specific request.
  2. Persistence in Error:
    Despite pointing out the inadequacy of the responses, the assistant repeatedly generated the same content, completely ignoring my instructions. This behavior persisted even after explicitly requesting a correction in approach.
  3. Negative Impact of the Search Function:
    Instead of enhancing the assistant’s ability to respond accurately, the search function introduced additional frustration. The assistant’s responses appeared precompiled and similar to those of a generic search engine like Google. This behavior:
  • Failed to address the specific user request.
  • Added unnecessary or irrelevant text.
  • Contradicted the conversational and customizable nature of ChatGPT, which is expected to provide targeted and relevant answers.
  4. Frustration and Wasted Time:
    Despite my multiple clarifications and requests, the model continued to disregard my directives, further exacerbating the frustration and unnecessarily prolonging the interaction.

This issue did not occur before the addition of the search function, which appears to have introduced these problems. If this matter is not addressed effectively, I may be compelled to explore alternatives such as Grok, which may better meet my needs. At the moment, the assistant is approximately 70% unusable for my purposes.


Yeah, search seems to be a big distractor at the moment. You’re generally better off using Google.

good news is that you can turn it off. click on your user name bottom left, settings, personalization, and then custom instructions (where it says On > here)


then you can untick web search.

But when I want to gauge what a model knows, I typically add “(don’t browse the web)” to the query, works most of the time.

Instead of asking “how do I X”, I usually start with “what do you know about X (don’t browse the web)” and if that’s satisfactory, go with “given that, how to X”

You completely misunderstood my issue, in the same superficial way that ChatGPT provides generic and inconsistent answers to my specific requests when using the web search function, LOL.

The problem is not that the search “distracts me” or that I don’t know how to manage it. The problem is that ChatGPT is terrible now when it comes to web searches. When I ask for technical information or specific data, instead of providing focused and relevant answers, I get generic, useless, or completely out-of-context responses.

This is the issue. It’s not about distraction—it’s about an experience that has drastically worsened since the web search function was introduced. It was much better before. Now, with this feature, it’s almost unusable for getting valuable answers.

I hope this makes it clear now.

Oh yeah, I meant that in terms of search distracting the workflow, model, or whatever - not you.

Typically with retrieval augmented generation you use the model to re-write search results so that they fit with what you’re asking - but it looks like search is doing something completely different at the moment.
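To illustrate what I mean by that rewrite step, here’s a rough Python sketch (purely illustrative; `ask_model` and `search_snippets` are placeholders for whatever chat-completion call and search output you have, not anything ChatGPT actually exposes):

```python
# Rough sketch of the "rewrite the search results" step in RAG.
# `ask_model` is a placeholder for whatever chat-completion call you use;
# `search_snippets` stands in for the raw results from the search tool.

def build_rewrite_prompt(question: str, search_snippets: list[str]) -> str:
    """Pack the raw snippets and the user's question into one prompt so the
    model condenses them into a focused answer instead of dumping them."""
    context = "\n\n".join(f"- {snippet}" for snippet in search_snippets)
    return (
        "Answer the question below using only the search results provided.\n"
        "Ignore anything in the results that is not relevant to the question.\n\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}"
    )

def rewrite_results(ask_model, question: str, search_snippets: list[str]) -> str:
    # One extra model pass turns raw snippets into an answer shaped by the question.
    return ask_model(build_rewrite_prompt(question, search_snippets))
```

The point being that the final text is supposed to be conditioned on the question rather than pasted in, which is exactly what doesn’t seem to be happening right now.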

My observation is just that once a ChatGPT chat contains “search” “results” it becomes pretty useless and hard to manage - continuing becomes pointless in my opinion. Maybe you know how to deal with this, I don’t want to lol.

There might be multiple reasons as to why it’s the way it is right now, and I imagine that if we’d ask OpenAI, the answer would likely be “Safety” lol.