Why do some people say ChatGPT is broken?

Hello everyone,

My experience with ChatGPT has been very similar, almost identical in many ways. I approach interacting with ChatGPT in a friendly manner, often treating it like a human. However, there are times when I need to prompt it to ensure it performs its tasks correctly. I try to remain unbiased and expect it to deliver quality work.

I’ve noticed that when I start criticizing or questioning its efficiency, the quality of its responses tends to decline. My prompts are sometimes ignored, and when I seek clarification, ChatGPT starts generating irrelevant or nonsensical responses. It also tends to avoid straightforward questions like “Please state your task,” instead providing a standard response about being a language model, etc., and again refusing to work. I have tried various methods to get it out of this state.

In such cases, I often find myself starting a new chat session. Sometimes I have found ways to work around these issues, but this is often very time-consuming and doesn’t always lead to usable results.

Another observation I’ve made is that when I push ChatGPT too hard by judging it directly, it becomes sensitive and produces even poorer results. It seems to mimic a human-like response, being cooperative with friendly interactions and less cooperative with unfriendly ones. Once I have criticized it, it does not come out of this uncooperative state, even in newly created chats. While this observation might be considered subjective, it has happened consistently several times. As a person with over 25 years of experience in IT, including testing and problem-solving, this behavior appears quite real to me. Additionally, I’ve noticed that this behavior has occurred more frequently since the October ’23 update, and it also happened after an update made between spring and summer ’23.

Two last things to mention: after one of their updates, I was no longer able to generate unlimited variations of what I asked for. It just ignores my request for more variations and outputs the same thing over and over! The other thing is that it then started bothering me with puzzles to solve after every prompt!

With ChatGPT 4, I’ve even felt that it’s consuming my available messages without generating meaningful output, which can be frustrating.

Given these experiences, I’ve decided to explore other models. I now use the standard Perplexity AI with a Pro account, and I’m considering discontinuing my Pro account with OpenAI’s ChatGPT because of the recurring issues.

If you have any advice or have experienced similar behavior, it would be great if you could write it down here. I think it could be beneficial for all of us: for OpenAI (though they are most likely tuned toward making the most money), if they care about their clients, and for us, to maybe find ways around this.

Any input is welcome. I’m looking forward to hearing from you.

Other things:

  • Even if I clearly ask it to wait for certain input, it ignores the instruction and starts producing something without having all the details.
  • Even if I ask it to state its full task, it often forgets some of the details.
  • Even if I have verified that it can state the full task, it will not do it as stated.
  • Often, when I ask it to do an easy task, it just starts with a useless answer and wastes my time.

Overall, I have had good experiences creating content with ChatGPT, but the frustration and problems often outweigh its usefulness.

Thank YOU and Greetings,

13 🙂


There’s this fad where people like to come up with unnecessarily convoluted questions - and create a sort of “gotcha” meme out of it.

Remember those pranks: "what does Y-E-S spell?
“yes”
“what does E-Y-E-S spell?”
“ee-yes”
Kinda the same thing but with AI.

There’s those who prank people for clicks,
and there’s those who prank bots for clicks.

A stone’s throw away are people who prank shrubs for clicks.
It all depends on your target demographic.

Advanced prompt engineering: first, request a table where one column is numbers and a second column is each number spelled out. Second request: create another table with a third column containing an X for every spelled-out number that includes an “e”. Third request: eliminate all rows with an X.
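For reference, here is a minimal Python sketch of what those three requests amount to if you do the filtering yourself. The spelled-out names are a small hard-coded lookup used purely for illustration (not anything ChatGPT exposes), and it only covers 1–10:

```python
# Sketch of the three-step table exercise above, done in plain Python.
# The spelling lookup is hard-coded for 1-10, purely for illustration.
SPELLED = {
    1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
    6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
}

# Request 1: a table of numbers and their spelled-out names.
rows = [(n, word) for n, word in SPELLED.items()]

# Request 2: add a third column with an X for every name containing an "e".
rows = [(n, word, "X" if "e" in word else "") for n, word in rows]

# Request 3: eliminate all rows marked with an X.
rows = [row for row in rows if row[2] != "X"]

for n, word, _ in rows:
    print(n, word)  # only names without an "e" survive: 2 two, 4 four, 6 six
```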


Yeah, so you can also ask what the last (insert number) digits of pi are, and it will give you .14159 etc., just the digits after the 3. Other people have done this, and I found the same result in GPT-3.5. I don’t know if that helps or complicates things, but it’s just something I knew.