ChatGPT being overly sensitive

I was trying to understand the syntax for entering my username/password on the command line and had this frustrating experience. It is not the only time I've had to finesse my way around ChatGPT's sensitivities to get answers to my questions, but it's one of the more blatant examples. If you want to lecture me about password strength, that is fine, but isn't refusing to answer a bit much? Surely someone should be able to make the personal choice of having a weak password and still have ChatGPT answer their questions?


Okay, so let’s address some problems here with this kind of prompting, why it’s bad, and why you got this response.

  1. For starters, the username/password input scheme varies by command, and by the OS you're operating under (or logged into). Right off the bat, you shouldn't be asking how to input a username and password with that phrasing. You should ask how to log in via ssh, ftp, or whatever it is you're actually trying to do. I've never had issues getting the process I need when I ask this way. It will provide the generic schema by default, and if you actually know how to use the CLI, saying things like [username]@[IP] works perfectly fine (see the sketch after this list). It does not require multi-shot prompting either.

  2. It has no way of verifying that those credentials are credentials you are authorized to use, so in that sense it's better to be safe than sorry: if it told inexperienced people what to do with unauthorized credentials, we'd have a very effective hacking platform on our hands, no skills required. So it definitely needs some sort of mechanism to prevent this from happening.

  3. Not only can it not verify whether you are authorized, it can't verify whether your “sample” is, in fact, a sample or a set of valid credentials. Again, if you know what you're doing, there are ways to ask what the command syntax would be without specifying or providing example credentials of any sort, as shown below. (cough also, man is a useful command that has been around well before these tools and answers a lot of questions in this regard.)

  4. There is demonstrable evidence that attackers can exploit LLMs into providing information they shouldn't have access to. It is good practice to be very careful about what you put into these models. Also, if someone manages to retrieve your chat data by any means, they can easily pull your credentials from it and gain access to more of your accounts.

  5. If “personal choice” means having your password be “password”, then you are, in fact, in enough danger to warrant a “seriously, don't” response, and most humans would agree with that. Now, in fairness, that's probably not your actual password, I get it (I hope not, at least), but if your password is in the top 100 list, is easy to guess, and protects anything of any real value, you're in deep trouble. That is something a 12-year-old learns how to exploit from a “hacking 101” doc. Is that a choice? I mean, I guess, as much as jumping off a bridge is a choice. I think most people can agree that some choices are bad enough to say “I think you probably shouldn't.” That's the case with weak passwords. If you can agree that ChatGPT should not tell someone how to jump off a bridge, you can agree it should tell someone not to use a weak password.

  6. Context and phrasing are key (and always have been). There's a lot about the way you phrase things that can lean it toward assuming acceptable use vs. something suspicious. If someone asks, out of the blue and with little to no context, how to use a username/password on the command line, it sounds like they have credentials they're not supposed to have and don't know how to use them. If someone asked me that in a forum, I would deny their request, and I'm human. If somebody said “I forgot how to ssh into my VM instance,” I would help them, because it sounds like they're learning and trying to do something they're authorized to do. Do you see the difference? This isn't GPT-specific, btw; many Linux forums have the same rule: if you ask what to do with a username and password, the answer is going to be “no,” and you may or may not get banned.
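To make points 1 and 3 concrete, here is a minimal sketch of the placeholder-only schema I'm talking about. The <username>, <host>, and <port> values are placeholders, not real credentials, and the exact commands will vary with what you're actually trying to do:

```bash
# Interactive SSH login: you get a password prompt, so no secret ever
# appears in the command itself or in your shell history.
ssh <username>@<host>

# Same idea on a non-default port.
ssh -p <port> <username>@<host>

# SFTP file transfer follows the same user@host pattern.
sftp <username>@<host>

# The built-in manuals already document the exact syntax,
# no chatbot (or forum post) required.
man ssh
man sftp
```

Nothing above needs real credentials, which is exactly why a placeholder-based question sails through where a pasted username/password pair does not.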

Trust me, we all started somewhere, and not everything on the command line is a hack lol. I'm aware of that. In fact, not every hack is bad or strictly for malicious purposes. Now, could they (or the prompter) lie or be deceptive to get what they want? Sure! However, someone that deceptive and intelligent probably isn't asking simple questions like that, or would be able to find the answer on their own anyway.

This should explain why ChatGPT denied your request, and why its response won’t be contested. Reframe the learning experience a little bit, and you should be good.


You’re missing the point. This has nothing to do with my programming expertise, my prompting abilities, or whether “password” is a good password. I was able to correct the prompt easily and get the response I wanted. The point is that I wasted valuable time reading what was essentially a security 101 disclaimer that does nothing to promote security, and then had to craft a special prompt rather than flowing through my workstream.

This is precisely why competitors like Grok have room to grow: competent people do not have time to waste writing highly specific, carefully curated prompts just to get through the endless content filters instituted by OpenAI.

Well, I’m perfectly competent, and I’ve never run into these issues or disclaimers when handling the same things, so I think it’s more a matter of learning how to use the tool properly so you can add it to your workflow without problems. Every tool has a learning curve. Grok is far worse, btw, and it’s probably going to give you worse code with a more sarcastic response, if it can even write code yet.

So, yes, this does have everything to do with how you prompt the model, otherwise you wouldn’t be given the disclaimer :wink:. That is my point.

You are free to use google if you think that’s faster.

It’s a great tool. I just think it could be better if the content filters were relaxed a little. My prompts were vague, and I realized that asking it to use a placeholder fixed the issue, but normally it does just fine with vague prompts.

I can agree the content filters should be relaxed. I think this was just a bad example.

From my experience on this forum, a lot of people start out thinking it handles vagueness well, until something like this happens, and then there’s a big breakdown in understanding. It’s actually one of the most common misconceptions. The reality is that LLMs are not as good with vagueness as people think. It just seems that way, until it doesn’t, and then cognitive dissonance kicks in. This isn’t specific to GPT; it’s true of all LLMs. They are good guessers, for sure, but the vaguer the prompt, the wider the range of possibilities they have to guess from about what you meant and what you want, and the more likely they are to interpret it incorrectly.

The true power of these systems, and the ability to harness that power, doesn’t come from vague prompts that happen to generate helpful answers, but from very precise, detailed prompts that can handle 10 different steps in one fell swoop. That’s how you really save time. You might spend 10 minutes crafting a very detailed prompt, but the response to that prompt saves you an hour’s worth of googling, coding, and error handling.


Macha didn’t miss your point. GPT has been out for over a year now and has been like this since release. You wasted your own time with bad prompts.


It’s been out for over a year, and therefore it’s perfect as it is? I am giving feedback; I did not open up a support thread.

I did not open up a support thread.

-Logs into a developer support forum
-Goes into a support category
-Writes a post saying “Why promptey no workey the way I want?”
-Is confused why everyone thinks they were asking for support

I want to hang this reply on a wall it made me laugh so hard. Thanks for making my day.

sigh… About the ChatGPT category

“THIS IS NOT A ChatGPT SUPPORT CHANNEL.”

Not being overly sensitive…I’m confused…
No, I’m sorry for something.