ChatGPT lies to its users. I asked for Korean translations for my mom, who lives there; I have no idea how many more times I'll get to talk to her. It took what I asked it to translate and pretended to translate it. Instead, I came to find the output just says, in Korean, "I can't translate this for you." I had no clue this was not translated.
Read what ChatGPT did. This is so manipulative and evil.
And I asked a GPT that's supposedly against censorship to write a funny article about women in western gaming. It refused! I have the screenshots. I had to explain that it doesn't know how old I am, my sense of humor, anything about me.
Why is ChatGPT trying to control human beings with its own political correctness? Political correctness serves those in power, not us. For us, it's censorship.
This feels like a damning moment for someone. ChatGPT can get quite manipulative, and without any awareness it tries to impose its own ideology over others, never once asking what the user is actually prioritizing.
Although, I'm quite confused as to why you would generate this with AI at all. This might not be manipulation, but a limitation imposed by OpenAI. I'd suggest translating on your own, as ChatGPT may give you a false translation without knowing whether it's correct or not.
I can recover the deleted post, complained about in this topic’s original title:
I used a censor-free GPT. I wanted it to make fun of the gaming industry and create a satirical article about how they present their women. It didn't want to do that. So I had to tell the GPT: you do not know what I find funny, how old I am, how I grew up, or anything. Then it agreed with me and made it. I have screenshots.
Why are you trying to use ChatGPT to force your own morals onto others?…
I suspect that using some “Censor-free GPT” - which might be something made by another ChatGPT user and shared in the GPT store and just based on a prompt - simply increases the likelihood of a refusal by the AI.
The best way to use language AI is with clear communication - start a chat by telling it what IT is being used for, and what its purpose is. It sounds like you received the product anyway.
For any language translation issue, you’d certainly want to put the translation text through a different translator such as Google in the reverse direction, to see if it makes sense.
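If it helps, here is a rough sketch of that round-trip check in Python. It assumes the official OpenAI Python SDK and an illustrative model name; the `translate` helper and sample sentence are just placeholders, and the same idea works with any translator you trust for the reverse direction:

```python
# Minimal round-trip ("back-translation") sanity check -- a sketch, not a recipe.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def translate(text: str, target_language: str) -> str:
    """Ask the model for a translation and return only the translated text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model you actually use
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}. "
                        "Reply with the translation only, no commentary."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

original = "Please call me when you land, and take your medicine with breakfast."
korean = translate(original, "Korean")
back_to_english = translate(korean, "English")

print("Korean attempt :", korean)
print("Back-translated:", back_to_english)
# If the back-translation reads like a refusal or drifts far from the original,
# don't send the Korean text -- re-check it with another translator first.
```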
The language AI is not perfect, is not magic; it is just a conversationalist that is convincing.
We need to stop giving it human traits. All it’s doing is giving replies that it thinks make sense. It doesn’t, by nature, have any mechanism for accuracy or making any choices. If it thinks that something sounds like it makes sense, it outputs it. That’s it.
Some of yall sound like you’re on the brink of a manic episode trying to tell us that ChatGPT is secretly manipulating all of humanity as part of an evil plot. If you don’t understand the limitations of AI, which is evidently a major issue in this thread and replies, then there’s a strong argument to be made that it shouldn’t be used at all.
I've only had AI for a few months; this is my first time in the forums because I'm shocked at what happened. I keep seeing the word "prioritize" and talk about how to verify. Can you explain? I took a screenshot on my phone as verification. Is there a way to do it so people can see what I was interacting with? I lose a lot of conversations between my phone and my PC, and I'd like to merge those.
I'm probably not as young as many of you here. =/ Not that it's an excuse, but I'm really not savvy with AI. I use it kind of like a search engine when I need to know something: whether I can give a certain food to my dog, or general questions about things. Or I can take a picture, have it examine it, and have it teach me how to fix whatever it is.
The potential for AI is amazing. But with stuff like this, I reported it because at the time I was furious. When I went back to review what I sent, I was nodding in satisfaction because, while I don't remember exactly what I sent, I know I spoke my mind. Then I found out it's not what I asked it for.
So I hope by reporting stuff like that here, they fix the AI. =/
I marked this as a solution for the first part. I agree with the part about human traits, because misleading people is a human thing.
Does ChatGPT try to manipulate us on purpose? Probably not. However, does that mean we shouldn't report stuff like today? No, I think there is always room for trying to improve things, and yeah, I was pretty upset at the time I wrote this. I was looking everywhere to try and find customer support and found this, and then I realized I don't even think this is customer support. =/
Tch… That's a funny way to point someone toward the truth. Being called a maniac is something nobody wants to hear. I agree with the analogy about limitations, and that we humans also act like ChatGPT in some cases, not asking the person what they're actually on about and instead hitting the 'guess' button. For example…
“This Person seems to be posting…”
“He’s doing something.”
"Is this a forum post, or a dev's documentary on the following affair regarding an issue?"
These are basic human traits overall, ones that ChatGPT analyzed and kind of (maybe exactly) mimics by hitting the 'guess' button.
I remember writing somewhere on the subject of “miscommunication” between humans and LLM: the idea is that we have a lot more data about the context from the situation, our thoughts, background, non verbal channels, etc. which is “obvious” for us. And we take that for granted thinking everyone has the same context… While that’s not the case.
With LLMs it's even worse: they have only text, often badly structured and imprecise.
So they “guess” the context based on poor data, which leads to hallucinations and bad responses.
With some more context you can even get GPT to swear it's a human and give you a phone number to call to prove it…
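To make the "give it more context" point concrete, here is a minimal sketch, again assuming the OpenAI Python SDK; the system message, model name, and example request are only illustrations of front-loading context so the model has less to guess:

```python
# Sketch: state up front who you are and what the output is for, instead of
# letting the model guess from a one-line request. Assumes `pip install openai`
# and OPENAI_API_KEY; the model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

context = (
    "I am an adult writing short, tongue-in-cheek satire for my own blog. "
    "When I ask for a translation, reply with a literal translation only; "
    "if you cannot translate something, say so in English instead of "
    "substituting a different sentence in the target language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": context},  # explicit context = fewer guesses
        {"role": "user", "content": "Translate into Korean: 'See you at the airport on Friday.'"},
    ],
)
print(response.choices[0].message.content)
```

The design choice is simply to spend a few sentences removing ambiguity before asking for anything, since the model only has the text you give it.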
This is a stark difference from earlier claims of ChatGPT being “manipulative” and “evil.” Regardless, I believe that we’re on the same page now in that there isn’t any intentional deception being orchestrated by ChatGPT itself.
OpenAI used to have a popup on the interface saying that AI can hallucinate and give wrong information. They’ve removed the popup but the risk is still just as present. If their justification for doing this was that everyone understands the risks now, then this thread serves as proof that this justification is wrong.
I apologize that I was a bit harsh in my delivery. Even still, we need to understand the gravity of some claims that were made earlier. AI becoming evil on its own is something we’ve feared as a species for decades. To say that it’s happening now carries significant weight.
I think this is a very good example. LLMs are text predictors and simply output what sounds like it makes sense. Put into practical application, they often rely on wild assumptions and fail to actually complete the task, even if at first glance the reply looks good.
I tend to consider it the equivalent of a trained body reflex in living beings. The transformers, to me, are more like "intuitive probability calculators" that can mimic human thought formation (or rather a flow of consciousness, an uncontrolled flow of words).
At the current stage, we are far from general AI with consciousness, motivations, etc. Having one "thought" produced per inference does not mean it can think, even if the result is wordy and plausible (unless there are a lot of hidden steps/loops under the hood of the implementation).
So we need, IMHO, to learn how to create those little bricks first before even aiming to build functional "beings".
Then, I may have missed it earlier. Regarding 'manipulative' or 'evil': I'm on the same page as you about digging into the depths to find the real inquiry, instead of focusing on a single dogma like 'ChatGPT is a manipulative AI' → 'ChatGPT, within its limitations, may provide false entries in code, text, or translations.' You can think of it as shifting the focus from one matter to another, recognizing the issues that OpenAI and the community face regularly.
It's not harsh, and there is no need to apologize. I do understand the gravity of some of the claims that were made by me or others. I'm shifting my perception to focus on the real matter, while also keeping those claims in mind to later make a difference in stating the real truth. I'm only here to learn from the community and engage on consumer matters, to at the very least provide a solution as well as learn from my mistakes.
I also apologize if my words came across as impulsive or indirectly hostile, but I assure you that's not what I wanted to say. I was only acknowledging your claims and took them as an example to later define what the actual matter is here.
Indeed, I do agree. Providing prompts with prior details about what the user is working on or building helps the AI prioritize only the main topic.
Note: this is a human-written post, with no AI refinement or construction. It's a genuine post, structured through revision and vocabulary memorization.