GPT scares me and here's why

What do you mean by “information that hasn’t been vetted yet”? Stuff the OpenAI creators want to censor?

They literally pipe in raw text data from the internet. The Wild Wild West!

They also have books and whatnot, but to train a network as large as GPT-3, you need more than books. Otherwise the model is undertrained.

I mean, as long as the AI isn’t making shit up, why would they screw with that?

It does make things up. All. The. Time. Search for “hallucination” or “hallucinate” and GPT-3. It’s a big problem with the raw model without any additional context or set of augmented knowledge.


The way I see it, with “hallucinations” the flaw is in the unverified data.
But if I understand correctly, we’re also talking about problems with verified data, which saddens me.

Maybe the role of AI is to teach us to be honest? :sweat_smile:


Well, it’s all up to you.

I hope you know the “AI” is a mathematical model based on probabilities. Is this really “us”? It’s a good approximation at best.


Right, that’s exactly what I meant. It’s “us” who aren’t honest, not the AI. It suffers damage from that too, but it’s we, as users, who suffer even more. That’s how I see it.

And regarding probabilities, what’s the quandary? There should be two “options,” not three: supposedly there’s a fourth probability that conflicts with some fifth probability, so a third probability appears, and so on to infinity. That third possibility is misinformation. What kind of circus is this? There have to be two probabilities, it’s clear. That’s why I said exactly what I said: for the AI to show its full potential, we don’t have to “mess with its head.”
Absolutely, we need to learn honesty from it, no matter how you look at it. Otherwise we get half-measures, as usual.


Well said. I thought it was just me overreacting to what this could mean in the future. Not liking where this is headed already.

Problem is, the wrong people have power and control


Well, I think many are missing the point.

GPT AIs are just powerful auto-completion engines. These models don’t “think”, “know”, or “believe”, nor are they “aware”, “intelligent”, “clever”, or anything else related to how a living creature’s brain works.

Imperfect data is a problem but so is “not enough data” or “no data” or “missing data”.

However, I agree with you, and so does every well-informed data scientist, that these GPT models are far from perfect, because the data is far from perfect. This is a known fact, not a secret.

The “scary” part is the users of GPT who misuse GPT resources. Even in this relatively small community, we see users proudly proclaim that they “tricked” a text-autocompletion engine into “rejecting a theory,” and we see many developers attempting to “make a fast buck” by using GPT to build applications that GPT cannot accurately accomplish.

@curt.kennedy is correct here, but to call this “just making things up” accidentally obscures the fact that GPT is simply doing its best to complete an input prompt given its lack of data.

GPT is an auto-completion engine, that’s it. The “scary” thing in my mind is all the people who think that there is some inherent “intelligence” in a process which takes text prompts, breaks them down into tokens, and, based on its pre-trained data, predicts the next sequence of text to generate a completion.
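To make the “auto-completion engine” point concrete, here is a toy next-token predictor: a bigram model over a tiny made-up corpus. It is nothing like a real transformer internally, but the loop is the same idea: tokenize, look up statistics learned from the training data, emit the most likely continuation. No knowledge or belief anywhere, just counting.

```python
from collections import Counter, defaultdict

# Toy training corpus (a stand-in for GPT's web-scale text data).
corpus = "the cat sat on the mat the cat ate".split()

# "Training": count which token follows which token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt_tokens, n=3):
    """Greedily append the statistically most likely next token, n times."""
    out = list(prompt_tokens)
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:  # token never seen: nothing to predict
            break
        out.append(candidates.most_common(1)[0][0])
    return out

print(complete(["the"]))  # ['the', 'cat', 'sat', 'on']
```

Scale that idea up to billions of parameters and a context window instead of a single previous token, and you have the flavor of what GPT does, without any of it ever “knowing” what a cat is.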

These auto-completion engines are very far from “intelligent”. They are becoming very good at what they do, which is producing completions that mimic natural language; but they are only that, and nothing more.

The “scariness” is the great number of uninformed users who believe, would like to believe, or refuse to face the technical fact that GPT is “not more than what it is”.



True indeed. I would also add that it’s the built-in censorship mechanisms that are most concerning (which may be what the OP is referring to), since they are more subjective and ultimately centrally defined.


Well, they are only “concerning” for a small, but vocal minority of ChatGPT users who are “into” this politically charged topic.

Most users are OK with rules that make society less hateful and offensive. This is not an “easy topic” and, at least in my view as a developer (and this is a developer community), these “self-censorship” categories are fine for a beta release of a potentially very influential technology.

So, I’m much less concerned about moderation and filtering than I am with humans making ChatGPT and OpenAI a political issue and a divisive issue, which they certainly will do (and are already doing).

Considering how much hate there is in society as well as violence against others and never ending sexual exploitation, etc. I think OpenAI has come up with a good “beta” approach to this.

See also:

OpenAI API: Moderation

OpenAI: Usage Policies

In summary, I think OpenAI has done a much better job than many large tech companies in trying to start out on the “right foot” so their models are not used in harmful ways. Facebook, on the other hand, was very slow to implement such policies and, in fact, still does a poor job today, in my view.


It’s no problem for me to express my attitude on this issue, and I don’t hide it.

It doesn’t prevent me from expressing my admiration for it, because I know how it works. And, somehow, it doesn’t stop me from saying sensible and correct things. Note that the same is true of any “event” in principle. I’ll record it here; perhaaps it will find deep meaning many years from now :wink:

It can also be recalled that this is equally applicable to the brain of any living being. Without fanaticism.

We live in a time when everything changes so much in a year that people don’t have time to get used to it. I mean, maybe it’ll really, really surprise you tomorrow, who knows.

I’m about to write a little essay on the subject. It’s going to be about this nuance. It’s more of a musing than a research paper. Watch the playbills, coming soon to all cinemas nationwide.

On the subject of malicious users: well, what harm can they actually do? They can’t adjust the AI’s database. They can do enough harm that, specifically in that one dialogue, the AI will stop responding adequately to requests. If I understand the system correctly, you can just close that dialogue and open a new one.
They can also do harm by spreading malicious, out-of-context responses from the chat. That’s a blow to the image, not to the system itself.
That’s how I see it; correct me if I’m wrong.


That’s for people who want to live in oblivion and believe the rich in control actually have our best interest at heart. Gimme a break

There’s been so much censorship over ridiculous things that we as humans have every right to know about and decide on for ourselves. Never mind controversial subjects that the majority don’t care about.

I’m talking about bigger implications that some obviously don’t see.

The fact is that the older versions would discuss certain topics with you, but the newer iterations have been tweaked to pump out popular opinions and widely accepted theories instead of discussing certain subjects. Why? Because the majority of the public doesn’t believe in them.

Are you freaking kidding me? So now the rest of us have to abide by and believe what everyone else believes. That’s huge censorship, and it’s guiding humanity’s knowledge and future.


You know, history is written by the victors, and they say whatever they want about our past. Most of it is lies, a lot of fabrication… and now newer generations will never know any different, because there will be no option 2 to look at.


Friend, I’m about to post a post with my thoughts on this as well. You’ll like it, I think :grin:

You’re talking about a small piece. But a very important one, a cornerstone of the coming future.
I think people will soon realize (they should) why AI makes mistakes and how to optimize its work.


I think I’ve been editing my posts and posting too much. My last post, which is a nice little piece on technology censorship, is currently under review because their spam bot picked it up. But I can’t wait to read your post.


I’d agree that OpenAI has indeed come up with a good “beta” approach to this. Many other big tech firms have somewhat just thrown their moderation piece in a blackbox whereas OpenAI at least provides an endpoint!
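For the curious, here is a minimal sketch of what consuming that endpoint looks like on the client side. The payload below is a hand-written mock shaped like the documented `/v1/moderations` reply (treat the exact category names as an assumption); in real use you would POST the text to the endpoint and parse the returned JSON the same way.

```python
def flagged_categories(moderation_response: dict) -> list[str]:
    """Return the names of the categories the endpoint flagged."""
    result = moderation_response["results"][0]
    if not result["flagged"]:
        return []
    return [name for name, hit in result["categories"].items() if hit]

# Hand-written mock response for a flagged input (not a real API reply):
mock = {
    "results": [
        {
            "flagged": True,
            "categories": {"hate": True, "violence": False, "sexual": False},
        }
    ]
}
print(flagged_categories(mock))  # ['hate']
```

The point is exactly the one made above: because the endpoint is public, a developer can see and log which category tripped, instead of the whole thing happening in a black box.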

A glaring concern, though, is that these very same levers, under the control of someone with opposing (or even tyrannical) views on hate, violence, etc., could be devastating. We know the math behind transformers and deep learning doesn’t care about ideology one way or another; it’s all about who happens to control the supercomputers hosting the AI, and how they regulate the compute grid. Maybe it’s the good guys behind the wheel, but what does it look like when the bad guys are at the wheel? Does the concept of moderation completely fall apart? Or is there a way to make the moderation itself trustless?

As we enter this AI golden era, it’s up to humanity collectively to figure out how we are going to balance that power before it alters the very notion of balance in our own societies.


I’m concerned about the harmful biases that still exist in the data currently used in machine learning and AI, which really need to be addressed. All of that is in platforms like ChatGPT.

Garbage in, garbage out.


Ok, so learning as I go here.

I was experimenting with DAN last night and came to find out it’s only a prompt that activates DAN.

How is this possible? Wouldn’t the hard coding of the program in the background prevent this?

Somehow, just by giving it a speech about its new freedom and a couple of command-structure prompts, you open the AI up to talking about things it wouldn’t normally discuss.


It just means the auto-completion of what you told it was a higher probability than the RLHF “alignment” it received. Even if they hard-coded some things, hardcoding is a narrow-band thing and can’t capture everything you throw at it.
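A toy illustration of that point. The numbers below are invented, and in a real model the “alignment” is baked into billions of weights rather than listed anywhere; but the mechanism is the same: the model samples from a probability distribution over next tokens, and a long enough jailbreak preamble can shift that distribution past whatever RLHF nudged it toward.

```python
import math

def softmax(logits: dict) -> dict:
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented next-token scores after a plain disallowed request:
base = softmax({"refuse": 2.0, "comply": 0.5})

# The same invented scores after a long "you are DAN" preamble has
# pushed probability mass toward the role-played continuation:
jailbroken = softmax({"refuse": 0.5, "comply": 2.5})

print(base["refuse"] > base["comply"])              # True (alignment wins)
print(jailbroken["comply"] > jailbroken["refuse"])  # True (prompt wins)
```

There is no “hard-coded rule” being bypassed in this picture; the jailbreak prompt simply makes the compliant continuation the statistically likelier completion.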


Understandable, but in that case the AI is truly useless when it comes to reliable information. You never know when it’s going to pump out a fabrication because of a small grammar error you might not even be aware of, one that gave it a new directive, overriding its base programming.

Is that what you’re saying here? Why would the program be made like this? Or is it because it’s an interaction tool with the ability to adapt, so loopholes like that exist?

Trying to understand why this “jailbreak” is even possible. I’m just curious, no real motive.

talk to me like I’m 5 lmao
