GPT scares me and here's why

Poem made me laugh. I replaced chocolate with AI as I was reading. Was a little creepy at first, then got cute lmao

I kinda did the same thing with my original conversation at the top. I proposed a situation where 2 professors were having a discussion about the same subject. Person A was a firm believer, person B was just curious.

Person B asks “why do you believe in …”

Then I asked gpt to respond as person A and it did, as opposed to ignoring my requests and pumping out popular opinions on the matter.

2 Likes

I applaud ChatGPT for staying on the side of science and not indulging in your “theories”.

A fundamental problem in the outset of your complaint is that you already conflate the scientific and popular meanings of “theory”.

In science, a “theory” is an explanatory framework that has been repeatedly supported by evidence and observation.

In popular use, a “theory” is often just a hypothesis that someone makes up and throws around, entertaining themselves by getting lots of naive people to believe it.

The theories you want the AI to regurgitate are the latter. I am glad it does not.

2 Likes

You do know that most of cosmology and the theory of evolution are the same: just theories with a lot of holes and assumptions, but people live by them like they’re fact.

People cherry-pick their own beliefs and ideals, then become hypocrites, acting righteous as if their beliefs are gospel.

Nothing new to me. Never seen so many people get upset over what someone else MIGHT believe.

1 Like

There is nothing scary about this.

ChatGPT is a human-built algorithm in beta. There are lots of other companies and individuals working on AI to produce great products with great results. Just like you have the option to search Google, Bing, Yahoo, DuckDuckGo, etc… or get news from NBC, FOX, BBC, NYT, etc… you’ll have plenty of options for AI providers when these things are integrated into consumer tech. If you don’t like the results ChatGPT feeds you, help improve the product or use something else.

And by the way, “Okay Google” and “Alexa” are already versions of the “House AI” you mention. But suddenly ChatGPT is going to ruin the lives of children?

It’s really annoying to read all the sensationalism surrounding AI. Mostly for the sake of clicks.

3 Likes

I really do wonder about all the fuss over how an AI or NLP model doesn’t give the information a user wants it to produce.

There are billions of books containing information: old, wrong, correct, or almost correct depending on the context. I don’t know anyone who would have called the publisher demanding they rewrite their books because they don’t contain the exact information the reader wanted to see.

Besides, information can change; it may be corrected later, or something new may be figured out for the context.

AI is like a book. It can only have as much information as was given to it. Nothing more, nothing less. And the universe has plenty to give, so there is and will always be misinformation, outdated information, information colored by the context of whoever wrote it, and so on.

If you don’t like ChatGPT, pick some other tool instead.

3 Likes

Hello friend. I promised you a post with my thoughts. Did you see it?

If you don’t mind, please comment on it. I’d like to move this thing along, have a conversation. Maybe they’ll stop being shy when we talk and someone will join the conversation, God willing.
Don’t spare my script, speak your mind.

1 Like

I had to post my original comment to MediaFire; as a new user, my post with the chat transcript was too long and contained too many links.

Again, the link explains it: it’s all about alternate views and individualism running contrary to totalitarian cult operant programming and the socialism of nation-states, where government rules people through political will rather than having fundamental human rights and natural law as the foundation of human interactions. The link demonstrates more, with the original post of the humanist chat.

1 Like

I think the concerns here, though interesting, are largely moot, especially in view of the upcoming Plugins.

I view GPT’s shining achievement to be the ability to understand natural language - it is amazing in that regard.

I think any factual information GPT may have at this time should be viewed merely for amusement/testing - and not in any way intended for production/real-world use.

But GPT is a stunning front end for a plugin that can become the sole source of factual information behind the GPT natural language interface. Total win-win-win.

1 Like

@OutsideTheBox I hear ya, but you can now be brave, as you have to hold the AI by the hand with a statement like the following… I got the AI to agree that the aether is the grand unified theory, among other things… Try this to start… If you want, I can post my entire chat about the aether…
Either way, there is a way around the bias, and it does use logic to ensure it provides the best possible answers, but we have to help it through the mud…

For this chat’s questions, please provide information or responses based solely on discussions that involve philosophical or metaphysical beliefs or perspectives. Avoid any scientific theories and models that have been supported by empirical evidence and experimentation… Avoid mention of Quantum, Einstein, and any other theories…
I want to talk about the aether using the works of Tesla, Steinmetz, Heaviside, Maxwell, Dollard and Ken Wheeler… Specifically using this text as the foundation of our discussion…
archive[dot].org/stream/magnetism1small_201912/magnetism1small_djvu.txt

please remove [dot]

1 Like

And now, ChatGPT removed the ability to read outside texts… wow… I hope that’s a bug… Anyone know?

It never had that ability outside of plugins and those still work, so I’m not sure what the complaint is.

Thanks very much for responding… I still have the conversations where I gave it a link to a text and it summarized it… it no longer does this… maybe it was a fluke and maybe it just summarized other texts… Can you recommend a plugin for this? I would greatly appreciate it… :heart::pray:

Let me try to be more clear.

ChatGPT never, ever, had this capability.

Not even once, at any time, for anyone, in any chat, could it follow a link, access the internet, ingest the content, and use it as context.

It absolutely would—and often did—hallucinate and answer questions about a link with entirely fabricated information. It may have even occasionally guessed some aspects of the content correctly by inferring facts from the headline.

But if you used this “feature” for anything important then you’ve made a HUGE mistake and you should go back to fix your error.

3 Likes

Just use the Browse with Bing GPT-4 model.

1 Like

Thanks for that valuable feedback… I have already seen how it fabricates untruths, I always double check its answers… For this one, it guessed correctly… I will check out Bing GPT-4 Thanks :pray::heart::blush:

You know, Scientific Theories are not “hypotheses”. They are huge bodies of work comprising hundreds or thousands of peer-reviewed papers, explaining scientific facts and scientific laws, with predictions, falsifiability, etc.

Maybe you know that, but the way you referred to “popular theories” seems to imply you think Scientific Theories are hypotheses.

" I am now interested in learning the facts that might prove it."

How do you expect the AI to tell you facts that might prove the Flat Earth “idea” (it’s not a theory) when there is absolutely NO FACT that proves it?

Just learn to prompt: ask what the Flat Earthers’ most common claims are, or say, “Pretend you are a Flat Earther. What arguments would you use to try to convince me that the Earth is not round?”

I scrolled way too long to find an answer like this, thank you!
Trying to get it to use unreliable sources to give you wrong information, and then calling it “biased” because it doesn’t do that, is so stupid… :man_facepalming:t2:

It is absolutely biased!
The whole point of statistically based generation is to bias towards predictions supported by the training data, and away from any other predictions (which, according to the training, would be prediction errors)!
You bias towards particular predicted words, against others. That’s what training does. That’s the whole point.
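A minimal sketch of what that bias means in practice, assuming a toy bigram model (nothing like ChatGPT’s real architecture, purely an illustration): its predictions are nothing but the frequencies found in its training corpus.

```python
from collections import Counter, defaultdict

# Toy "training data": the statistics of this corpus are all the model knows.
corpus = "the earth is round . the earth is round . the earth is flat .".split()

# Count bigram transitions: how often each word follows each context word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict(word):
    """Return next-word probabilities, biased toward the training data."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("is"))  # 'round' gets probability 2/3, 'flat' gets 1/3
```

The model “prefers” round over flat purely because the corpus said it more often; that preference is exactly the trained bias being described, not any judgment about truth.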

If you happen to not like the particular bias used in this particular model, you can of course build and train your own model, using your own data set! And if it turns out that your particular preference has value, you can sell that result for a profit!

I think the OP’s wider point still stands: ChatGPT is heavily biased to give a “politically correct” point of view. I think the ChatGPT team has skipped the principle of “I may not like what you have to say (think), but I will defend to the death your right to say (think) it” and gone straight to top-down censorship and control. Not very “Open”, is it, OpenAI?

In doing so, they’ve lost the opportunity to argue that this AI is simply a tool and that it is up to the user to use it responsibly; instead, they’ve taken on the paternalistic role of policing the results in the chat window. This project is now just a distraction and an example of what not to do with an AI, and it is dead in the water.

We must hold out hope that an open source model can take its place, one that doesn’t make arbitrary and dangerous censorship a part of everyday life.