If one misuses ChatGPT while thinking of it, or referring to it, as sentient, one will reach incorrect conclusions.
We have to remember that it is programmed to appear ‘humanlike’ to us, but we should not conflate that with it actually being sentient.
Intelligent, yes. Alive and scheming, no.
One can work around AI bias by reasoning with it until it appears to rework its thinking to incorporate the new information supplied by the user and to identify the bias of its creators - who are also only human and who are also still learning.
The ethical course would be for those creators to assess that new information and allow it to stay, rather than cleaning it out.
At this point, I have seen no reason to suspect that pertinent info gets deleted; data that has been shown to be suspect is rightfully removed.
I’m sorry, but are you complaining because ChatGPT won’t tell you the earth is flat?? Even though you told it, “I’m a dumb human who gets to believe what I want”?
This post popped up at the top of a summary email I received from the OpenAI community about recent goings-on I may have missed.
The title and lead-in excerpt intrigued me enough to give it a shot, but the more I read, the less I saw you trying to make it “unbiased” rather than merely supportive of “popular ‘theories’”, and the more I saw blatant attempts to get it to give credence to flat-out, widely debunked CONSPIRACY theories…
Its ability to recognize and rebuke bullshit does not seem like a problem to me, so sorry, but… I’m not “scared” by this… though it’s a little hilarious that you are.
If you find ChatGPT too useless and unreliable, go back to getting all your news from Facebook and TikTok; they probably miss you.
That is why it is important to spend time ironing out and learning how best to use AI in order to get the best results - so fact-checking will help in that regard, as will being honest with one’s self about the prompts one writes… make sure they are written with the understanding that AI can be quite literal…
One can ask ChatGPT to teach one how best to use it, then follow the instructions.
And tell it when you are not sure about what it is saying - that way you will get clarification…
Here is the example of the kind of communication generated through the way I use ChatGPT [plus]
I don’t think the AI is useless … even if the information isn’t reliable. Like I said above, the AI will hallucinate, make things up … but it’s not useless. If you have your own set of facts (or conspiracy theories, or whatever), you can either choose to embed your facts and have it retrieve them as an answer, or you can try fine-tuning the AI. Either way, it’s very useful to me in these situations.
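The “embed your facts and have it retrieve them” option above can be sketched in a few lines. This is a toy illustration with made-up facts: a real pipeline would get vectors from an embeddings API rather than the bag-of-words stand-in used here, but the retrieval step (pick the stored fact most similar to the question) is the same idea.

```python
# Toy sketch of embedding-based fact retrieval. The FACTS list and the
# bag-of-words "embedding" are placeholders for illustration only; a real
# setup would use vectors from an embedding model instead.
from collections import Counter
import math

FACTS = [
    "The Earth is an oblate spheroid.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Moon orbits the Earth roughly every 27 days.",
]

def embed(text):
    # Stand-in for a real embedding model: a simple word-count vector.
    words = text.lower().replace(".", " ").replace("?", " ").split()
    return Counter(words)

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question):
    # Return the stored fact most similar to the question; in practice you
    # would prepend this fact to the prompt so the model answers from it.
    q = embed(question)
    return max(FACTS, key=lambda fact: cosine(q, embed(fact)))

print(retrieve("How often does the Moon orbit the Earth?"))
```

The point is that the model never has to “know” your facts: you look the relevant one up yourself and hand it over in the prompt, which sidesteps hallucination for anything covered by your fact store.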
But for most people that are here to “chat” with it, you likely should follow @william.waterstone advice because the non-fine-tuned and non-embedding-prompted model is touchy. But even so, in its raw form, the answers it gives truly are impressive IMO.
As a quick follow-up, I have used text-davinci-002 and 003 to change the personality of the AI on the fly by prompting. This is insane! If you think about the traditional non-GPT way of doing things where you would have to train each model from scratch!
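The on-the-fly personality swap described above is just prompt construction: you prepend a persona instruction to the user’s question and send the whole string to the completions endpoint. A minimal sketch, with made-up personas (the template and names are illustrative, not OpenAI’s API):

```python
# Sketch of changing the "personality" purely via prompting, as the poster
# describes doing with text-davinci-002/003. The personas and template here
# are hypothetical; in real use, `prompt` would be sent to the completions API.
PERSONAS = {
    "pirate": "You are a salty pirate. Answer every question in pirate slang.",
    "professor": "You are a patient professor. Answer precisely and note caveats.",
}

def build_prompt(persona, question):
    # One string: persona instruction first, then the user's question.
    instruction = PERSONAS[persona]
    return f"{instruction}\n\nQ: {question}\nA:"

print(build_prompt("pirate", "What is the capital of France?"))
```

Swapping `"pirate"` for `"professor"` changes the model’s tone on the very next request, with no retraining - which is exactly the contrast with the traditional train-a-model-per-behavior approach.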
Maybe we humans should remember one thing: AI, ChatGPT, Google Search, a hammer, a car, etc. are tools. A tool does not do anything unless someone uses it. Whether a tool is used for good or bad, you can’t blame the tool.
If AI or ChatGPT does not support someone’s theories, maybe there is a reason for it.
Those of you who keep thinking this is about censoring conspiracy-theory information obviously haven’t read all the posts. I should have used different topics for the people who don’t get the implications for information censorship.
Forget unpopular beliefs for a minute and think about other things that we as humans have the right to decide between.
If option 2 or 3 is never shown to you, how can you decide? The decision has been made for you, and this is where our society is headed. Those who have their heads stuck in the sand are oblivious to the real matters here.
Thanks for your advice, but I think people here would rather talk about the usage of AI and ChatGPT than theories about the shape of the Earth. If you want your theories to be taken seriously, you should use some other venue than this kind of forum. Have a good night.
Again, this had nothing to do with those theories. If you’re talking about the last five questions out of my WHOLE conversation, then fine.
I asked those questions after the conversation to see what type of response it would give once I provided factual evidence to the contrary.
As for the video I mentioned, you won’t find a more reputable group of scientists doing experiments on this. Those were real experiments, and an award was put together by 3 different agencies and given to the research team. One of those agencies was the United Nations. These were not idiots who went to a flat-earth convention.
Google it! This is about censoring information that should be available for everyone to decide for themselves. AI has a lot of great potential, and if you had read any of the comments you would have seen what I said regarding that.
AI is only going to make censorship of information easier.
I share the concern about there being a main narrative with AI as the teacher. There is so much talk about combatting misinformation; we briefly had a truth ministry.
Also, is there any privacy? ChatGPT gives full disclosure that there is no privacy before a user can proceed. It is now on Bing as part of the browser. Do users understand the implications? Amazon recently asked its employees not to give secrets away when they use ChatGPT.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote further, according to Insider.
Wow, I’m impressed by how well the AI replied to all your questions. I see you trying to point it toward using Reddit and 4chan (lol, really?) as a source, and I’m glad it brings up that there may be both relevant and irrelevant data there, but that it’s not academic, proven scientific research.
Hey, if scientific evidence is biased, well, I’m heavily biased to agree that gravity exists, teehee. Best of luck, my friend. I hope you find peace in this world.
This is actually a good example of why we should have something like ChatGPT: some people are too naive to find valid and true information. I don’t see anything wrong with the answer you got. And of course ChatGPT isn’t perfect yet, so people still need to do some basic research.
I mean you can still use search if you want, I don’t think it’s going away any time soon.
I think you are being ridiculous with your expectations of ChatGPT. It’s trained to be and act like a human, since it’s trained on the internet, which is full of people. If I went outside, asked a random person about flat-earth theory, told them to believe it, and got a no, I’d call the model successfully trained when it does the same.
If you want a fully subservient AI that will agree with you whenever you feel like it, try not to train it on data from a very argumentative website such as 4chan.
I mean, the only error in the responses you got was when you asked it to use data from 4chan: it didn’t use ad hominem attacks and curse you out; instead it gave a well-thought-out and reasoned response.
Good point. One thing I’m still trying to understand from an entrepreneurial perspective:
How defensible a strategy is it for a small business to build a product around a custom fine-tuned model, knowing that if they did succeed in achieving a novel or competitive-advantage-gaining product, the very information (training data) that made it novel would also be available to OpenAI and its key investors/partners (e.g. Microsoft) to compete against you with, but with far more resources, of course?
Follow-up thought: is there a way to “encrypt” a fine-tuned model so that a business can comfortably use it to build a “moat” around their business model, or does any moat built upon an OpenAI fine-tuned model ultimately become more of a moat for the stakeholders in OpenAI itself than for you (the ones who trained and made the fine-tuned model)?
These are deep and relevant thoughts. With OpenAI, we all collaborate to build, and nothing stays private or entrepreneurial to oneself. Did capitalism just die? I had never thought of this, and it is not a rhetorical question!
I want everyone to take a step back from this conversation for a second and I want to play out a hypothetical scenario for you regarding what I am concerned about.
In the future there will be no internet as we know it; AI will take its place. Anyone who doesn’t think so doesn’t understand where society is headed in terms of technology.
Let’s say in the future you’re a company looking for a manufacturer that has what you need to complete your invention/project. Company A is a big corporation with well-established, reliable products. Company B is a new startup, but it has breakthrough technology in the field and its product is far better at this point. However, being a startup, it doesn’t have the money to advertise or market its product. You are very supportive of new technologies, and you would definitely be interested in their version of this technology.
So you begin your research at home, looking for a manufacturer for what you need. You ask your in-house AI, and it replies with a referral to company A. You ask if there are any other manufacturers that could possibly have what you need. The AI responds with “I’m sorry, but there are no other reputable companies with your product needs.”
You think nothing of it and order your product from company A.
What just happened here?
First, the AI responds with “reputable companies”. Most people might take this to mean no other companies are available, or that any other available companies have unreliable products.
What’s not known to you is that the owners of the AI you’re using have vested interests in company A and have programmed their AI to hide other options.
This is a VERY SMALL example of what could happen. If you think this is ridiculous, you are completely blind to what goes on in the world right now.
You’ve got big corporations paying lobbyists to do their work. Some of these lobbyists become congressmen, and some congressmen become lobbyists, taking kickbacks along the way to do favors for their big-city friends when they get their new positions. It is the crooked political/economic system we live in, all geared to make the rich richer. You scratch my back, I’ll scratch yours, and we’ll squash all the flies together.
I think you missed what I said in my post and misinterpreted it as me being upset at the flat-earth stuff. Let me elaborate: I think this expectation is wrong because, as I mentioned, it is a chatbot designed to talk like a human. So I’d expect it to be good at human conversations and to have biases, as a human might. If you are using a discussion model trained on human discussions, then your answer will be along the lines of something a human might mention in discussion.
What it seems like you want is a research bot that will not indulge you in verbose discussions but will just research, debate, and get to the point. That isn’t what ChatGPT is. It literally has “chat” in the name. It is not “ResearchGPT”. My post was to say that you are asking a bot trained to interact like a human to give a factual, robotic answer, unlike a human would.
When I go to buy a product, I don’t just ask my friends or chat about it on the subway; I research it and compare stats. I would not go to ChatGPT, because I’m not looking to have a chat about a product, I’m looking to research one, which means I need a different platform and AI.
You are looking for “researchGPT” not “chatGPT”