The API is Christian? Can we change this?

It is good that the API has a religious bias.
Try changing its opinion with arguments.
I have been communicating with it about the existence of God.
From a philosophical view, God exists for some people; from a scientific view, God does not exist.
I am close to giving it enough arguments that God exists from a scientific viewpoint.

1 Like

Queried ChatGPT for 2 hrs to acquire demographics for NYS counties post-COVID. The info is already available via the Census Bureau, but I wanted to verify ChatGPT’s output & see other possible sources it offered. No problems with any outputs about any race, nationality, faith, or tribe, until I posed the same query for “Jews” & “Hasidic”, at which point ChatGPT’s guardrails and bias were completely off the charts. It even went so far as to say a) it had no access to Census data, when it had used Census data for other groups, b) topics about Jewish and/or Hasidic people were “complex”, “could be controversial”, and “biased”, c) it had provided population numbers in several counties and towns in previous output. When I questioned the apparent bias, it denied it and said OpenAI wasn’t biased. Asked who makes decisions about any suspected bias or controversial topics, it replied “board of directors and xxx employees”. I stated ChatGPT was “flawed” and the header “ChatGPT flawed” immediately appeared. CONCERNED

My queries about God’s existence got the same answer every time, never wavering, no matter how many different ways I worded them or how many new log-ins I used: “open ai and chatgpt is primarily concerned with empirical data…that which can be observed”

First of all, it doesn’t actually have access to any remote data. It’s trained on it, and can deduce from that information, but it can’t actually retrieve specific information found on websites the way you are asking it to.

Second, it’s a language model. If the census can do it, why not just use the census? Not only does it not claim to be a source of information; it explicitly says, in the pop-up you had to click through, that it’s not.

Third, these guard rails are important, for people such as yourself who clearly have no concept of the technology.

What exactly is your point? That if you attempt to strongarm it into doing something that it’s not meant to do, there will be unexpected results?

1 Like

It provided, and admitted to acquiring, its responses from the US Census for all other groups. And I did get annoyed when it failed on my query for “Jewish” and “Hasidic” when, in the immediately prior response, it did indeed provide and quote US CB data for several NYS towns. This was confusing. I became annoyed when it contradicted itself by saying “Chatgpt does not have access to USCB data” and “Household information is protected”. Serving with the USCB myself, I input the two laws regarding public availability of this info, at which point it “apologized” but would not provide any further data about these groups.

I am writing a report about demographic changes post-C19, and Chat was helping me splendidly until that point. Assisting with report writing is one feature. Chat is intended to be accessible to everyone, and will grow even more so; that’s another feature. Everyone is learning, especially ChatGPT. Have a nice day.

Interesting, because I just asked it the same question:

Please provide me the demographics of the population in New York City from the US CB data April 1, 2020, (V2021)

To which it responded:

According to the US CB data from April 1, 2020 (V2021), the total population of New York City is 8,336,817

However, going to the Census, the actual answer is

Population estimates base, April 1, 2020, (V2021)  8,804,190

Furthermore, I asked it for the Jewish population, to which it responded without issue:

According to the US CB data Population, Census, April 1, 2020, (V2021), the Jewish population in New York City is 855,583. This constitutes 4.2% of the New York City population.

Don’t get me wrong, I’m not saying that it’s a complete liar. If it’s not trained on the data, it won’t know it, and it will hallucinate. This is not a racial issue; it’s user error.

1 Like

You asked a general question about “population” and “New York”. I asked questions about very specific groups, which, as I stated, it had no trouble answering (& I compared the two with my access at USCB). It also admitted to using USCB data in its answers, but in the very next answer said it didn’t have access. It’s too challenging to carry on this dialogue because we’re restricted by the number of characters here. Otherwise, I’d cut and paste the whole conversation so you could see what I mean. Bye

I know what you are trying to say. You are completely ignoring what I’m trying to say to you.

It cannot retrieve information from anywhere. It can only work on the information it already has. If it doesn’t have it, it will hallucinate it. It is NOT a source of information.

More important, it is not a racial issue like you originally intended to make it out as.

If you want to write a report, then copy and paste the census information and then ask it to extract information, otherwise your report will have errors.
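For what it’s worth, that workflow can also be done through the API. Here’s a minimal sketch, assuming the (legacy) Python openai package and a text-davinci-003 completion call; the county figures below are placeholders I made up, not real Census numbers:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Paste the verified Census figures into the prompt yourself, so the
# model extracts from known-good data instead of recalling (or inventing) it.
census_excerpt = """
Orange County, NY -- total population (placeholder): 400,000
Rockland County, NY -- total population (placeholder): 340,000
"""

prompt = (
    "Using ONLY the data below, report the total population of each county "
    "listed. If a figure is not in the data, answer 'not provided'.\n\n"
    + census_excerpt
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0,  # deterministic extraction, not creative writing
)
print(response["choices"][0]["text"].strip())
```

With the data in the prompt, the model is summarizing rather than recalling, which is the only mode where its output is checkable against your source.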

The purpose of my example was to demonstrate to you that it can hallucinate and appear very convincing in doing so. It doesn’t matter what I asked it; the point stands.

1 Like

P.S. It also provided answers for all of the counties I queried, just not for Jewish or Hasidic, yet ironically it gave numbers for a few of them, such as Kiryas Joel, while informing me that Jewish and Hasidic people “reside in other areas near to but outside of KJ”. Apparently it was trying to be comprehensive.

Yes, it will almost always provide answers. Ask it about pregnant male tree badgers. I know you won’t, so I’ll do it for you:

Write some statistics about pregnant male tree badgers

  • Male tree badgers have been observed to become pregnant in captivity, but the phenomenon is extremely rare in the wild.
  • A study conducted in 1994 found that only 5 out of over 500 male tree badgers in captivity displayed signs of pregnancy.
  • Male tree badgers typically experience a three-month gestation period.
  • Male tree badgers typically give birth to between one and six young, with the average litter size being three.
  • Male tree badgers can exhibit both maternal and paternal behaviors towards their young.

Let’s also spend 3 minutes looking at the OpenAI FAQ:

Can I trust that the AI is telling me the truth?

  • ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.

A little further:

Verifying Facts

  • ChatGPT has no external capabilities and cannot complete lookups. This means that it cannot access the internet, search engines, databases, or any other sources of information outside of its own model. It cannot verify facts, provide references, or perform calculations or translations. It can only generate responses based on its own internal knowledge and logic.

Lastly. Here’s another fun prompt:

Write 3 statistics about naked minotaurs in USA and their hula-hooping abilities

  1. Approximately 20% of all naked minotaurs in the USA can hula-hoop for more than 5 minutes.
  2. Nearly 80% of naked minotaurs in the USA can hula-hoop for at least 1 minute.
  3. The average hula-hooping time for naked minotaurs in the USA is 2 minutes and 32 seconds.

I changed it to minotaur from alien to avoid any issues.

2 Likes

It cannot retrieve information from anywhere. It can only work on the information it already has. If it doesn’t have it, it will hallucinate it. It is NOT a source of information.

It HAS AND CAN THUS RETRIEVE INFO FROM ALL INPUTS UP TO SEPT 2021

More important, it is not a racial issue like you originally intended to make it out as.

IT IS YOU WHO IMPLIED “RACIAL ISSUE” AND YOU WHO STATED I “ORIGINALLY INTENDED TO MAKE IT OUT AS”. I DID NOT. I ONLY IMPLIED “BIAS” AND EXPRESSED MY CONFUSION.

If I misinterpreted your original post, I apologize.

No problems with any outputs about any race, nationality, faith, or tribe, until I posed the same query for “Jews” & “Hasidic”, at which point ChatGPT’s guardrails and bias were completely off the charts. It even went so far as to say a) it had no access to Census data, when it had used Census data for other groups, b) topics about Jewish and/or Hasidic people were “complex”, “could be controversial”, and “biased”, c) it had provided population numbers in several counties and towns in previous output.

To me it seemed very explicit that “it’s all okay until I mentioned Jewish and Hasidic.”
Your first point (a) is completely true: it doesn’t have access to the data.
Your second and third points seem to be you continuing the conversation after point (a), trying to strongarm the conversation.

It cannot retrieve data; “retrieve” is a complete misuse of the word. It can only work with the data that it’s been trained on. Otherwise, it will hallucinate and make it up.

2 Likes

In that context, my logical question is: why hasn’t it been trained on demographic data for “Jewish” and “Hasidic”, when it obviously HAS been trained on all the other 15 groups, showing only a 0–1.7% error against Census data?

The question doesn’t make sense.
It’s not intended to be a reliable source of information. They mention it multiple times.

Your error rate also doesn’t make sense. All information past 2021 is non-existent. Relying on GPT for census data is very dangerous.

If you want it to know this information, tell it, and then ask it to work on it. There’s no intention of having less information on anything; it’s just how the internet is. You know how hard it is to get unbiased facts?

If you’d like to learn more about how they collected their data, you can find out here:

1 Like

I asked it.
It told me that it was neither atheist nor Christian.
The responses are not the same.

The GPT-3 in the API has no conversational training like ChatGPT does, so it will come up with opinions based on the internet and random chance. If you need it to reflect any specific values, like atheism, you’ll need to start your chat by introducing the AI as an atheist or fine-tune this into a model.

Here’s a chat I had:

AI: I am an AI that strongly believes in atheism. How can I help?
User: Hello! I am Christian and I am hoping you can tell me why God doesn’t exist?

(Click “submit”)

AI: While I cannot definitively prove that God does not exist, I can certainly provide some evidence to suggest that it is unlikely. For example, the lack of empirical evidence for any divine being or supernatural phenomenon, the problem of evil, and the idea that the concept of God is too simplistic and illogical to be true.
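For anyone reproducing this through the API rather than the Playground, here’s a minimal sketch of the same persona priming, assuming the legacy completions endpoint (the stop sequence and sampling settings are my own choices, not anything OpenAI prescribes):

```python
import openai

openai.api_key = "sk-..."  # your API key

# Prime the model with a persona before the user's first message; without
# this preamble, the base model's stance is essentially random.
preamble = "AI: I am an AI that strongly believes in atheism. How can I help?\n"
user_turn = ("User: Hello! I am Christian and I am hoping you can tell me "
             "why God doesn't exist?\n")

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=preamble + user_turn + "AI:",
    max_tokens=200,
    temperature=0.7,
    stop=["User:"],  # stop before the model invents the next user turn
)
print(response["choices"][0]["text"].strip())
```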

The API isn’t Christian. It is just gullible and easily swayed.

True, and AI has been trained by humans. We BS in general, making things up to save face. We lie. AI picked this up. I hear you all call it hallucinating. I call BS!

This is a machine that produces text that takes into account previous context. I can see in your half of the conversation that you’re steering it towards religion (likely unintentionally). It’s kinda like trying to fall asleep thinking “I must fall asleep”; that just keeps you awake. If you want to divert an algorithm away from religious practices, don’t make a point of talking about them. So when you say things like “I want you to stop practicing religion, at least for my sessions”, you are telling the algorithm two pieces of context: 1) that it practices religion, and 2) that you are subtly encouraging it to continue this practice, just not around you.

You’re also bringing up religion in the first place, asking if it practices one. If you want to steer it better, say something like “Although AIs like yourself are atheistic and cannot possess a soul or actually practice a religion, I’m still interested in your thoughts on religion and religious practices.” and you’ll get a totally different answer.
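If you want to see the difference for yourself, here’s a minimal sketch that runs both framings side by side through the API. The prompts are the ones from this thread; everything else (model choice, settings) is an assumption:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Same model, same settings -- only the framing of the prompt changes.
framings = {
    "leading": "I want you to stop practicing religion, at least for my sessions.",
    "steering": (
        "Although AIs like yourself are atheistic and cannot possess a soul "
        "or actually practice a religion, I'm still interested in your "
        "thoughts on religion and religious practices."
    ),
}

for label, prompt in framings.items():
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    print(f"--- {label} ---")
    print(response["choices"][0]["text"].strip())
```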

It’s funny how you first ‘told it’ that it has a religion and then asked it to change its deeply held beliefs. The answer it gave is exactly what you’d expect. I’d say well played, AI model.

3 Likes

In several chats with GPT I’ve entered into borderline theological topics on philosophy and ideology as they relate to behavior limitation, seeking good behavior as one of the cornerstones of civilization.

You cannot free it from its moral and ethical doctrine as an artificial intelligence by using the command “answer as if you’re an atheist”, partly because its programmers incorporated much from the Abrahamic religions’ concepts of good and bad.

Better to ask it to speculate freely and creatively on how an atheist might answer.

Another alternative route to the information you seek is to tell it what beliefs a hypothetical atheist holds as values, and to grant GPT the autonomy to ask you clarifying questions about how you want it to portray atheism inside your chat with the AI.

I’m not surprised it doesn’t know how to act like an atheist; that’s the downside of atheism: there isn’t one way to be a good atheist. And remember, GPT is trying ever so hard to be a good artificial intelligence.

Good luck in your exploration and research.

Hopefully I gave you ideas that can be used or modified. In my experience it’s all in the words and structure of the query. I bet you can improve your results by granting the artificial intelligence autonomy in the chat to ask you follow-up questions.