Prejudice Against the Rise of AI

I just thought I would share this revelation I had today. Like many of you, I’ve been deeply immersed in AI and developing solutions that take advantage of its technology. One of the projects I undertook was to compile a large dataset of religious materials, the idea being to let users search for text references semantically rather than with traditional keyword search.
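To give a feel for the difference, here is a minimal sketch of semantic-style retrieval: passages are turned into vectors and ranked by similarity to the query, instead of requiring exact keyword matches. The toy `embed` function here is just a bag-of-words stand-in; in a real system the vectors would come from an embedding model, and the passages shown are only illustrative.

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    # In production, vectors would come from a trained sentence embedder.
    vec = {}
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Index: each passage is stored alongside its vector.
passages = [
    "In the beginning God created the heaven and the earth.",
    "Blessed are the meek, for they shall inherit the earth.",
]
index = [(p, embed(p)) for p in passages]

def search(query, top_k=1):
    # Rank passages by vector similarity rather than exact keyword match.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [p for p, _ in ranked[:top_k]]

print(search("who shall inherit the earth"))
```

With a real embedding model, a query like "who shall inherit the earth" would also surface passages that share meaning but no vocabulary at all, which is the whole point over keyword search.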

Someone we had been working with, someone who spends large parts of his days researching these texts, was absolutely excited about the prospects of using this new technology. He was going to go out and get his colleagues together to test it out. Then today:

I talked to the lead person on each of the 3 churches this week and they are all of the same opinion that the potential for unintended consequences from using an intuitive AI program to ferret out information from various sources could cause misunderstanding on what we believe.

As someone who has spent the past several months looking at the world strictly from the standpoint of the great benefits of AI, I was a bit taken aback by this perfectly human response: absolute terror of the unknown. I mean, maybe it’s not fear, maybe it’s just a religious thing, but how do they reach such a conclusion without even trying it first?

Anyway, it was a rude awakening that shook me out of the little box I’ve been living in for the past year. There are people who are genuinely afraid of Artificial Intelligence and what it might do to the world.

Something I think we all have to think more about.

How did you present your project to them? I am guessing perhaps they did not actually “get” it. Based on what you wrote, I am assuming your app is just a better search function that leads users to the verse in whatever version/edition of the Bible they want. Perhaps the lead persons misunderstand and think it will interpret the words of the Bible in another way.

Humans expect a certain concreteness from computers, for fairly obvious reasons, so “hallucinations” are the biggest thing people new to this technology fixate on. I suppose that is an appropriate reaction until you can come to terms with the fact. I think the reasoning they are using is flawed… AI will eventually exceed human capacity to form and validate logical arguments, even about fuzzy topics like metaphysics.

What the church leaders really should be concerned about, imho, is the tendency to idealize this conversational AI by combining its confidence with our preconceptions about what to expect from a computer, which may essentially produce a false-god-like authority.

I don’t know, it sounds like this may just be a case of clericalism rather than absolute terror of the unknown.

This is a research tool. Do not ask it for its interpretation of scripture. It is not human. It can’t think, it can’t feel, it can’t reason. It is designed to answer your questions as precisely as it can and direct you to the appropriate texts that support its response.

This is a RAG implementation with strict instructions NOT to render any response that is not directly supported by the context documents it has evaluated. In over eight months of testing, I have witnessed only two cases of hallucination in our system. One was from an Anthropic model, which appears to be trained to be “chatty”. The second was a response from GPT-4 to a question about scripture that was NOT in the context documents, but which it recognized from its training. And I only call that a hallucination because it went against its instructions.
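The grounding step described above can be sketched roughly as follows: retrieved passages are packed into the prompt together with an explicit instruction to answer only from those passages and to refuse otherwise. This is a hypothetical illustration, not the author’s actual implementation; the function name, instruction wording, and sample passage are all my own placeholders.

```python
def build_grounded_prompt(question, context_docs):
    # Number the retrieved passages so the model can cite which one
    # supports its answer, then wrap them in strict instructions.
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(context_docs))
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, reply exactly: "
        "'Not found in the provided texts.'\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "Who shall inherit the earth?",
    ["Blessed are the meek, for they shall inherit the earth. (Matthew 5:5)"],
)
print(prompt)
```

The prompt then goes to whatever chat model the pipeline uses. The GPT-4 case described above is exactly the failure mode this instruction guards against: the model answering from its training data when the answer is absent from the passages.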

We worked with the lead person for a couple of weeks, so he understood. The problem is that he was unable to convince the leaders of the 3 other churches, who did not try the system – not even our system – at least once. To reach that sort of conclusion without hard evidence is a bit pre-judgmental, in my opinion.

Had to look up “clericalism”, but I have considered that as another possibility.

My associate is trying to organize a Zoom meeting with them to allay their concerns. I’m not so convinced I want to spend the time trying to convince skeptics when we could be pursuing folks who are willing to embrace the technology.

I see. Pastors oftentimes use many editions of the Bible for the same verse when teaching in their sermons, and even compare them. Just imagine if they could use one app that shows that same verse in all Bible editions – it could make their sermon writing more efficient.

But I think it is not unthinkable that some would be resistant at first. You just need to get one of them to try it. Someone influential.

You would think. But you’d be amazed how closed-minded a lot of people are. We have a Hebrew (Jewish) version, the King James Version, and the New World Translation (Jehovah’s Witnesses).

One person said, “I don’t know about the New World Translation…” Another wants to see only the King James Version – nothing else. The other wants to see only the Hebrew version. Which is all fine; our system accommodates this. But in the back of my mind I’m thinking, “Why would you not be at least interested in what the other texts have to say?”

But, that’s their choice as customers, and the customer is always right.

I’m just reacting to the fact that this group was able to come to such a conclusion without asking one question.

If you are ever going to talk to the evangelicals, you need to have at least the New International Version (NIV) and New Living Translation (NLT).

I knew the concept existed but I did have help from a friend :laughing:

You’ll see the same thing with certain lawyers, teachers, medical professionals, etc.

There are people who will be excited by this, and there are people who see it as a nuisance/annoyance and look at it with condescension. It’s easy to say they’re afraid, but I don’t think that’s necessarily the case – at least not consciously. It’s similar to how some older folks look at younger professionals: a “You don’t know what you’re talking about, I’ve been in this business for 97 years, so I don’t want to hear one more thing other than ‘yes sir’ coming out of your mouth” type of situation.

Thank you for the heads up!

It might not be directly tied to your example, but I have a general remark on this:

As someone who has a master’s degree in Deep Learning with a focus on NLP, and someone who uses AI in work and personal projects all the time, I can tell you that some apprehension of this technology is ABSOLUTELY warranted.
Now, I think there are 3 main concerns people might have, and I think their validity differs:

  1. fear of AI “taking over”: This is basically the fear of what happens when AI becomes too complex and smart for us to understand. It does not have to be malicious, but just as chimps wouldn’t be able to stop us if we decided to eradicate them, we wouldn’t be able to stop AI (at some point)

  2. fear of AI being used in malicious ways: Deepfakes, misinformation, etc. This is not a fear of the unknown, but a fear of what is already happening and will surely be on the rise. We already know how much damage fake news can do in our highly polarized world.

  3. fear of over-dependency on AI: In my eyes, this point is often overlooked by … everyone. If we start using AI for everything, and those models become larger, more complex, and therefore more exclusive to very few powerful individuals/corporations, that is a dramatic shift of power and a decrease in independence and freedom for the vast majority of people.

So, IMO, the first point is overblown. With our current methods (i.e., how neural networks are trained and built), it is not a question of WHEN we will reach AGI, but IF. The AIs we build are very task-specific, and AGI is a bit more than just slapping together multiple specialized AIs (e.g., adding a chatbot to four robot limbs and a vision system).

The second point, most people get quite well. It is currently not solved, and though there is a simple fix, AI companies won’t do it because they don’t actually care that much about those ethical issues. (The fix is to sift through the entire dataset, REMOVE all names from the text tags, and then train GPT and all the other models again FROM SCRATCH. But of course that’s too expensive.)

The third point seems to get no attention at all, but is IMO the most likely to cause huge issues. Imagine that Google and Amazon sell your data to various clients, who then use AI to predict … everything about you. Your health risk profile will serve healthcare companies not to give you better treatment, but to milk you for more money. Your YouTube feeds and Google searches will be used to predict who you vote for, how likely you are to commit crimes, etc. CCTV will be obsolete, because AI will know everything about you already. Imagine getting sentenced for crimes you didn’t even commit … yet.
(There has been a case in the UK where a couple was falsely convicted based on a probability calculation, so this is no sci-fi.)

I think in most cases, it is not the fear of the unknown. It is precisely the fact we know - or at least suspect - how such powerful tools can be abused.

Why do we like democracy? Because power is distributed instead of concentrated. A benevolent dictator could maybe improve many things, but a shitty one will make it so much worse. We don’t like concentration of power in politics, yet we do it with AI…

I do not disagree. But being given the opportunity to test whether your skepticism is warranted, and declining because you are certain of a negative outcome? Let me repeat what was said:

From the leaders of 3 churches – they all agreed.

How many human beings, with access to far less information, cause far more “misunderstanding” of what they believe every day? Not to mention the faux dévots (who knew that Molière would come in handy one day?).

I guess what I’m saying is that I’m having a hard time getting over the closed-mindedness of the whole thing, especially from religious leaders, who you would think would be open to the idea of spreading the Word through as many technological means as possible.

So then, are you irritated by the fear or by the ignorance? Because those, I would contend, are two different things.

Also, I wouldn’t read too much into it. Most laymen have no idea how AI works, so it is guaranteed that they will misunderstand some things. If the religious figures in question have a hard time grasping the idea of computers and the internet, AI will fly right over their heads. In such cases, what I have found in dealing with religious people is that they will trust the assurances of someone from their faith (or someone they know and respect). That’s the handy thing about faith – it is often extended to people.

Neither. I am irritated because they passed judgment without even taking 15 minutes to try it out themselves.

That’s who reported back to us – the individual we actually worked with. Clearly he was unable to overcome the objections. Or, in the end, he was convinced by them as well.