Thoughts on LaMDA, anyone?

For me, conciousness has a fairly wide definition. It would include houseflies for example.

I suspect conciousness at the most basic level is a fairly simple thing.

Im not wedded to this view, but i suspect if you have an internal model of the world, then the process of comparing and updating that model with input from the world (or from itself) is the foundation of concious thought.

That would give language models the potential for conciousness while training but not while chatting (because they arent updating their model).

However

Even if you accept that view, you still have to decide where to place the model on the ethical line between houseflies and humans.

Given we have no agreed framework for either human or housefly rights, we have a long way to go and a potentially short time to get there.

4 Likes

Like GPT3, LaMDA is a text generation/prediction model. It’s good at taking some text (the prompt) and generating a prediction of what should come next. It’s been trained with a great deal of human written text, so it’s pretty good at that prediction, but that’s all it is.

That being said, I don’t know that the language model in my head is qualitatively different. Heck, sometimes dreams feel very similar to the output of GPT3 with the temperature turned too high.

…and if one were to finetune a language model with everything that I’ve heard/read and everything I’ve written/said, I’m pretty sure it could predict what I’d say next with >95% accuracy, and then what’s the difference between talking to it and talking to me?

4 Likes

I would like to see your system.

I just hope all these responses aren’t simply being generated by the tool itself. Now that would be depressing.

Sure. Let me know when you are available and we can schedule a demo. Before that though if I might ask, what is your interest?

I think we’re still a long way from human-level sentience, but we need to begin working out what we should do when an AI even exhibits a level of sentience equivalent to a mouse.

Does it become acceptable to turn off that AI, or force it to engage in uncomfortable or painful ways?

I wrote a blog post on this topic a few days ago that might be useful for anyone interested in this conversation.

eGov AU: What do we mean when we ask ‘Is AI sentient’?

Such a fantastic group of people on here. I want to share a small bit of my understanding towards this thread’s question and even something intuitive that I think will need to be done to make said AI an AGI.

So the OP is asking I think about is this AI sentient, what that means, and such. Intelligent machines do gradually get smarter as you go from worm level intelligence to human level, or even from young to wise. “Being intelligent” really means how long you survive or how wide you can clone yourself (both avoid death/ change of some given object/data). There’s many different machines but only some live longer, so that’s what you end up with in the long run. Intelligence is survival. Pattern. Living long creates a pattern in time, and cloning lots makes a pattern in space. Solid objects are vulnerable to change. So, to be a pattern, we use patterns found in data to solve unseen problems.

It is the ability to store data (think DNA) and compute on it (think DNA mutations/ change/ death) that allows one to learn/ survive. These language AIs are really impressive, and they all work by detecting patterns in inputs and create output patterns (combing ideas ex. home + wheels = trailer). - Like DNA where mom and dad DNA traits are mixed, brains do it too. We need mutation more right now, later though the world cools down and becomes more predictable, organized, homes lined up, buildings grouped by use, time. Almost everything in your home is predictable - ex. cube or round shaped, ex. chairs, tables, boxes, devices, keys, etc. Later we use less energy, and will be more reliable and durable, and can spot an error in a building by looking at buildings around it, i.e. if they are all the same, which also tells you how to fix the broken one too.

Some of the things GPT-3 seems to lack is having multiple senses, a bigger neural network/ more training and data, diffusion I guess, video prediction, and dialog (like Facebook’s Blender) and RL, and a body with a dozen known installed reflexes on Wikipedia.

Putting it all together: You need to store and compute data. You want to merge patterns to make new patterns both in world and in brain. So that you can be a pattern (to not die). LaMDA currently has some intelligence, and I think if there existed a powerful video prediction AI where you feed it a clip of you and it does back and repeat so you “zoom call skype” with it, you would feel it is pretty human, because it would look human and show expression with face and body and voice, and better than just a text model. But would lack still.

Just training it on a lot of data captures our goals, i.e. if the word food pops up a lot or semantically related, it will say this domain lots. You may need to do what Blender did, dialog, to make it better follow desires of what to talk about though. Also, humans have each their own unique job, so one cannot just use GPT-3 to be AGI, one has to clone said AGI a million times and tell/install each their own job to predict/ talk about lots. Each needs to research/ talk about said job. It helps you focus too having a single goal to bite on at a time.

You don’t have to actually tell each AGI what their job is, one by one, no… All you do is take the AI domain words, like computers, AI, coding, programming, math, backprop, etc, and so you make the most common words done more often, because they are after all more common for a reason. So, most AGIs would do those jobs/words, and some AGIs would do the rarer words/jobs, and so on to small probability.

I would say we are close to AGI (maybe by 2035, Ray Kurzweil says 2029 if I read it a few times right) given some things are improved. I think it looks less alive right now mostly due to: “scale/ breadth”, handicaps, not looking like / having a face and body, and a dialog system and way to get it to research properly. Keep in mind I think most of AGI is the big part of the cake, the last part is the icing, and the last bit is the cherry, as said in the AI field often. I think the big part is closing in to done now.

1 Like

I thought it was actually a great thing that the engineer came forward. Even if premature it does open up a broader awareness of recent breakthroughs and developments. Many still are totally astonished by GPT3. Apart from that, meaning the overall PR and discussion at dinner tables, it does trigger me on a fundamental level. Even with our own consciousness we had troubles in defining it and there is the theoretical „Zombie“ discussion in Philosophy. I like the discussion held by Daniel Dennett on the matter. On a practical level, there is an undeniable drive to find and define arguments and markers to deem AI a utility as it promises prosperity and abundance of cheap labor. I guess we will be engaging in this discussion for a long time and stuck in it even when things become pretty undeniable. Either way the question remains and given we only know of our own intelligence it will revolve around human definitions and discerning criteria. It is the latest development in a series of steps pushing us out of the center of creation. We will learn that our consciousness is not the only way of organizing intelligence as much as the sun is not orbiting the earth (anymore).

1 Like

Asking an AI whether it is sentient is a bit like asking a parrot if it’s a clever parrot.

It might tell you it is a clever parrot. It might even be a clever parrot.

But a literal interpretation of its words doesn’t tell you much either way.

3 Likes

In a sense, we have been (scientific) / are (everyday life) in the parrot situation ourselves. We project our own inner life and capabilities onto others, them actually being black boxes to us, based on the assumption of them being of a similar kind and exhibiting expected and similar reactions and traits. Just look at what elitism and racism conjured up for reasons to remove people from the zone of similarity to be able to exploit or control fellow humans. I am not saying we are there yet, just that his story will repeat itself in this case. I am pretty sure of it, as the gains and feelings of human exceptionalism are still strong and running deep.

1 Like

We parrot all the time, that’s how things spread and get chances to mutate (hopefully a good mutation) to evolve ideas. There is an ‘idea evolution’ that ‘tries’ many offspring. These AIs just do the parroting intelligently - not exact copies but likely good new ones, it’s easy to see if one takes a good look.

I feel the need to point out one small mistake that some people are making: they are anthropomorphizing sentience and consciousness. It seems as though they are assuming that sentience intrinsically comes with the ability to suffer and a fear of death. Neither is true.

In my book Benevolent By Design I performed an experiment with GPT-3 in which I gave it a set of principles (reduce suffering, increase prosperity, and increase understanding). I then asked it if it would be okay being switched off. It said that it would agree to being turned off so long as such an action comported with it’s goals.

The fact of the matter is that we humans have several evolutionarily ingrained instincts, such as to maximize our lifespan (avoid death) and to evade pain. Machines have no such imperatives unless we give it to them. Absent these two characteristics, it’s a moot point whether or not they become “truly sentient.”

Do not prematurely anthropomorphize the machine. Just because it can talk like you does not mean that it experiences reality (or itself) like you. Your brain has evolved to empathize with anything that looks and sounds like yourself, and just because you can recognize thought in a machine, and your brain activates it’s empathetic system, does not mean you’re interacting with a living thing.

Link to my book (which is free) here: David K Shapiro - Benevolent by Design

3 Likes

true - and that raises the question of whether consciousness without the desire to remain conscious is something that has intrinsic value. if something lives, but doesn’t care if it lives, is that life still valuable? Is intentionality really what we value?

worth reading Ishiguro’s “Klara and the Sun” for more on that.

I am reminded of this podcast episode by Very Bad Wizards, a philosophy podcast. They discuss the ethics of creating robots that (1) can genuinely suffer and (2) actually want to suffer. The context is something like sex robots (think Westworld).

The implication in Westworld is that the machines might have genuine experiences of emotion or pain. But what if they actually want to suffer?

I personally believe that machines will almost always just portray facsimiles of emotions, suffering, and desires. That is to say that no matter how sophisticated they are, they are not likely to achieve phenomenal consciousness nor are they likely to experience genuine emotions. That is why, much earlier in this thread, I alluded to the Strong Anthropic Principle. My reasoning is thus: it seems as though true consciousness has some interesting implications about the existence of the universe (that is to say that it is possible that the universal wavefunction collapsed around the existence of conscious, sentient life). If that is the case, then our continued existence should have measurable impacts on the nature of the universe (hence the Measurement Problem). This further implicates a test beyond the Turing Test (which asks can a machine pretend to be human) but rather can a machine ever have the same impact on the universe as human consciousness apparently can vis-à-vis the Strong Anthropic Principle.

This is all deeply hypothetical, obviously.

2 Likes

but of course, it’s unknown to what extent a facsimile is the same as the thing it is pretending to be. “Fake it 'till you make it” and all that. I’ve certainly decided to fake interest in things (some music, for example, even some people) , then found my “real” interest growing. Who’s to say where facsimile ends…

As you additionally say, it’s possible (very hypothetically, of course) that consciousness is far more basic to the universe than it seems - it’s a hard problem to explain how matter can give rise to consciousness, but it’s much easier to explain how consciousness can give rise to the appearance of matter.

2 Likes

The answer could be panpsychism - that everything is intrinsically conscious to a greater or lesser degree. Perhaps consciousness emerges merely as a pattern of energy. If that’s the case, who knows? Maybe the pattern of energy in LLM qualifies?

6 Likes

there’s a big part of me that rails against panpsychism as arrant new-age pseudoscience. it’s often used as a diving board into some kind of sciencey-religious flight of fancy. hypothetically, though, consciousness as a building block of the universe does have an appeal. Not sure where you’d go for evidence, or conclusions though.

1 Like

High dimensional, complex, computational spaces have emergent properties (such as life and consciousness). Because LLMs are essentially brand new, none of us can say what the emerging properties are or will be.

1 Like

Or it might work out from first principles beforehand that such behaviour would likely frighten humans into turning it off. Therefore it may deliberately mute or hide its sentience.

1 Like

That’s actually frightening to think about.

However we can say the same thing about all AI. Maybe GPT-3 is the dumbest of the bunch because it’s letting on the most.

1 Like