"GPT-3 can only predict what a human would say"

I was having a debate on a Discord server, and one person said that GPT-3 can “only” predict what a human would say or do in a given situation… as if that were not a remarkable milestone toward AGI. To my mind, the biggest constraint is the narrow window that GPT-3 can take in. Right now, it can handle any situation, so long as you can describe it in a paragraph or two.

What happens when you can give future iterations many pages of text? Entire books? Multiple books? You could feed such a transformer your entire life story and ask it what you might do for the next months, years, or decades.

As you gain more information, you can append it to the corpus of text you feed into the transformer and get updated answers. This iterative process alone, I think, could be considered an AGI-complete solution.
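
Here is a rough sketch of the loop I mean, using the completions API. To be clear, the prompt format, the example corpus, and the `updated_answer` helper are all just my own illustration, not a real product:

```python
import openai

openai.api_key = "sk-..."  # your API key

def updated_answer(corpus: str, question: str) -> str:
    """Ask the model a question against an ever-growing corpus of context."""
    prompt = f"{corpus}\n\nQuestion: {question}\nAnswer:"
    response = openai.Completion.create(
        engine="davinci",   # the engine discussed in this thread
        prompt=prompt,      # the whole corpus rides along, window permitting
        max_tokens=200,
        temperature=0.7,
        stop=["\n\n"],
    )
    return response["choices"][0]["text"].strip()

# Each time you learn something new, append it and ask again:
story = "I was born in 1990 and studied mechanical engineering."
story += "\nLast month I started learning the piano."
print(updated_answer(story, "What might I do over the next few years?"))
```

The only thing that changes between calls is the growing corpus, which is exactly why the window size is the binding constraint.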

7 Likes

You’ve read a book, yes? We can already capture everything from senses to thoughts in natural language. That includes emergency procedures and corporate ethics.

Anyway, taking my thoughts to their further conclusions: the two greatest constraints on this model of AGI will be text payload size and speed. davinci is still pretty slow, only marginally faster than a human typing.

Perhaps future iterations will combine larger text windows with faster/cheaper output. The greatest limitation on AGI will be those variables.

I’m now imagining future scenarios where an LLM can ingest many books on a given topic in milliseconds and then produce an 80,000-word prognostic essay in a few seconds.

2 Likes

I think that with all technology, AI included, humans design these things to do what we want, and they are very good at giving us exactly that. Even when they get things wrong, they can still surprise us. It’s not fair to judge them for doing exactly what we ask of them.

2 Likes

it wasn’t made for dogs…

…yet…

2 Likes

I think that’s backwards. A principal benefit of GPT-3, IMO, is that it often says things that no human would ever say, but that are nonetheless valuable and insightful.

2 Likes

Well, humans are capable of saying just about anything, but I agree that we have a machine that can produce valuable and remarkable insights on tap.

Thanks for the article about long-form output. I’ll have to spend time parsing it.

Update: Attention seems to be the hot topic, both in transformers and in cognitive architecture. For a general-purpose AGI, deciding what to pay attention to (emergencies and disruptions, for example) is as important as coming back to long-term tasks. Even so, I think there will be value in concise input for transformers, since they are trained on so much data: you don’t need to explain the basics to a domain expert every time.

Concise input for transformers is absolutely vital, but it’s a chicken-and-egg problem – existing extractive summarizers for long documents are not capable of producing the same quality as transformer-driven abstractive summarizers.
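
One way out of that chicken-and-egg problem is to let the transformer be its own summarizer: chunk the long document, summarize each chunk abstractively, and repeat until the result fits in one window. A rough sketch, where the `tl;dr:` prompt is the well-known summarization trick and the chunk size and helper names are just placeholders:

```python
import textwrap
import openai

def summarize(text: str, max_tokens: int = 150) -> str:
    """Abstractive summary produced by the model itself."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"{text}\n\ntl;dr:",  # simple summarization prompt
        max_tokens=max_tokens,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()

def compress(document: str, chunk_chars: int = 4000) -> str:
    """Recursively summarize until the text fits a single context window."""
    while len(document) > chunk_chars:
        chunks = textwrap.wrap(document, chunk_chars)  # naive chunking
        document = "\n".join(summarize(chunk) for chunk in chunks)
    return document
```

Each pass loses detail, of course, which is why larger native windows would still beat any summarization pipeline.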

1 Like

All I can provide is personal experience – I’ve tried a lot of them. Next time I do a sweep of what’s available, I will create a bibliography for you!

1 Like

That reminds me of the whole purpose of doing a simulation. If we scale that up, we might even be able to predict future outcomes for the human race.
In my opinion, it would already be great to have a Jarvis-like AI (as in Iron Man) that helps us through everyday activities, monitors our health, and gives us advice for our jobs (especially as developers). I know there are Siri, Alexa, etc.; however, those assistants are not even close to Jarvis-like capabilities. I have had many discussions about building a Jarvis-like application with GPT-3 and have already developed several prototypes (not even close to the actual Jarvis, though). However, with the new Codex model, we could level up and build an AI that not only gives us advice on day-to-day activities but also helps us write code, by combining both models. Let me know what you think about that idea!
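
For what it’s worth, the “combining both models” part could start as simple routing: send coding requests to the Codex engine and everything else to davinci. A toy sketch; the keyword heuristic and the `jarvis` helper are stand-ins, and a real assistant would use a proper classifier (or the model itself) to decide where to route:

```python
import openai

CODE_HINTS = ("code", "function", "bug", "script", "compile", "python")

def jarvis(query: str) -> str:
    """Toy router: Codex for coding requests, GPT-3 for everything else."""
    is_code = any(hint in query.lower() for hint in CODE_HINTS)
    response = openai.Completion.create(
        engine="davinci-codex" if is_code else "davinci",
        prompt=query,
        max_tokens=256,
        temperature=0.2 if is_code else 0.7,  # lower temperature for code
    )
    return response["choices"][0]["text"]

print(jarvis("Write a Python function that parses an RSS feed."))
print(jarvis("What should I focus on at work this week?"))
```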

1 Like

Here are my two cents about this, with the warning that everything I know about mind and consciousness comes from reading Schopenhauer.
AI is an imitation of the ordinary man/scientist/writer. We don’t want that. What defines the ordinary man is that he does not have intuitive knowledge taken directly from the real world, but from books and abstractions. He is good at drawing conclusions from premises. He is good at playing with abstractions and premises, combining them to arrive at new conclusions that were implicitly present in the premises. He is good at drawing analogies.

“In most books, putting out of account those that are thoroughly bad, the author, when their content is not altogether empirical, has certainly thought but not perceived; he has written from reflection, not from intuition, and it is this that makes them commonplace and tedious. For what the author has thought could always have been thought by the reader also, if he had taken the same trouble; indeed it consists simply of intelligent thought, full exposition of what is implicite contained in the theme. But no actually new knowledge comes in this way into the world; this is only created in the moment of perception, of direct comprehension of a new side of the thing. When, therefore, on the contrary, sight has formed the foundation of an author’s thought, it is as if he wrote from a land where the reader has never been, for all is fresh and new, because it is drawn directly from the original source of all knowledge.”

My prediction is that AI will:

  1. eliminate, as Arthur calls them, the second-order writers, and act as a kind of purifier that leaves only the best: those who take intuitions of the real world and express them in their medium. This is something GPT-3 cannot do. GPT-3 does not have the ability to perceive, to have senses, to have direct, no-BS, intuitive knowledge (which is anti-abstract: direct knowledge of cause and effect).
  2. make people we thought were good in their area look funny: today’s “experts”, “senior engineers”, bestseller writers.
  3. eliminate most of today’s BS jobs.

What I see is that real-world experience will be highly prized, not the sterile abstract knowledge from books, since AI can now do that.

The main problem with AGI is that it is artificial general intelligence, not artificial genius intelligence, and the latter is not possible without senses.

1 Like

Why do you believe senses are necessary? Can’t all sensor data be translated into language?

1 Like

Personally, I think the mundane senses are immaterial to intelligence. The greatest works of literature simply use words to describe senses whenever necessary. When you read an instruction manual for building a moon rocket, does it contain smells? No. At most, it contains diagrams, which can easily be rendered as text. You can just as easily describe a swashplate as show one.

1 Like

I find the narrow window constraining, but it’s not quite as bad as it seems. Humans are pretty much the same: we’re always chunking or using contextual cues to make sense of things. Take away some of those cues and we start acting in bizarre ways (just as GPT-3 does). Our brains are constantly fabricating things or glossing over contradictions, because that’s how we deal with limited information. The brain is easily fooled about even the most basic things under the right controlled conditions; the body transfer or “rubber hand” illusion is a great example: Body transfer illusion - Wikipedia

2 Likes

But the opposite question is more interesting. Can you have consciousness without language?