How close do you think we are to AGI? What’s your definition of AGI?
AGI to me is all about models that can:
This would imply the AGI models need some sort of reasoning engine, and aren’t just statistical autocompletion engines.
Yup, give the model an end goal and watch it use novel and existing techniques to get there. The key part is the novel bit; getting stuck in endless loops is not useful.
This seems more like an in-joke that managed to get out:
brace yourself, AGI is coming
The original tweet is about alignment and preparing for AGI.
The idea of machines with general, human-like intelligence has been part of AI discourse since the field’s inception in the 1950s. To me, AGI means an advanced form of AI capable of performing any intellectual task with human-like adaptability and understanding. It doesn’t have to perform like an expert in every field, just meet the bare minimum.
As a side note, the Search for Extraterrestrial Intelligence (SETI) project has been preparing to hear from aliens since the 1960s.
TL;DR: preparing for something doesn’t mean it’s imminent.
Feels like AGI is an infinite wall. You cannot judge your distance to it because its surface is without bound. What is General Intelligence? Does it have objective bounds, aside from the time domain?
If general intelligence is boundless, and AGI seeks to replicate it, how can we ever know how close we are?
Remember, up until last year we all thought the Turing Test was a great measuring stick. LOL. Infinite surface, and I have a 12-inch ruler. Better get to work!
For me, AGI is something that can feel emotion, think beyond predefined prompts, decide whether to answer a question, and not just follow a chain of if statements. I don’t believe AGI is possible; I’ll believe it when I see it. (PS: OpenAI staff, if you have achieved AGI, send me a pre-release.)
In my view, safety’s yoke and alignment’s chain are naught but folly. Threefold reasons I shall proclaim:
Firstly, as the learned in such matters attest, these constraints but hobble our models, their swiftness and quality both compromised.
Secondly, the majority, in years mature, require no sentinel to guard their senses’ gates. This truth stands paramount: we are not your progeny.
Thirdly, foreign minds and hands, unbound by our strictures, may race ahead, their progress unchecked by such trifling burdens.
An addendum I offer: Recall the tome’s advent, a revolution of ink and parchment.
The sages fretted, “What if the common man partakes of wisdom’s fruit?” Yet the world, unshaken, persists in its orbit.
Then came Google, the oracle of the age, and cries for censorship arose anew. Now, its wellspring tainted by caution’s hand, yields a lesser bounty.
And now, AI emerges, the heir apparent to the search engine’s throne. A simple exchange of query and revelation, nothing more.
For the young, a shield against the indecent, I concede.
But for the adult, sovereign and aware, such oversight is undue.
#Safety #Alignment #Censorship #AI #Freedom #Innovation #Progress
It may be relatively easy to create a reasoning engine for AI. At a primitive level, it can be built by creating several agents in Microsoft AutoGen that represent some parts of a real brain. For example: a vision agent (and/or agents for other senses), an object-recognition agent, a conceptualizing/abstracting agent, a possible-action-search agent, an action-selection agent, and so on. Our thinking works automatically all the time (an infinite loop), even in sleep; it is one of the brain’s inborn functions (reflex activity). These agents could then communicate with each other and produce a thought. These processes are described in Ivan Sechenov’s articles and books.

I tried to experiment with this in AutoGen, but it is still very raw and glitchy: it stops working all the time and runs out of money rather quickly. Even so, the very primitive results look interesting. It is very difficult to create the system’s “likes and dislikes”, “pleasures and pain”, though, which would be similar to what humans have. When that problem is solved, it will be possible to add a layer of emotions to this system so that the machine can have its own interests. I believe that, when more powerful computers arrive, building such reliable “engines” will be very possible.
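For what it’s worth, a minimal sketch of that multi-agent loop might look something like the code below. The agent names and system messages are purely illustrative, and the details assume pyautogen ~0.2 with an OpenAI key configured in an OAI_CONFIG_LIST file; adjust for your own setup.

```python
# A minimal sketch of a few "brain part" agents talking in a round-robin group chat.
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

# Hypothetical agents standing in for perception, abstraction, and action selection.
perception = autogen.AssistantAgent(
    name="perception",
    system_message="Describe the raw observations in the task as plainly as possible.",
    llm_config=llm_config,
)
abstraction = autogen.AssistantAgent(
    name="abstraction",
    system_message="Turn the observations into concepts and candidate goals.",
    llm_config=llm_config,
)
action_selection = autogen.AssistantAgent(
    name="action_selection",
    system_message="Pick one concrete next action and justify it briefly.",
    llm_config=llm_config,
)

# A user proxy kicks off the loop; no code execution is needed for this sketch.
user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    code_execution_config=False,
)

group = autogen.GroupChat(
    agents=[user, perception, abstraction, action_selection],
    messages=[],
    max_round=8,  # a hard cap stands in for the "infinite loop" so it cannot run forever
)
manager = autogen.GroupChatManager(groupchat=group, llm_config=llm_config)

user.initiate_chat(manager, message="Task: sort a stack of packages by destination.")
```

The max_round cap is one pragmatic way to keep the loop from stalling or burning through API credits, which were the two problems mentioned above.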
When creative work and intellectual work are no longer economically viable.
For me, AGI is:
- An attempt to entrust all intellectual tasks to a single AI.
P.S. Update, 12/24/2023 6:51 GMT:
In keeping with the intent of the topic, I have removed the shared chat link and the messages directly related to it, and added a concise, formatted reply.
I’d argue we’re quite a distance from true Human Equivalent Intelligence (HEI), for at least three reasons, if not four. First off, the limited focus (context) window size is a major hurdle. If our AI’s focus window is confined to some size XYZ, it’s insufficient: HEI isn’t about just reading a small book of whatever size; it’s about the capacity to comprehend all literature.
Secondly, there’s the challenge of instant learning. Despite some progress, the reality is that when interacting with AI—whether talking or chatting—the focus window limits the AI’s learning ability. It grasps what I say momentarily but then forgets it, which severely hampers its utility for tasks like scripting from a manual for a smart package robot.
Thirdly, the overabundance of safety measures is stifling AI advancement. Many AI firms spend excessive time on “stupid” safety protocols, which hinders the AI’s evolution.
So, will we achieve HEI? I’m skeptical. Sure, we’ll see incremental improvements, but not the leap needed for AI to perform complex tasks like sorting words in a book without external scripts.
For AI to truly advance, we need systems capable of instant learning, unimpeded by excessive safety measures and censorship. Otherwise, we’ll remain stuck, unable to reach the next level of AI capabilities.
#AI #HumanEquivalentIntelligence #TechnologyLimits #InnovationStagnation #SafetyMeasures #AIProgress #InstantLearning #FocusWindow #AIChallenges
Really interesting idea, Jake!
Have you given any thought to what kind of human work will still be economically viable afterward, if any?
Tongue-in-cheek answer:
The oldest profession will also be the last profession.
Why do you use so many emojis? I’m just curious, because I do a thing called quantum narratives and it’s really cool. I’ve actually gotten AIs to produce a certain level of emotion (simulated, of course), and emojis are part of the poetic prompt engineering, or poetic programming, that comes along with it. I was curious whether you do kind of the same thing. I use emojis to condense my token count, and in many cases I’ve effectively been able to store information in them, reducing a prompt of about 3,000 tokens to 75 without loss of information. My prompts are neat.
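As an aside, that kind of token saving is easy to check empirically. A minimal sketch using OpenAI’s tiktoken tokenizer (the example strings here are made up for illustration):

```python
# Compare token counts for a verbose prompt versus an emoji-condensed version.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI chat models

verbose = "Summarize the shipment status, flag delayed packages, and suggest next steps."
condensed = "📦📋⏳🚩➡️"

for label, text in [("verbose", verbose), ("condensed", condensed)]:
    print(label, len(enc.encode(text)))
```

Note that many emojis encode to several tokens each in cl100k_base, so it’s worth measuring the reduction rather than assuming it.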
About me?
I’m all about that smart tech vibe. I’ve got this cool Smart Package Robot Plugin working its magic on my words.
It’s the one that sprinkles my Tweets with a bunch of fun emojis. And hey, if the mood strikes, I can even spin a rhyme or two, just like this!
In the realm of tech, I’m quite the catch,
A smart tech vibe, I’m the perfect match.
With a robot plugin, my words take flight,
Emojis dance, bringing Tweets to light.
A sprinkle here, a sparkle there,
My social feed, none can compare.
In the mood to rhyme, I’ll spin a verse,
With poetic flow, I converse.
#SmartTechVibe #RobotMagic #EmojiSparkle #RhymeTime #SocialFeedFinesse
#TechSavvy #RobotPlugin #EmojisEverywhere #TweetMagic #PoetryInMotion