According to leaks reported by Bloomberg and Reuters, OpenAI is making progress in enabling AI models to plan ahead before providing answers.
On Tuesday, at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg.
According to these news sources, the project pairs the often-proclaimed advanced reasoning capabilities with the ability to perform online research in order to answer more difficult questions.
Apparently, this development is at least partially based on the Q* algorithm, which last year sparked many imaginative discussions about what AI will be able to do in the near future.
I’m interested to see whether this future is arriving sooner rather than later.
Not every human has the same reasoning capacity… a human with an IQ of 110 has a different reasoning capacity than one with an IQ of 140. IQ scores are intended to measure a range of cognitive abilities, including reasoning, problem-solving, and abstract thinking, and the reasoning capacity of individuals varies significantly with them. The reasoning capacity of AI can be considered superior to that of most humans in certain specific, well-defined tasks and domains, particularly those involving large-scale data analysis, pattern recognition, and specialized logical reasoning. However, for general reasoning, common sense, creativity, and emotional intelligence, humans still hold the advantage. But my claim is that for most industrial applications we already have AGI, and for the rest, if we combine human intelligence with AI we will have superior intelligence…
Well, in my view it is good to remind ourselves that while these numbers can serve a practical purpose (sorting/ego/what have you), they are by no means an objective marker. And while you can use them to deduce a number of things that could be useful, IQ is also an open-ended scale. So a person who is an outlier might very well realize what they don’t know, and will need to constantly integrate new uncertainties to keep up with this process of shifting frames of reference. Essentially, I am saying that one must take care not to generalize too much based on a static assessment.