Hello @dragonek and welcome to the OpenAI developer community. What I’m saying here is not official OpenAI guidance (I’m not affiliated with them); it’s just my personal experience.
First of all (besides the fact that GPT models are not really good at counting): who defines what counts as important information?
Let’s say you have a list of all existing programming languages and you ask GPT-4 to summarize that list down to the most important languages.
Important for whom? What if you don’t provide that information? And what if the person asking is not a programmer, or not even in the IT business, and doesn’t know the criteria for deciding that?
Or what if you want GPT-4 to shorten a cake recipe, but the shortened version should still be tasty? Who decides what is tasty?
But to answer your question:
I personally think that is not possible unless you have a full knowledge representation of the world, and especially of the person who will review the result, the people who will use it, and what they intend to use it for!
Still, there is some hope:
What you are most probably looking for is a technique called “chain of density”: an iterative summarization prompt that repeatedly rewrites a summary to pack in more key entities without letting it grow longer.
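Roughly, a chain-of-density loop looks like this. This is only a minimal sketch using the openai Python package: the model name, round count, and prompt wording are my own assumptions, not the exact prompt from the chain-of-density paper.

```python
# Minimal chain-of-density sketch.
# Assumptions: the `openai` Python SDK (v1 style) is installed and
# OPENAI_API_KEY is set; "gpt-4" stands in for whatever model you use.
from openai import OpenAI

client = OpenAI()

def chain_of_density(article: str, rounds: int = 4) -> str:
    """Iteratively densify a summary: each round adds entities that are
    missing from the summary without letting it grow longer."""
    summary = None
    for _ in range(rounds):
        if summary is None:
            instruction = "Write a short summary (about 80 words) of the article."
        else:
            instruction = (
                f"Here is the previous summary:\n{summary}\n\n"
                "Find 1-3 informative entities from the article that are "
                "missing from this summary, then rewrite the summary to "
                "include them WITHOUT making it longer. "
                "Return only the rewritten summary."
            )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You are a careful summarizer."},
                {"role": "user", "content": f"Article:\n{article}\n\n{instruction}"},
            ],
        )
        summary = response.choices[0].message.content
    return summary
```

Each round trades filler words for named entities, which is the core idea: density goes up while length stays fixed.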
And you will have to set up a decision layer: GPT-4 can’t make that call for you unless you spell out in great detail how to decide.
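To make “decision layer” concrete, here is a tiny sketch: the caller, not GPT-4, pins down what “important” means before the model ever sees the list. The criteria below are invented purely for illustration.

```python
# Sketch of an explicit decision layer: the importance criteria live in
# your code, not in the model's head. The criteria are hypothetical examples.
IMPORTANCE_CRITERIA = """Judge a language as "important" ONLY by these rules, in order:
1. It is used in production by the reader's team (web backend work).
2. It appears in the team's current hiring plan.
3. Anything else is unimportant, no matter how popular it is."""

def build_messages(language_list: str) -> list[dict]:
    """Wrap the list in a prompt that carries our explicit criteria."""
    return [
        {"role": "system", "content": IMPORTANCE_CRITERIA},
        {
            "role": "user",
            "content": "From this list, keep only the important languages "
                       f"and justify each choice:\n{language_list}",
        },
    ]
```

Swap in your own criteria and the same prompt suddenly gives a reproducible answer to “important for whom?”.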
Or maybe you can provide more information; perhaps there is a solution in a more specialized area.
For generalization (AGI), you should follow @daveshapautomator on LinkedIn or watch his YouTube videos.
Or you can check out this work in progress: GitHub - daveshap/ACE_Framework: ACE (Autonomous Cognitive Entities) - 100% local and open source autonomous agents