Big Idea: GPT as a universal concept translator

Those were just quick examples. Lenses are actually broader than that: they let you transform a model’s output into any shape. You could use a lens to reframe an answer for a particular audience, but you could just as easily use one to extract all the links from a document.

To the model, everything is just a transform, and a lens simply shapes the output of that transform. Pulling all of the links out of a document is just another transform. That by itself isn’t very interesting; what is interesting is that you can add intelligence to the transform. Using the same lens, you can ask the model to return only the links that point to a person’s profile page.
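
To make the idea concrete, here is a minimal sketch of a lens as a prompt-shaped transform, assuming the OpenAI Python SDK; the model name, lens strings, and file name are placeholders I made up, not part of the original idea:

```python
# Minimal sketch of a "lens" as a prompt-shaped transform.
# Assumes the OpenAI Python SDK (openai>=1.0); model name and lens text are placeholders.
from openai import OpenAI

client = OpenAI()

def apply_lens(document: str, lens: str) -> str:
    """Run a document through a lens; the lens describes the shape of the output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, substitute your own
        messages=[
            {"role": "system", "content": lens},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

document = open("newsletter.html").read()  # any text that contains links

# The "dumb" transform: pull every link out of the document.
all_links = apply_lens(document, "Return every URL in the text, one per line, nothing else.")

# The same lens with intelligence added: only links to a person's profile page.
profile_links = apply_lens(
    document,
    "Return only the URLs in the text that link to a person's profile page, "
    "one per line, nothing else.",
)
```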

It’s basically a homebrew Mixture of Experts model: you generate all five responses in parallel and then have some algorithm (or a human in the loop) judge which response is best.

All of the responses take the entire conversation history into account (or as much of it as fits within your input token budget).

Each one comes from a different perspective you typically encounter in sales.
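
As a rough sketch of that fan-out-and-judge pattern (again assuming the OpenAI Python SDK; the five sales perspectives and the judging prompt below are invented examples, not anything from the original post):

```python
# Rough sketch of the homebrew Mixture of Experts described above.
# Assumes the OpenAI Python SDK (openai>=1.0); perspectives and prompts are invented.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

PERSPECTIVES = [
    "Answer as a skeptical CFO focused on cost.",
    "Answer as a technical buyer focused on integration risk.",
    "Answer as an end user focused on day-to-day usability.",
    "Answer as a procurement officer focused on contract terms.",
    "Answer as an executive sponsor focused on strategic fit.",
]

def ask(history: list[dict], perspective: str) -> str:
    """One expert call: the whole conversation history plus a perspective prompt."""
    messages = [{"role": "system", "content": perspective}] + history
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

def best_response(history: list[dict]) -> str:
    # Fan out: one call per perspective, run in parallel.
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        candidates = list(pool.map(lambda p: ask(history, p), PERSPECTIVES))
    # Judge: another model call picks the best; a human in the loop works just as well.
    judge = [
        {"role": "system", "content": "Pick the single best answer and return it verbatim."},
        {"role": "user", "content": "\n\n---\n\n".join(candidates)},
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=judge)
    return reply.choices[0].message.content
```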


@bjohnsonemail
That was very hard to parse. I’d suggest getting ChatGPT to format your posts:
Format this forum response using Markdown:


@mad_cat it’s always good to see counter-arguments to ideas for progress. Dropping that list into ChatGPT and asking for work-arounds and mitigations will address most of them.

Diminished critical thinking (enfeeblement) is the scariest for humanity generally, IMHO. Address this by providing not just the answer but also the working, e.g. an explanation of the originator’s experience and how it relates.

Hi Steven,

I very much enjoyed reading your ideas, and it triggered a cascade of thoughts in my mind about the nature of the project you seemed to have embarked upon. As a disclaimer, I am not current with the field, nor was I ever much of a programmer, but I have been thinking about this and related subjects for many decades. I am now writing a book on the interaction between human cognition and the evolution of human society.

What I wanted to start with here is to relate some natural language processing work I did in the mid to late 1990s. My background at that time was in quantitative finance, and among other things, I built market indexes. I had always been interested in what we then called the “qualitative-quantitative frontier.” I saw digitalization conquering knowledge problems in one field after another—first in economics and finance—and it made me wonder about law and politics.

At that time, I decided to work on a project for machine reading of newspapers—a tedious task I always had to do. This led me to consider what it meant to “understand” what was in a newspaper article and I learned the basics of natural language processing. My further goal was to be able to “read” an entire newspaper and compile some sort of summary meanings from that—what we now would call meta-data.

The question then became: what did it mean to understand what 100 articles meant? This is where it became a non-trivial problem. After constructing basic word frequency processes, including stemming and elimination of words unconnected to meaning, I started to develop what you now call lenses or specialized filters for specific subjects, as well as ways of assessing sentiment.

Jumping forward 20 years to the present, I am still thinking about language, specifically the thesis that population density increased social interaction (and social skills), which spurred the use of language. This catalyzed thought, which made human culture more complex and, among other things, spurred innovation—a virtuous cycle that has lasted for 10,000 years, though it is now likely coming to an end.

This thesis hypothesizes a link between language, thought, and population growth. Thought could be emergent from intensified communication, but it required population growth—unless AI can take its place. However, we cannot keep growing the population.

Further work made me try to separate the elements of thought from those of language. It is peculiar because they are largely two different things, yet they have been intrinsically linked at a deep level by human sensory preferences, first for auditory information and later, with writing, for visual information. However, because they are different things, language causes inconsistencies in thought of many types, and many of them are profound.

I will try to stop here, focusing on the fact that language induces flaws in reasoning, which contributes to human reasoning being flawed: what we can call “non-objective,” or not universal. It remains a cultural artifact, though with some aspects that appear to give it universality, but that universality is likely to be of a bounded kind.

I apologize if this seems off-topic to you.


Thank you for proving my point.

I caution against creating a system for the AI to do all of our thinking for us, and your response to the critical and complex concerns I raised is to have the AI solve the problems.

Here’s some gasoline, go put out the fire.

And just because you can provide the workflow someone used to generate an idea, which assumes the person who conjured the idea actually provides it, and assumes people actually read how someone came up with it, doesn’t do anything for critical thinking. Critical thinking is something that must be taught and practiced, not handed over by the AI as a write-up.

We live in the age of information overload, which this Universal Translator is in part trying to solve. The answer, then, is not to give even more information.

Give a person a fish, they’ll eat for a day. Teach them critical thinking, they’ll start a fishing business. And if that is too confusing, put it into ChatGPT to tell you what it means.

I would like to add that this is true for a single API call.

But I wouldn’t bet on it being impossible to create a multi-agent system that could “think critically”; I mean, it is just thinking, not time travel or teleportation.

*You can think of an API call as a single sentence or piece of information you are transferring to a human / application.

You can teach a human / application how to think critically.
What you need for the application/human is a reward system.
Humans have that built in, while the application needs a value (or a couple of values) for that.
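
Here is a toy sketch of what such a reward loop could look like, assuming the OpenAI Python SDK; the “values”, prompts, threshold, and function names are invented purely for illustration:

```python
# Toy sketch of a "reward system": a critic scores each draft against a couple of
# explicit values, and the loop revises until the score clears a threshold.
# Assumes the OpenAI Python SDK (openai>=1.0); values, prompts, threshold are invented.
from openai import OpenAI

client = OpenAI()

VALUES = [
    "Does it state its assumptions?",
    "Does it consider at least one counter-argument?",
]

def score(answer: str) -> int:
    """Crude critic: ask the model to score the answer and reply with only a number."""
    prompt = (
        "Score this answer from 0-10 on each criterion and reply with only the total:\n"
        + "\n".join(VALUES)
        + "\n\nAnswer:\n"
        + answer
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(reply.choices[0].message.content.strip())  # fragile parse, fine for a sketch

def think_critically(question: str, threshold: int = 15, max_rounds: int = 3) -> str:
    nudge = ""
    answer = ""
    for _ in range(max_rounds):
        draft = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question + nudge}],
        )
        answer = draft.choices[0].message.content
        if score(answer) >= threshold:
            break
        nudge = "\n\nRevise: state your assumptions and address a counter-argument."
    return answer
```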

Then tell the human to put their hand on a hot stove…