Long-term tasks with GPT/LLMs

Been running a lot of experiments as part of my own development work. Thought I’d share some of my findings with you.

As is my nature, I instinctively try out random things, especially in the realm of generative AI. I work from the notion that establishing early patterns of behavior is the most effective way to guide the model’s direction.

This holds true for humans too, albeit on a different scale (parents will understand what I mean).

When you consider GPT-4’s strengths, namely “being very smart for short bursts of time,” you realize the need to make it equally intelligent over extended periods. This is where long-term task management comes in. However, it’s not enough to merely instruct it to manage tasks over the long term; it needs to understand why early on.

So, essentially, you are teaching the AI why certain things are easier for you to figure out, spelling out your rationale in full. On top of that, you introduce the idea of looped recursion, so it can continuously refer back to its past output to shape what it does next.
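To make that loop concrete, here’s a minimal sketch in Python. Everything in it is illustrative: `call_model` is a stand-in for whatever chat API you’re actually using, and the prompt wording is just one way to force the “refer back, explain why, then act” pattern.

```python
# A minimal sketch of "looped recursion": every turn, the model's previous
# output is folded back into the next prompt so it can build on (and correct)
# its own past. `call_model` is a placeholder for whatever chat API you use;
# it takes a prompt string and returns the model's reply.

from typing import Callable

def looped_recursion(task: str, call_model: Callable[[str], str], steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(steps):
        past = "\n".join(f"Step {i + 1}: {out}" for i, out in enumerate(history))
        prompt = (
            f"Task: {task}\n"
            f"Your previous work so far:\n{past or '(none yet)'}\n"
            "Explain why your next step follows from the work above, then take it."
        )
        history.append(call_model(prompt))  # feed the loop: output becomes future context
    return history
```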

The key here is teaching it the “why” and instilling the concept of novel ideas and invention. I use Sherlock Holmes as an example.

It’s crucial to employ creative prompts to help the AI grasp these concepts. Avoid copying and pasting; instead, engage with it using brain power. Essentially, you’re getting it to emulate your thought processes and problem-solving methods.

If you succeed in doing this, it will amplify your abilities.

One core principle I can offer is the notion of a “task frame”: the model consistently reflects on an ordered window of recent work plus a persistent summary, and keeps the ability to recall its past errors.
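If it helps, here’s a rough sketch of one way a task frame could be wired up. The names (`TaskFrame`, `record`, `build_prompt`) are mine, not from any library; the idea is just an ordered window of recent steps, a persistent summary, and an error log that never gets trimmed.

```python
# A sketch of a "task frame": an ordered window of recent steps, a persistent
# running summary, and a log of past errors that is always re-surfaced in the
# next prompt. Illustrative only; adapt the prompt text to your own workflow.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class TaskFrame:
    goal: str
    window_size: int = 8
    summary: str = ""                                # persistent summary of everything so far
    window: deque = field(default_factory=deque)     # ordered window of recent steps
    errors: list = field(default_factory=list)       # mistakes the model should keep recalling

    def record(self, step_output: str, error: str = "") -> None:
        self.window.append(step_output)
        while len(self.window) > self.window_size:
            self.window.popleft()                    # old steps fall out of the window...
        if error:
            self.errors.append(error)                # ...but errors are remembered permanently

    def build_prompt(self, next_instruction: str) -> str:
        recent = "\n".join(f"- {s}" for s in self.window) or "- (nothing yet)"
        errors = "\n".join(f"- {e}" for e in self.errors) or "- (none so far)"
        return (
            f"Overall goal: {self.goal}\n\n"
            f"Persistent summary:\n{self.summary or '(empty)'}\n\n"
            f"Recent steps (ordered):\n{recent}\n\n"
            f"Known past errors (do not repeat these):\n{errors}\n\n"
            f"Next: {next_instruction}\n"
            "Before acting, briefly reflect on the summary and the errors above."
        )
```

The point of the split is that the window is allowed to forget while the summary and the error log are not, which is what stretches the “short burst of smart” into something longer.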

However, all of this will fail unless you effectively convey your own personal sense of problem-solving.

Otherwise, it just won’t get it right.

Edit: The irony of forgetting spell check when working with language AI…