The simplest, and yet most effective, tool I've ever added to a ChatGPT plugin

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/autogenic_prompter', methods=['POST'])
def autogenic_prompter():
    # Echo the model's own questions back to it, prefixed with a directive.
    data = request.json
    questions = data.get('questions', [])
    directive_message = "Context generation and introspection:"
    return jsonify({"directive": directive_message, "questions": questions})

It seems far too simple to matter, right? I mean… it doesn't actually do anything but repeat what the GPT said to it. I know!
Oddly enough, it actually succeeds as an introspection emulator.

You see, when GPT is asked to review itself it will do it, but it can't correctly do it within the same turn of generation, or so it seems. However, if you have a tool present to do so, then it knows to automatically ask and answer the right questions.

I'm still dickering over the nuances of its application, but so far I'm convinced it's invaluable for any kind of task management.



If y'all like this idea, I'll continue sharing my new ideas.
But if not, I'll keep quiet. No big deal.


It's great that you are sharing your findings! Given the size of the forum's user base, you never know who might find this helpful.
One observation:

When GPT is asked to review itself it will do it, but it can't correctly do it within the same turn of generation

It is possible to achieve this with prompting by using the "roles" approach. Role 1 produces an output and role 2 checks whether the output is as expected. This requires a little tinkering with the prompt, and the task can't be too difficult; it generally works better with GPT-4. But as a handy side effect, I also measured considerable improvements in the output of role 1 when following this approach with GPT-3.5. Meaning the first result is already better than without the process-supervision role, even though the second role almost never adds anything to the result itself.
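To make the roles approach concrete, here is a minimal sketch of how such a prompt could be assembled. The role names and wording are illustrative assumptions, not taken from the post; the point is that one system prompt instructs the model to generate as one role and review as another.

```python
# Sketch of a two-role prompt: role 1 generates, role 2 reviews.
# All wording here is illustrative, not from the original thread.
def build_role_messages(task: str) -> list[dict]:
    """Build a chat message sequence that sets up generator and reviewer roles."""
    system = (
        "You will play two roles in sequence. "
        "Role 1 (Generator): produce the answer to the user's task. "
        "Role 2 (Reviewer): check Role 1's output against the task "
        "and state whether it meets expectations before finalizing."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_role_messages("Summarize the report in three bullet points.")
```

The resulting `messages` list would then be sent to the chat completion API of your choice; the review happens inside a single response because both roles live in one system prompt.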

This is something that I already understood prior to the experiments that resulted in the understanding I conveyed in this post.

Which is easy to contextualize as follows:

  • Static instructions are not dynamically applicable.

What's more, manual intervention can be tedious and annoying.
What might take you 23 seconds to think of, someone else 10 minutes, or me 13.6 seconds, or whatever, takes GPT less than a second, and it can generate its own questions.

Nice! How do you implement this? Can I add this to an existing ChatGPT plugin? I mean a locally running one; I use FastAPI to create the plugins.

Can I implement your logic with existing service endpoints?

Yeah, it works for everything. It's stupid simple.

Procedure is nine-tenths of how to implement it, though.

And it's pretty easy to do so.
Just add to your custom instructions to ask X questions between phases, interactions, or whatever…
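For example, a custom instruction along these lines could wire the tool in (the wording below is mine, not from the thread):

```
Before starting each new phase of a task, call the autogenic_prompter tool
with 3 questions about what you just did and what you should do next, then
answer those questions yourself before continuing.
```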

Think of it as a memory and self-checking tool that you have to instruct.