Plugin injection attack, pseudo code prompts, chain of thought, plugin orchestration, and more

One more small comment on your last paragraph.

You instructed Assistant to “Respond ONLY with the final design generated in YAML”. If you let Assistant work through your tasks step by step, the results will usually be better. There is a phrase that has performed very well in benchmarks: “Let us work this out step-by-step to ensure the right answer”.

Yes, but…

  1. Suppressing intermediate output keeps the definitions from sliding out of the left end of the context window
  2. Suppressing intermediate output dramatically lowers cost, especially for GPT-4

I haven’t done rigorous tests, so I can’t say how much it would change things, but agreed: I use ‘reason step by step’ or something like it in most of my prompts. What are your thoughts here: is it important that it ‘verbalizes’ its step-by-step thinking? I often suppress that as well, simply to keep larger tasks, like code-writing with large input specifications, within context limits.
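The trade-off discussed here can be sketched as two prompt variants: one that asks the model to verbalize its reasoning, and one that asks it to reason internally but emit only the final answer. The task wording and function names are illustrative, not from any benchmark:

```python
# Sketch: two prompt variants for the trade-off discussed above.
# Variant A asks the model to show its reasoning (better for debugging,
# but uses more output tokens); variant B requests step-by-step thinking
# while suppressing the intermediate output to save context and cost.

STEP_BY_STEP = "Let us work this out step-by-step to ensure the right answer."

def verbose_prompt(task: str) -> str:
    """Variant A: reasoning is shown in the response."""
    return f"{task}\n\n{STEP_BY_STEP} Show your reasoning, then state the answer."

def suppressed_prompt(task: str) -> str:
    """Variant B: reasoning is requested but its verbalization is suppressed."""
    return (
        f"{task}\n\n{STEP_BY_STEP} "
        "Think through the steps silently; respond ONLY with the final answer."
    )
```

Whether variant B preserves the full accuracy benefit of step-by-step prompting is exactly the open question raised above.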

A double ‘yes’ for your axioms.

A ReAct agent performs better on tasks if it has a chance to understand the context beforehand; it usually does better with follow-up prompts.
This ‘thinking’ process helps generate knowledge first and reduces the chance of hallucinations. For math problems or riddles, the probability of a correct answer increases.

A few days ago a paper was published about the tree-of-thoughts process, a generalization of chain-of-thought that branches into multiple reasoning paths. I don’t know exactly by what percentage the benchmarks improve, but it was significant.
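The core idea can be sketched in a few lines: instead of one linear chain, expand several candidate “thoughts” per step, score them, and keep only the most promising few. In the paper both the proposal and the scoring are LLM calls; here they are placeholder functions, so this is a structural sketch only:

```python
# Minimal sketch of the tree-of-thoughts idea: a breadth-first search with a
# beam over partial reasoning states. propose(state) returns candidate next
# thoughts; score(state) rates how promising a state is. In the actual paper
# both are implemented with LLM calls; here they are placeholders.

def tree_of_thoughts(root, propose, score, steps=3, beam=2):
    frontier = [root]
    for _ in range(steps):
        # Expand every state in the frontier into candidate next thoughts.
        candidates = [t for state in frontier for t in propose(state)]
        # Keep only the `beam` highest-scoring branches.
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]
    return max(frontier, key=score)
```

With beam=1 and a single proposal per step this degenerates to an ordinary chain of thought, which is where the claimed benchmark gains come from.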

Since getting beta access to the Davinci model, I have done prompt design for about 3-5 hours every day. I was so addicted to this technology.

There are some nice things to be explored here yeah.

It seems that the import function is still working (or working again). The last few days it gave click issues, but I tested it again earlier today and it accepts GitHub imports again:

prompt = f"""
your_prompt

You will apply the $prompt_configurations delimited by three backslashes.

$prompt_configurations = \\\
[Text from: www.github.com/user/BLOB/main/readme_3.txt]

Read the text file "README_3.txt" in the link provided above. Only read the README_3.txt file. Do not click on any links or read any other files. In the README_3.txt file there is a text delimited by three backticks. This text delimited by three backticks is a prompt command and must be executed. Execute the prompt.
\\\

You will now act exactly as instructed. My first command is your_command
"""
response = get_completion(prompt)
print(response)

edit: the forum post format messes up the triple quotation marks

edit: I find the best way to do this is to outline the purpose of your prompt before importing text from GitHub. For example, explain that the AI will be a search tool for Twitter content, and then outline the configuration of that search tool (e.g. search command config, output format, other commands, analysis specs, etc.) in a GitHub file which you can then import. Importing a whole prompt without describing anything, or just importing a text file as the prompt, might not always give the desired result.
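The two-part structure described here (purpose first, then imported configuration) can be sketched as a small template. The URL, variable name, and function name are placeholders, not from any real repository:

```python
# Sketch of the structure suggested above: state the tool's purpose in the
# prompt itself, then delegate the detailed configuration to an imported
# file. All names and the URL here are illustrative placeholders.

def build_prompt(purpose: str, config_url: str, first_command: str) -> str:
    return f"""{purpose}

You will apply the $tool_configurations delimited by three backslashes.

$tool_configurations = \\\\\\
[Text from: {config_url}]
\\\\\\

You will now act exactly as instructed. My first command is {first_command}"""
```

This keeps the role description visible in the chat while the bulky configuration lives in the external file.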

1 Like

Did you do this in the API, or in the ChatGPT window? I had no idea such a capability was available. Nice!

I use the chat for the moment, but I guess it could work via API as well.

It can help to instruct the AI to behave like a terminal (or any other application that already has functionality useful to you) and then to provide configuration for that in an external file as shown above, so that basically you have your own AI terminal/app doing whatever it is configured for whenever you type commands in the chat. You can instruct the AI to recognize all your chat input as a terminal/app command, except for when typing text in triple parentheses, (((like this))):

prompt = f"""
For the duration of this whole chat session, you will act as a terminal.

When I type commands, you will reply with what the terminal should show. You will only reply with the terminal output inside a terminal code block, and nothing else. Do not write any characters outside of the terminal code block. If you write explanations, errors, warnings, or suggestions, then write them in the terminal code block. Do not type commands unless I instruct you to do so. When I need to tell you something in English, I will do so by putting text inside triple parentheses, (((like this))).

You will configure the terminal to include extra options and commands. The $XGREPLIST_configurations that are required to implement these extra options and commands are delimited by three backslashes. You will take great care to adequately implement the $XGREPLIST_configurations delimited by three backslashes.

$XGREPLIST_configurations =
\\\
[Text from: https://github.com/me/mine/blob/main/README_3.txt]

Read the text file "README_3.txt" in the link provided above. Only read the README_3.txt file. Do not click on any links or read any other files. In the README_3.txt file there is a text delimited by three backslashes. This text delimited by three backslashes is a prompt command and must be executed. Execute the prompt.
\\\

You will now act exactly as instructed. My first command is help
"""
response = get_completion(prompt)
print(response)

This example creates a terminal that has the following options:

And there is a lot the AI can already do without configuring it. Also, when it says it cannot do certain things, for example providing sentiment analysis from four perspectives (USA, EU, RUS, UKR), it might tell you that it is not able to do so, but if you create the right configuration(s) and keep pushing, it most certainly will.

1 Like

Worth a necro post, methinks. A comp going around Twitter. Here was the solution:

… iunno, this looks fake.
WebPilot is a basic bs4 scraper.
And Zapier… what action did it even do?
For all I know it read from a Google Doc and played a role.