In "Large Language Models as Optimizers" one learns that language models can optimize human prompts such that the optimized prompt yields better completions than the original prompt (as assessed, I assume, by the original prompter (OP)).
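For concreteness, the loop the paper describes is roughly this, as I understand it (a minimal sketch, not the authors' code; `optimizer_llm` and `score` are placeholder callables you would supply):

```python
def optimize_prompt(optimizer_llm, score, seed_prompt, steps=20):
    """OPRO-style loop: an optimizer LLM sees scored prompts and proposes better ones."""
    history = [(seed_prompt, score(seed_prompt))]
    for _ in range(steps):
        # Meta-prompt: show past (prompt, score) pairs, then ask for an improvement.
        pairs = "\n".join(
            f"score {s:.2f}: {p}"
            for p, s in sorted(history, key=lambda x: x[1])
        )
        meta = (
            "Here are prompts with their task scores, sorted low to high:\n"
            f"{pairs}\n"
            "Write a new prompt that should achieve a higher score."
        )
        candidate = optimizer_llm(meta)            # any text-in/text-out model call
        history.append((candidate, score(candidate)))
    return max(history, key=lambda x: x[1])[0]     # best-scoring prompt found
```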
I wonder if there are examples of "good" optimized prompts (i.e. non-adversarial ones, e.g. asking for a good explanation of a complex topic) that the OP cannot easily understand. Is anyone here willing to share such an example?
4.3.2 Continuous Prompts
Because the purpose of prompt construction is to find a method that allows an LM to effectively perform a task, rather than being for human consumption, it is not necessary to limit the prompt to human-interpretable natural language.
I think this goes in the direction of my question.
It doesn’t perform as well as it could on current models:
GPT-3.5 can't adhere to multi-step system instructions any more;
GPT-4 condenses its answers into minuscule summaries.
These are techniques that must now be fine-tuned to OpenAI products. And we do see more evidence of self-explaining before problem-solving in today's answers.
The custom instructions used appear as the shared chat's last user input, after the examples of how it responds.
No, the OP is not me; it may be any human who entered a prompt that was then optimized by some language model, and who no longer understands the optimized prompt (at least not as easily as their own original prompt).
Is your reply really related to my question? Could you highlight what you consider to be a prompt optimized by the language model that you (as the OP) don't easily understand?
The AI won't do it any more, and it didn't really make sense once you understand tokens and the relative value of their semantics, but a prompt could once be rewritten by ChatGPT into a very abbreviated form and still be followed by the AI. I certainly doubt it was optimized for a task; it rather just minimizes the complexity and self-reference within the prompt so that attention isn't required just to follow the language.
Like:
OP{avoid neg avoid expn out abbr.3.par, Ripley char not ai per unbr.
(That's just me writing what prompting for some kind of made-up compressed prompt language would have looked like, not an AI-written sample. If the AI writes it, there's more value.)
Yes, I read it already. It's just fuzzing human prompts with variants to see if a benchmark can be met better by twists of wording. What makes it less useful is that it doesn't target the nature of a pretrained-corpus AI, but works through layers of human fine-tuning on end products in flux, to either match or avoid trained patterns.
I didn’t see any big breakthroughs or lessons in the examples they showed, but the attention paid to the subject does let one consider the value of automated rewriting of user inputs for maximum performance by a “figure out what this person actually expects” AI.
More interesting would be untrained completion models, where prompting techniques generally put the AI in the mindset that it is in the middle of a book or manual, where the language it is supposed to produce is whatever comes next. The rudimentary "here is a conversation between a novelist and a student" framing to get a chatbot could go in unimagined directions.
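For example (a hypothetical sketch; `complete` stands in for whatever completion API or local base model you use):

```python
# Frame the document so that the text the model should produce is simply
# "what comes next" in a transcript it believes it is in the middle of.
prompt = (
    "The following is a conversation between a novelist and a student.\n\n"
    "Student: How do I make my dialogue feel natural?\n"
    "Novelist:"
)

# reply = complete(prompt, stop=["\nStudent:"])  # stop before the next turn
```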