So, does this mean this kind of prompt can cause the model to do MUCH more work internally for the same number of tokens? It sounds like it might break OpenAI's pricing model of tokens == cost. As far as I can tell from my experiments in the Playground, it works. For example, this ‘prompt’ appears to go through quite a few iterations before producing output:
(apologies to @PriNova for the sloppy pseudo-code, I’m lazy)
Respond ONLY with the final design.
designer is a function that accepts a single argument, ‘task’ and returns a single object ‘design’, a design for a system to perform that task.
critic is a function that accepts two arguments, ‘task’ and ‘design’, and returns a single object ‘critique’, a critique of the design with respect to the task definition.
queryGenerator is a function that accepts three arguments, ‘task’, ‘design’, and ‘critique’, and returns a single object ‘query’, a question to ask the user to resolve ambiguities or priorities in design development. queryGenerator can return None if there are no ambiguities to resolve.
ask is a function that accepts a single argument, ‘query’, and returns ‘response’, the user response to query. ask presents the query to the user, and then STOPs to permit the user to respond.
editor is a function that takes four arguments, ‘task’, ‘design’, ‘critique’, and ‘response’, and returns a revised ‘task’ with additional information intended to improve the performance of the design function with respect to the critique.
Given the task: ‘design an automated assistant for writing technical papers’,
loop over designer, critic, queryGenerator, ask, editor until the design stabilizes.
Respond ONLY with the final design generated in yaml
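For anyone who wants to see the control flow the prompt is describing, here’s a rough Python sketch of the same loop. The function bodies, the snake_case names, and the “design stabilizes” check are all just illustrative assumptions on my part; inside the model this all happens within a single completion rather than as real function calls.

```python
# Rough sketch of the iterative loop the prompt describes.
# All function bodies are stubs; the point is the control flow, not the content.

def designer(task):
    # Would produce a design for the given task.
    return f"design for: {task}"

def critic(task, design):
    # Would critique the design with respect to the task.
    return f"critique of: {design}"

def query_generator(task, design, critique):
    # Would return a clarifying question for the user, or None if there are none.
    return None

def ask(query):
    # Would present the query to the user and STOP until the user responds.
    return input(query)

def editor(task, design, critique, response):
    # Would fold the critique and user response back into a revised task.
    return task

def run(task, max_iterations=5):
    design = None
    for _ in range(max_iterations):
        new_design = designer(task)
        if new_design == design:        # crude stand-in for "the design stabilizes"
            break
        design = new_design
        critique = critic(task, design)
        query = query_generator(task, design, critique)
        response = ask(query) if query else None
        task = editor(task, design, critique, response)
    return design

if __name__ == "__main__":
    print(run("design an automated assistant for writing technical papers"))
```

The interesting part, to me, is that the model seems to simulate several passes around this loop before emitting the final YAML, which is exactly what raises the tokens-vs-compute question above.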