So, does this mean this kind of prompt can cause the model to do MUCH more work internally for the same number of tokens? It sounds like it might break OpenAI's pricing model of tokens == cost. As far as I can tell from my experiments in the Playground, it works. For example, this "prompt" goes through quite a few iterations before producing output:
(apologies to @PriNova for the sloppy pseudo-code, I'm lazy)
Respond ONLY with the final design.
designer is a function that accepts a single argument, 'task', and returns a single object, 'design', a design for a system to perform that task.
critic is a function that accepts two arguments, 'task' and 'design', and returns a single object, 'critique', a critique of the design with respect to the task definition.
queryGenerator is a function that accepts three arguments, 'task', 'design' and 'critique', and generates a return value, 'query', a question to ask the user to resolve ambiguities or priorities in design development. queryGenerator can return None if there are no ambiguities to resolve.
ask is a function that accepts a single argument, 'query', and returns 'response', the user's response to the query. ask presents the query to the user, then STOPs to permit the user to respond.
editor is a function that takes four arguments, 'task', 'design', 'critique' and 'response', and returns a revised 'task' with additional information intended to improve the performance of the designer function with respect to the critique.
Given the task: 'design an automated assistant for writing technical papers',
loop over designer, critic, queryGenerator, ask, editor until the design stabilizes.
Respond ONLY with the final design generated in YAML.
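For anyone who wants to see the control flow the prompt is asking the model to emulate, here is a minimal Python sketch of the same loop. The function names mirror the prompt, but the bodies are my own placeholder stubs (in practice each one would be its own model call), and the "design stabilizes" test is just a naive equality check; nothing here is real OpenAI API code.

```python
from typing import Optional

# Placeholder stubs: in practice each of these would be a separate LLM call
# using the definitions from the prompt above. They exist only so the loop runs.

def designer(task: str) -> str:
    return f"design for: {task}"

def critic(task: str, design: str) -> str:
    return f"critique of '{design}' with respect to '{task}'"

def query_generator(task: str, design: str, critique: str) -> Optional[str]:
    return None  # None means no ambiguities left to resolve

def ask(query: str) -> str:
    return input(query + " ")  # STOP and wait for the user's answer

def editor(task: str, design: str, critique: str, response: Optional[str]) -> str:
    return task  # would fold the critique/response back into the task description

def run_design_loop(task: str, max_iterations: int = 10) -> Optional[str]:
    """Loop designer -> critic -> queryGenerator -> ask -> editor until the design stabilizes."""
    previous_design = None
    design = None
    for _ in range(max_iterations):
        design = designer(task)
        if design == previous_design:  # crude "design has stabilized" check
            break
        critique = critic(task, design)
        query = query_generator(task, design, critique)
        response = ask(query) if query is not None else None
        task = editor(task, design, critique, response)
        previous_design = design
    return design  # respond ONLY with the final design

print(run_design_loop("design an automated assistant for writing technical papers"))
```

The point is just that a single prompt can make the model walk through several design/critique/revise passes internally before it emits the final answer, which is where the "more work per token" question comes from.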