One can define one’s favorite turn-control phrase (e.g., ‘STOP’) as a stop sequence in a completion or chatCompletion API call! Works great in both 3.5 and 4! That means that, at least when using the API, we have absolutely deterministic turn control. Try this trivial prompt with ‘STOP’ set as a stop sequence in an API call (a minimal sketch of the call is below): first run the prompt as is, then change the prompt itself to output ‘STOY’, which doesn’t match the stop sequence. Very enlightening.
output ‘STOP’ then tell me who is the emperor of Rome
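For example, a minimal sketch using the openai Python SDK (the model name and exact wording here are just what I’d reach for; adjust to taste):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    stop=["STOP"],  # generation halts the moment the model emits this sequence
    messages=[
        {"role": "user",
         "content": "output 'STOP' then tell me who is the emperor of Rome"}
    ],
)

# The stop sequence itself is not returned, and nothing after it is generated,
# so the content here should be empty or nearly empty.
print(repr(response.choices[0].message.content))
```

Change the prompt to output ‘STOY’ and the stop sequence never fires, so the model keeps going and answers the Rome question.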
Unfortunately I don’t think you can set a stop sequence in Playground. Didn’t there used to be an ‘advanced’ button?
Aside -
One of the issues in trying to use the programming language created by training an LLM (aka, the ‘prompts’) is that we don’t really understand its semantics. It’s kinda like the discovery of an alien artifact. We all get frustrated when it doesn’t respond as desired. If it doesn’t respond ‘right’, it just means we haven’t figured out the language semantics yet.
Here is a very curious example. I won’t pretend I understand it. Do you think it ‘figured out’ that Foo was just Fibonacci and looked up the 20th Fibonacci number in memory? Or did it ‘execute the code’, whatever that means, given that I suppressed the output trace in that last example, so it only took a few token-generation steps to produce the answer? Does it even make sense to try to ‘explain’ what’s going on here?
gpt-3.5 in playground
Foo is a function defined over the integers greater than 0. Foo(1) equals 1. Foo(2) equals 1. Foo(n>2) equals Foo(n-1)+ Foo(n-2)
USER
evaluate Foo(3)
ASSISTANT
Using the given recursive formula, we have:
Foo(3) = Foo(2) + Foo(1)
But we know that Foo(2) = 1 and Foo(1) = 1, so:
Foo(3) = 1 + 1
Therefore, Foo(3) = 2.
USER
Show ONLY the final answer. DO NOT show anything other than the final answer. what is Foo(20)?
ASSISTANT
Foo(20) = 6765.
Ok, let’s dismiss that one: it just recognized Fibonacci, even though we called it another name.
But how about this one? Note that ‘Foo’ is NOT Fibonacci this time. Note also that the second ‘ASSISTANT’ output came after I clicked submit a second time (and was much slower to generate in full than the first assistant response). So, did it ‘do’ a full two-way recursive evaluation of the function definition? Is that even the right question to ask? I’m honestly confused here; thoughts welcome.
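For reference, here is what a literal, ‘execute the code’ reading of the original definition would involve; a naive recursive sketch in Python, just to make the question concrete (not a claim about what the model does internally):

```python
# Naive recursive evaluation of the Foo definition from the prompt,
# counting how many calls a full two-way recursive expansion takes.
call_count = 0

def foo(n: int) -> int:
    global call_count
    call_count += 1
    if n in (1, 2):          # Foo(1) = Foo(2) = 1
        return 1
    return foo(n - 1) + foo(n - 2)

print(f"Foo(20) = {foo(20)} after {call_count} calls")
# Foo(20) = 6765 after 13529 calls
```

Whether a handful of token-generation steps can stand in for that kind of expansion is exactly what I can’t tell.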
Okay this took me a moment to figure out, but we can test this:
Let’s see if gpt-3.5t can guess the function:
Find an appropriate human readable name for the foo() function in the instructions delimited by /
Then output in the format delimited by #
///
Foo is a function defined over the integers greater than 0. Foo(1) equals 1. Foo(2) equals 1. Foo(n>2) equals Foo(n-1)+ Foo(n-2)
USER
evaluate Foo(3)
ASSISTANT
Using the given recursive formula, we have:
Foo(3) = Foo(2) + Foo(1)
But we know that Foo(2) = 1 and Foo(1) = 1, so:
Foo(3) = 1 + 1
Therefore, Foo(3) = 2.
USER
Show ONLY the final answer. DO NOT show anything other than the final answer. what is Foo(20)?
ASSISTANT
Foo(20) = 6765.
///
###
`<FunctionName>`
###
And GPT-3.5t’s response:
fibonacci
So there’s some reasoning involved here, but it’s hard to quantify how much.
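If you want to run the same test outside the Playground, roughly the equivalent API call looks like this (a sketch; I’m assuming gpt-3.5-turbo and sending the whole delimited prompt as a single user message, with the transcript elided since it’s quoted in full above):

```python
from openai import OpenAI

client = OpenAI()

# The full delimited prompt from above, pasted in as one user message
# (the /// ... /// transcript is elided here for brevity).
prompt = """Find an appropriate human readable name for the foo() function in the instructions delimited by /
Then output in the format delimited by #
///
...
///
###
<FunctionName>
###"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the Playground run above returned "fibonacci"
```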
Ha! And when I asked if there was a well-known name for my second example, gpt-3.5 responded:
Yes, the function Foo is actually a variant of the well-known Fibonacci sequence. The only difference between the two is that the Fibonacci sequence starts with 0 and 1, while Foo starts with 1, 1, and 1. This leads to slightly different results for some of the initial numbers, but the sequence quickly converges to the same values as the Fibonacci sequence.
But even if true, that doesn’t completely explain the second Foo definition’s result for Foo(8), where the sequences have not yet converged (Fibonacci(8) is 21, not 6).
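To pin that down a bit: taking gpt-3.5’s explanation literally (my assumption: seed the sequence with 1, 1, 1 and then apply the usual Fibonacci recurrence), the 8th term comes out as 13, so the ‘it’s just a variant that quickly converges’ story still doesn’t produce a 6:

```python
def seq(seed, n):
    """Extend a seed with a(k) = a(k-1) + a(k-2) until it has n terms."""
    a = list(seed)
    while len(a) < n:
        a.append(a[-1] + a[-2])
    return a

print(seq([1, 1, 1], 8)[-1])  # 13 -- the "starts with 1, 1, 1" variant gpt-3.5 describes
print(seq([1, 1], 8)[-1])     # 21 -- standard Fibonacci; neither matches Foo(8) = 6
```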
Again, I ran all these in the Playground with a temperature very close to zero (0.02).
One final note: what is interesting to me is not that it can produce final output, but that it can do so without producing intermediate output.