I’ve been looking at the various examples provided, and in some cases the introduction to the prompt (the prompt guidance) is set off by special lines of repeated characters.
I was wondering whether this has any special effect on the prompt, and whether the AI actually interprets the sections differently.
Sort of. There’s nothing actually hardcoded, but one pattern in the data GPT was trained on is that lines of characters like that mark a change of context. Remember that GPT is just giving you what it thinks the most likely next token is. It’s not just a pile of statistics, but that is a big part of it. If you don’t put in a divider like that, you can run into issues where GPT starts writing more instructions instead of executing them.
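To illustrate the pattern being described, here’s a minimal sketch. The instruction text and the `###` delimiter are just example choices on my part; nothing in the model or API requires this particular format:

```python
# Sketch of a prompt that uses a divider line to mark a change of context.
# The "###" line is a convention from training data, not a hardcoded feature.
INSTRUCTIONS = "Translate the user's text into French."
DELIMITER = "\n###\n"  # a line of repeated characters acting as a section break

def build_prompt(user_text: str) -> str:
    # Instructions first, then a divider, then the text to operate on.
    # Without the divider, the model may keep writing instructions
    # instead of executing them on the input.
    return INSTRUCTIONS + DELIMITER + user_text + DELIMITER

print(build_prompt("Good morning!"))
```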
Thanks! I figured it could be something like that, but I didn’t find any hints in the docs.
Does it matter which characters are used, or how such dividers are written?
Yeah, a lot of it is just finagling until you get something the AI works well with.
@m-a.schenk thanks. Can you give a simple example of what you mean by changing the logit bias? (It’s a shame these things can’t be found in the docs.)
Thanks! I didn’t look into the API part of the docs; I’m still in the playground.
I went through the Marv chatbot example, but if I remove all the ### it still works just as well.
Sometimes they help. Sometimes.
You can’t expect this sort of thing to be well documented. It’s an emergent property of the system, and by the time we actually start to understand it, it will probably be obsolete.
Ultimately it comes down to guess and check: make a hypothesis, code up a test, and evaluate the result.
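That loop can be sketched as a tiny A/B harness. The `complete` function below is a stand-in for whatever real model call you’re making, and the two prompt variants are assumptions for the sake of the example:

```python
# Hypothetical guess-and-check harness: try prompt variants, score the results.
# `complete` is a placeholder stub for a real model call (e.g. via an API).
def complete(prompt: str) -> str:
    return "stubbed model output for: " + prompt.splitlines()[0]

def run_experiment(variants: dict[str, str], score) -> dict[str, int]:
    # Score each prompt variant's completion and collect results for comparison.
    return {name: score(complete(prompt)) for name, prompt in variants.items()}

variants = {
    "with_delimiter": "Summarize:\n###\nSome text.\n###",
    "no_delimiter": "Summarize:\nSome text.",
}
# A real scoring function might check output format, length, or keywords;
# here we just use len as a trivial stand-in.
results = run_experiment(variants, score=len)
print(results)
```

The point is only the shape of the loop: variants in, scores out, then compare.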
Simpler is better, IMHO: fewer tokens. Eventually, when you turn up the volume of requests, you’ll want to optimize your prompts for token count.
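A quick way to compare variants is the common rule of thumb that one token is on the order of four characters of English text. That’s only an approximation; for exact counts you’d run your model’s actual tokenizer. The two prompt strings here are made-up examples:

```python
# Rough token estimate using the ~4-characters-per-token rule of thumb.
# For exact counts, use the real tokenizer for your model.
def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

verbose = (
    "Below you will find some text. Please read the text carefully and then "
    "produce a short summary of the text.\n###\nSome input text.\n###"
)
terse = "Summarize:\n###\nSome input text.\n###"

print(rough_token_estimate(verbose), rough_token_estimate(terse))
```

Even a crude estimate like this makes it obvious which prompt version will cost less at volume.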