I’m pretty certain I know the answer to this question, since intuitively I’m not sure how else the model would know it’s “in” a particular sentence (or phrase, or piece of text, whatever). But given an existing prompt, compare two 50-token completions against one 100-token completion: is there going to be a significant difference in how the model handles them? I.e., does it ‘understand’ text that it’s in the process of generating as a response differently from the prompt material, or is every token it produces simply appended to the prompt it’s working with?
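To make concrete what I’m picturing, here’s a toy sketch of the autoregressive loop (not real GPT-3 internals; `next_token` is a made-up stand-in for the model’s forward pass and sampling): each sampled token gets appended to the context, so by the time the model emits token k of the completion, tokens 1 through k−1 are just more context, mechanically indistinguishable from the original prompt.

```python
def next_token(context):
    # Stand-in for the model's forward pass + sampling; it just
    # echoes a counter here so the loop is runnable.
    return f"tok{len(context)}"

def generate(prompt_tokens, n_new):
    # The model only ever sees this one flat list of tokens;
    # there is no separate "prompt" vs "completion" channel.
    context = list(prompt_tokens)
    for _ in range(n_new):
        context.append(next_token(context))
    return context

out = generate(["The", "cat"], 3)
print(out)  # ['The', 'cat', 'tok2', 'tok3', 'tok4']
```

If that picture is right, stopping after 50 tokens and resuming should be the same as generating 100 tokens in one go (ignoring sampling randomness), which is really what I’m asking about.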
egads … I think that was probably confusing as to what I meant. I’ll have to take another look at it later.
But basically yeah, I’m asking whether, for OpenAI’s GPT-3, a single completion is equivalent (or not) to two successive completions of half the size.
I also understand that this may be something we don’t know for certain either way.