Question about completion lengths: for example, are two 50-token completions essentially equivalent to one 100-token completion in terms of how the model receives the text as a prompt?

I’m pretty certain I know the answer to this question, since intuitively I’m not sure how else the model would know it’s “in” a particular ‘sentence’ (or phrase, or piece of text, whatever). But given an existing prompt, compare two 50-token completions with one 100-token completion: is there going to be a significant difference in how the model works with that? I.e., does it ‘understand’ the text it’s in the process of producing as a response differently from the prompt material, or is every piece of text it generates effectively ‘added’ to the prompt it’s working with?

egads … I think that was probably confusing as to what I meant. I’ll have to take another look at it later.

But basically yeah, I’m asking whether, for OpenAI GPT-3, one completion is identical (or not) to two chained completions of half the size.

I also understand that this may be something we’re not clear on either way.


The two chained completions will produce the same text as the single long one, at temperature=0 and without stop sequences that could interfere.
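To make the intuition concrete: each generated token is appended to the context, so a completion is just the prompt growing one token at a time. Here is a minimal sketch with a toy deterministic next-token function (a hypothetical stand-in, not a real API call) showing why one 100-token greedy completion equals two chained 50-token ones:

```python
# Toy stand-in for a model's next-token choice at temperature=0:
# any deterministic function of the context behaves the same way here.
def next_token(context):
    return hash(tuple(context)) % 1000

def complete(prompt, max_tokens):
    # Generation appends each new token to the context, one step at a time.
    tokens = list(prompt)
    for _ in range(max_tokens):
        tokens.append(next_token(tokens))
    return tokens[len(prompt):]  # return only the newly generated tokens

prompt = [1, 2, 3]

one_call = complete(prompt, 100)
first_half = complete(prompt, 50)
# Feed the first completion back as part of the prompt for the second call.
second_half = complete(prompt + first_half, 50)

assert one_call == first_half + second_half
```

The equivalence holds because, by step 51, the context is identical in both cases; the model has no notion of where one "call" ended and the next began.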

Thanks, that’s interesting. I assume then that with temperature > 0, the two half-size completions would apply two sampling regimes instead of one, so the results could diverge in that case?
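One clarification on that point: temperature isn’t a per-call regime, it’s applied independently at every token step (it rescales the logits before the softmax). So two 50-token calls sample from exactly the same sequence of per-token distributions as one 100-token call; runs differ only because the sampling itself is random. A minimal sketch of per-token temperature sampling (toy logits, not real model output):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token id from logits at the given temperature."""
    if temperature == 0:
        # Greedy: temperature=0 means always pick the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature rescales the logits before softmax, per token step.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # toy example distribution
rng = random.Random(0)

# At temperature=0 every call is deterministic, so splitting a completion
# into two calls cannot change anything.
assert sample_token(logits, 0, rng) == 0
```

At temperature > 0 each sampled token feeds back into the context and steers the rest of the sequence, so any two runs can diverge, whether the tokens come from one call or two.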