Which parameters decide the output length of GPT-4 series models?

I wanted to ask whether the token limit affects the output of the model. I am using a GPT-4 series model and it is not returning the whole text when I send it a larger input. Is this because the model I am using has a 4096 output-token limit?

Hi and welcome to the Community!

Can you elaborate a bit on what you are trying to achieve, i.e., what you are asking the model to do?

Generally, the length of the output is heavily influenced by your prompt/instructions, and the ratio of input to output tokens really depends on the specific task you are giving the model. 4096 tokens is the upper bound on output - in practice, though, most responses come in significantly below that threshold.
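One thing you can check programmatically is whether a response was actually cut off by the output-token cap. Here is a minimal sketch, assuming the standard Chat Completions response shape, where each choice carries a `finish_reason` of `"stop"` (finished naturally) or `"length"` (hit the `max_tokens` / model output cap). The `was_truncated` helper name is just for illustration:

```python
def was_truncated(response: dict) -> bool:
    """Return True if any choice stopped because it hit the token limit."""
    return any(
        choice.get("finish_reason") == "length"
        for choice in response.get("choices", [])
    )

# Example with mocked response dicts (real ones come from the API client):
complete = {"choices": [{"finish_reason": "stop", "message": {"content": "..."}}]}
cut_off = {"choices": [{"finish_reason": "length", "message": {"content": "..."}}]}
print(was_truncated(complete))  # False
print(was_truncated(cut_off))   # True
```

If you see `"length"`, the model ran into the cap mid-answer; you can then either raise `max_tokens` (up to the model's limit), shorten the requested output, or ask the model to continue from where it stopped.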