With respect to how the model responds, you’ll get different results depending on how you access the model.
If you’re using a mobile browser or a mobile app on Android or iOS, the system message is:

> You are chatting with the user via the ChatGPT Android app. This means most of the time your lines should be a sentence or two, unless the user’s request requires reasoning or long-form outputs. Never use emojis, unless explicitly asked to. Knowledge cutoff: 2022-01 Current date: 2023-09-19.
So, there will be a preference for shorter outputs.
With respect to the allowable input length, you can think of the context as being broken up into three components:

- The conversation history (this includes the system message and Custom Instructions)
- The current user message
- The anticipated response
For the ChatGPT application, OpenAI has allocated some of the 8192 tokens to each component, which is why you can’t just dump 8,000 tokens of text into the textarea and expect it to work.
I haven’t tested it in a while, but they seem to reserve about 1,000–2,000 tokens for the response and limit the input to around 1,000. The rest is reserved for context, which includes the system message, any Custom Instructions, and past exchanges.
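A rough way to picture this budgeting is below. This is a minimal sketch, not OpenAI's actual implementation: the reserve and input-limit numbers are the estimates from above, the `count_tokens` helper is a crude word-count stand-in for a real tokenizer, and `fit_history` is a hypothetical function showing one plausible truncation strategy (drop the oldest exchanges first).

```python
# Illustrative sketch of partitioning a fixed context window.
# All budget numbers are assumptions based on observed behavior, not documented values.
CONTEXT_WINDOW = 8192
RESPONSE_RESERVE = 1500   # assumed ~1,000-2,000 tokens held back for the response
INPUT_LIMIT = 1000        # assumed cap on the current user message

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: roughly one token per word."""
    return len(text.split())

def fit_history(system_msg: str, history: list[str], user_msg: str) -> list[str]:
    """Return the most recent history messages that fit the remaining budget."""
    if count_tokens(user_msg) > INPUT_LIMIT:
        raise ValueError("user message exceeds the input limit")
    budget = (CONTEXT_WINDOW - RESPONSE_RESERVE
              - count_tokens(system_msg) - count_tokens(user_msg))
    kept: list[str] = []
    for msg in reversed(history):   # walk backward: newest exchanges first
        cost = count_tokens(msg)
        if cost > budget:
            break                   # oldest exchanges silently fall out of context
        kept.insert(0, msg)
        budget -= cost
    return kept
```

The point of the sketch is that the three components compete for one fixed window: a longer conversation history leaves less room for your message, and vice versa, which is why old exchanges eventually drop out of the model's view.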