If I select a 16k model in the playground, I am not able to increase token size past 2k. Is this expected?
This is expected: the limit applies to how many tokens the model will output. The 16k refers to a wider context window, meaning how much text the model can take in and attend to, not how much it will generate.
I guess the input and output layers are fixed at 1024 tokens, but internally some mechanism handles the memory.
That doesn’t make sense. If I cannot pass more tokens, it can’t “memorise” anything.
You can pass tokens in, but you will not receive 16k back.
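A minimal sketch of the token budget being described, assuming a 16,384-token context window and a separate per-request output cap (such as the playground's 2,048-token slider); the numbers are illustrative:

```python
def max_output_tokens(prompt_tokens: int,
                      context_window: int = 16_384,
                      output_cap: int = 2_048) -> int:
    """Output budget is whatever context remains after the prompt,
    further limited by any per-request output cap (e.g. a UI slider)."""
    remaining = context_window - prompt_tokens
    return max(0, min(remaining, output_cap))

# A 14,000-token prompt still fits in the 16k window,
# but the output is limited by the 2,048-token cap:
print(max_output_tokens(14_000))   # 2048
# With a 15,000-token prompt, the remaining context is the limit:
print(max_output_tokens(15_000))   # 1384
```

So a long prompt "uses up" the window, and whatever is left over (capped by the UI) is what you can receive back.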
When you say you are "not able to increase token size past 2k," I assume you mean that the maximum length is limited to 2048 tokens for output?
This is 100% expected behavior; the same limit exists for GPT-4.
You need to think about what the purpose of the playground is, which is largely just to give developers a quick and easy interface for prototyping. It was never meant to be a complete solution or anything resembling a final product. It’s quite literally barely beyond a minimal working example of a web interface to their API.
That’s why the buttons for saving, sharing, and viewing the API call don’t work for the chat completions endpoint—they just never fully updated the playground after ChatGPT rolled out.
You could, in about 10 minutes even if you're not especially technical, roll your own "playground" as a dead-simple HTML page that sits on your desktop and lets you raise the Maximum length parameter to 16,384 if you want.
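To illustrate the point that the 2,048 cap is purely a playground UI choice, here is a sketch of calling the chat completions endpoint directly with whatever `max_tokens` you like. The model name and the default `max_tokens` value are assumptions; substitute whichever 16k model and budget you actually use:

```python
import json
from urllib import request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, max_tokens: int) -> dict:
    """Request body for the chat completions endpoint.
    The model name here is an assumption; use whichever 16k model you have."""
    return {
        "model": "gpt-3.5-turbo-16k",
        "messages": [{"role": "user", "content": prompt}],
        # No 2,048 cap when you call the API yourself; must still fit
        # within the context window alongside the prompt.
        "max_tokens": max_tokens,
    }

def complete(prompt: str, api_key: str, max_tokens: int = 8_192) -> str:
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, max_tokens)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Wrap `complete()` in a small HTML form (or just call it from a script) and you have a "playground" with no artificial output slider.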