A Welcome Improvement (But I Hope It Lasts)

I was playing around with my custom GPT as I usually do, expecting the usual decent-ish results, but to my surprise I was blown away by the responses: a lot more creativity, a lot more detail, and much better adherence to my instructions.

So I decided to check the outputs in the Tokenizer, and the responses were around 1300-1600 tokens each time, when I was getting about 700 previously with the same prompt. (It seems to be able to go up to 2048 before you have to continue.) Just by allowing the longer outputs, the quality shows a clear and obvious improvement. This is exactly as good as I remember it being when I first signed up last year, and only now do I fully realize how nerfed it actually was over the last few months.
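For anyone who wants a quick sanity check without pasting replies into the web Tokenizer every time: OpenAI's docs note that one token is roughly four characters of English text. Here's a minimal sketch using that rule of thumb; the helper name and the 4-characters-per-token constant are my own approximation, not the real tokenizer (for exact counts you'd use OpenAI's tiktoken library):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English text per token.

    This is only a heuristic; the actual count depends on the model's
    tokenizer (OpenAI's tiktoken library gives exact numbers).
    """
    return max(1, round(len(text) / 4))

# A ~700-token reply corresponds to roughly 2,800 characters of text.
sample = "word " * 560  # 2,800 characters
print(estimate_tokens(sample))  # → 700
```

It won't match the Tokenizer exactly, but it's close enough to tell a ~700-token reply from a ~1,500-token one at a glance.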

If this is a deliberate and permanent change, then my thanks to the OpenAI team. I just hope I don’t have to come back to this thread to say, “I should have known it wouldn’t last.”