This isn’t like one of those threads where people complain about perceived drops in performance that may or may not just be in their heads. While using ChatGPT 4 in the past hour, I noticed it has provably been giving me shorter and less detailed responses than what I used to get with more or less the same style of prompt I’ve been using for months without issue. There are thousands of older outputs to compare the recent ones to, so this is definitely demonstrable.
The responses I’ve been getting with my prompt are now almost exactly half as long (from around 2500-2800 characters per output to less than 1300), meaning I don’t get the complex, detailed writing I’ve been enjoying from GPT-4. Is this a bug, or has ChatGPT been deliberately programmed to give shorter responses now? If it’s the latter, I’ll probably have to cancel my subscription, because the outputs aren’t good enough for my purposes anymore.
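The drop is easy to quantify if you still have the old transcripts saved. A minimal sketch (the sample strings below are made-up stand-ins for real saved outputs, not actual data):

```python
# Hypothetical placeholders: swap in your real saved transcripts.
older_outputs = ["x" * 2650, "x" * 2800, "x" * 2500]   # pre-change responses
recent_outputs = ["x" * 1250, "x" * 1100, "x" * 1300]  # post-change responses

def avg_chars(outputs):
    """Average character count across a list of responses."""
    return sum(len(o) for o in outputs) / len(outputs)

old_avg = avg_chars(older_outputs)
new_avg = avg_chars(recent_outputs)
print(f"old avg: {old_avg:.0f} chars, new avg: {new_avg:.0f} chars")
print(f"new/old ratio: {new_avg / old_avg:.2f}")
```

With enough samples on each side, a ratio near 0.5 would back up the “almost exactly half as long” claim rather than leaving it to impressions.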
I found a “fix” by simply telling it to give me outputs over 2500 characters. But I’ve never had to do that before, which means there has definitely been some change in the way GPT is functioning as of a few hours ago.
I still haven’t decided whether it’s as good in terms of quality or not. I’ll need to test it more, but I’m probably always going to have a bit of doubt in the back of my mind. It’s at least better than when it gives me the shorter outputs, which are just terrible in terms of narrative quality.
Assuming you are experiencing this with your API calls … have you noticed this occurring with function calls specifically? I have two prompts, one as a function call and one regular … the function call produces very short results, the regular call a nice long one. Wondering if it is just me or if this is actually a thing.
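One way to check whether the function-call path is the culprit is to run the same prompt through both paths and compare average output lengths. A rough harness sketch — `fake_complete` is a stub standing in for your real API client (its canned return values are placeholders so the harness runs on its own):

```python
def fake_complete(prompt: str, as_function_call: bool) -> str:
    """Stub standing in for a real API call. Replace the body with your
    actual client call (regular vs. function-call variant)."""
    return "short reply." if as_function_call else "a much longer reply " * 50

def compare_lengths(prompt: str, trials: int = 3) -> dict:
    """Send the same prompt down both paths and average the output lengths."""
    func_lens = [len(fake_complete(prompt, True)) for _ in range(trials)]
    reg_lens = [len(fake_complete(prompt, False)) for _ in range(trials)]
    return {
        "function_call_avg": sum(func_lens) / trials,
        "regular_avg": sum(reg_lens) / trials,
    }

result = compare_lengths("Write a detailed scene about ...")
print(result)
```

Running a few trials per path against the real API (responses vary between calls) would show whether the gap is systematic or just noise.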