I have a constructive suggestion. At the moment the user pretty much has to tell ChatGPT the desired response format in every prompt to keep it from defaulting to verbosity, numbered lists, and unsolicited code examples (if you just want it to explain something programming-related without giving you code, you have to say so explicitly each time). Instead, the interface could offer a few switches you flip with each prompt to indicate the answer format you want: verbose or brief, code or a plain answer in continuous prose, or whether to produce an impenetrable wall of numbered lists. This would avoid mismatches between user expectations and what is hardcoded into the language model.
I suppose what I am saying is that the benefit of a GPT is that it can give you the information you need in bite-sized pieces, rather than making you skim through whole articles to get to the part you need. If the GPT produces long articles that you have to skim through to find the relevant information, that advantage is lost.