Is it true that the output limit for ChatGPT is 600 tokens? It cannot optimize code or write a (markdown) comparison table in a single response. Every model, from Legacy to Default to GPT-4, has this limitation.
Most of the output is incomplete. I have to resubmit requests again and again to get usable or continued results. And that still isn't the worst part.
Unfortunately, ChatGPT forgets the related context after a few resubmissions, even when I give it a reference post it can refer back to. Then I have to start over and piece everything together by hand.
Unfortunately, ChatGPT is not able to split results into chunks of 600 tokens; it simply cuts off the content. A real "continue" is not possible without losing the references to earlier content inside the chat.
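For what it's worth, here is a minimal sketch (my own illustration, not how the web UI actually works internally) of the bookkeeping a chat-completions-style interface would need so that a "continue" keeps its references: the partial assistant reply has to travel back with the next request along with the rest of the history. The message format follows the common `role`/`content` convention; the model call itself is omitted.

```python
def build_request(history, user_msg):
    """Append the new user message and return the FULL history.
    Resending the whole history is what lets the model keep context
    across a 'continue' request."""
    history.append({"role": "user", "content": user_msg})
    return list(history)

history = [{"role": "system", "content": "You are a helpful assistant."}]

# First request: ask for a long answer.
request = build_request(history, "Write a long comparison table.")

# The reply is cut off at the output limit; store the partial answer anyway.
history.append({"role": "assistant", "content": "| Model | Limit | ... (cut off)"})

# A 'continue' only works if the partial answer travels with the request.
request = build_request(history, "Continue exactly where you stopped.")
```

If the interface drops or truncates that history instead of resending it, the model has nothing to continue from, which matches the behavior described above.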