I know other folks have struggled with ChatGPT's more aggressive abbreviation. I've found a solution that seems to work for me. I think the problem is that the larger context windows have people feeding in much bigger inputs, while replies are still capped by the roughly 4k-token output window.
If you tell the model in your prompt to be aware of its output window and, rather than abbreviating code, to stop at a continuation marker, it will print the code in full, and you can ask it to continue if it runs out of output tokens.
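As a sketch, a prompt instruction along these lines has worked for me (the exact wording and marker are just examples, not anything special the model recognizes):

```
When writing code, never abbreviate, summarize, or omit sections.
If you are about to run out of output tokens, stop at a natural
break point and end your reply with the marker [CONTINUED].
When I reply "continue", resume exactly where you left off.
```

On the follow-up turn, just sending "continue" is usually enough, since the truncated reply is still in context.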
I've seen a couple of threads on this that didn't have a meaningful resolution, so hopefully this is helpful for others.