GPT-4o code outputs seem to be getting cut off more often

This has been happening more often since a few days ago, and I am sure it is not a context-limit issue.

Take a look at this chat, for example: 671cfca9-0f98-800f-8fa8-66bc1e0d8a0f (warning: very long). It's a chat of mine that I made public; just add the share URL prefix before the ID.

Go to the very bottom, and scroll up until you start seeing a bunch of code outputs that are truncated.

As you can see, to work around this I had to prompt it to generate smaller code samples. But that is quite an annoying workaround, and it has the unintended side effect of exhausting my paid GPT-4o quota. So I want to know whether anyone else is experiencing this lately.

Again, this is not a context-length issue. The affected responses are short, so no continue button appears.

I have the feeling that this is intentional. Before o1-preview, and now o1, it was never a problem to get long and complex code out of ChatGPT. I could get good results with over 1,000 lines of code from 4o, and it all worked well. Now with 4o I only get around 150-200 lines of code no matter what prompting techniques I try, yet with o1, long and complex code outputs work without problems. It seems to me that OpenAI wants to push users toward the $200 Pro plan. Sad to see. If this stays the same in the short term, I will switch to other LLMs, although I really like working with ChatGPT and have paid for it from the beginning.