Code blocks are split between code and plain text in replies

Yesterday ChatGPT suddenly started breaking up code blocks, so that half of the reply is now in a code window and the other half is plain text without code formatting. I’ve tried custom GPTs, different prompts and new chat windows, but they all behave the same now.

I’m getting tired of this nonsense. You can work well with ChatGPT for a few days and then they’ll mess something up again, rendering the service absolutely useless. Instead of being able to do work, I’m now spending half a day trying to fix the responses.

16 Likes

The same just started happening to me today. So frustrating!

3 Likes

Did you report this bug on Discord?

Yup, that’s annoying.

Try prompting the model to use a single codebox for the complete code in the response.
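For example, something like: “Put the entire program in one single code block; do not split it up or mix code with plain text.” The exact wording is just a suggestion.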

Lucky for me I haven’t had that issue for almost a year now.

Good luck and keep us informed if it works!

3 Likes

I have used conversational GPT to generate tens of thousands of lines of code at this point. Within the last week, generation has completely broken.
I am seeing the same results: it generates the code, but the formatting ends up breaking. The output is often syntactically fine, yet the rendering in the client is completely broken.
I have also started to notice that the conversation title in the left nav is leaking other users’ content (it is not mine, but it is very specific).

I have not been able to find any changelog indicating what has changed to cause this poor performance.

4 Likes

Same for me (and super slow!). A workaround that seems to be working for me: I copy the whole mess, open a new 3.5 chat, paste it in, and ask it to repair and refactor the code into a code block. So far, it’s working perfectly and it’s fast. Sample reply from 3.5: “I’ve fixed the formatting and added missing quotation marks, semicolons, and closing braces where necessary. The code is now correctly formatted and should work as intended.”

3 Likes

I have been having a very similar issue with table formatting as well: after seven or so rows, the tables just break out of nowhere. All the data is correct and is generated just fine, but the output falls out of the table format.

2 Likes

I’ve also been getting this since yesterday; very frustrating.

2 Likes

I’ve tried instructing it to provide only code, with no plain unformatted text, inside a code window/box, i.e. in the <pre> element only and not in a paragraph <p> (since plain-text replies are rendered in <p> tags and code in <pre> tags).

No matter what I try, it randomly decides to stop responding inside a code box and starts spitting out unformatted code. As others have mentioned, there’s a workaround: copy/paste the code and text into GPT-3.5 and let it fix it. But it’s still a pain in the ass, and strange that this happens across various GPT versions. When I select a custom GPT with the July 20 model update, I expect a vanilla July 20 model that isn’t affected by the continuous tinkering they do on the latest model.
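In case it helps anyone stitching replies back together by hand, here is a rough Python sketch of that idea: it reads a saved copy of the conversation page and collects the text from both the <pre> code blocks and any <p> paragraphs that look like code. The tag names simply follow the <p>/<pre> split described above, and the file name and the looks_like_code heuristic are placeholders, so treat it as a starting point rather than a finished script.

```python
# Rough sketch: pull code back out of a saved ChatGPT conversation page whose
# replies were split between <pre> code blocks and plain-text <p> paragraphs.
# Assumes code renders in <pre> and plain text in <p>, as described above;
# adjust the tags and the heuristic if the actual page markup differs.
from bs4 import BeautifulSoup


def looks_like_code(text: str) -> bool:
    # Very naive "this paragraph is actually code" check; tune it for your language.
    markers = ("def ", "import ", "return ", "{", "};", " = ")
    return any(marker in text for marker in markers)


def extract_code(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    parts = []
    for tag in soup.find_all(["pre", "p"]):
        text = tag.get_text()
        if tag.name == "pre":
            parts.append(text)  # a real code block
        elif looks_like_code(text):
            parts.append(text)  # unformatted code that escaped the block
    return "\n".join(parts)


if __name__ == "__main__":
    # "conversation.html" is a placeholder for a saved copy of the shared chat page.
    with open("conversation.html", encoding="utf-8") as f:
        print(extract_code(f.read()))
```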

3 Likes

Can someone with the code block breaking problem provide a way to reproduce the problem?
Not just a demonstration of the error, but steps to reproduce it consistently.

Also note whether you are a Plus user, which model you are using, and whether you are using a custom GPT (and if so, which one).

Yes, just have it output a longer code block: even if you tell it not to, it will split it up into several code blocks labelled as different languages, with normally formatted text in between. I’m on Plus and using a custom GPT, although I’ve also tried other GPTs from the plugins and they were doing the same thing.

4 Likes

It doesn’t work. I’ve tried all different ways. The code blocks are still split, and it’s even labelling them randomly: some as Python, some as SQL, etc.

3 Likes

It doesn’t seem to break on specific characters like it did a couple of months ago. It’s totally random: some responses are correctly formatted in a code block and others aren’t. I’ve tried various GPT-4 models through the custom GPT creator and they all suffer from this, but 3.5 seems to be fine.

1 Like

It seems to me that what’s happening is GPT is being cut off mid-reply and then reattempting to finish what it started.

This used to be pretty common back in the day, when we had a much smaller token limit and had to say “continue” (or eventually click a button) for it to pick up where it left off. The results look exactly the same.

I’m going to chalk this up to some network issues on the back end, or, even worse, a poorly implemented “optimization” feature. It wouldn’t be the first time.

1 Like

It’s become absolutely unusable for me at this point. I’ve tried running the mess through 3.5 as well, although sometimes that will change the code and introduce errors by renaming variables, etc.

3 Likes

3.5 is pretty bad for high-quality code, but for simple dummy code it works all right.

1 Like

I am writing to echo the concerns raised by several users in recent forum posts regarding the GPT-4 code generation feature. Like others, I have been experiencing the following issues that seem to have surfaced in the last few days:

  1. Reduced Speed: There has been a noticeable slowdown in the code generation response times. The lag is quite significant compared to the usual performance standards I’ve grown accustomed to.
  2. Fragmented Output: The code output is often broken into multiple blocks, which seems to disrupt the continuity of the code. This fragmentation complicates the process of integrating the code into my projects.
  3. Inconsistent Programming Languages: Even though my queries are specifically for Python code, the responses occasionally include sections marked as CSS or Mathematica. This inconsistency is puzzling and disrupts my workflow.

These issues are impacting my productivity, and I wanted to report that I am encountering the same problems as those mentioned by other Plus users.

4 Likes

Can someone for the love of life PLEASE just send a conversation example that we can view?

1 Like

This broken code-block formatting issue is really annoying, especially when using GPT-4… With Bard there is no problem, but the ChatGPT UI dev team is not fixing this critical issue. Why?

1 Like

Here is an example: chat.openai.com/share/b828e5ed-9e22-42f7-85c2-c5463a8f1f6c

3 Likes