How to deal with this annoying ever-present bold text in ChatGPT?

So, has anyone found a way to make ChatGPT stop using bold text everywhere, without having to explicitly tell it to in every message?

I've failed to achieve this via custom instructions and memory. I tried both positive and negative rules, i.e. specifying where I do want bold text (excluding all other cases) and where I don't want it at all. Nothing has worked so far. The first 5 or 6 exchanges go fine, but then it starts bolding everything like a lunatic. I also always check updated rules in fresh chats, so it can't be that ChatGPT is still using a previous version of the custom instructions (as far as I understand, changes to custom instructions only apply to new chats).
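
For what it's worth, if you're pulling responses through the API rather than the web UI, you can at least strip the bold markers after the fact instead of fighting the model. This is just my own rough workaround, not something anyone here has confirmed; a minimal Python sketch, assuming the output uses standard Markdown `**`/`__` bold markers:

```python
import re


def strip_bold(text: str) -> str:
    """Remove Markdown bold markers (**...** and __...__), keeping the inner text."""
    # Drop **bold** markers but keep the wrapped text (non-greedy, spans newlines)
    text = re.sub(r"\*\*(.+?)\*\*", r"\1", text, flags=re.DOTALL)
    # Same for the __bold__ variant
    text = re.sub(r"__(.+?)__", r"\1", text, flags=re.DOTALL)
    return text


if __name__ == "__main__":
    sample = "Here is **an important point** and __another one__ in plain prose."
    print(strip_bold(sample))
    # -> Here is an important point and another one in plain prose.
```

Obviously that doesn't help inside the ChatGPT interface itself, which is where the real problem is.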

This mess isn't funny, and it's starting to irritate me after two weeks of persistence since the update made on January 29.

What have you tried, people? Any success?

8 Likes

Hello,

This thread echoes your feedback on the excess of bold text. Many users report significant performance degradation, and I find it reasonable to associate it with the excessive bolding and italicizing. When an LLM is instructed to bold or italicize parts of every response (an easy, arbitrary specification to satisfy), then for the same amount of compute its overall response quality will inevitably suffer.

I’ve personally stopped using ChatGPT-4o entirely and cancelled my Plus subscription as a result. It has been over two weeks since this update, and despite mounting negative feedback on both the OpenAI forums and external platforms like Reddit, there has been no official response or action on OpenAI’s part to fix the performance degradation. Furthermore, some people report that critical threads are being shadowbanned: they are not displayed in the “Latest topics” section of the forum for visitors.

I would like to take this opportunity to re-emphasize that OpenAI must prioritize user agency over enforcing its own vision of what ChatGPT should be like; otherwise it risks being eclipsed or replaced. An LLM that can follow specific instructions for only two messages before defaulting to a reductive, “lazy” response is a major usability problem.

3 Likes

Yeah, I’m tracking that thread (and another one as well), and I’ve already read it from top to bottom.

On the other hand, rants and complaints, no matter how loud, don’t seem to accomplish anything around here.

I’d still expect someone to have come up with some solution by now. It can’t be THAT bad, can it?

3 Likes