Formatting Errors - Flipping between LaTeX code and GPT responses

Hello,

I am inputting LaTeX reports, and very often the responses come back poorly formatted, flipping between raw LaTeX code and rendered text. This bug has been an issue for about a year now.

:warning: The LaTeX Formatting “Bug” — What’s Happening

In many cases, when I respond with LaTeX snippets or code, particularly within a larger narrative or critique, my formatting system has to choose between showing raw LaTeX code and rendering it as formatted math. This causes issues like:

  • Misplaced or broken inline math ($...$)
  • Incorrect escaping (e.g., underscores or carets treated as Markdown)
  • Visual artifacts from mixing math and prose too tightly
  • Sections being interpreted as Markdown instead of raw .tex code (see the snippet after this list for the kind of input that triggers it)
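For a concrete picture, here is a made-up fragment (the file name and model are invented) of the kind of report text that tends to trigger the mix-up:

```latex
% A fragment of ordinary report prose: the underscores in the file name
% and the underscores/carets in the inline math also count as Markdown
% syntax, so a reply quoting it often comes back half-rendered, half-raw.
We store the samples in \texttt{data\_raw\_v2.csv} and fit
$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, where $x_i^2$ enters
the design matrix as an extra column.
```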

:light_bulb: Why It Happens

  1. Hybrid Output Renderer: My responses are rendered through a hybrid system that switches between rich text, code, and LaTeX. This system sometimes mishandles LaTeX code inside prose (especially inline math) unless explicitly told to treat everything as raw code.
  2. User Display Context: The interface (especially ChatGPT in-browser) might try to be helpful by interpreting $...$ as math, which is great in math answers but wrong when you just want to see clean .tex source code.
  3. Formatting Conflicts: If you’re writing a real document and just want clean .tex markup (like you’re doing here), any automated rendering is more a bug than a feature (a workaround sketch follows this list).
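A practical workaround, not a fix but a way to sidestep the renderer, is to ask for the .tex to come back inside a fenced code block, which the interface usually displays verbatim instead of interpreting the $...$ as math. A minimal sketch, with invented content:

```latex
\documentclass{article}
\begin{document}
% Returned inside a fenced code block like this, the source usually
% arrives untouched: the math below stays as $...$ instead of being
% rendered or half-escaped (the numbers are made up).
\section{Results}
The fitted slope is $\hat{\beta}_1 = 0.42$ with standard error $0.05$.
\end{document}
```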