New feature: Properly shown mathematical equations

Hello @ruby_coder!

Thanks for indulging me. I will note, however, that you were the first one to veer from operational computer science into philosophy when you made the claim that ChatGPT does not understand anything.

You keep saying that it is “only” predicting the next token. I could similarly say that human speech is “only” sound waves. What happens under the covers to achieve the sound wave or the prediction is what we are discussing.

I want to quote OpenAI’s Chief Scientist, Ilya Sutskever, on the relationship between prediction and understanding:

SPENCER: So here with GPT-3, we’re trying to get it to predict the next word. What’s the connection between predicting and understanding?

ILYA: There is an intuitive argument that can be made, that if you have some system, which can guess what comes next in text (or maybe in some other modality) really, really well, then in order to do so, it must have a real degree of understanding. Here is an example which I think is convincing here. So let’s suppose that you have consumed a mystery novel, and you are at the last page of the novel. And somewhere on the last page, there is a sentence where the detective is about to announce the identity of whoever committed the crime. And then there is this one word, which is the name of whoever did it. At that point, the system will make a guess of the next word. If the system is really, really good, it will have a good guess about that name; it might narrow it down to three choices or two choices. And if the neural network has paid really close attention (well, certainly that’s how it works for people), if you pay really close attention in the mystery novel, and you think about it a lot, you can guess who did it at the end. So this suggests that if a neural network could do a really good job of predicting the next word, including this word, then it would suggest that it’s understood something very significant about the novel. Like, you cannot guess what the detective will say at the end of the book without really going deep into the meaning of the novel. And this is the link between prediction and understanding, or at least this is an intuitive link between those two.

SPENCER: Right. So the better you understand the whole mystery novel, the more ability you have to predict the next word. So essentially understanding and prediction are sort of two sides of the same coin.

ILYA: That’s right, with one important caveat, or rather, there is one note here. Understanding is a bit of a nebulous concept like, what does it mean if the system understands one concept or doesn’t? It’s a bit hard to answer that question. But it is very easy to measure whether a neural network correctly guesses the next word in some large corpus of text. So while you have this nebulous concept that you care about, you don’t have necessarily a direct handle on it; you have a very direct handle on this other concept of how well is your neural network predicting text and you can do things to improve this metric.

SPENCER: So it sort of operationalizes understanding in a way that we can actually optimize for.

ILYA: Precisely.

Now you have made a few claims which I would like you to back up with quotes or references. If they are true, I would like to, um, understand better:

  1. Neuroscientists have a strong understanding of understanding, and can measure it in ways more direct than by administering quizzes/puzzles.

  2. Understanding is known to be directly linked to consciousness and is not orthogonal to it. (Since neuroscientists don’t really understand consciousness, I’m deeply skeptical that they have a model which ties the equally nebulous “understanding” to it.)

  3. Neuroscientists have stated that the neurons in human brains can house “understanding”, but the neurons in an LLM (no matter how large) cannot. Why not?


In terms of code I’ve written… I ran a fine-tuning job today with only 30 examples, and it already outperformed my prompt. I was very impressed.

No. The person who veered into this was the person who said:

ChatGPT also understands LaTeX.

So I simply corrected this error. A technically correct statement would be:

ChatGPT also performs LaTeX completions.


Then, magically, a “new poster” registers and continues the debate in a similar voice, which set off the “sock puppet” alarm bell in my head.

We have one poster with a hidden profile and then a “one time poster” writing with the same voice continuing the debate.

So, I will end where I started.

ChatGPT does not “understand” anything. ChatGPT is not “conscious” nor “aware” of anything. This is not a philosophical statement. It’s a statement of software engineering fact echoed by just about everyone who understands how generative AI works.

You take care @prescod

You are certainly entitled to your opinions on generative AI regardless of my very different worldview. I am here to help folks with the OpenAI API and to help minimize the generative AI misinformation (and damage to the community) which has exploded since ChatGPT was released.



I find it intriguing to consider on whose behalf I might be sock-puppeting. Perhaps ChatGPT itself! :slight_smile:

I am not at all interested in having a fight in the OpenAI forums, especially not with a prolific poster like you. My only point is that what @EricGT said is consonant with what the Chief Scientist of OpenAI says, and I don’t think that he should be labelled a peddler of misinformation. Well-informed people can disagree on the definition of vague words like “learning” and “understanding” as well as on the underlying science.

There are some statements about GPT models which can be backed up with empirical evidence, such as “GPT models do not have access to data added to the Web recently.” And there are others, which are essentially semantic/philosophical/vague, like “they do not understand.” Statements in the latter category are essentially just opinions and should not be called out as being “right” or “wrong.”


It was very obvious (to me) that @ruby_coder was speaking on a conscious level.

In terms of the quote from Ilya: it’s not fair to create a program, show that it works, and say “look, it understands!” It’s such a far-reaching argument that it just sounds ridiculous. In terms of a mystery novel, it makes complete sense that the model can group all the semantics in their respective spaces and make a logical determination based on all the evidence in the book.

I bet that if I were to embed the whole mystery novel, cluster the information, and then run an algorithm on each person and their activities, I could also predict who the murderer was without some fancy neural network. Using this quote does not help your case at all.

Of course, I completely agree that GPT has some degree of understanding, just like when I set a variable, the program now “understands” it. It is just not any sort of conscious understanding, which is what @ruby_coder is referring to. That’s how I interpreted it.

There is a huge issue of people giving GPT human-like qualities, and it’s very worrying. People need to remember that it is not a human. This will be critical in the future when regulations begin and all attention is on AI.


Back to the original topic, it would be great if the ChatGPT web interface could render LaTeX. :pray: :slightly_smiling_face:

I asked ChatGPT to use \(...\) in place of $...$ as the LaTeX inline math mode delimiters,

and to insert a line break after the opening $$ and before the closing $$ for the LaTeX display math mode delimiters.

Then my mobile Chrome browser renders the math expressions correctly in the chat response window.

Having ChatGPT modify the delimiters as follows renders correctly. For display math mode, insert a line break after the opening $$ or \[ and before the closing $$ or \].

For math symbols within a line of text, use \( and \) in place of $ for inline math mode:

\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2


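The delimiter rewriting described above can also be done mechanically after the fact. Below is a minimal sketch of such a post-processing helper (a hypothetical function, not a ChatGPT feature; it assumes `$` never appears unescaped outside math):

```javascript
// Rewrite default LaTeX delimiters into the forms mobile Chrome
// renders: $...$ -> \(...\) for inline math, plus a line break after
// the opening $$ and before the closing $$ for display math.
function fixDelimiters(text) {
  // Handle display math first so the inline rule never touches $$ pairs.
  const display = text.replace(/\$\$([\s\S]*?)\$\$/g, '$$$$\n$1\n$$$$');
  // Then rewrite single-$ inline math spans.
  return display.replace(/\$([^$\n]+?)\$/g, '\\($1\\)');
}
```

For example, `fixDelimiters('$E=mc^2$')` yields `\(E=mc^2\)`.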


Here is a version I used after seeing your image.


Display the Higgs equation, use LaTeX but then convert LaTeX for display.

ChatGPT completion

The Higgs equation is:


where $H$ is the Higgs field, $D_\mu$ is the covariant derivative, and $V(H)$ is the Higgs potential.

As an image only


Using Chrome

  1. Mouse over equation
  2. Right click to bring up menu
  3. Select Inspect

It reveals that KaTeX is being used.


Simplifying the prompt

Display the Higgs equation using Katex.


Still can’t get ChatGPT to generate the math in the descriptions, but it is clearly a significant step. The markup for math in the descriptions appears to be MathJax.

You mentioned mouse and right click, so it seems you are on a desktop platform, but I am on a Samsung Tab A7 Lite on mobile Chrome. I ran several experiments having ChatGPT paste my input into its response area and found that my mobile Chrome browser can only identify specific LaTeX delimiters. If you are on desktop Chrome, there are LaTeX extensions on the Chrome Web Store that may work with ChatGPT’s default LaTeX delimiters like $…$ and $$…$$.

Made some more progress. Explaining in many different ways that it should not use MathJax in the description was not getting consistent results; however, on rare occasions I did see a glimmer of hope using the word quasiquotation.

What works better is using few-shot examples:


As a theoretical physicist, show Higgs equation using Katex. In description display math using Unicode math symbols.

D\mu Dμ
V(\phi) V(ϕ)

The left column is MathJax without the $ bookends, e.g. $D\mu$, and the right is plain Unicode. These pairs were used to generate the Unicode from the bookend-stripped MathJax.
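The transformation those pairs demonstrate, stripping the $ bookends and mapping commands to Unicode, can be sketched as follows (the symbol table is a small illustrative subset I chose, not a complete converter):

```javascript
// Map a few common LaTeX commands to their Unicode equivalents and
// strip the $ bookends, mirroring the few-shot pairs above.
const UNICODE_MAP = { '\\mu': 'μ', '\\phi': 'ϕ', '\\partial': '∂' };

function toUnicode(mathjax) {
  let out = mathjax.replace(/^\$|\$$/g, ''); // drop the $ bookends
  for (const [cmd, sym] of Object.entries(UNICODE_MAP)) {
    out = out.split(cmd).join(sym); // literal, not regex, replacement
  }
  return out;
}
```

With this, `toUnicode('$D\\mu$')` produces `Dμ`, matching the first few-shot pair above.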

ChatGPT Completion

Sure, here’s the Higgs equation written in LaTeX using Katex:
In Unicode math symbols, this equation would be written as:

(∂μ + igAμ)(∂μ - igAμ)ϕ + V(ϕ)ϕ = 0


Dμ is the covariant derivative
Aμ is the gauge field
g is the gauge coupling constant
ϕ is the Higgs field
V(ϕ) is the Higgs potential.

As image



I use a Samsung Tab A7 Lite, Android 13, mobile Chrome.


Hi there, I’m also searching for a solution. Thanks to minchien9’s suggestion, I’m able to get good rendering by asking ChatGPT to output all formulas as MathJax, along with a Chrome extension such as ‘TeX All the Things’ that renders the inline expressions well.

Thank you so much!


One thing that we should be observant of is that ChatGPT does seem to learn over time, or the devs are just fast at doing updates. So if many of us are using the same phrasing in our prompts, hopefully ChatGPT will soon pick this up and we can either drop the needed phrase or at least get the completions with math rendered as math without further prompting. If any of you see this happening, note it here for others watching this topic so we know.

With TeX All the Things installed. (GitHub)

Display the Dirac field quantization in KaTeX display math mode, and use inline math delimiters \( ... \) for any inline math expressions in the text:



Not fully rendered but much better.

Click regenerate response.

Seems MathJax is used with ChatGPT. Using a mouse and right-clicking a math expression shows a MathJax context menu. This same menu appears for a MathJax expression in a Discourse forum.



It seems that the website (or Chrome, IDK) already has some support for KaTeX, which is why minchien9’s method works. However, I couldn’t figure out a prompt to make it output inline delimiters as \(...\) without telling it a second time. If that’s possible, then problem solved.

I figured out a workaround: since $...$ is also supported by KaTeX (just not enabled by default), I can side-load the KaTeX CDN and a script that forces KaTeX rendering whenever I click my mouse.


The settings: (I used Requestly to side-load KaTeX into the site.)

I’ll need to ‘click’ to force rendering, since I don’t know how to detect that ChatGPT has finished replying in the event listener.
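For reference, a minimal sketch of such a click-to-render hook using KaTeX’s auto-render extension (this assumes katex.min.js and auto-render.min.js have been side-loaded; $...$ has to be listed explicitly since auto-render leaves it off by default, and $$ must come before $ in the list):

```javascript
// Delimiters for KaTeX auto-render; $$ must precede $ so display math
// is matched first.
const KATEX_DELIMITERS = [
  { left: '$$', right: '$$', display: true },
  { left: '\\(', right: '\\)', display: false },
  { left: '$', right: '$', display: false },
];

// Guarded so the snippet is inert outside a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('click', () => {
    renderMathInElement(document.body, { delimiters: KATEX_DELIMITERS });
  });
}
```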

However, sometimes you’ll see bad behaviors like this:

That’s because the math expression has already been processed by the site’s markdown renderer before KaTeX kicks in, leaving an <em></em> in between the $...$.
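If you can get at the message HTML before KaTeX runs, one way to undo that damage is to turn the stray tags back into the underscores the markdown pass consumed. This is only a sketch, and it assumes the <em></em> artifact comes from paired underscores inside two $...$ spans being mistaken for italics:

```javascript
// Restore underscores that the site's markdown pass turned into <em>
// tags, so KaTeX can parse the $...$ spans again. Assumes no
// legitimate <em> tags occur inside the math-bearing text.
function unwrapEm(html) {
  return html.replace(/<em>/g, '_').replace(/<\/em>/g, '_');
}
```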

PS: @EricGT, I think that MathJax menu was due to TeX All the Things. If you turn it off, the menu goes away.

For those using the Chrome extension TeX All the Things

The first thing to note is that if ChatGPT does not generate correct LaTeX, the extension will fail to pass it off to MathJax. ChatGPT may generate parts of the LaTeX correctly and fail on other parts, so read the LaTeX carefully to check whether the generated LaTeX has a bug.

The simplest prompt I have found that works is

Display the following using LaTeX

<Your equation or math question or ...>

Let ChatGPT finish the completion.

If you want to see the completion as LaTeX (plain source) without being passed to MathJax then

  1. Right click on any LaTeX to bring up the context menu.
  2. Math Settings → Math Renderer → Plain Source


If you want to see the completion as LaTeX (rendered as Math) being passed to MathJax then

  1. Right click on any LaTeX to bring up the context menu.
  2. Math Settings → Math Renderer → HTML-CSS




Hey there, for those of you looking for a more minimalist solution that only renders LaTeX in ChatGPT:
I made this simple extension that does exactly that. Here is a video demo:

And here is a link to download the extension (not available on the Chrome Web Store yet):

Clear skies