New feature: Properly rendered mathematical equations

As a student and a researcher, I see how powerful ChatGPT is, but it would be much more powerful if it could display mathematical equations to me. It can show me complex equations like the Higgs equation, but only in text format. It would be great if there were a special “formula” block for the model to use, like the one for code.

4 Likes

Solutions

Note: These are based on the replies below. Thanks go to those who responded here; it was a team effort.
You can use either option or both.

  • Modify the prompt

Many find the approach noted by @minchien9 to be the best answer: modify the prompt to include the following (a full copy-paste version appears after this list).

Could you please use the specific LaTeX math mode delimiters for your response?

The LaTeX math-mode delimiters are as follows:

inline math mode : `\(` and `\)`
display math mode: insert a linebreak after the opening `$$` or `\[` and before the closing `$$` or `\]`

(ref 1) (ref 2)

  • Use a Chrome extension

As noted by @away, use the Chrome extension TeX All the Things (ref)
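
For convenience, here is one way to combine both delimiter rules into a single prompt. This is just a sketch based on @minchien9’s instructions; the exact wording is flexible:

Could you please use the specific LaTeX math mode delimiters for your response? Use \( and \) for inline math mode. For display math mode use $$ … $$ or \[ … \], with a linebreak after the opening delimiter and before the closing delimiter.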



Above are the solutions that work

The remainder of this reply is the original reply to the first post.


Have you tried asking ChatGPT to generate LaTeX, MathML, or such?

Do not limit yourself to just text when using ChatGPT.


Prompt

Show Higgs equation in LaTeX and MathML

ChatGPT Completion

The Higgs equation can be expressed using LaTeX and MathML as follows:

LaTeX:

The Higgs equation in LaTeX is:

\begin{equation}
\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2
\end{equation}

where \mathcal{L}_{\mathrm{Higgs}} is the Higgs Lagrangian, D_\mu is the covariant derivative, \phi is the Higgs field, \mu^2 is the mass term, and \lambda is the self-interaction term.

MathML:

The Higgs equation in MathML is:

<math>
  <mrow>
    <mi mathvariant="script">L</mi>
    <mi mathvariant="normal">Higgs</mi>
    <mo>=</mo>
    <mo stretchy="false">(</mo>
    <msup>
      <mo mathvariant="normal">(</mo>
      <mi>D</mi>
      <mi>&#x03BC;</mi>
      <mi>&#x03C6;</mi>
      <mo mathvariant="normal">)</mo>
      <mo>&#x2020;</mo>
    </msup>
    <mo mathvariant="normal">(</mo>
    <msup>
      <mo mathvariant="normal">(</mo>
      <mi>D</mi>
      <mi>&#x03BC;</mi>
      <mi>&#x03C6;</mi>
      <mo mathvariant="normal">)</mo>
      <mo>&#x2020;</mo>
    </msup>
    <mo stretchy="false">)</mo>
    <mo>&#x2212;</mo>
    <msup>
      <mi>&#x03BC;</mi>
      <mn>2</mn>
    </msup>
    <mo mathvariant="normal">(</mo>
    <msup>
      <mi>&#x03C6;</mi>
      <mo>&#x2020;</mo>
    </msup>
    <mi>&#x03C6;</mi>
    <mo mathvariant="normal">)</mo>
    <mo>&#x2212;</mo>
    <mi>&#x03BB;</mi>
    <mo mathvariant="normal">(</mo>
    <msup>
      <mo stretchy="false">(</mo>
      <msup>
        <mi>&#x03C6;</mi>
        <mo>&#x2020;</mo>
      </msup>
      <mi>&#x03C6;</mi>
      <mo stretchy="false">)</mo>
    </msup>
    <mo>&#x2022;</mo>
    <msup>
      <mo stretchy="false">(</mo>
      <msup>
        <mi>&#x03C6;</mi>
        <mo>&#x2020;</mo>
      </msup>
      <mi>&#x03C6;</mi>
      <mo stretchy="false">)</mo>
    </msup>
    <mo stretchy="false">)</mo>
  </mrow>
</math>

where \mathcal{L}_{\mathrm{Higgs}} is the Higgs Lagrangian, D_\mu is the covariant derivative, \phi is the Higgs field, \mu^2 is the mass term, and \lambda


To view LaTeX as graphics, one possible site: https://quicklatex.com/

[image: the Higgs Lagrangian rendered as graphics by quicklatex.com]

Note: quicklatex.com will even give you a link to view the result as a PNG:
https://quicklatex.com/cache3/f5/ql_9eb6fb9d0725171481a12ba56e86f2f5_l3.png

To view MathML as Unicode, one possible site: MathML Tester

L Higgs = ( ( D μ φ ) † ( ( D μ φ ) † ) − μ 2 ( φ † φ ) − λ ( ( φ † φ ) • ( φ † φ ) )
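
As an aside, the quoted MathML is not quite right, which is why the subscript is lost in the Unicode rendering above: the Higgs label should sit inside an <msub>, and several of the <msup> elements have more than the two children MathML allows. A minimal corrected fragment for just the left-hand side, as a sketch:

<math>
  <msub>
    <mi mathvariant="script">L</mi>
    <mi mathvariant="normal">Higgs</mi>
  </msub>
</math>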


Discourse, the software that runs this site, has a math plugin that can render MathJax, and ChatGPT can generate MathJax.

Prompt

Show Higgs equation in MathJax

ChatGPT Completion

The Higgs equation in MathJax is:

$$\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2$$

where \mathcal{L}_{\mathrm{Higgs}} is the Higgs Lagrangian, D_\mu is the covariant derivative, \phi is the Higgs field, \mu^2 is the mass term, and \lambda is the self-interaction term.

To get the math expression to render in Discourse, you need to put the $$ delimiters on separate lines.

$$
\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2
$$

In trying that here it did not work, so I guess this site does not have the plugin enabled. On another site which does have MathJax enabled it shows as:

[image: the equation rendered by the math plugin]

So you can have your cake and eat it too with a little effort. :slightly_smiling_face:


ChatGPT also understands LaTeX.

Prompt

Explain

\begin{equation}
\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2
\end{equation}

ChatGPT Completion

The equation represents the Lagrangian density of the Higgs field in the context of the Standard Model of particle physics. The Higgs field is a scalar field that is responsible for the generation of mass for elementary particles such as the W and Z bosons and the fermions.

The Lagrangian density consists of three terms:

The first term represents the kinetic energy of the Higgs field. It is given by (D_\mu \phi)^\dagger (D^\mu \phi), where \phi is the Higgs field and D_\mu is the covariant derivative. The covariant derivative is used to ensure that the Lagrangian is invariant under local gauge transformations.

The second term represents the potential energy of the Higgs field. It is given by -\mu^2 \phi^\dagger \phi, where \mu is a constant parameter known as the Higgs mass parameter.

The third term represents the self-interaction of the Higgs field. It is given by -\lambda (\phi^\dagger \phi)^2, where \lambda is a positive constant parameter known as the Higgs quartic coupling.

The equation describes the behavior of the Higgs field in the presence of other particles and fields in the Standard Model. The dynamics of the Higgs field are determined by the principle of least action, which leads to the equations of motion for the field. The Higgs field is a scalar field, which means that it has a single degree of freedom, and it interacts with other particles in the Standard Model through the exchange of gauge bosons.

So just as programmers use source code or JSON data in parts of the textual stream you can also use LaTeX in parts of the textual stream for both prompts and requesting in completions. :slightly_smiling_face:

Note: When the output from ChatGPT is pasted into a Discourse post with the Math plugin active this is how it is rendered. Notice that the MathJax is rendered as math and not MathJax.

[image: the ChatGPT output rendered as math by the Discourse Math plugin]

If @admin could activate the Discourse math plugin many of us here would appreciate it. :slightly_smiling_face:

6 Likes

Hi Eric,

Sorry, your statement above (as well as some others in different topics regarding ChatGPT and generative AI) is not technically accurate.

ChatGPT does not “understand” anything.

I am confident you know that, but to be clear for others who may not know: ChatGPT predicts the next sequence of text based on the current sequence of text using probability and its massive large language model. There is no “understanding” in the ChatGPT process at all; it’s simply “auto-completing” based on its LLM, fine-tuning, etc.

I’ve been following some of your posts and have been hesitant to correct you given your passion for OpenAI and excellent experience in IT and computing, but a statement like “ChatGPT understands” anything is simply technically inaccurate.

Sorry about that, but I am sure you agree that we should be technically accurate in how we describe how these generative AI models work, because many people will misinterpret ChatGPT, as we see on a daily basis here and elsewhere.

Generative AIs do not “understand” anything. They simply auto-complete based on the data in their underlying models. ChatGPT has no “knowledge” nor “awareness” and, of course, no “understanding” of any of the text it generates.

1 Like

Thanks, yes I know this.

@ruby_coder
Would you prefer this wording?

ChatGPT’s training set included LaTeX, so ChatGPT can do transformations reminiscent of LaTeX.

Thank you for telling me this. Now I know a bit more about how ChatGPT processes data, so I can make better prompts; essential knowledge for prompt engineering.

You are welcome @skorakora

Please keep in mind that it is close to impossible to engineer any prompt for a generative AI which will result in a technically accurate completion on a subject requiring technical accuracy, except in the most trivial cases.

Many users mistake generative AI for an expert-system AI because of how confidently this new generation of generative AIs outputs natural human language.

It is a mistake for anyone to take any output completion on a technical subject which requires technical accuracy as “fact”. Every completion must be confirmed by a human who has subject domain expertise.

HTH

:slight_smile:

2 Likes

There are certain memes that are present in these early days of LLMs which I think are not particularly helpful.

We judge whether a human understands a topic based on whether they can answer questions about the topic, especially novel questions, which cannot be pulled from a database or “rote memorized.”

I can type Python or Rust code into ChatGPT, ask it what the code does, ask to translate it into English, to translate between the languages and to modify it to have different properties.

From an operational perspective, this is understanding. ChatGPT could probably get a job as a junior programmer on the basis of its behaviours. (not succeed at the job, but pass the interview)

Now it’s easy to confuse ChatGPT. The ways it gets confused are quite different from the ways humans get confused (usually, but not always). You could claim that this proves it has “no understanding”, but would you apply that same label to a human who got confused? If you get a question wrong on a calculus test, do you then have “no understanding”?

Yann LeCun, for example, says they have “superficial” understanding. Not “no” understanding. I think it is an inaccurate oversimplification to say that LLMs have “no” understanding. They exhibit SOME understanding constantly. If you claim that’s just the “illusion” of understanding, then I’ll have to ask you what operational and measurable definition of “understanding” you are using. How does one distinguish between the “illusion” of understanding and “actual” understanding?

3 Likes

What you have described is not “understanding”, I am sorry to totally disagree with you @prescod.

ChatGPT is simply predicting text as a powerful auto-completion engine based on its LLM. Predicting text based on weights in a deep NN of an LLM is not “understanding” it is just auto-completion, as instructed and trained.

Neuroscientists have very specific definitions and views on what “understanding” and “awareness” are. Both of these properties, from a neuroscience perspective, are emergent properties of “consciousness”. Consciousness itself is an emergent property of millions of years of biological evolution.

On the other side of the debate are computer scientists who believe (without proof or practical results) that it will someday be computationally possible for AGI (Artificial General Intelligence) to mimic human consciousness, but that is only speculation. Most neuroscientists disagree that computational-consciousness is achievable in any foreseeable future; but many remain open-minded to future advances in computing power and new scientific discoveries far beyond where we are today.

However, what every neuroscientist seems to agree on (as well as the computational scientists) is that generative AI is no more than a powerful auto-completion engine; it therefore has no AGI capability, nor are generative AIs “conscious” or “aware” of anything, and so they have zero “understanding”. OpenAI also has stated this and has asked people not to exaggerate the general capabilities of generative AI.

ChatGPT is not actually “confused” in the human sense. ChatGPT is simply predicting text based on prompts; and when you add enough noise and misinformation in the prompts, ChatGPT will create nonsense. It’s not “confused” like a living being is “confused”, it is simply “garbage-in, garbage-out”. Furthermore, ChatGPT is not “aware” of anything, it’s just trying to output a completion.

Yann is a computer scientist, not a neuroscientist, and so he falls into the same camp as many computer scientists in this decade. Quoting computer scientists who have no formal training and practice in biological neuroscience about what is “awareness” or “understanding” or “consciousness” is not going to fly very far in a debate on topics where the core ideas and concepts are more neuroscientific than computational.

You are entitled to your view @prescod, of course, but that view is a “minority view” held by computer scientists and not held by the vast majority of neuroscientists. This is a site for software developers not philosophers. Have any OpenAI API code to share? That is what we are supposed to be doing here, BTW, developing code.

My formal training is in electrical engineering and applied physics with a focus on computer science, systems engineering, and networking. My view, which I have expressed, is the neuroscientist majority view that “consciousness” is an emergent property of millions of years of evolutionary biology, and therefore “awareness” and “understanding” are emergent properties of that same evolutionary biological process.

Putting all that aside, the harsh truth of the matter is that ChatGPT is just blah, blah, blah predicting the next sequence of text based on its massive underlying LLM. It’s not “superficial understanding”; it simply has no understanding, little more than a typewriter has any understanding of the output when someone types on a keyboard. The typewriter does what it is instructed to do. ChatGPT does what it is instructed to do, which is to predict text and to create completions based on that prediction, little different than the text auto-completion in your favorite text composition app.

HTH

:slight_smile:

1 Like

Hello @ruby_coder!

Thanks for indulging me. I will note, however, that you were the first one to veer from operational computer science into philosophy when you made the claim that ChatGPT does not understand anything.

You keep saying that it is “only” predicting the next token. I could similarly say that human speech is “only” sound waves. What happens under the covers to achieve the sound wave or the prediction is what we are discussing.

I want to quote OpenAI’s Chief Scientist to discuss the relationship between prediction and understanding:

SPENCER: So here with GPT-3, we’re trying to get it to predict the next word. What’s the connection between predicting and understanding?

ILYA: There is an intuitive argument that can be made, that if you have some system, which can guess what comes next in text (or maybe in some other modality) really, really well, then in order to do so, it must have a real degree of understanding. Here is an example which I think is convincing here. So let’s suppose that you have consumed a mystery novel, and you are at the last page of the novel. And somewhere on the last page, there is a sentence where the detective is about to announce the identity of whoever committed the crime. And then there is this one word, which is the name of whoever did it. At that point, the system will make a guess of the next word. If the system is really, really good, it will have a good guess about that name; it might narrow it down to three choices or two choices. And if the neural network has paid really close attention (well, certainly that’s how it works for people), if you pay really close attention in the mystery novel, and you think about it a lot, you can guess who did it at the end. So this suggests that if a neural network could do a really good job of predicting the next word, including this word, then it would suggest that it’s understood something very significant about the novel. Like, you cannot guess what the detective will say at the end of the book without really going deep into the meaning of the novel. And this is the link between prediction and understanding, or at least this is an intuitive link between those two.

SPENCER: Right. So the better you understand the whole mystery novel, the more ability you have to predict the next word. So essentially understanding and prediction are sort of two sides of the same coin.

ILYA: That’s right, with one important caveat, or rather, there is one note here. Understanding is a bit of a nebulous concept like, what does it mean if the system understands one concept or doesn’t? It’s a bit hard to answer that question. But it is very easy to measure whether a neural network correctly guesses the next word in some large corpus of text. So while you have this nebulous concept that you care about, you don’t have necessarily a direct handle on it; you have a very direct handle on this other concept of how well is your neural network predicting text and you can do things to improve this metric.

SPENCER: So it sort of operationalizes understanding in a way that we can actually optimize for.

ILYA: Precisely.

Now you have made a few claims which I would like you to back up with quotes or references. If they are true, I would like to, um, understand better:

  1. Neuroscientists have a strong understanding of understanding, and can measure it in ways more direct than by administering quizzes/puzzles.

  2. Understanding is known to be directly linked to consciousness and is not orthogonal. (since Neuroscientists don’t really understand consciousness, I’m deeply skeptical that they have a model which ties the equally nebulous “understanding” to it)

  3. Neuroscientists have stated that the neurons in human brains can house “understanding”, but the neurons in an LLM (no matter how large) cannot. Why not?

=====

In terms of code I’ve written…I did a fine-tuning today with only 30 examples and it outperformed my prompt already. I was very impressed.
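
For context, that workflow looked roughly like this with the 0.x-era openai Python library; the file name, base model, and training pair below are placeholders, not my actual data:

import openai  # 0.x-era openai-python API

# data.jsonl holds one JSON object per line, e.g. (hypothetical pair):
# {"prompt": "Show the Higgs Lagrangian ->", "completion": " \\mathcal{L}_{\\mathrm{Higgs}} = ..."}

upload = openai.File.create(file=open("data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="curie")
print(job.id)  # poll openai.FineTune.retrieve(job.id) until the job finishes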

1 Like

No. The person who veered into this was the person who said:

ChatGPT also understands LaTeX.

So I simply corrected this error. A technically correct statement would be:

ChatGPT also performs LaTeX completions.

Note:

Then, magically, a “new poster” registers and continues the debate in a similar voice, which caused the “sock puppet” alarm bell to go off in my head.

We have one poster with a hidden profile and then a “one time poster” writing with the same voice continuing the debate.

So, I will end where I started.

ChatGPT does not “understand” anything. ChatGPT is not “conscious” nor “aware” of anything. This is not a philosophical statement. It’s a statement of software engineering fact echoed by just about everyone who understands how generative AI works.

You take care @prescod

You are certainly entitled to your opinions on generative AI regardless of my very different worldview. I am here to help folks with the OpenAI API and to help minimize the generative AI misinformation (and damage to the community) which has exploded since ChatGPT was released.

:slight_smile:

2 Likes

I find it intriguing to consider on whose behalf I might be sock-puppeting. Perhaps ChatGPT itself! :slight_smile:

I am not at all interested in having a fight in the OpenAI forums, especially not with a prolific poster like you. My only point is that what @EricGT said is consonant with what the Chief Scientist of OpenAI says, and I don’t think he should be labelled a peddler of misinformation. Well-informed people can disagree on the definition of vague words like “learning” and “understanding” as well as on the underlying science.

There are some statements about GPT models which can be backed up with empirical evidence, such as “GPT models do not have access to data added to the Web recently.” And there are others, which are essentially semantic/philosophical/vague, like “they do not understand.” Statements in the latter category are essentially just opinions and should not be called out as being “right” or “wrong.”

2 Likes

It was very obvious (to me) that @ruby_coder was speaking about understanding at the conscious level.

In terms of the Chief Scientist quote: it’s not fair to create a program, show that it works, and say “look, it understands!”. It’s such a silly, far-reaching argument (for you) that it just sounds ridiculous. In terms of a mystery novel, it makes complete sense that it can group all the semantics in their respective space and make a logical determination based on all the evidence in the book.

I bet you that if I were to embed the whole mystery novel, cluster the information, and then run an algorithm on each person and their activities, I could also predict who the murderer was without some fancy neural network. Using this quote does not help your case at all.

Of course, I completely agree that GPT has some degree of understanding. Just like when I set a variable, the program now “understands” it. Just not any sort of conscious understanding, which is what @ruby_coder is referring to. That’s how I interpreted it.

There is a huge issue of people giving GPT human-like qualities, and it’s very worrying. People need to remember that it is not a human. This will be critical in the future when regulations begin and all attention is on AI.

3 Likes

Back to the original topic: it would be great if the ChatGPT web interface could render LaTeX. :pray: :slightly_smiling_face:

1 Like

I ask ChatGPT to use \(...\) in place of $...$ for LaTeX inline math mode delimiters,

and to insert a linebreak after the opening $$ and before the closing $$ for LaTeX display math mode delimiters.

Then my mobile Chrome browser can render the math expressions correctly in the chat response window.

Having ChatGPT modify the delimiters as follows renders correctly. Insert a linebreak after the opening $$ or \[ and before the closing $$ or \] for display math mode,

and for math symbols within a line of text, use \( and \) in place of $ for inline math mode.

$$
\begin{equation}
\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2
\end{equation}
$$

or

\[
\begin{equation}
\mathcal{L}_{\mathrm{Higgs}} = (D_\mu \phi)^\dagger (D^\mu \phi) - \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2
\end{equation}
\]
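
The same trick works inline. With the \( and \) delimiters, a sentence such as the following (my own illustrative example) should render with the math inline rather than as literal $ characters:

The mass term \( \mu^2 \phi^\dagger \phi \) and the quartic coupling \( \lambda \) both appear in the Lagrangian.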

2 Likes

Here is a version I used after seeing your image.

Prompt

Display the Higgs equation, use LaTeX but then convert LaTeX for display.

ChatGPT completion

The Higgs equation is:

[image: the rendered Higgs equation]

where H is the Higgs field, D_\mu is the covariant derivative, and V(H) is the Higgs potential.


As an image only

[image: the equation rendered as an image only]


Using Chrome

  1. Mouse over the equation
  2. Right-click to bring up the menu
  3. Select Inspect

It reveals that KaTeX is being used.

[image: the Inspect view showing KaTeX markup]


Simplifying the prompt

Display the Higgs equation using Katex.

[image: the rendered Higgs equation]

Still can’t get ChatGPT to generate the math in the descriptions, but it is clearly a significant step. The markup for math in the descriptions appears to be MathJax.

You mentioned mousing over and right-clicking, so it seems you are on a desktop platform, but I am on a Samsung Tab A7 Lite using mobile Chrome. I have run several experiments having ChatGPT paste my input into its response area, and found that my mobile Chrome browser could only identify specific LaTeX delimiters. If you are on desktop Chrome, there are LaTeX extensions on the Chrome Web Store; maybe they could work with ChatGPT’s default LaTeX delimiters like $…$ and $$…$$.

Made some more progress. Just explaining, in many different ways, not to use MathJax in the description was not getting consistent results; however, I did on rare occasions see a glimmer of hope using the word quasiquotation.

What works better is using few-shot examples:

Prompt

As a theoretical physicist, show Higgs equation using Katex. In description display math using Unicode math symbols.

D\mu Dμ
V(\phi) V(ϕ)

The left is MathJax without the $ bookends, e.g. $D\mu$, and the right is just Unicode. I used https://www.unicodeit.net/ to generate the Unicode from the bookend-stripped MathJax.

ChatGPT Completion

Sure, here’s the Higgs equation written in LaTeX using Katex:
[image: the rendered equation]
In Unicode math symbols, this equation would be written as:

(∂μ + igAμ)(∂μ - igAμ)ϕ + V(ϕ)ϕ = 0

where:

Dμ is the covariant derivative
Aμ is the gauge field
g is the gauge coupling constant
ϕ is the Higgs field
V(ϕ) is the Higgs potential.

As an image:

[image: the completion rendered as an image]
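
For reference, the LaTeX source corresponding to the Unicode equation above would be roughly the following (my reconstruction, not ChatGPT’s output):

$$
(\partial_\mu + i g A_\mu)(\partial^\mu - i g A^\mu)\phi + V(\phi)\phi = 0
$$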

1 Like