Can GPT understand jqMath formulas?

First of all I’d like to thank the community for the invaluable help you are giving me.

OK then, this is my third post, but I’ve created a new one so as not to mix subjects.

My app allows users to take tests, not only English ones (as I’ve mentioned in another post), but maths ones too.

It loads questions in HTML format inside a WebView (I’m attaching some screenshots as an example), and the formulas are created with the jqMath JavaScript library.

Do you think GPT could understand these formulas in order to explain why my answer is wrong and what the correct answer is?

Yes, the model will likely not have difficulty with the equations or symbols if they’re in some kind of MathJax/LaTeX format.

It’s really tough, even for a human, to explain why an answer is incorrect if there’s no work provided. The best you can hope for is some kind of indication of the correct answer and an explanation of how to do the problem.
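For what it’s worth, one way to hand such a question to the model is simply to include the jqMath markup in the prompt, since it is close enough to LaTeX for the model to read. Here is a minimal sketch of building the chat payload; the function name, prompt wording, and the example question are my own assumptions, not anything from the app described above. The resulting messages list is what you would pass to a chat-completion API call.

```python
# Sketch (assumed names/wording): build a chat payload that asks a model to
# explain a wrong answer to a question containing jqMath/LaTeX markup.

def build_explanation_messages(question_html, choices, user_answer, correct_answer):
    """Assemble chat messages asking the model to explain a maths question.

    jqMath markup embedded in the HTML (e.g. $$x^2 - 5x + 6 = 0$$) is close
    enough to LaTeX that the model can usually parse it as-is.
    """
    system = (
        "You are a maths tutor. The question below contains jqMath/LaTeX "
        "markup inside HTML. Explain why the student's answer is wrong and "
        "walk through the correct solution step by step."
    )
    user = (
        f"Question (HTML with jqMath):\n{question_html}\n\n"
        f"Choices: {choices}\n"
        f"Student answered: {user_answer}\n"
        f"Correct answer: {correct_answer}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Invented example question, purely for illustration.
messages = build_explanation_messages(
    "<p>Solve $$x^2 - 5x + 6 = 0$$</p>",
    ["x = 2, 3", "x = -2, -3", "x = 1, 6", "no real roots"],
    user_answer="x = -2, -3",
    correct_answer="x = 2, 3",
)
# `messages` would then be sent in a chat-completion request.
```

Including the correct answer in the prompt, as above, sidesteps the "no work provided" problem somewhat: the model only has to justify a known answer rather than solve the problem cold.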

That said…

Why do you want to use a GPT model for this? It doesn’t make any sense to me in the context you’ve provided.

If they’re multiple-choice questions with known answers, what is the benefit of calling the GPT APIs on every question? Seems expensive for zero perceivable gain.

Now, if you had some kind of metadata associated with each question, such as what exactly it was meant to test, then after a quiz you could evaluate where the user’s weaknesses lie and use a GPT model as a tutor to drill down into where the user has knowledge gaps or misunderstands a concept. I could see that as a valuable and proper use of an LLM. But what you’re proposing, at least as I understand it, doesn’t seem to need or benefit from any AI at all.

Hello @elmstedt, first of all I really appreciate your help, thanks again for your time.

That said, let me explain my goal briefly:

Our app is intended for students who are planning to travel abroad, so they can prepare for their English, maths or other university admission exams on a mobile device.

We have 150 multiple-choice tests so far (created by professionals), each with an average of 5 to 10 questions.

The app has some nice features, like searching for colleges worldwide or finding learning tutorials on Khan Academy (through its API), but we have come to the conclusion that today, when AI is becoming more and more popular every day, we need some new features to stay competitive.

My knowledge of AI is not deep yet, but I thought that implementing an “Explain My Answer” feature (like Duolingo did with GPT-4) could be a nice addition. “Explain my answer” would mean giving better and more complete/detailed explanations of the wrong answers than the ones written by the professionals we paid.

I have no metadata for the questions, just the question itself, the possible answers and the correct one. The student (user) takes a test and can then see the results, check corrections, retake that test and review records of past tests. The questions, as I already said, are in HTML (to display math formulas for math tests, underlined text and so on), and it’s not easy to change the text or do replacements (like the “NO CHANGE” replacement you suggested), because not all questions have the same format (for example, a given paragraph), so the same instructions wouldn’t apply to all of them.

In summary, I’d like to introduce some kind of AI into our app to give it a little more added value, but I don’t know exactly what to do; I don’t have a concrete idea (apart from this one) yet.

The thing is, with explaining answers, you only need the one explanation for each question—I just don’t see the need to regenerate the explanation every time for every user.

You are right @elmstedt. I thought of calling the API every time because each user could respond differently, but it’s true this would be much more expensive, and perhaps pointless.

Anyway, in the end I don’t see any real (or at least easy to implement) possibility of explaining answers in my app, because the app content is dynamic: I cannot derive a general rule to reformulate a question once it has been loaded, since not all questions have exactly the same format.

One interesting thing would be to find some way of using GPT to give users a personalised experience, but I really don’t know how to do that.

Honestly, I think the use I outlined earlier would be very valuable to students.

I’ve done more than a fair bit of mathematics tutoring in my lifetime. And what I see time and time again is that when students hit a wall it’s not because the material they are currently working on is “too hard,” but rather that knowledge gaps have accumulated to the point they just can’t go further.

When talking to students I usually use the analogy that learning mathematics is like building a tower: if there are too many pieces missing below where they currently are, everything gets wobbly the higher they try to build.

By the time they are struggling enough to seek out tutoring, they’re under a time crunch—they need to know how to do this, that, and the other thing by Friday because they’ve got a test.

So, instead of going back and fixing the structure—filling in their knowledge gaps—they just want to shore things up and push back the collapse. So we need to work harder than we should have to to get them ready to barely pass the exam.

Basically we just put some scaffolding up to help them build their tower a little higher. But, that only works so well and for so long before even that hits a limit and they can go no higher. It’s also a lot more work for the amount of benefit the student gets.

That’s why I think the absolute best app you could design, if you are really, truly interested in helping students achieve their potential, would be one which gives them a comprehensive exam with questions that require different knowledge domains. Ideally, it would be an adaptive exam, so that as they progress through it they get more and more questions in the subdomains where they have demonstrated some difficulty.

Then, armed with a hierarchical tree of concepts, the app would trace from where the student is supposed to currently be, back to the last concept for which they demonstrated mastery.

Once there, the app could start teaching new concepts, taking care to anchor them in knowledge the student has mastered: introducing only one concept at a time and relating it to mastered concepts.
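The backtracking step above can be sketched very simply: walk a prerequisite tree from the concept the student is "supposed" to be at, back toward the root, until you hit a concept they have actually mastered. The tree and mastery data below are invented examples, not a real curriculum.

```python
# Sketch (invented data): trace prerequisites back to the last mastered concept.

prerequisites = {
    "quadratic equations": "factoring",
    "factoring": "multiplying binomials",
    "multiplying binomials": "arithmetic",
    "arithmetic": None,  # root of this branch
}

mastered = {"arithmetic", "multiplying binomials"}

def last_mastered_concept(current):
    """Walk up the prerequisite chain until a mastered concept is found."""
    node = current
    while node is not None and node not in mastered:
        node = prerequisites[node]
    return node  # None means even the root needs re-teaching

start = last_mastered_concept("quadratic equations")
# Teaching would resume just above `start`, one new concept at a time.
```

A real concept hierarchy would be a tree or DAG with multiple prerequisites per concept rather than a single chain, but the principle is the same: find the deepest solid foundation and build from there.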

Through natural-language interaction with the student, the model could home in on the individual student’s best learning styles and tailor its lessons accordingly.

Then, I imagine the model could also anticipate questions the student might have (much like Bing chat is doing now) and offer them alongside the lessons to encourage interaction and discovery on the student’s part.

This whole process is very hard and time consuming for a tutor to guide the student through. But, it’s also incredibly effective and thus valuable to students (and more realistically, their parents) once you get them to buy into the idea of correcting the root issues.

I used to charge upwards of $80/hour for this type of personalized tutoring, which generally included two hours of prep for each hour of instruction, with modest discounts for multiple guaranteed weekly sessions. So, for instance, for three forty-minute sessions in a week (two hours of instruction plus four of prep) I would charge $400.

A properly designed and implemented GPT-based app could basically trivialize this at scale and democratize this kind of individualized educational assistance.

If you could absolutely nail it, I’m sure you’d have tens of thousands of parents ready to plop down $50+/month for something like this.
