Knowledge base or prompts: which is more efficient for solving elementary math problems?

I need help: In this situation, should I build a database to solve the problem or use optimized prompts? Or is there a more feasible solution? Thanks for the assistance.

1. As shown in the figure, GPT made a mistake when solving a simple function question. The input contained only "1. the question, 2. the prompt: answer the question", which is very clean.

2. To help GPT answer correctly, I added a prompt: "A linear function that does not pass through the origin should pass through three quadrants." The response improved somewhat, but it was still incorrect: the answer GPT gave was quadrants 1, 2, and 4.

3. As a beginner programmer, I sought advice on how to guide GPT to answer correctly. I appreciate the forum's responses, and I also asked GPT-4 itself. The suggested solutions involve using more precise prompts or integrating a knowledge base.

4. Due to my limited programming skills, I cannot handle complex interactions. However, GPT's built-in knowledge should be quite comprehensive for elementary math problems, right? Therefore, I am inclined to start by using more precise prompts to address this issue.

5. The new issue is that the math problems to be solved are input at random. The problems themselves are very clear, but their types are random and span multiple areas of elementary math, such as equations and inequalities, functions, word problems, and geometry. This makes it hard to provide precise prompts.

Thanks for the help!

Advice from forum members: How to use functions with a knowledge base | OpenAI Cookbook

There is no universally "best" way to help it solve problems, but one approach that has proven to work pretty reliably is providing one or more examples in the context.

If you need it to solve many such problems, you could simply give it one completely worked example, and it will use that as a template for any additional problems you provide to the model.
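A minimal sketch of this few-shot approach, assuming the OpenAI Chat Completions message format. The worked example below is illustrative, not a fixed template; the actual model call is omitted.

```python
# Sketch: few-shot prompting with one fully worked example.
# The example Q/A pair is illustrative; in practice you would write
# a complete, correct solution for a representative problem.

WORKED_EXAMPLE_Q = "Which quadrants does the line y = x + 1 pass through?"
WORKED_EXAMPLE_A = (
    "The y-intercept is 1 (> 0), so the line crosses the boundary of "
    "quadrants 1 and 2. The x-intercept is -1 (< 0), so it crosses the "
    "boundary of quadrants 2 and 3. The line passes through quadrants 1, 2, and 3."
)

def build_few_shot_messages(question: str) -> list[dict]:
    """Build a chat message list that shows the model one worked
    example before asking the new question."""
    return [
        {"role": "system", "content": "You solve elementary math problems step by step."},
        {"role": "user", "content": WORKED_EXAMPLE_Q},
        {"role": "assistant", "content": WORKED_EXAMPLE_A},
        {"role": "user", "content": question},
    ]

messages = build_few_shot_messages("Which quadrants does y = 2x - 3 pass through?")
```

The `user`/`assistant` pair acts as the template; the model tends to imitate its structure when answering the final `user` message.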

Alternately, you could just give it better instructions, e.g. instead of

A linear function that does not pass through the origin should pass through three quadrants.

You might try,

When a line does not pass through the origin, the x-intercept and y-intercept each occur on the boundary of two quadrants:

\begin{cases} y_{\text{intercept}} < 0 & \Rightarrow \text{quadrants } 3 \,\&\, 4 \\ y_{\text{intercept}} > 0 & \Rightarrow \text{quadrants } 1 \,\&\, 2 \\ x_{\text{intercept}} < 0 & \Rightarrow \text{quadrants } 2 \,\&\, 3 \\ x_{\text{intercept}} > 0 & \Rightarrow \text{quadrants } 1 \,\&\, 4 \end{cases}

The line will pass through the union of the quadrants determined by the x-intercept and y-intercept.

You can see this in action here.
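The intercept rule above can also be checked mechanically. A small sketch, assuming a line y = mx + b with m ≠ 0 and b ≠ 0 (so it misses the origin and is not horizontal):

```python
def quadrants(m: float, b: float) -> set[int]:
    """Quadrants crossed by the line y = m*x + b, following the
    intercept rule: each intercept lies on the boundary of two
    quadrants, and the line passes through their union.
    Assumes m != 0 and b != 0."""
    y_int = b          # y-intercept
    x_int = -b / m     # x-intercept
    quads = {3, 4} if y_int < 0 else {1, 2}
    quads |= {2, 3} if x_int < 0 else {1, 4}
    return quads

# The kind of question from the thread:
print(sorted(quadrants(1, 1)))   # y = x + 1 -> [1, 2, 3]
```

For y = x + 1 this gives quadrants 1, 2, and 3 — the answer GPT should have produced.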


Giving the model access to a math solver might help. Wolfram Alpha comes to mind.


Thanks for the help!
Q: Is there any way to optimize this?

1. The project will be open for all students to use. They will upload various types of questions, and I cannot predict the types in advance.
2. Since elementary math includes many types of problems, if I use examples, does that mean I need to send a prompt containing a large number of examples in every API interaction?
3. In that case, the prompt may contain a lot of content. I tested the current prompt and it already requires 3,000 tokens (including both text and images). If I add more, it will exceed the input limit.
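One common way to stay within the token limit (a sketch, not a tested recipe): classify the incoming question first, then attach only the worked examples for that one category. The category names and tiny example bank below are placeholders, not a fixed taxonomy.

```python
# Sketch: route a question to a small, category-specific example set
# instead of sending examples for every topic at once.
# Categories and examples here are placeholders.

EXAMPLE_BANK = {
    "functions": ["Q: Which quadrants does y = x + 1 pass through? A: 1, 2 and 3."],
    "equations": ["Q: Solve 2x + 3 = 7. A: x = 2."],
    "geometry":  ["Q: Area of a 3 by 4 rectangle? A: 12."],
}

def build_prompt(category: str, question: str) -> str:
    """Attach only the examples for the detected category.
    In a real system, `category` would come from a cheap first model
    call (e.g. "Classify this problem: functions/equations/geometry")."""
    examples = "\n".join(EXAMPLE_BANK.get(category, []))
    return f"Worked examples:\n{examples}\n\nNow solve:\n{question}"

prompt = build_prompt("functions", "Which quadrants does y = -x + 2 pass through?")
```

The classification call is short, so the combined cost of two calls can still be far below one prompt carrying every example.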

Thank you for your help. I noticed your suggestion to use Wolfram. How can I use it to efficiently improve the accuracy of solving math problems?

  1. Specify the use of Wolfram for solving math problems during interactions with the GPT API?
  2. Add a step that calls the Wolfram API during interactions with the GPT API, and tell GPT to use the results from that call?
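Option 2 is the usual pattern: call Wolfram|Alpha yourself, then pass its result back to GPT in a second message. A sketch assuming Wolfram's Short Answers endpoint (`api.wolframalpha.com/v1/result`); `"DEMO-APPID"` is a placeholder for your own app ID, and the actual HTTP request and GPT call are omitted.

```python
from urllib.parse import urlencode

WOLFRAM_SHORT_ANSWERS = "https://api.wolframalpha.com/v1/result"

def wolfram_query_url(question: str, app_id: str) -> str:
    """Build a request URL for Wolfram|Alpha's Short Answers API.
    `app_id` is your own key; "DEMO-APPID" below is a placeholder."""
    return f"{WOLFRAM_SHORT_ANSWERS}?{urlencode({'appid': app_id, 'i': question})}"

def followup_messages(question: str, wolfram_result: str) -> list[dict]:
    """Second GPT call: hand over Wolfram's answer and ask GPT to
    explain and verify it rather than compute from scratch."""
    return [
        {"role": "system",
         "content": "Use the provided tool result as the authoritative answer; "
                    "explain the steps that lead to it."},
        {"role": "user",
         "content": f"Question: {question}\nTool result: {wolfram_result}"},
    ]

url = wolfram_query_url("solve 2x + 3 = 7", "DEMO-APPID")
```

Fetching `url` returns a plain-text answer, which then goes into `followup_messages` so GPT handles the explanation while Wolfram handles the arithmetic.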

Honestly, this really isn’t the right tool for this particular job.

LLMs are language prediction models, not logic or math engines; they can often simulate those capabilities, but it's not real.

If you build up enough scaffolding around the language models, they can do a passable job, but they're not really up to this task without the help. At least not for a very broad set of problems. For any given narrow set of problems you can craft a prompt that will get the model there.

Probably the best “general” method you can do is to take in the user’s question, modify the question to ask how such a question could be answered, then ask the model to solve the question using the plan it just created.
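The plan-then-solve idea can be sketched as two chat turns (again assuming the Chat Completions message format; the hypothetical `call_model` helper wrapping the API is left out):

```python
# Sketch: plan-then-solve prompting in two steps.
# Step 1 asks HOW to solve; step 2 asks the model to execute its own plan.

def plan_messages(question: str) -> list[dict]:
    return [{"role": "user",
             "content": "Do not solve this yet. Describe, step by step, "
                        f"how a student should solve it:\n{question}"}]

def solve_messages(question: str, plan: str) -> list[dict]:
    return [
        {"role": "user", "content": f"Question:\n{question}"},
        {"role": "assistant", "content": f"Plan:\n{plan}"},
        {"role": "user", "content": "Now carry out that plan and give the final answer."},
    ]

q = "Which quadrants does y = 2x - 6 pass through?"
first = plan_messages(q)
# plan = call_model(first)            # hypothetical helper wrapping the chat API
# second = solve_messages(q, plan)    # then a second call produces the answer
```

Separating "how would you solve this?" from "now solve it" often reduces arithmetic slips, because the model commits to a method before producing numbers.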

See: ChatGPT

Thank you very much for your help. I have tried multiple AI models and some other tools. So far, GPT has the best overall performance.

Improving the accuracy of GPT in solving complex math problems is the most challenging issue.

The project needs to help students solve multiple types of problems:

  1. Solve math/physics problems.
  2. Verify handwritten answers.
  3. Compare answers for grading.
  4. Identify and summarize incorrect knowledge points.

GPT-4o performs well in the following aspects:

  1. Solving non-complex math problems.
  2. OCR recognition.
  3. Comparing answers for grading.
  4. Identifying and summarizing incorrect knowledge points.

Now I need to tackle the last challenge so the project can start running.

I am a beginner in programming. Using multiple individual tools to solve corresponding problems and then integrating them exceeds my current capabilities. This is a major reason why I am using GPT for assistance.

If there are external tools, such as Wolfram, that can solve complex math problems and be utilized by GPT, I would be willing to try this approach if it is feasible.