Text-Davinci-003 Model Not Producing Correct Result

I am using the model to produce a step-by-step solution to a simple math question. For instance, I asked: “Show me how to solve the equation, x + 6x = 9.” The model gave me a wrong answer.
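For reference, the correct working is:

\begin{aligned}
x + 6x &= 9 \\
7x &= 9 \\
x &= \tfrac{9}{7}
\end{aligned}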

You can go to the www.mylearnmate.com website to test it out.

I used this function to produce the steps:

import openai

def get_answer(text):
    # Left over from the chat-style playground preset; unused in this function.
    start_sequence = "\nAI:"
    restart_sequence = "\nHuman: "

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{text} Show the steps.",
        temperature=0.9,
        max_tokens=150,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"],
    )
    return response.choices[0].text.strip()
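I call it like this (assuming openai.api_key has already been set; the question text is just an example):

print(get_answer("Show me how to solve the equation, x + 6x = 9."))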

Please let me know how to improve the model.

Thanks,

Onur

There are more threads available on this topic. The little magnifying glass at the top is the search button.

Hope this helps!

I am sorry, but that thread doesn’t apply to my case because I don’t have access to the ChatGPT API, which is not available yet.

Sorry. I just meant that, in general, this topic has been covered before.

That is, calculations are a known weak point for large language models.


Is there any other model you would recommend that is not a large language model?

Not offhand. What are you trying to accomplish?

Where else have you searched so far on your journey?

Wolfram|Alpha is a better alternative, but it is not ChatGPT or GPT related.


I was getting the details while @raymonddavey was posting.

Since the link gets corrupted when posting, here are the steps (a small API sketch follows the list):

  1. Navigate to Wolfram|Alpha
  2. In the text box, enter x + 6x = 9 simplify
  3. Press Enter
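The same lookup can also be scripted. This is a minimal sketch, assuming the Wolfram|Alpha Short Answers API; the wolfram_short_answer name and the "YOUR_APP_ID" value are hypothetical placeholders:

import requests

def wolfram_short_answer(query, app_id):
    # Ask the Wolfram|Alpha Short Answers API for a plain-text result.
    resp = requests.get(
        "http://api.wolframalpha.com/v1/result",
        params={"appid": app_id, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # e.g. "x = 9/7"

# wolfram_short_answer("solve x + 6x = 9", "YOUR_APP_ID")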

Wolfram|Alpha is good, but I also want to extract information about the question, such as the question type (linear equation or something else). That information is used later in video generation. Wolfram|Alpha is not flexible enough for that on its own, so I might need to use two APIs, GPT and Wolfram|Alpha, which might resolve my issue.
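Roughly, the combination I have in mind looks like this sketch (classify_question is a hypothetical helper; wolfram_short_answer refers to the sketch above):

import openai

def classify_question(text):
    # Ask text-davinci-003 for a short label such as "linear equation".
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=(
            "Classify the following math question in a few words "
            "(e.g. linear equation, quadratic equation, word problem):\n"
            f"{text}\nType:"
        ),
        temperature=0,
        max_tokens=10,
    )
    return response.choices[0].text.strip()

# question = "Show me how to solve the equation, x + 6x = 9."
# question_type = classify_question(question)              # e.g. "linear equation"
# answer = wolfram_short_answer(question, "YOUR_APP_ID")    # e.g. "x = 9/7"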

Thanks for the replies.
