I’m creating a trivia app. For some reason, the bot has a hard time with math feedback, even though the calculation itself comes out right. Yes, I know LLMs have issues with math, but GPT will literally respond like this:
“Oops! 8-4 doesn’t equal 4. The correct answer is 4!”
Does anyone have a suggestion for why this is happening? My best guess is that the Zod format schema is “working,” but GPT splits its answer and its understanding of the context across the fields of the structured output itself. If that is the case, is there a workaround?
This is the schema:
const TriviaFormat = z.object({
  was_answer_correct: z.boolean(),
  fun_fact_or_critique: z.string(),
  next_question_to_ask: z.string()
});
This is the bot setup:
const botResponse = await openai.beta.chat.completions.parse({
  model: 'gpt-4o-mini',
  messages,
  max_tokens: 2000,
  top_p: 0.125,
  temperature: 0.125,
  response_format: zodResponseFormat(TriviaFormat, 'event')
});
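If the field-splitting hypothesis is right, one hedged workaround is reordering the schema: structured outputs are generated field-by-field in the order the schema declares its keys, so with `was_answer_correct` first the model commits to a verdict before it has written out any reasoning. Putting the free-text critique before the boolean lets the model work through the arithmetic in text first. This is a sketch of the idea using a plain TypeScript type and an illustrative response object (the names and example values here are hypothetical, not from the original post); the same reordering would apply to the `z.object({...})` fields:

```typescript
// Hypothesized fix: declare the free-text field BEFORE the boolean verdict,
// so the model "reasons" in fun_fact_or_critique before deciding correctness.
type TriviaResponse = {
  fun_fact_or_critique: string;  // generated first: arithmetic worked out here
  was_answer_correct: boolean;   // verdict committed only after the critique
  next_question_to_ask: string;
};

// Illustrative response in the reordered shape (hypothetical values):
const example: TriviaResponse = {
  fun_fact_or_critique: "8 - 4 = 4, which matches your answer of 4.",
  was_answer_correct: true,
  next_question_to_ask: "What is 12 - 5?"
};

// JSON key order follows declaration order, mirroring generation order.
console.log(Object.keys(example).join(","));
```

Reordering the fields in the `z.object` call the same way (critique first, boolean second) would be the corresponding change to the schema above; whether it fixes the contradiction depends on whether the ordering hypothesis holds.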