Inaccuracy and Randomness in OpenAI API Responses

Hi,

We are building a chatbot-style application that lets our users get insights about our application and its functionality by asking questions in the chatbot. For that, we deployed the GPT-4o-mini model as an Azure AI deployment and access it through the Azure API endpoints.
What we do is pass our custom/dynamic JSON data to the OpenAI API along with the user's question and get the response/answer back from the API. What we see is that the answers/responses are inaccurate and missing details.
We tried all the obvious options, such as careful prompting, cleaning and enriching the source data, setting the suggested temperature and top_p parameters, etc.
Please share your ideas or experience if you have come across this, or anything that pops into your mind. :)

Why don’t you start by sharing what exactly you have tried?


I cannot share any of my data here. Just to reiterate what we tried.
Prerequisites

  1. Azure AI Services / OpenAI endpoint: URL, key, and GPT model name (gpt-4o)
  2. JSON data: assume it contains customer details and product purchase details for each customer, in a proper JSON hierarchy
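For illustration, a minimal sample of the kind of customer/purchase hierarchy described above might look like this (all field names and values here are hypothetical, not the actual data):

```python
import json

# Hypothetical sample of the customer/purchase JSON hierarchy;
# field names and values are made up for illustration only.
customers = [
    {
        "customerId": "C001",
        "name": "Alice Example",
        "purchases": [
            {"product": "Widget A", "quantity": 2, "unitPrice": 10.50},
            {"product": "Widget B", "quantity": 1, "unitPrice": 25.00},
        ],
    },
    {
        "customerId": "C002",
        "name": "Bob Example",
        "purchases": [
            {"product": "Widget A", "quantity": 5, "unitPrice": 10.50},
        ],
    },
]

# Serialise to the JSON string that goes into the prompt payload.
json_data = json.dumps(customers, indent=2)
```

One note on this shape: any sums (e.g. total spend per customer) are exactly computable from the data itself, so they can be verified against whatever the model returns.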

Steps we tried

  1. Developed a chatbot-style application in .NET. The user asks questions and the chatbot answers them.
  2. Created the payload (containing the JSON data mentioned in the prerequisites plus the user's question).
  3. Made a call to the OpenAI endpoint with the URL, key, and model name.
  4. Got responses to the user's questions (the questions are based on the content of the provided JSON data).
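The steps above can be sketched roughly as follows. The URL format is the standard Azure OpenAI chat-completions REST route; the deployment name, API version, and prompt wording are placeholder assumptions, not the actual values:

```python
import json

def build_chat_request(endpoint, api_key, deployment, json_data, question,
                       api_version="2024-02-01"):
    """Build the URL, headers, and payload for an Azure OpenAI
    chat-completions call. Nothing here is sent over the network."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    payload = {
        "messages": [
            # System message carries the source JSON as grounding context.
            {"role": "system",
             "content": "Answer strictly from the JSON data below.\n" + json_data},
            # User message carries the question asked in the chatbot.
            {"role": "user", "content": question},
        ],
        # Low temperature to reduce run-to-run randomness in answers.
        "temperature": 0,
        "top_p": 1,
    }
    return url, headers, payload

# Example usage with placeholder values (replace with your own resource).
url, headers, payload = build_chat_request(
    endpoint="https://my-resource.openai.azure.com",
    api_key="<key>",
    deployment="gpt-4o-mini",
    json_data=json.dumps({"customers": []}),
    question="How many customers purchased Widget A?",
)
```

The actual HTTP POST (via `HttpClient` in .NET or `requests` in Python) would then send `payload` to `url` with those headers.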

Challenges we are facing

  1. Responses from the OpenAI endpoint are not very consistent; specifically, numbers and sums of numbers are not accurate.
  2. Sometimes we get duplicated/repeated results.

Let me know if you have any ideas, or if you have come across similar challenges and found solutions.