Making results more accurate by having OpenAI responses follow a strict format

Hi
How can I ensure that OpenAI responses follow a strict format?

The context is assessing documents and reporting results. The results MUST follow a strict format, but OpenAI seems to be making a judgement call instead of applying the stated rules.

i.e. Step one: assess the presence of the following variables and score each one as Yes, No, or Maybe.

Step two: If all variables score YES, then the overall score is YES.
If any variable scores NO, then the overall score is NO.
Any other combination is MAYBE.

OpenAI is returning too many MAYBEs when some of the variables are NO.

It sounds like your concern is not “output format” but cognitive errors.

We could rewrite the conditional logic in a code-like manner, as if you were programming a computer, or write out clear steps for each evaluation and what is to be produced.

  1. The AI should first assess each variable. This could involve a variety of tasks depending on the nature of the variables, but the end result should be that each variable is categorized as either YES, NO, or MAYBE.

  2. Once all variables have been assessed and categorized, the AI should begin to analyze the categories. It should first check if all variables are categorized as YES. If they are, it should output “YES” and stop further processing.

  3. If not all variables are categorized as YES, the AI should then check if any variables are categorized as NO. If at least one variable is categorized as NO, it should output “NO” and stop further processing.

  4. If neither of the first two conditions is met (meaning not all variables are YES, and none of the variables are NO), the AI should then check if at least one variable is categorized as MAYBE. If this condition is met, it should output “MAYBE”.

  5. If none of the above conditions are met, the AI should output an error message, as this would indicate that the variables have not been properly categorized.
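For reference, here is a minimal sketch of that aggregation logic in Python (the function name and the list-of-strings input are just assumptions for illustration):

```python
from typing import List

def overall_score(assessments: List[str]) -> str:
    """Aggregate per-variable assessments into an overall score.

    Rules: all YES -> YES; any NO -> NO; otherwise MAYBE.
    Raises if an assessment is not one of the three allowed values.
    """
    allowed = {"Yes", "No", "Maybe"}
    unknown = set(assessments) - allowed
    if unknown:
        raise ValueError(f"Variables not properly categorized: {unknown}")

    if all(a == "Yes" for a in assessments):
        return "Yes"
    if any(a == "No" for a in assessments):
        return "No"
    return "Maybe"

# Example: a single NO forces the overall score to NO, never MAYBE.
print(overall_score(["Yes", "No", "Yes", "Maybe"]))  # -> "No"
```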

Or upping the “strict format” to the next level.


// the assistant's response must follow the mandatory JSON schema shown here

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "variableAssessments": {
      "type": "array",
      "items": {
        "type": "string",
        "enum": ["Yes", "No", "Maybe"]
      },
      "description": "The assessment results for each variable, categorized as 'Yes', 'No', or 'Maybe'."
    },
    "overallScore": {
      "type": "string",
      "enum": ["Yes", "No", "Maybe"],
      "description": "The overall score based on the assessment of all variables."
    }
  },
  "required": ["variableAssessments", "overallScore"]
}
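If the SDK and model in use support Structured Outputs, a schema like this can be enforced at the API level rather than only described in the prompt. A minimal sketch with the OpenAI Python SDK follows; the model name, message text, document variable, and schema name are placeholders, and strict mode additionally requires "additionalProperties": false with every property listed in "required":

```python
from openai import OpenAI

client = OpenAI()

# Same schema as above, tightened for strict mode
# (strict mode rejects schemas without additionalProperties: false).
assessment_schema = {
    "type": "object",
    "properties": {
        "variableAssessments": {
            "type": "array",
            "items": {"type": "string", "enum": ["Yes", "No", "Maybe"]},
            "description": "The assessment results for each variable.",
        },
        "overallScore": {
            "type": "string",
            "enum": ["Yes", "No", "Maybe"],
            "description": "The overall score based on all variables.",
        },
    },
    "required": ["variableAssessments", "overallScore"],
    "additionalProperties": False,
}

document_text = "..."  # placeholder for the document being assessed

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # placeholder: any model that supports structured outputs
    messages=[
        {"role": "system", "content": "Assess each variable as Yes, No, or Maybe, "
                                      "then apply the aggregation rules exactly."},
        {"role": "user", "content": document_text},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "assessment", "strict": True, "schema": assessment_schema},
    },
)

print(response.choices[0].message.content)  # JSON matching the schema
```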

This point may be too vague for OpenAI to follow. What happens for YES, NO, YES, NO? Should the output be NO, or YES?

Instead, you could attach weights to the documents. Say you have 10 variables and Document 1 contains 8 of them; its weight becomes 8. Calculate each document's weight the same way, then take the mean across all documents. If the mean is greater than 7, the result is a YES; otherwise, NO.
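A minimal sketch of that weighting scheme in Python; the per-document counts are made-up numbers, and the threshold of 7 comes from the example above:

```python
# Hypothetical counts of how many of the 10 variables each document contains.
document_weights = [8, 9, 6, 10]  # e.g. Document 1 contains 8 of the 10 variables

mean_weight = sum(document_weights) / len(document_weights)

# Threshold of 7 taken from the example above.
overall = "YES" if mean_weight > 7 else "NO"
print(mean_weight, overall)  # 8.25 YES
```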


Thanks @_j, I’m going to play around with this.

Thanks @DavidOS366. I think this is what OpenAI is actually doing, which is what I’m trying to avoid. I don’t want averages or aggregated results, as the results need to be strict and accurate recommendations.