Can you specify partial responses in the Structured Outputs API?

In the upcoming OSS framework, the fundamental abstraction is a goal:

gc = GoalComposer(provider="OpenAI", model="gpt-4o-mini")
gc = gc(global_context="global_context")

gc\
    .goal("read text file", with_goal_args={'file_name': 'jd.txt'})\
    .goal("answer the question from document",
          with_goal_args={'param': 'jd.txt',
                          'question': 'If the job description specifies a salary expectation: Is this an exact salary?'})

...
gc.run()
  • You can also express all the questions in a loop (start_loop, end_loop)
  • You can verify an answer by rewording the question/answer pair into a fact and then verifying that fact
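The loop form above can be sketched with plain Python. Note that `GoalStub` below is a hypothetical stand-in that only mimics the chainable `.goal(...)` builder from the example, so the idea runs standalone; the framework's actual start_loop/end_loop API may look different.

```python
# Hypothetical stand-in for the framework's goal builder -- illustration only.
class GoalStub:
    def __init__(self):
        self.plan = []  # accumulated (goal name, args) pairs

    def goal(self, name, with_goal_args=None):
        self.plan.append((name, with_goal_args or {}))
        return self  # chainable, like the GoalComposer example above

    def run(self):
        # The real framework would execute the goals; here we just
        # return the accumulated plan.
        return self.plan


questions = [
    "Does the job description specify a salary expectation?",
    "If the job description specifies a salary expectation: Is this a salary range?",
    "If the job description specifies a salary expectation: Is this an exact salary?",
]

gc = GoalStub().goal("read text file", with_goal_args={'file_name': 'jd.txt'})
for q in questions:  # stands in for start_loop / end_loop
    gc = gc.goal("answer the question from document",
                 with_goal_args={'param': 'jd.txt', 'question': q})

plan = gc.run()
```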
Original Question   : Does the job description specify a salary expectation?
Original Answer     : Yes, the job description specifies a salary expectation of $55,000 - $75,000 annually.
Converted Fact      : The job description specifies a salary expectation of $55,000 - $75,000 annually.
Verification of Fact: True. The job description specifies a salary expectation of $55,000 - $75,000 annually.


Original Question   : If the job description specifies a salary expectation: Is this a salary range?
Original Answer     : YES
Converted Fact      : A specified salary expectation in a job description typically indicates a salary range.
Verification of Fact: True. The job description specifies a salary range of $55,000 - $75,000 annually, which indicates a salary expectation.


Original Question   : If the job description specifies a salary expectation: Is this an exact salary?
Original Answer     : No, the salary specified is a range of $55,000 - $75,000 annually, not an exact salary.
Converted Fact      : The salary specified in the job description is a range of $55,000 - $75,000 annually, rather than an exact amount.
Verification of Fact: Yes, that is correct. The salary specified in the job description is a range of $55,000 - $75,000 annually, not an exact amount.


Original Question   : Does the job offer a hybrid working arrangement?
Original Answer     : Unable to deduce answer from the provided document.
Converted Fact      : The document does not provide information regarding whether the job offers a hybrid working arrangement.
Verification of Fact: Correct, the document specifies that the position is remote but does not provide information about a hybrid working arrangement.
  • So we (a) expose the JD for each question, (b) convert the question/answer pair into a fact WITHOUT the JD, and (c) verify the fact against the JD

  • The “answer the question from document” goal automagically turns into the following function

@manage_function(TOOLS_FUNCTIONS, "document_functions")
def answer_question(
        global_context: Annotated[Any, "This is the global context"],
        param: Annotated[str, "This is the parameter to be read from the global context"],
        question: Annotated[str, "This is the question whose answer is required from the document."]) \
    -> Annotated[dict,
                 """
                 :return: Returns a dictionary with keys
                 - extracted_text (str): the answer to the question asked of the document.
                 """]:
    """Answer the specific question asked of the document, convert the
    question/answer pair into a fact, and verify that fact against the document."""

    doc = global_context[param]

    # Step 1: answer the question with the document in context.
    messages = [
        {"role": "system", "content": SYSTEM_CONTEXT_QA},
        {"role": "user", "content": f"START_DOCUMENT {doc} END_DOCUMENT \n"},
        {"role": "user", "content": question},
    ]
    chat_completion = openai_chat.chat.completions.create(
        messages=messages,
        model=MODEL_OPENAI_GPT4_MINI,
        temperature=0.4,
    )
    extracted_answer = chat_completion.choices[0].message.content

    # Step 2: convert the question/answer pair into a standalone fact
    # (deliberately WITHOUT the document in context).
    messages = [
        {"role": "system", "content": SYSTEM_CONTEXT_QA_CONVERT_TO_FACT},
        {"role": "user", "content": question},
        {"role": "user", "content": extracted_answer},
    ]
    chat_completion = openai_chat.chat.completions.create(
        messages=messages,
        model=MODEL_OPENAI_GPT4_MINI,
        temperature=0.4,
    )
    extracted_fact = chat_completion.choices[0].message.content

    # Step 3: verify the fact against the document.
    messages = [
        {"role": "system", "content": SYSTEM_CONTEXT_QA_CONVERT_TO_FACT_VERIFIER},
        {"role": "user", "content": f"START_DOCUMENT {doc} END_DOCUMENT \n"},
        {"role": "user", "content": extracted_fact},
    ]
    chat_completion = openai_chat.chat.completions.create(
        messages=messages,
        model=MODEL_OPENAI_GPT4_MINI,
        temperature=0.4,
    )
    extracted_verification = chat_completion.choices[0].message.content

    print(f"Original Question   : {question}")
    print(f"Original Answer     : {extracted_answer}")
    print(f"Converted Fact      : {extracted_fact}")
    print(f"Verification of Fact: {extracted_verification}")

    return {'extracted_text': extracted_answer}
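The three completion calls in the function differ only in their message lists. One way to factor out that repetition (a sketch, not part of the framework) is a small helper; `chat_once` is a hypothetical name, and taking the client as a parameter means a stub with the same OpenAI-style shape can exercise it:

```python
def chat_once(client, model, system_prompt, *user_contents, temperature=0.4):
    """Build a messages list and return the first choice's text.

    `client` is any object with the OpenAI-style
    client.chat.completions.create(...) shape, so a stub works for tests.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages += [{"role": "user", "content": c} for c in user_contents]
    completion = client.chat.completions.create(
        messages=messages, model=model, temperature=temperature)
    return completion.choices[0].message.content
```

With this, step 1 of the function collapses to a single call, e.g. `extracted_answer = chat_once(openai_chat, MODEL_OPENAI_GPT4_MINI, SYSTEM_CONTEXT_QA, f"START_DOCUMENT {doc} END_DOCUMENT \n", question)`, and steps 2 and 3 follow the same pattern.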

hth