GPT Function Calling - Function Params Enum

Assume my company provides a meal subscription service. I described the following function to gpt-3.5.
By specifying "unit": {"type": "string", "enum": ["meals", "days"]}, I intend to force the "unit" parameter returned by the ChatCompletion model to be "meals" or "days".

functions = [
    {
        "name": "summarize_order",
        "description": "Summarize the customer order request",
        "parameters": {
            "type": "object",
            "properties": {
                "product_name": {
                    "type": "string",
                    "description": "Product name ordered by customer"
                },
                "quantity": {
                    "type": "integer",
                    "description": "Quantity ordered by customer"
                },
                "unit": {
                    "type": "string",
                    "enum": ["meals", "days"],
                    "description": "Unit of measurement of the customer order"
                }
            },
            "required": ["product_name", "quantity", "unit"]
        }
    }
]

def get_completion_from_messages(messages, model="gpt-3.5-turbo-0613"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        functions=functions,
        function_call={"name": "summarize_order"},
    )
    return response["choices"][0]["message"]

If a customer says: "I'd like to order the keto meal for 2 months", it should return:
{"product_name": "keto meal", "quantity": 60, "unit": "days"}
However, gpt-3.5-turbo-0613 still returns:
{"product_name": "keto meal", "quantity": 2, "unit": "months"}

How can I force the output parameter “unit” to be “meals” or “days”?


Well, you could spend time working on the prompt to steer the AI into doing as you wish; for example, tell the AI that under no circumstances should it return months as a unit, and give the example you just posted as a guide.

Or, you could have a bit of classical computer code that checks the return value and, if the unit is "months", changes it to "days" and multiplies the quantity by 30.
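A minimal sketch of that post-processing step (the helper name and the 30-days-per-month factor are assumptions; adjust to your billing rules):

```python
def normalize_unit(arguments: dict) -> dict:
    """Coerce an out-of-enum 'months' unit into 'days' after the model responds.

    Assumes one month equals 30 days.
    """
    if arguments.get("unit") == "months":
        arguments["unit"] = "days"
        arguments["quantity"] = arguments["quantity"] * 30
    return arguments

# Example: the model returned "months" despite the enum constraint.
args = {"product_name": "keto meal", "quantity": 2, "unit": "months"}
print(normalize_unit(args))  # {'product_name': 'keto meal', 'quantity': 60, 'unit': 'days'}
```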

Prompt Engineering doesn’t seem to help much here. When applying Function Calling, GPT seems to ignore my system message instructions.


Same here - wondering how to return only values from an enum as well. It doesn't seem to work, whereas previously you could add a system message containing the options and it worked more often than not. Now, within the function call, the model doesn't follow the system message as closely and instead hallucinates its own options.

I suppose it is possible to send a second message with the first value returned and categorise that value based on the enum? But this wouldn’t scale of course…

Wish there were more controls on what the AI returned!


Same here. It seems the best approach is to experiment with the function call and see what GPT "likes to respond", then complete the script with old-school computer code.

In my use case I give it a question and ask it to classify the field of expertise, providing the options with enum:
"Mathematics", "Physics", "Geography" and so on.
I gave it about 10 options. However, GPT wants to give me the unlisted "Philosophy" instead of choosing the allowed "Humanities".

In the end I included a "NoInfo" option and always check whether the answer is one from the list; otherwise I retry, and eventually mark it as "NoInfo" and move on.
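A sketch of that check-and-retry fallback (the allowed values and the `get_classification` callable are illustrative; in practice it would wrap the actual function-calling request):

```python
ALLOWED_FIELDS = {"Mathematics", "Physics", "Geography", "Humanities", "NoInfo"}

def classify_with_fallback(get_classification, question: str, max_retries: int = 2) -> str:
    """Retry the model call until it returns an allowed field, else 'NoInfo'.

    `get_classification` is any callable that sends the question to the model
    and returns the classification argument from the function call.
    """
    for _ in range(max_retries + 1):
        field = get_classification(question)
        if field in ALLOWED_FIELDS:
            return field
    return "NoInfo"

# Example with a stubbed model that first hallucinates, then complies.
responses = iter(["Philosophy", "Physics"])
result = classify_with_fallback(lambda q: next(responses), "What is a quark?")
print(result)  # Physics
```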


I would try GPT-4. It is a lot smarter at following function-calling instructions.

Can you validate the response against a JSON Schema in your application, then send it back to GPT with the error message if validation fails? This validate/retry pattern has worked well for me, though I haven't tried it with JSON Schema specifically.
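A stdlib-only sketch of that validation step (real projects might use the `jsonschema` package instead; the error string is what you would send back to GPT in a follow-up message):

```python
import json

# Constraints mirroring the schema from the question.
ENUMS = {"unit": ["meals", "days"]}
REQUIRED = {"product_name": str, "quantity": int, "unit": str}

def validate_arguments(raw_arguments: str):
    """Return (args, None) if valid, else (None, error) to feed back to the model."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as err:
        return None, f"Arguments are not valid JSON: {err}"
    for key, expected_type in REQUIRED.items():
        if key not in args:
            return None, f"Missing required parameter: {key}"
        if not isinstance(args[key], expected_type):
            return None, f"Parameter {key} must be of type {expected_type.__name__}"
    for key, allowed in ENUMS.items():
        if args[key] not in allowed:
            return None, f"Parameter {key} must be one of {allowed}, got {args[key]!r}"
    return args, None

# The enum violation produces an error message to send back to GPT.
args, error = validate_arguments('{"product_name": "keto meal", "quantity": 2, "unit": "months"}')
print(error)  # Parameter unit must be one of ['meals', 'days'], got 'months'
```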

Moving the "unit" parameter before "quantity" in the schema seems to help.


You need to add validation plus a feedback loop to your logic. See my Medium post here: Making OpenAI Functions Reliable. I’ve seen a lot of developers… | by Steve Ickman | Jul, 2023 | Medium