Bridging the gap between the Playground response and the API response when using a function

Hi, I have noticed a difference between the response I get in the Playground and the response I get from the API.
The only difference in my setup is that the API call uses a function. I need the function to make sure my answer is in the proper format.

But I have realized that because of the function the model has stopped asking clarifying questions.

I have tried changing the prompt many times, instructing the model to ask for clarification first and only then call the function.

Are there any tips I can use to make my function calling conditional?

Yes. The model you’re using determines when a function call is necessary, so when set up correctly, function calling is already conditional - see the sketch below. Do you mind sharing your instructions and function definitions?
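
For reference, a minimal sketch of the usual conditional pattern with the Chat Completions API (the model name, message, and tool stub are placeholders, not your actual setup):

from openai import OpenAI

client = OpenAI()

# Placeholder tool stub, for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "return_json_object",
            "description": "Return the final, well-formatted answer.",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

# With tool_choice="auto" (the default), the model decides on each turn
# whether to reply in plain text (e.g. a clarifying question) or to call
# the tool.
response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "How long does delivery take?"}],
    tools=tools,
    tool_choice="auto",
)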

Thanks for your response!

Use case: a chatbot with Q&A capabilities.

Instructions: The user can ask any question. My system prompt rules help the chatbot add relevant information to the user's question, such as the time of day when the question was asked.
Finally, the function should be called to output the data in the format I specified.

{
    "type": "function",
    "function": {
        "name": "return_json_object",
        "description": "If there is no ambiguity in the question then call this function to return a well-formatted answer.",
        "parameters": {
            "type": "object",
            "properties": {
                "object": {
                    "type": "string",
                    "description": "JSON object with fields question, question_time, answer",
                }
            },
            "required": ["object"],
        },
    },
}

But according to the rules, there is a chance that the user’s question contains some ambiguity. When I run this prompt in the Playground, the LLM asks clarifying questions 100% of the time. But when I use this function in my code, it stops asking clarifying questions, always calls the function, and makes assumptions. Thanks!

Hi Phoenix - this behaviour can sometimes happen. A couple of thoughts here: in the function description itself, be more specific about what constitutes an ambiguity in your context, so that the conditions under which the function should be called are clearer.
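
For illustration, a sketch of what a more specific description might look like (the exact ambiguity criteria are an assumption and should be adapted to your own rules):

{
    "type": "function",
    "function": {
        "name": "return_json_object",
        "description": "Call this function ONLY when the user's question is fully unambiguous. Treat a question as ambiguous if it is missing a concrete subject, time frame, or any other detail your rules require; in that case, do NOT call this function and instead reply with a clarifying question in plain text.",
        "parameters": {
            "type": "object",
            "properties": {
                "object": {
                    "type": "string",
                    "description": "JSON object with fields question, question_time, answer",
                }
            },
            "required": ["object"],
        },
    },
}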

Depending on whether you are using the function with the Assistants API or the regular Chat Completions API, you should also expand on this in your instructions or in the system message/prompt. The more specific you can be, the more likely it is that you will see the correct behavior.
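
As a sketch, the system message could spell out the decision rule explicitly (the wording here is only an illustration, not a tested prompt):

system_prompt = (
    "You are a Q&A assistant. Before answering, check the user's question "
    "against the ambiguity rules below. If anything is ambiguous, ask a "
    "clarifying question in plain text and do NOT call any function. Only "
    "once the question is unambiguous, call return_json_object with the "
    "final answer."
)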

I have used similar logic in the implementation of my chatbot with the Assistants API and can confirm that it works.


Thanks! I will incorporate your suggestions. 🙂
