GPT-3.5-turbo ignoring instructions for a function and calling it for all generic user requests

I’m trying to build a voice-based chatbot that can assist the user with different tasks.
For some generic tasks, I rely heavily on GPT generating responses without using functions, as it reduces latency.
I’m unable to add examples from my actual use case here as it’s confidential, but here are some simple examples that describe what I’m trying to do.
For example, for the generic query “What can I do with my computer”, GPT would previously generate the response on its own without any problems.
I recently added a function called “get_games_on_my_computer” with the description “Get a list of games available on the user’s computer”.
After adding this function, GPT defaults to calling it for all generic queries like the one described above.
I’ve tried adding system prompts in a bunch of different ways, but nothing seems to completely solve the problem.

Any suggestions on how I can solve this issue?

If you know when to call the function, you can pass function_call: "none" to force the model not to use any function. Doc: OpenAI Platform
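In code, that is just the function_call parameter on the chat completion request. A minimal sketch, assuming the Node openai SDK (v4-style chat.completions.create) and a trimmed-down version of the get_games_on_my_computer definition from your question as a placeholder:

import OpenAI from "openai";

const openai = new OpenAI();

// Placeholder: the definition from the question, with a minimal schema
const getGamesOnMyComputer = {
    name: "get_games_on_my_computer",
    description: "Get a list of games available on the user's computer",
    parameters: { type: "object", properties: {} }
};

const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "What can I do with my computer?" }],
    functions: [getGamesOnMyComputer],
    function_call: "none" // the model must answer with text, never a function call
});

console.log(response.choices[0].message.content);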

Otherwise, you need to tweak the function’s description so the model understands it better. Try to write the description like a prompt.
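For example, a rough sketch of a more prompt-like description (the exact wording is illustrative only); the idea is to state explicitly when the function should and should not be called:

{
    name: "get_games_on_my_computer",
    description: "Get the list of games installed on the user's computer. " +
        "Call this only when the user explicitly asks which games they have installed. " +
        "Do not call it for general questions about what they can do with their computer.",
    parameters: { type: "object", properties: {} }
}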


This is supposed to be a chatbot that can handle any generic query, so I’m not sure how I can set function_call to "none" for only specific requests.

You can see this problem in others’ “solutions”, such as Bing Chat (based on OpenAI GPT), which seems to do an internet search for anything, even things the AI could easily answer.

The problem is really that the AI doesn’t know what it already knows. It can’t “see” that it has the answer until after it has generated the answer.

Although the AI is trained to either answer a question or call a function, you can use prompting to do more than just discourage function calling unless the query matches the function’s purpose exactly. You can also prompt it to generate an answer to the user, evaluate whether that answer is accurate and sufficiently addresses the meaning of the user’s question, and only if it doesn’t, call a function to obtain a better answer, all in the same prompt and response. Then pull out just the JSON of the function call and ignore the first, poor AI answer.

Such multi-step self-examination is possible, but only when the knowledge is actually output into the response context so later QA steps of the question refinement can build on it. You may also want to add further prompting so that the “final answer” alone can be isolated from the trials and self-examination steps the chat user doesn’t need to see.
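A rough sketch of what such a system prompt could look like (assumed wording, not tested), written as a JS constant; the calling code would then extract only the function-call JSON, or only the final answer, from the model’s combined output:

const systemPrompt = `
Answer the user in three steps, labelled DRAFT, CHECK and FINAL.
DRAFT: write your best answer from your own knowledge.
CHECK: state whether the draft is accurate and fully answers the user's question.
FINAL: if the draft is sufficient, repeat it; only if it is not, and the user is
asking which games are installed on their computer, call get_games_on_my_computer.
Everything except FINAL is hidden from the user.
`;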


I tried your sample function and prompt. The result is different and behaves correctly.

I tested using this function definition:

{
    name: "get_games_on_my_computer",
    description: "Get list of games available on the user's computer",
    parameters: {
        type: "object",
        properties: {
            games: {
                type: "array",
                items: {
                    type: "object",
                    properties: {
                        name: { type: "string" }
                    }
                }
            }
        },
        required: ["games"]
    }
}

But I called it without a system prompt.
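For reference, the request without a system prompt is essentially just the following (a sketch; the functions entry is the definition shown above):

{
    model: "gpt-3.5-turbo",
    // no system message, only the user's generic question
    messages: [{ role: "user", content: "What can I do with my computer?" }],
    functions: [ /* the get_games_on_my_computer definition above */ ]
}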