Chat completions keeping context from tool reply

My use case is fairly simple: I have a JSON list of books (titles, authors) and I'm using the Chat Completions API as a "librarian". For some reason, when I specifically ask it to suggest a book NOT in my list, the following happens:

  1. It successfully calls my function.
  2. The function returns my book list.
  3. However, when returning my answer, it picks a book that IS in my list but claims that it is not in my list, and offers it as an answer to the prompt.
    (FYI: when I ask it whether I have a book in my list, it answers correctly.)

I need to send a subsequent message for it to look outside the actual function results and into its training data. Has anyone had this happen before? My system prompt clearly tells it to look for recommendations in its training data.

Hi, and welcome to the developer forum.

You have an interesting case here, and it is always interesting to dig into the operation of AI models to find out how they think - and where their shortcomings are.

The AI will call a function when it thinks the response value will allow it to better answer the user from the information retrieved.

However, the twist is that you don’t want an answer directly from that retrieval.

You, as the user, have input a task that does invoke the function call, likely taking some parameters about the types of books. I'll imagine you ask about a "sci-fi" category in your long list, where a function is an efficient way to avoid loading the AI up with every book you own:

“Take a look at the sci-fi books I own, and recommend three others I might like based on my previous purchases”
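As a rough sketch of that setup (the function name and `category` parameter here are hypothetical, not something from your code), the definition you pass in the `functions` list might look like:

```python
# Hypothetical function definition for a librarian assistant.
# The model fills in "category" from the user's request ("sci-fi"),
# and your code returns the matching slice of the book list.
get_owned_books = {
    "name": "get_owned_books",
    "description": "Return the owner's book list, optionally filtered by category.",
    "parameters": {
        "type": "object",
        "properties": {
            "category": {
                "type": "string",
                "description": "Genre to filter by, e.g. 'sci-fi'.",
            },
        },
        "required": [],
    },
}
```

That way the model only ever sees the relevant subset of the library, not the whole list.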

We can jump right into writing an additional system directive for the AI, one that I think might work for this case, but is ultimately up to the other text and input, and the cognition of the AI model you are using.

// AI librarian tasks
You may often perform one of two types of tasks:

  1. search for information within and using the owner’s list of books, or
  2. only use the information about the owner’s books to answer about completely different books.

In the second case, where you are making recommendations or producing answers that rely on your knowledge, be sure not to include titles from API returns that are already owned. Recommendations must be completely new titles sourced from pre-existing AI knowledge, not from the library function.

Wordy, but that should be all-encompassing, and should cover many possible scenarios.

If you want to be really creative, you could give your function another property, "purpose". Give that purpose string two enums: ["research", "recommendations"]. If the AI decides to use the recommendation purpose, you can prefix the return value you send back to the AI with similar instructions and avoidance techniques.
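A sketch of that idea, assuming the same hypothetical function name as before (the schema, note text, and helper are all illustrative):

```python
import json

# Hypothetical schema with a required "purpose" enum, so the model must
# declare why it is calling the function before it gets the book list.
get_owned_books = {
    "name": "get_owned_books",
    "description": "Return the owner's book list.",
    "parameters": {
        "type": "object",
        "properties": {
            "purpose": {
                "type": "string",
                "enum": ["research", "recommendations"],
                "description": "Why the list is needed.",
            },
        },
        "required": ["purpose"],
    },
}

# Avoidance instruction prepended to the tool result in the
# "recommendations" case.
AVOIDANCE_NOTE = (
    "Reminder: every title below is ALREADY OWNED. Recommend only books "
    "NOT in this list, sourced from your own knowledge.\n"
)

def build_function_result(purpose: str, books: list) -> str:
    """Build the function-role message content sent back to the model,
    prefixing avoidance instructions when the declared purpose is
    'recommendations'."""
    payload = json.dumps(books)
    if purpose == "recommendations":
        return AVOIDANCE_NOTE + payload
    return payload
```

So a "research" call gets the plain list, while a "recommendations" call gets the list wrapped in a do-not-repeat-these reminder, right at the point where the model is most likely to go wrong.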

If that falls flat on its face, you might consider gpt-4 as your AI model.

This is an amazing response. And yes, I will be adding category - I don't have it right now because I just want to test things out.

Let me try that - I'm using 3.5 turbo as of now. It definitely is a weird circumstance and I'm fairly surprised it's behaving this way. Let me try out your suggestion and see what I get.
