Hi, and welcome to the developer forum.
You have an interesting case here. It is always instructive to dig into how AI models operate, find out how they "think", and discover where their shortcomings lie.
The AI will call a function when it decides that the returned value will help it answer the user better with the retrieved information.
However, the twist is that you don’t want an answer directly from that retrieval.
You, as the user, have entered a task that does invoke the function call, likely passing some parameters about the type of book. I'll imagine you ask about a 'sci-fi' category in your long list, where a function is an efficient way to avoid loading the AI up with every book you own:
“Take a look at the sci-fi books I own, and recommend three others I might like based on my previous purchases”
We can jump right into writing an additional system directive for the AI. I think the following might work for this case, but success ultimately depends on the other text and input, and on the cognition of the AI model you are using.
// AI librarian tasks
You may often perform one of two types of tasks:
- search for information within and using the owner’s list of books, or
- only use the information about the owner’s books to answer about completely different books.
In the second case, where you are making recommendations or producing answers that rely on your own knowledge, be sure not to include titles from API returns that are already owned. Recommendations must be completely new titles sourced from pre-existing AI knowledge, not from the library function.
Wordy, but that should be all-encompassing and should cover many possible scenarios.
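To make this concrete, here is a minimal sketch of how that directive and a book-search function might be wired together in the chat completions tool format. The function name `search_owned_books` and its parameters are my own illustration, not something from your setup:

```python
# Hypothetical wiring for the librarian assistant. The system directive is the
# one proposed above; the tool schema is an illustrative example.
SYSTEM_DIRECTIVE = """\
// AI librarian tasks
You may often perform one of two types of tasks:
- search for information within and using the owner's list of books, or
- only use the information about the owner's books to answer about
  completely different books.
In the second case, do not include titles from API returns that are
already owned. Recommendations must be completely new titles sourced
from pre-existing AI knowledge, not the library function."""

# Example tool definition: lets the model fetch only one category at a time,
# so the full library never has to be placed in context.
BOOK_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_owned_books",
        "description": "Search the owner's personal library by genre.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "description": "Genre to filter on, e.g. 'sci-fi'",
                },
            },
            "required": ["category"],
        },
    },
}
```

Both the directive and the tool list would then be passed on every request, alongside the user's question.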
If you want to be really creative, you could give your function another property, "purpose". Give that purpose string two enum values: ["research", "recommendations"]. If the AI decides to use the recommendations purpose, you can prefix the return value sent back to the AI with similar instructions and avoidance techniques.
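Sketching that idea (the property and helper names here are hypothetical), the schema gains a `purpose` enum, and your code that formats the function result for the model prepends a reminder whenever the model declared a recommendation purpose:

```python
# Hypothetical "purpose" property to add to the function's parameter schema.
PURPOSE_PROPERTY = {
    "type": "string",
    "enum": ["research", "recommendations"],
    "description": "Why the book list is being fetched.",
}

# Avoidance instructions prepended to the tool result for recommendations.
AVOIDANCE_PREFIX = (
    "The titles below are ALREADY OWNED. Do not recommend any of them; "
    "suggest only new titles from your own knowledge.\n"
)

def build_tool_result(purpose: str, owned_titles: list[str]) -> str:
    """Format the function return value that is sent back to the AI."""
    body = "\n".join(owned_titles)
    if purpose == "recommendations":
        return AVOIDANCE_PREFIX + body
    return body
```

So `build_tool_result("recommendations", ["Dune", "Hyperion"])` returns the owned titles behind the warning, while a "research" purpose returns them unadorned.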
If that falls flat on its face, you might consider gpt-4 as your AI model.