I’d like to get text content as well as a function call back in the same response. Is this possible?
If it isn’t possible, what is the best practice?
The most advanced example on GitHub just has the assistant return the SQL results, but it doesn’t respond with text that contains more than the raw information, which would be ideal.
For example, rather than just getting back [('Iron Maiden', 213), ('U2', 135), ('Led Zeppelin', 114), ('Metallica', 112), ('Lost', 92)], it would be nice if we could get back “The top 5 artists by number of tracks are Iron Maiden with 213, U2 with 135, Led Zeppelin with 114, etc.”
Or it would be nice if the content could respond to a larger prompt. For example, if I send a prompt and force a function call, where the prompt is “How are you doing today?” and the forced call is get_time, I’d get back content of “I’m doing great, thanks for asking” plus the function call returning “10:50 AM”, rather than just content: None.
I think the latter is more important to me actually.
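For reference, a minimal sketch of that forced call, assuming the openai Python SDK and a hypothetical get_time function; when the call is forced, content usually comes back as None:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical get_time function schema; the model is forced to call it.
functions = [{
    "name": "get_time",
    "description": "Return the current local time",
    "parameters": {"type": "object", "properties": {}},
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How are you doing today?"}],
    functions=functions,
    function_call={"name": "get_time"},  # force this specific function
)

msg = response.choices[0].message
print(msg.content)                   # usually None when a call is forced
print(msg.function_call.name)        # "get_time"
print(msg.function_call.arguments)   # "{}"
```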
you ask something the AI thinks it can use a function to answer
the AI calls the function chart_hits_per_artist(genre, search_years)
you make another AI call, giving back your raw retrieved data in a function-role message
the AI writes a response augmented by the knowledge
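A minimal sketch of that loop, assuming the openai Python SDK and a hypothetical chart_hits_per_artist function implemented locally:

```python
import json
from openai import OpenAI

client = OpenAI()

def chart_hits_per_artist(genre, search_years):
    # Hypothetical local lookup: query your own database or another API here.
    return {"Nirvana": 7, "Soundgarden": 3}

functions = [{
    "name": "chart_hits_per_artist",
    "description": "Count chart hits per artist for a genre and year range",
    "parameters": {
        "type": "object",
        "properties": {
            "genre": {"type": "string"},
            "search_years": {"type": "string"},
        },
        "required": ["genre", "search_years"],
    },
}]

messages = [{"role": "user", "content": "How many hits did Nirvana have in the 90s?"}]

# 1) the AI decides it can use a function to answer
first = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages,
    functions=functions, function_call="auto",
)
call = first.choices[0].message.function_call  # assuming the model chose to call

# 2) you run the function locally with the arguments the AI chose
args = json.loads(call.arguments)
result = chart_hits_per_artist(**args)

# 3) you send the raw retrieved data back in a "function" role message
messages.append({"role": "assistant", "content": None,
                 "function_call": {"name": call.name, "arguments": call.arguments}})
messages.append({"role": "function", "name": call.name, "content": json.dumps(result)})

# 4) the AI writes a response augmented by that knowledge
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```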
It is possible to have the AI both say “In order to answer this question, I’ll search by alternative genre to find how many hits Nirvana had.” and also write the function call, but it takes specific, deliberate prompting language to get it to produce that preliminary text.
But you aren’t “getting back” this data from the LLM; this is the data you are going to find locally or via another API and send to the LLM as the answer, for it to incorporate into its knowledge.
The LLM is not responsible for knowing anything about the charts; you are going to supplement its knowledge to significantly reduce the risk of it hallucinating.
In this paradigm you are only using the LLM to manipulate language, not determine facts, i.e. playing to its strengths.
Now you are correct. I guess I need to re-look at that example. I thought it was responding with that information, but perhaps it was doing so only after being provided with it and a secondary prompt?
That said, shouldn’t it then be even easier to have an actual content response along with the function call response?
You can start with, “Before you use API searches for music information, explain to the user how you will fulfill their requests with the functions provided.” Then customize that text to what needs to be presented for your app, or bring out longer thought processes the AI must discuss before it calls a function.
Then refine over and over to get it to do both a response and function call reliably, as it’s not tuned that way and must be instructed.
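As a sketch of where that instruction might live (the exact wording is something you would iterate on), it can simply be part of the system message:

```python
system_prompt = (
    "Before you use API searches for music information, explain to the user "
    "how you will fulfill their request with the functions provided. "
    "Always write that explanation as message content, then make the function call."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How many hits did Nirvana have?"},
]
```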
I think the first thing to get here is the overall concept, before messing with the LLM to force unintended behaviour.
This system is designed to provide a framework for call and response: “factual” knowledge enrichment, then the final “money” step, which is providing a well-worded response to the user, usually having hidden all the interstitial steps and “cheat sheet provision” from the user.
If you use GPT-3.5, these multiple calls aren’t even that expensive or time-consuming.
That’s a valid use of a local function too, if provided with the right inputs that the LLM can decide upon.
You could not only provide the output of a local function but also do some local processing to change local state, and return a representation of that state to the LLM as needed.
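For example, a sketch with hypothetical names: the local function mutates state, then hands back a snapshot of the new state for the model to describe:

```python
import json

# Hypothetical local state the LLM never touches directly.
player_state = {"volume": 40, "now_playing": "Iron Maiden - The Trooper"}

def set_volume(level: int) -> str:
    """Change local state, then return a JSON snapshot for the LLM."""
    player_state["volume"] = max(0, min(100, level))
    return json.dumps(player_state)

# The returned string is what would go into the "function" role message.
print(set_volume(65))
```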
Right, but it just seems a bit silly to me that if I want the AI to say something and direct an action at the same time, I can’t; I have to get it to first direct the action and then get it to say something, or vice versa. I see no reason why we can’t do both at the same time.
I think I’ll manage to work around this limitation, but it also seems like it shouldn’t really be a limitation
For an “action prompt” (“post this tweet”), the AI should still see each conversation-history turn: that it got the question, made the request, got a result, and then responded about the success.
Eliminating steps can give the impression of incompletion or error, and then you get a loop.
From the user’s perspective, the AI simply answers the question.
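Concretely, one complete turn of that history might look like this sketch (post_tweet and its result here are hypothetical):

```python
messages = [
    {"role": "user", "content": "Post this tweet: 'Hello world'"},
    # the model requested the action
    {"role": "assistant", "content": None,
     "function_call": {"name": "post_tweet",
                       "arguments": '{"text": "Hello world"}'}},
    # your code ran it and reported the outcome
    {"role": "function", "name": "post_tweet",
     "content": '{"status": "ok"}'},
    # the model confirmed success to the user
    {"role": "assistant", "content": "Done! Your tweet has been posted."},
]
```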
The to-and-fro meaning the message history. In the end, the user will just see the last response.
I was framing this from the view that I should try to minimize the message history as much as possible, or that maybe I didn’t really need that much history. But at least with the way functions are designed, I’ll need to include that command in the chain prior to the actual response I want the user to hear.
Well, eventually, once everything is up and running, you might be able to minimise this.
But you don’t have to maintain everything in the prompt you send back to the LLM, only the critical stuff you need to move to the next state … so this includes any knowledge which must be taken into account, including any new state.
So by “to and fro” I’m not just referring to an ever-elongating prompt … I’m just referring to the looping exchange between the LLM and your logic. It doesn’t all have to be sent back again, necessarily.
So does the auto specifier for the function call take into account the message history to determine whether or not a specific function should get called?
Like, if there’s a pool of functions that could potentially get called, or even whether a single function should be called at all, I would like to leave that up to the discretion of GPT. It needs some information to have the context to know: “Hey, should I call this?” or “Hey, I should call this one function out of five.”
So I’m assuming it will use the message history to decide what should happen when it’s set to auto?
In my chatbot I send recent history, but sometimes you change subject and ask a very off-topic question, so my hunch tells me the last message is probably the most significant in determining what function is called.
And let’s be careful here … by recent history I’m talking about the summary layer of the conversation visible to the user, not the internal thoughts of the agent piecing together the answer, which will only be maintained for the current QnA step.
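A sketch of leaving the choice to the model: pass the whole pool with function_call="auto" along with the recent history, and the model decides whether to call anything, and which (the function names here are hypothetical):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical pool of functions the model may or may not call.
functions = [
    {"name": "get_time", "description": "Current local time",
     "parameters": {"type": "object", "properties": {}}},
    {"name": "get_weather", "description": "Weather for a city",
     "parameters": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]}},
]

# The summary layer of the conversation shown to the user.
recent_history = [
    {"role": "user", "content": "What's the weather like in Dublin right now?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=recent_history,
    functions=functions,
    function_call="auto",   # model decides: call one of these, or none
)

msg = response.choices[0].message
if msg.function_call:       # None if the model chose to just answer in text
    print("model chose:", msg.function_call.name)
else:
    print(msg.content)
```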
Yes, this is possible. I had the same problem and racked my brain for ages on how to do it. I simply have another text parameter in the function call which is for the message I want displayed to the user. I can therefore send the parameters of the function I want called, plus the message, all in the same function call.
This only works if you are maintaining two separate conversations, i.e. the one you want shown to the user and the one you’re having with the LLM.
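A sketch of that schema, with a hypothetical user_message parameter sitting alongside the real arguments; the user-facing conversation gets user_message, while the internal conversation keeps the full function call:

```python
functions = [{
    "name": "get_time",
    "description": "Return the current local time in the given timezone",
    "parameters": {
        "type": "object",
        "properties": {
            "timezone": {"type": "string", "description": "IANA timezone name"},
            "user_message": {
                "type": "string",
                "description": "Friendly text to display to the user alongside "
                               "the function call, e.g. acknowledging their question",
            },
        },
        "required": ["timezone", "user_message"],
    },
}]

# After the model responds with a function call:
#   args = json.loads(msg.function_call.arguments)
#   show args["user_message"] in the conversation the user sees,
#   and keep the full call only in the internal conversation with the LLM.
```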
@dan01962 can you show me an example of this? How did you achieve it? I would like to do this via streaming: the idea is to stream the output message and then call the function on the given inputs once the streaming is done. Also, what do you mean by having two conversations? Does this mean you pass the same input to the LLM twice?