How does an LLM-based application integrate a custom function (API)?

Hi
The general problem here is how to create conversational applications based on GPT-3 that can accomplish task-oriented, deterministic activities, overcoming the partially non-deterministic behavior of generative systems.

In practice, you want to call some custom function (an API acting as information retrieval or performing an action) from within the LLM completions, letting the LLM call that function as needed.

Practical examples? Consider a chatbot that helps customers with the usual informative answers but also allows opening a ticket on some help-desk system. Or consider a question-answering system that needs to retrieve some info in real time, etc.

Let’s deepen the chatbot use case. Let’s pretend it’s a chatbot specialized in weather forecasts.

Suppose you build a chatbot as described in this nice video: https://www.youtube.com/watch?v=ePdmv4ucmb8 . A Python program gets the user's input sentence from the terminal, adds the sentence at the bottom of the dialog, calls the OpenAI completion API, and appends the GPT-3 response at the bottom of the growing text (a minimal sketch of this loop follows the example below):

You are SUNNY (BOT:), my weather forecasts personal assistant. You give me advice and info about weather and related meteorological alerts.

USR: blablabla
BOT: blablabla
USR: blablabla
BOT: blablabla
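For reference, a minimal version of that loop might look like the sketch below (it assumes the legacy, pre-1.0 openai Python package and text-davinci-003; the prompt text and parameters are just placeholders):

import openai  # legacy (pre-1.0) OpenAI Python SDK, Completion endpoint

prompt = ("You are SUNNY (BOT:), my weather forecasts personal assistant. "
          "You give me advice and info about weather and related meteorological alerts.\n")

while True:
    user_input = input("USR: ")
    prompt += f"\nUSR: {user_input}\nBOT:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        stop=["USR:"],  # stop before the model writes the next user turn
    )
    bot_reply = response.choices[0].text.strip()
    print(f"BOT: {bot_reply}")
    prompt += f" {bot_reply}"  # grow the dialog with the bot answer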

Now suppose you want to make the chatbot aware of today's REAL weather forecast in a specific location/city, e.g. by default where I am (Genova, Italy).

Suppose you have some Python function (calling a web service API) that returns the weather forecast. Let's encapsulate that service in a Python function with the signature request_weather(where, when). Usage example:

request_weather('Genova, Italy', 'today')
# => 'Today (January 8, 2023) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.'
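Just to fix ideas, request_weather can be a thin wrapper around whatever weather web service you use; in the sketch below the endpoint URL and the JSON fields are purely hypothetical placeholders:

import requests

def request_weather(where: str, when: str) -> str:
    """Return a natural-language weather statement for a location and day."""
    resp = requests.get(
        "https://example.com/weather",  # placeholder: put your real weather API here
        params={"location": where, "day": when},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()  # assumed fields: 'summary' and 'temperature'
    return (f"{when.capitalize()} in {where} the weather is {data['summary']} "
            f"and the temperature is {data['temperature']} degrees.")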

A first idea (especially for the dialog initialization) is to call the above function before the chatbot turn exchange, injecting the weather forecast statement (and any other useful meteorological data) into the above prompt, 'statically'. Like that:

You are SUNNY (BOT:), my weather forecasts personal assistant. You give me advice and info about weather and related meteorological alerts.

Weather forecast: today (25/12/2022) in Genoa the weather is beautiful, there is the sun and the temperature is 20 degrees.

BOT: Hi! I’m Sunny, your meteo assistant. You can ask me the weather!
USR: what's the weather today?
BOT: Today in Genoa the weather is beautiful, with the sun shining and the temperature at 20 degrees. Enjoy the day!

BTW, in the above prompt example the BOT utterances are real text-davinci-003 replies (running in the Playground, with default parameters).
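Programmatically, this 'static' injection is just string formatting of the initial prompt, something like (a sketch reusing request_weather from above):

# Call the weather API once, before the dialog starts, and inject the result.
forecast = request_weather('Genova, Italy', 'today')

prompt = (
    "You are SUNNY (BOT:), my weather forecasts personal assistant. "
    "You give me advice and info about weather and related meteorological alerts.\n\n"
    f"Weather forecast: {forecast}\n\n"
    "BOT: Hi! I'm Sunny, your meteo assistant. You can ask me the weather!\n"
)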


OK, the static injection works! The LLM answers with the data specified in the prompt itself. But what if the user asks, in the middle of the dialog, about the weather in another location, e.g. New York?

Suppose you want to answer the user not with a fake answer (GPT can invent anything if you let it run free) but with the real weather forecast for New York. In this case you need to call the function request_weather('New York, USA', 'today').

In general, you want to call an external API from inside the dialog. How to implement that feature?

  • Idea 1
    What comes to mind is to catch the user's 'intent' before the GPT-3 completion request, i.e. implement a usual intent classifier on top of GPT-3. Maybe, but I don't like this solution because I'd have to build that classifier by hand, with the usual effort (preparing the intent/entity phrase dataset, etc.). Let's put this idea aside.

  • Idea 2
    Instruct the GPT-3 model, with a specific prompt, to reply with a 'command' (with a specified syntax) when it matches a known intent (aka command or function), roughly as shown here: https://beta.openai.com/examples/default-text-to-command.

Idea 2 implies calling the function request_weather when the model returns this command inside the answer (let's call it a back-command). The Python program can catch the back-command, run the corresponding function, and take its return text (say, a text describing the weather); the dialog prompt is then appended with that text as the response to the user.

Using text-davinci-003, it seems to work! I inserted some pseudo-code instructions in the initial prompt. See the example below:

You are SUNNY (BOT:), my weather forecasts personal assistant. You give me advice and info about weather and related meteorological alerts. Reply as described in the INSTRUCTIONS section, using the data about me.

DATA:
My name is Giorgio.
I’m now in Genova, Italia.

INSTRUCTIONS:
Do not invent info.
If I ask for the weather forecast, reply with a single statement command: request_weather('where', 'when').
If you don't have the attributes 'where' and 'when', retrieve them from the previous conversation or use the defaults in the previous DATA section.
If you don't find them, ask me.

CONVERSATION:
BOT: Hi! I’m Sunny, your meteo assistant. You can ask me the weather!

USR: what's the weather today?
BOT: Sure thing, Giorgio. The weather in Genova, Italia today is: request_weather('Genova', 'today').

USR: aftertomorrow will it rain?
BOT: Sure thing, Giorgio. The weather in Genova, Italia aftertomorrow is: request_weather('Genova', 'aftertomorrow').

USR: next week?
BOT: Sure thing, Giorgio. The weather in Genova, Italia next week is: request_weather('Genova', 'next week').

USR: I mean in Milano
BOT: Sure thing, Giorgio. The weather in Milano, Italia next week is: request_weather('Milano', 'next week').

BTW, in the above prompt example the BOT utterances are REAL text-davinci-003 replies (running in the Playground, with default parameters).

The solution seems to work with just the above prompt (I suppose fine-tuning could also help to get the back-command generated correctly, by providing explicit examples).
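For example, using the legacy prompt/completion fine-tuning format, a couple of (purely hypothetical) training examples for the back-command could be prepared like this:

import json

# Hypothetical examples teaching the model to emit the back-command syntax.
examples = [
    {"prompt": "USR: what's the weather today?\nBOT:",
     "completion": " The weather in Genova, Italia today is: request_weather('Genova', 'today').\n"},
    {"prompt": "USR: will it rain in Milano next week?\nBOT:",
     "completion": " The weather in Milano, Italia next week is: request_weather('Milano', 'next week').\n"},
]

with open("weather_backcommands.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")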

So, on top of the LLM, a Python 'controller' program layer can SUBSTITUTE the back-command generated by the LLM with the real invocation of the corresponding function.
Here is a sketch in Python:

import re

# Match a back-command like: request_weather('Milano', 'next week')
request_weather_pattern = re.compile(r"request_weather\('([^']*)',\s*'([^']*)'\)")

match = request_weather_pattern.search(completion)  # boring hard-coded regexp match
if match:
    where, when = match.groups()
    weather_reply = request_weather(where, when)  # => "sunny!"
    completion = completion.replace(match.group(0), weather_reply)
    # => "BOT: Sure thing, Giorgio. The weather in Milano, Italia next week is: sunny!"

Architecture:
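In words: the controller sits between the terminal and the LLM; each turn it asks for a completion, scans it for back-commands, substitutes the real function output, and appends the result to the dialog. A rough per-turn sketch (get_completion stands in for the openai.Completion.create call, and request_weather_pattern is the regexp defined above):

def controller_turn(prompt: str, user_input: str) -> tuple[str, str]:
    # One dialog turn: LLM completion, then back-command substitution.
    prompt += f"\nUSR: {user_input}\nBOT:"
    completion = get_completion(prompt)  # hypothetical wrapper around the completion API
    match = request_weather_pattern.search(completion)
    if match:  # the LLM emitted a back-command
        where, when = match.groups()
        completion = completion.replace(match.group(0), request_weather(where, when))
    prompt += f" {completion}"
    return prompt, completion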


Does it make sense? Pros and cons? Any suggestion is very welcome.
Thanks
Giorgio


I did that in my experiment on an AI that builds its own source code; you can look there for some ideas: A technical look into "A PoC for an AI that expands its features by building its own source code." - YouTube


BTW, here is a screenshot that maybe better explains the way I found to call an external #API from a #GPT3 prompt modified at run-time, to implement a #chatbot that supplies weather forecasts (with some API integration).



Any suggestion is very welcome.


A possible solution using langchain here: Custom tool (function): it's possible to have multiple input arguments? · Issue #832 · hwchase17/langchain · GitHub


Simply calling a function is maybe nice. But how do you ensure that the function is called at the right time? E.g. call the function gettemp(roomid) only when the user requested the temperature, and if you do not have a room number, ask for one. For a single function that's maybe easy to describe with a prompt, or maybe by doing some fine-tuning, but if you have many functions, how do you ensure they all work together? I think the OpenAI plugins will also not solve this issue…??? Or maybe I just need bigger training files for fine-tuning?!?