Assistants and function/API calls

I’m starting to work with the Assistants API and threads, and I have questions about how external functions/APIs work overall.
I want to use an external API (an API of mine) that GPT can call if needed.
If I understood correctly, depending on the question, GPT decides whether or not it should make an API call.
With classic “functions” (before Assistants), GPT decides whether or not to use the function, and if so, it returns the parameters for it. Then we can use those parameters to call the function locally and return the results to GPT.
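In code, my understanding of the classic flow is roughly this (a minimal sketch; `run_sql` and `my_sql_api` are placeholders for my own API):

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition for my own API
tools = [{
    "type": "function",
    "function": {
        "name": "run_sql",
        "description": "Run a SQL query against my database",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "How many orders did we get last week?"}]
response = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=messages, tools=tools
)
msg = response.choices[0].message

if msg.tool_calls:  # GPT decided to use the function and returned the parameters
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = my_sql_api(args["query"])  # it's up to me to actually call my API
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
    final = client.chat.completions.create(
        model="gpt-4-1106-preview", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```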
My questions are:

  • Do calls to my API work like functions? Does GPT only return the parameters, leaving it up to me to call the API?

Let’s take a concrete example such as text-to-SQL:

How can I do the same with an assistant/thread?
Thanks!


Pretty much the same. The big difference is that it happens as a step in a run: the run then waits, and you have to pass back a result for the run to continue. You can watch this happen nicely in the playground.
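Roughly like this (a minimal sketch, assuming the assistant and thread already exist; `my_sql_api` is a placeholder for your own API):

```python
import json
import time
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

while True:
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        # The run pauses here: GPT has decided to call your function and
        # hands you the parameters, just like classic function calling.
        outputs = []
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = my_sql_api(args["query"])  # you call your own API locally
            outputs.append({"tool_call_id": tool_call.id, "output": json.dumps(result)})
        # Pass the results back so the run can continue.
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=outputs
        )
    elif run.status in ("queued", "in_progress"):
        time.sleep(1)
    else:
        break  # completed, failed, cancelled, or expired
```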


Thanks. When I pass back the results of the API, is there a way to give instructions on how I want GPT to format the results?

It seems to process the returned output the same way as the code interpreter does. If I understand your question, I hope this is relevant:

I send back output from my functions and stdout/stderr from running its scripts (basically a code interpreter on my side), and it totally gets it and knows what to do. To get it up and running I sent back empty strings, then ‘success’, then moved on to real data, and all of it was fine. Put ‘Error’ or that kind of thing in there and it acts the same as it does in the code interpreter: it tries again, etc.

I wouldn’t be surprised if you could just add a message in there and pass back ‘Success. Here’s the data, do [whatever] with it please: [data]’ and it would work.
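In code, that would be something like this (purely a guess that the model honors it; `rows`, `thread`, `run`, and `tool_call` are placeholders from the loop above):

```python
import json

# Prepend an instruction to the data in the tool output and let the model
# follow it: an assumption, not a documented feature.
output = "Success. Here's the data, please answer in JSON only: " + json.dumps(rows)
client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread.id,
    run_id=run.id,
    tool_outputs=[{"tool_call_id": tool_call.id, "output": output}],
)
```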


Thank you. I’m gonna try that, that’s a good idea.

Looks like it wants JSON only. I’ve tried something like:
Success. Here’s the data, please format it like this [whatever]: [data]
But when I click on submit, nothing happens.

I wrote two articles about this: a simple one, Playing with the new OpenAI Assistant and function calling in Python | by Jean-Luc Vanhulst | Nov, 2023 | Medium,
and just now a more advanced one that describes a fully distributed way of doing it inside Django with Celery:
Building a scalable OpenAI Assistant Processor in Django with Celery | by Jean-Luc Vanhulst | Dec, 2023 | Medium


I was only able to submit responses to tool calls by calling the API. Look in the logs of the run, particularly the steps, and you can see how things flow and how it works.
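For example, something like this to inspect the steps (a sketch, assuming `thread` and `run` from earlier):

```python
# List the run's steps to see tool calls, statuses, and how the run flowed
steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)
for step in steps.data:
    print(step.type, step.status)
    if step.type == "tool_calls":
        for call in step.step_details.tool_calls:
            print(call)
```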

Do you know of a way to stop it from trying again? Natively, I mean; I could do it programmatically by setting a variable if the result of the API call is an error. I have noticed my assistant doing this, and it is getting costly!

Not off the top of my head, but it’s set up for that; there’s a ‘max_retry’ or something along those lines.

It sure can burn through tokens; I had a run generate 6 million… To be fair, what it was doing was very impressive: checking its work and fixing things, then making a new plan and getting on with the task.
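The programmatic route you describe could look roughly like this (a sketch; `call_my_api` and the cutoff are placeholders, not a native setting):

```python
import json
import time

error_count = 0
MAX_ERRORS = 2  # my own cutoff, not an API parameter

while True:
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status in ("queued", "in_progress"):
        time.sleep(1)
        continue
    if run.status != "requires_action":
        break  # completed, failed, cancelled, or expired
    tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
    result = call_my_api(tool_call)  # hypothetical wrapper around your API
    if "error" in result:
        error_count += 1
        if error_count > MAX_ERRORS:
            # Give up instead of letting the assistant keep retrying
            client.beta.threads.runs.cancel(thread_id=thread.id, run_id=run.id)
            break
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread.id,
        run_id=run.id,
        tool_outputs=[{"tool_call_id": tool_call.id, "output": json.dumps(result)}],
    )
```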

Thanks for all your responses. I was able to understand Assistants much better by using the Playground. That’s a good way to learn!
Now I’m going to try to see how I can format the assistant’s response a bit.


Did you ever find a way to format the assistant’s response based on the output from the tool(s)?

Yes, using the latest GPT-4 as the model, I put some instructions in the function description and it works. It wasn’t working with GPT-3, I think.
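Roughly like this (a sketch; the function name and the description wording are just placeholders):

```python
# Formatting instructions embedded in the function description
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="You answer questions using the run_sql_query tool.",
    tools=[{
        "type": "function",
        "function": {
            "name": "run_sql_query",
            "description": (
                "Run a SQL query against the database. "
                "When presenting the results to the user, format them as JSON only."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The SQL query to run"}
                },
                "required": ["query"],
            },
        },
    }],
)
```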

Thank you. I have it working now with GPT-4. I returned the JSON string {"answer": "text of my answer"} and it was repeated by the assistant in its message. I did tell the assistant in its instructions that this is what it should do, but I’m not sure that was necessary.