I’ve been working on a Chatbot for my team. I was able to get function calling to work, and was able to complete certain tasks with it. I moved on to some other, unrelated work and when I came back, my functions were no longer being used by the Assistant.
I have tried every configuration I can think of for the functions, and I’ve searched the forums without finding any open issues related to this problem.
I am confused as I can see the functions in the assistants playground: https://platform.openai.com/playground?assistant=asst_z55uKcCcMnhIBotULYsLNpqz&mode=assistant&thread=thread_hynHFUFHjoVOHpb909U7WPFN
However, these functions are never called. When I trace the application, the Run object never enters the “requires_action” state. I also created a placeholder function named “florp” to make sure it wasn’t just my prompting, but it didn’t recognize the “florp” name at all.
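For anyone tracing the same thing: here is a minimal sketch of the polling logic I’m describing, using a plain dict to stand in for the SDK’s Run object (the status values are the documented Assistants API run states; `next_step` is just my own hypothetical helper name).

```python
# Sketch of the client-side polling loop that watches for tool calls.
# A plain dict stands in for the Run object returned by the SDK.
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def next_step(run: dict) -> str:
    """Decide what the client loop should do for a given run state."""
    status = run.get("status")
    if status == "requires_action":
        # This is the state I never see: the Assistant wants a tool result.
        return "submit_tool_outputs"
    if status in TERMINAL:
        return "stop"
    return "poll"  # queued / in_progress: keep waiting
```

In my traces the loop only ever returns "poll" and then "stop" — `requires_action` never comes up.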
It seems to not be a problem with how I’m defining the functions, as they show up in the Assistants Playground. Are there some other “gotchas” I should be aware of? Some bit of configuration to enable these besides just passing the functions as tools? I’m more than happy to provide more context if this isn’t enough.
Is it possible that you overwrote the instructions for the Assistant by providing instructions in the Thread? I made that mistake once, and it took me quite a while to realize it.
Thanks for the reply!
I am aware of this issue and did the same thing previously! I ended up switching to passing “additional_instructions” to the Run instead, so I didn’t accidentally overwrite it again.
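For reference, a minimal sketch of how I build the Run call now — passing `instructions` at Run creation replaces the Assistant’s own instructions, while `additional_instructions` is appended (both are documented Run parameters; `build_run_payload` is my own illustrative helper).

```python
def build_run_payload(assistant_id: str, extra: str = "") -> dict:
    """Build kwargs for a Run-create call without touching the
    Assistant's own instructions. Passing `instructions` here would
    overwrite them; `additional_instructions` is appended instead."""
    payload = {"assistant_id": assistant_id}
    if extra:
        payload["additional_instructions"] = extra
    return payload
```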
The moment I created this ticket, I deleted all of the instructions from the assistant in the Assistants Playground and ran it - then it worked! For some reason, the instructions were preventing it from running. I am getting close to the limit of the instructions (32K chars)… does the instructions param share context with the tools? That doesn’t sound right to me, but I can’t figure out why deleting my instructions would make my chatbot work.
Yes, I believe so - the 32K total for instructions would logically include the functions. That is still a lot, but with functions it can go fast.
Well, with NO instructions it will still see the functions, which are usually pretty descriptive by themselves, so I can totally see that.
One more thing - did you add the functions programmatically or by pasting them into the back end? I have made mistakes by uploading wrongly formatted JSON, and only discovered some of those errors recently when they added error checking to the back end. Right now you cannot enter wrong JSON for a function.
That’s a shame. I wasn’t expecting that as it isn’t called out in the docs. They just say that the instructions have a 32K char limit and the tools are limited to 128 separate tools:
I’ll have to rework this to pass in the extra context from instructions in another way, I presume.
This is all coded up in Python, so it’s all overwritten every time in the backend. I basically look for an assistant with the same name, and if I find it, I call update on it with the latest instructions/tools.
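The lookup-by-name flow above can be sketched like this, with plain dicts standing in for the SDK’s Assistant objects (`upsert_action` is my own illustrative helper, not an SDK call):

```python
def upsert_action(existing: list, name: str):
    """Decide whether to update an existing Assistant with this name
    or create a new one. `existing` is the list of assistant records
    (dicts with at least 'name' and 'id') already on the account."""
    for assistant in existing:
        if assistant.get("name") == name:
            return ("update", assistant["id"])  # reuse and update in place
    return ("create", None)                     # no match: create fresh
```

Whichever branch this returns, the actual API call then sends the full latest instructions/tools, so the backend copy is overwritten every run.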
Mm, that would also use tokens, I think, just for updating. Normally not an issue, but if you do it all the time with a 32K instruction set… I’m curious, though: how many functions do you have? Are parts of your instructions really ‘context’ rather than actual instructions - something you could add as files to the Assistant instead?
Thank you for your help @jlvanhulst , I think I understand where I went wrong now.
Based on the API documentation, I was under the impression that the instruction set and tool functions did not count against the LLM’s context tokens. I thought this was a new technology that was able to incorporate this information separately from that context.
I did just start working with this technology, so this is all research at this point. I expected to come across misunderstandings of the tool. I absolutely agree, I’m going to look at breaking this extra context out into something the RetrievalTool can provide.
In terms of functions, I only have 3 right now that I’ve built, but I do expect to expand that greatly. I will have to be careful how much context I’m passing. I’ll probably end up building some function that can monitor my token usage for these tool functions and instructions.
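A first cut of that monitoring function might just count characters, since the documented instructions limit is expressed in characters (32K); a real token count would need a tokenizer, but characters give a cheap sanity check. `budget_report` is my own hypothetical helper name:

```python
import json

INSTRUCTIONS_CHAR_LIMIT = 32_000  # documented limit for the instructions field

def budget_report(instructions: str, tools: list) -> dict:
    """Rough size check for an Assistant config. Counts characters,
    not tokens: chars are what the instructions limit is stated in,
    and serialized-tool size gives a feel for how fast it grows."""
    return {
        "instruction_chars": len(instructions),
        "tool_chars": len(json.dumps(tools)),
        "instructions_over_limit": len(instructions) > INSTRUCTIONS_CHAR_LIMIT,
    }
```

Running this on every deploy would have flagged my instructions creeping toward the limit long before the functions silently stopped firing.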
I must say this has been a wonderful discussion and very enlightening, which is a welcome departure from other experiences I’ve had on different forums. I appreciate your help.
I am adding functions to my Assistant programmatically and this was working a week ago. Now the response when I add the functions is that they are all added as code_interpreter. They do not appear as functions when I look in playground or in the response to my call to openaiAPI. Something has changed.
An older Assistant called Strategy, with exactly the same tools I’m trying to add, works fine. Now I can’t add those tools.
Have you tried manually adding the exact JSON created through the back end?
I ran into several ‘JSON errors’ that way - and only recently has the backend received decent error checking as well, i.e. it will tell you what is wrong.
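One way to catch those errors before the API does is to validate each function definition locally. This is a sketch, assuming the documented function-tool shape (top-level `name`, `description`, and a JSON Schema `parameters` object); `validate_function_tool` is my own illustrative helper:

```python
import json

def validate_function_tool(raw: str) -> list:
    """Parse and sanity-check one function-tool definition before
    sending it to the API. Returns a list of problems; an empty
    list means the definition looks structurally OK."""
    try:
        spec = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for key in ("name", "description", "parameters"):
        if key not in spec:
            problems.append(f"missing required field: {key}")
    params = spec.get("parameters")
    if isinstance(params, dict) and params.get("type") != "object":
        problems.append("parameters.type should be 'object'")
    return problems
```

Running every definition through a check like this on deploy would surface the malformed entries instead of letting them be silently mis-registered.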