The docs say: “An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling.”
When I ask an assistant to generate an image in the Playground, it says:
“As an AI text model, I’m not currently equipped with the ability to create pictures or visual content directly”
And:
“I’m sorry for any confusion, but as an AI developed by OpenAI, I don’t have direct integration with DALL·E or the capability to generate images. DALL·E is a separate AI system, also developed by OpenAI, that can create images from textual descriptions.”
For now, both DALL·E and GPT-4V are unavailable in the Assistants API, though OpenAI says they are working on adding them.
From the docs: “The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling. In the future, we plan to release more OpenAI-built tools, and allow you to provide your own tools on our platform.”
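For context, this is roughly how those three tool types are declared when creating an assistant with the Python SDK. A minimal sketch against the current beta surface; the model name and the get_weather function are placeholders, not anything from the docs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One assistant can declare all three supported tool types side by side.
assistant = client.beta.assistants.create(
    name="Demo assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4-1106-preview",  # placeholder; any Assistants-capable model
    tools=[
        {"type": "code_interpreter"},
        {"type": "retrieval"},
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function, for illustration only
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        },
    ],
)
print(assistant.id)
```

Note that the function tool is only a schema: the API pauses the run and hands the call back to your code to execute, which is relevant to the image question above.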
What do you think the difference will be, in their minds, between a “Tool”, an “Action”, and a “Plugin”? They are deprecating Plugins but plan to allow “custom tools” in the future? Seems confusing to me.
I believe Actions/Plugins are exclusive to ChatGPT.
Good point, though. I’m not sure what they mean by “custom tools”; I would have thought function calling already covers that. Maybe an additional store for pre-built functions? Like an API store for assistants?
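For what it’s worth, here’s a rough sketch of why function calling already seems to cover the DALL·E gap: declare a function schema as a tool, then service the call yourself against the separate Images API when a run stops with requires_action. The generate_image name and schema are invented for this sketch; only client.images.generate is a real SDK call:

```python
import json

from openai import OpenAI

client = OpenAI()

# Hypothetical "custom tool": a function schema wrapping the separate
# Images API, since the Assistants API exposes no DALL·E tool type itself.
image_tool = {
    "type": "function",
    "function": {
        "name": "generate_image",  # name invented for illustration
        "description": "Generate an image from a text prompt using DALL·E 3.",
        "parameters": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    },
}

def handle_generate_image(tool_call):
    """Service the function call when a run pauses with requires_action."""
    args = json.loads(tool_call.function.arguments)
    image = client.images.generate(model="dall-e-3", prompt=args["prompt"], n=1)
    return image.data[0].url  # submit this string back as the tool output
```

If “custom tools” turns out to be a hosted registry of schemas like this one, that would line up with the API-store guess.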