Hi there! We’re embedding a custom GPT in our app and started using the Assistants API for that purpose. For a first release, it’s pretty powerful and works great. We wanted to share our initial feedback after a few weeks of hacking on it:
- Please add a lot more code examples to the docs. Some parts, like “Function calling”, are not obvious at first sight. It took us some digging to really understand how to call an external API with it. You’d expect to be able to provide an endpoint somewhere, but it doesn’t work like that. We figured it out in the end, but the documentation could be greatly improved by better describing how function calling works, with a concrete example of calling an external API. Judging by the forum, we’re not the only ones who were puzzled at first.
- Output streaming support is an absolute must. Without it, the UX is subpar and clearly inferior to what users expect. We have no doubt you’ll ship it soon, but we still wanted to mention it.
- Incomplete third-party library support (as of today, at least). The Assistants API seems to be only partially supported by LangChain and LlamaIndex. Lazy devs that we are, we were hoping to use a wrapper to simplify our lives, but from what we saw, function calling via the Assistants API is not yet supported by those wrappers (let us know if we’re mistaken here).
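
For anyone else puzzled by the function-calling point above: the model never calls your endpoint itself. The run pauses in a `requires_action` state and hands you tool calls that *you* execute, then you return the results via `submit_tool_outputs`. Here is a minimal sketch of the part we had to figure out; `get_weather` and the tool schema are our own hypothetical example (not from the docs), and tool calls are shown as plain dicts for illustration, whereas the SDK returns objects with matching attributes:

```python
import json

# Hypothetical external "API" -- in a real app this would be an HTTP call
# to your own service (e.g. via requests.get).
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}

# Tool schema passed when creating the assistant. Note there is no URL or
# endpoint anywhere: the model only emits a call you must run yourself.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def handle_tool_call(tool_call: dict) -> dict:
    """Execute one tool call from a run in 'requires_action' state and
    build the corresponding entry for submit_tool_outputs."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_weather":
        result = get_weather(**args)
    else:
        result = {"error": "unknown tool"}
    return {"tool_call_id": tool_call["id"], "output": json.dumps(result)}
```

In the polling loop, when `run.status == "requires_action"`, you map something like `handle_tool_call` over `run.required_action.submit_tool_outputs.tool_calls` and pass the resulting list to `client.beta.threads.runs.submit_tool_outputs(...)`, after which the run resumes.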