Custom GPT vs Assistants API + Completions API

I am currently using a combination of the Assistants API (as the frontend for user conversation) and its function calling, which I use to retrieve the correct context from my vector DB (Weaviate); that context is then sent to the Chat Completions API to render the final results.
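The pipeline described above can be sketched roughly like this. Note this is a minimal illustration, not the poster's actual code: the in-memory `DOCS` list stands in for a Weaviate near-text query, and the final Chat Completions call is only shown as a comment.

```python
# Stand-in corpus; in the real setup these would live in Weaviate
# and retrieval would be a vector (near-text) search.
DOCS = [
    "Custom GPTs can call external endpoints via Actions.",
    "The Assistants API supports function calling for retrieval.",
]

def retrieve_context(query: str, top_k: int = 1) -> list[str]:
    """Toy retrieval: rank docs by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:top_k]

def build_messages(question: str) -> list[dict]:
    """Assemble the messages that would go to the Chat Completions API."""
    context = "\n".join(retrieve_context(question))
    return [
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ]

# In the real pipeline, the function-calling step would end with something like:
#   client.chat.completions.create(model="gpt-4o", messages=build_messages(q))
```

The point is just that the retrieval step is ordinary application code sitting between the two API calls, which is exactly the part a custom GPT abstracts away.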

We now have the option to use a custom GPT as well. How much customization can be done with a custom GPT?

I wish there were a way to provide some custom JavaScript code to adjust the output, but I guess that would be pretty dangerous for OpenAI.

Maybe they could do it with a safeguard, or a team that reviews the code before it gets deployed, like with functions when they came out.

I mean, how cool would that be? Frontend developers could finally be part of the family, haha…

Besides that, you can expose endpoints on your own servers and tell the custom GPT to interact with them, which opens up a wide range of options, including linking to the custom JavaScript you would want bundled inside ChatGPT.