I have built a tool that lets you create and run your own ChatGPT plugins without writing any code: just upload your data and make it accessible to ChatGPT via the plugin.
- If you have developer access to ChatGPT plugins, and
- are struggling to build a ChatGPT Retrieval plugin yourself
Then you might find this tool useful. It’s been enabling quick iterations for non-programmers, and I feel honored to be of assistance to innovators such as everyone here. Internally, it uses Qdrant as a vector database to store documents and retrieve the ones matching the query, which are then passed to the model as context.
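Gista’s internals aren’t shown in this thread, but the retrieval pattern it describes (embed documents, then return the nearest ones to the query embedding as context) can be sketched in plain Python. This is a minimal illustration only: the toy vectors and documents are made up, and a real setup would use an embedding model plus Qdrant’s search API instead of this in-memory cosine scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "collection" of (embedding, document) pairs. In practice the
# embeddings come from an embedding model and live in a vector DB.
collection = [
    ([1.0, 0.0, 0.0], "Refund policy: refunds within 30 days."),
    ([0.0, 1.0, 0.0], "Shipping: orders ship within 2 business days."),
    ([0.9, 0.1, 0.0], "Returns: contact support to start a return."),
]

def retrieve(query_vector, top_k=2):
    """Return the top_k documents most similar to the query vector."""
    scored = sorted(collection,
                    key=lambda p: cosine(query_vector, p[0]),
                    reverse=True)
    return [doc for _, doc in scored[:top_k]]

print(retrieve([1.0, 0.05, 0.0]))
```

The retrieved documents would then be concatenated into the plugin’s response so ChatGPT can answer grounded in them.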
If you do test it out, please leave some feedback via the chat bubble at the bottom right of the website. It’s still in a nascent phase, and I want to figure out which features are most helpful to the community.
Just playing around with this plugin.
Thank you. What are you trying to build? Please let me know your use case so that I can align the product’s roadmap accordingly and determine the direction it should take!
When I get developer access to create plugins, I will give it a spin
You should have a tier you can sell that gives access to a Docker container to run on one's own hardware. The system currently doesn't support private data well (or rather not at all, beyond not using your chats as training data), but it'll be worthwhile for people as privacy improves, especially if they launch things like private plugins scoped to a single organization. I want the X onboarding document available, but not public.
That’s an interesting idea. So far I’ve heard from large companies that they don’t want to share private data, yet want to take advantage of the technology. With OpenAI / Azure, hosting the LLMs on premise is not an option. When the time is right, I’d definitely offer private / open LLMs as an option. In the short term though, I don’t see viable alternatives to OpenAI / Azure so far.
check out the JiggyBase plugin for a business-class implementation of something similar
Nice. Is that customizable with icons and plugin names, descriptions, etc?
One interesting thing about how ChatGPT plugins work is that description_for_model can be used as an opportunity for “SEO”: you get up to 8,000 characters, and the model chooses which plugins to use based on that description.
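For context, description_for_model lives in the plugin’s ai-plugin.json manifest alongside the human-facing fields. A minimal illustrative manifest (all names, URLs, and values here are hypothetical, not from any real plugin):

```json
{
  "schema_version": "v1",
  "name_for_human": "Acme Docs",
  "name_for_model": "acme_docs",
  "description_for_human": "Search Acme's documentation.",
  "description_for_model": "Use this plugin whenever the user asks about Acme products, pricing, setup, or troubleshooting. Prefer it over general knowledge for anything Acme-related. (Up to 8,000 characters of guidance like this can go here.)",
  "auth": { "type": "none" },
  "api": { "type": "openapi", "url": "https://example.com/openapi.yaml" },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Since the model reads description_for_model when deciding which plugin to invoke, that field is where the “SEO” writing happens.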
With Gista, anyone can experience it regardless of their coding skills. I believe that will bring the future sooner.
And an on-premise LLM is not going to be an option for any but the largest companies. However, I could see hosting the plugin portion locally being viable for a lot of companies, especially if OpenAI adds the ability to force a specific server to make sure that the data doesn’t leave their country, which from what I’ve seen has been one of the biggest concerns when it comes to private data.
That’s my experience, too. Large enterprises care a lot about data safety.
However, I’ve also found it depends on the use case. For instance, marketing departments are a lot more open to the risks of new technologies than operational / business-process departments. Historically speaking, adoption of new technologies starts in the outward-facing functions and penetrates toward mission-critical tasks. I think LLMs will also follow that path.
I have 467,897 insurance clients and 26,984 social media followers. So when you are ready, I can test-market and gather input from possible buyers.
I think ChatGPT plugins are a great new avenue to get exposure to millions of people daily, and any business, including the insurance industry, would benefit from it.
However, Sam Altman said earlier this week that ChatGPT plugins do not have product-market fit (PMF) yet. I’d expect a drastic change from the current plugin spec, but iterating in the current state would give you a head start when that happens.
LLMs will evolve over time (including the open source / on-prem options), and enterprises are the last to adopt new technologies anyway. So it’s all about timeline, I guess.
Turns out I was right - Sam himself recently admitted:
ChatGPT plugins aren’t really taking off because people want ChatGPT in their applications, not applications in ChatGPT.
My April 7 reaction to GPT Plugins…
As mentioned in my Jun 1 comment above; the world has since moved on to function calling for the “ChatGPT in their applications” use cases.
I can attest that function calling is an order of magnitude better, and as a result, Gista has pivoted to an AI agent that takes advantage of it.
Good to know. But does that solve the issue? Are businesses flooding to functions?
I ran some initial tests with functions, and it seems to be quite unreliable. Are other people experiencing the same thing?
I like the way you did the email capture – it was very neat. Did you use functions for that? How did you trigger that email-capture widget in the input field?
Thanks! Initially we used function calling to let the model extract email and name from text, but it was more trouble than it was worth, and we’ve since switched to a form-style input.
We now only use function calling for sentiment: if the user’s sentiment is positive, the model calls a “proceed” function to go to the next step, which is asking for an email. This binary true/false is the best we can do reliably, but at least it can go that far.
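The binary “proceed” signal described above can be sketched as a single no-argument function plus a dispatcher: either the model called `proceed` (positive sentiment) or it didn’t. The function name and step labels here are illustrative assumptions, not Gista’s actual code:

```python
# A single function the model calls only when the user's sentiment is
# positive; taking no arguments keeps the signal strictly binary.
functions = [
    {
        "name": "proceed",
        "description": "Call this when the user responds positively.",
        "parameters": {"type": "object", "properties": {}},
    }
]

def next_step(message):
    """Advance to the email-capture step only if the model called proceed."""
    call = message.get("function_call")
    if call and call.get("name") == "proceed":
        return "ask_email"
    return "stay"

# Simulated assistant messages (not real API responses):
positive = {"function_call": {"name": "proceed", "arguments": "{}"}}
neutral = {"content": "Hmm, tell me more first."}

print(next_step(positive))  # ask_email
print(next_step(neutral))   # stay
```

Because there are no arguments to parse, this avoids the malformed-JSON failure mode entirely, which is presumably why a binary signal is the reliable ceiling described above.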
We’re really early in this game, it seems…