I’m curious, what do people think are the top 3 pain points for plugin developers? Is there anything that makes developing plugins difficult or time-consuming?
First off, a bit of my background, as it is not common among programmers. I have been programming for over 40 years, mostly in Prolog for the last ~5 years. I worked on production web sites around the turn of the century (remember that?) but never really kept up to date with web sites and front ends after leaving that line of work.
- As someone who did not notice in the documentation that OpenAI and OpenAPI (which is not the OpenAI API) are two different things, that was a learning curve. novaphil pointed that out to me here, thanks.
- As someone who really never gave CORS much attention, I learned about preflight requests and the HTTP OPTIONS method. Finding working production Prolog code for this took a bit of effort, but I did find some code that was quite beneficial to study. (ref) Thanks, Gavin Mendel-Gleason.
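For anyone else meeting preflight for the first time: the browser sends an OPTIONS request before the real one, and the server must answer with the right `Access-Control-Allow-*` headers or the real request never happens. A minimal sketch in Python of what such a handler has to return (the allowed origin is the one ChatGPT used at the time; treat the exact values as assumptions for your own deployment):

```python
# Sketch: headers a CORS preflight (OPTIONS) handler must send back so that
# ChatGPT's browser client is allowed to call the plugin API afterwards.
# ALLOWED_ORIGIN is an assumption; adjust it for your deployment.

ALLOWED_ORIGIN = "https://chat.openai.com"

def preflight_headers(request_headers):
    """Build the response for an OPTIONS preflight request.

    `request_headers` is a dict of the incoming request's headers.
    Returns (status, headers); status 204 means "no body, go ahead".
    """
    origin = request_headers.get("Origin", "")
    if origin != ALLOWED_ORIGIN:
        # Not an origin we recognise; answer without any CORS grants.
        return 403, {}
    return 204, {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        # Echo back whatever headers the browser said it wants to send.
        "Access-Control-Allow-Headers":
            request_headers.get("Access-Control-Request-Headers", "Content-Type"),
        "Access-Control-Max-Age": "86400",  # let the browser cache the preflight
    }
```

The same logic carries over to an SWI-Prolog HTTP handler: intercept the OPTIONS method and reply with these headers before dispatching the real route.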
- AFAIK I am the only one trying to create an SWI-Prolog plugin for ChatGPT. It is lonely on the bleeding edge. The best help I get is actually from prompting ChatGPT for information.
I realize it is an alpha product, and the potential is amazing.
I think the main limiting factor at this point is not related to designing/developing plugins but rather the speed of execution. The end product is inevitably a proof of concept, not yet a practical project.
The second biggest pain point is that there are relatively few examples out there of completed plugins. If the underlying plugin definition files for the plugins in the plugin store were available, that would be immensely helpful for learning.
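For context, the definition file in question is the `ai-plugin.json` manifest that OpenAI's docs describe. A minimal sketch for a hypothetical todo plugin might look like the following (all names, URLs, and the email address are placeholders, not a real published plugin):

```json
{
  "schema_version": "v1",
  "name_for_human": "Todo Plugin",
  "name_for_model": "todo",
  "description_for_human": "Manage your todo list.",
  "description_for_model": "Plugin for managing a user's todo list. Add, remove and view todos.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "http://localhost:5003/openapi.yaml"
  },
  "logo_url": "http://localhost:5003/logo.png",
  "contact_email": "dev@example.com",
  "legal_info_url": "http://example.com/legal"
}
```

Seeing how store plugins fill in `description_for_model` in particular is the valuable part, since that text is effectively the prompt the model sees.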
Definitely agree that most plugins are currently proofs of concept and not full practical projects! Excited to see what individual developers, startups, and big tech companies are able to accomplish to actually drive value.
FWIW, many do have public definition files available for plugins. Check out: https://pugin.ai/ You can even see how they made changes to tweak their prompts and get better results.
Thank you - your site is excellent to view example definition files. Much appreciated.
So is something like this helpful? We can collect various cases and build local plugins to test them out.
For me, the technology is easy enough to replicate that I am not restricted to only using ChatGPT.
I don’t want my users, or myself, to be stuck with a text-only full-screen interface that not only doesn’t work in some places but requires ANOTHER account to use. Quite the opposite: I want my users to be able to explore a visual and interactive application, and THEN use a conversational agent to help clarify or drive information. Never mind the fact that they may NEED to register two accounts just to use the service; they may need to PAY TWICE TO EVEN TRY IT.
If they would open up plugins for the API, and allow them to also work with ChatGPT, that would be wonderful. Which raises the question: why aren’t they doing this?
Side note: has anyone else noticed that Google has an appropriately sized side window for chat appearing? If it can parse a web page and converse about it, that will be incredible: Web Browsing plus the interactive benefits of a web page. They get to keep their traffic and also gain a free conversational agent (perhaps we will be able to store the same format of plugin files in something like robots.txt to effectively communicate the data). Actually, Google already has a much more robust and natural way of doing this.
So my points would be:
- While the rest of the world is building toward uniformity, such as Progressive Web Apps, ChatGPT is incredibly restrictive in its approach of forcing clients to use only their application for other people’s services. (Apparently it can be used outside of ChatGPT?)
- It completely eliminates the freedom of visual interactivity and forces the conversational agent to be the focus.
- The competition already has a much stronger and wider foundation to build their LLMs on. Google, for example, not only has all of this structured data formatted and usable; combined with their search engine, the conversational agent almost ALREADY KNOWS what you are asking before you even finish typing it.
The usage cap has been killing me. Otherwise, I love the AI plugin paradigm and see a lot of opportunities to develop novel experiences. Looking forward to being able to better leverage this soon!
Totally agree that the lack of plugins with the API is puzzling and limiting…
Agreed. It could be that they are planning to release it for the API, but I can’t sit around and wait for these improvements to happen, if they even do. There’s no roadmap.
This is not true; the plugin protocol is designed to work with any LLM. That is the point of the design, and it is why the files are hosted on a publicly known domain. See this section of the docs: OpenAI API
Many people have asked for plugins in the API; it is certainly one of the top requests! But you are correct that there is no roadmap. It is hard to share one when things change all the time and you move quickly.
Thanks for the correction. That is very good to know. I was under the impression that ChatGPT plugins MUST use the ChatGPT interface.
Understandable, it’s frustrating for developers but I can imagine it’s more frustrating for you guys.
Thanks for the response & clarification.
I think the major one I have faced so far is plugin submission. People in different time zones aren’t able to rectify a plugin error right away; they have to wait one to two days for it to be reviewed.
Another is that when facing an error or bug, it’s difficult to know whether it comes from the developer side or from OpenAI. For example, some days ago my unverified plugin was taking forever to respond.
But all is not that bad; I believe this is the next phase of development, and we don’t get diamonds without handling pressure.
How do you test your plugin before submitting it? I’m particularly curious about what devs do when they have a plugin in production but want to develop v2 of the plugin. Are you creating different endpoints and then hosting v2 versions of the manifest file somewhere?
- Lots of cURL commands and then proofreading the results. (ref)
- F12 with Chrome
- The Open plugin dev tools also help.
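A rough Python equivalent of those cURL checks: fetch the manifest and spec endpoints and confirm they come back with the expected status and content type. The paths follow the standard plugin layout; the tiny stub server below stands in for the real plugin so the sketch is self-contained (in practice you would point `check` at your own server's base URL):

```python
# Fetch the plugin's well-known files and verify they are served correctly,
# much like `curl -i` followed by proofreading. A stub server stands in for
# the real plugin; the routes and port handling are illustrative assumptions.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

ROUTES = {
    "/.well-known/ai-plugin.json": ("application/json",
                                    json.dumps({"schema_version": "v1"})),
    "/openapi.yaml": ("text/yaml", "openapi: 3.0.1\n"),
}

class StubPlugin(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in ROUTES:
            self.send_error(404)
            return
        ctype, body = ROUTES[self.path]
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep test output quiet
        pass

def check(url):
    """Return (status, content_type, body) for a GET, like `curl -i`."""
    with urllib.request.urlopen(url) as resp:
        return resp.status, resp.headers["Content-Type"], resp.read().decode()

server = ThreadingHTTPServer(("127.0.0.1", 0), StubPlugin)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

status, ctype, body = check(f"{base}/.well-known/ai-plugin.json")
spec_status, spec_ctype, spec_body = check(f"{base}/openapi.yaml")
server.shutdown()
```

The same two GETs against the real server are exactly what ChatGPT performs when it installs the plugin, so catching a wrong content type or a 404 here saves a round trip through the UI.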
So if I understand correctly:
- You have a remote server with the API endpoints
- You have the manifest and spec files hosted remotely
- You use curl to test the endpoints and to make sure that the manifest and spec files are being served
- You then open the dev tools to look at what is going on as you test out through the ChatGPT UI
Do you use the local plugin for dev or testing? When you are developing a new version, how do you deploy and test without touching the production version that users are using?
> You have a remote server with the API endpoints
Yes, it is started as an HTTP server from an SWI-Prolog command prompt and runs on the same machine for developing the plugin.
```
Welcome to SWI-Prolog (threaded, 64 bits, version 9.1.8)
...

?- working_directory(_,'C:/Users/Groot/SWI-Prolog - ChatGPT plugin (localhost)').
true.

?- ['ChatGPT plugin Todo'].
% Started server at http://localhost:5003/
true.
```
> You have the manifest and spec files hosted remotely
No, they are on the same machine for development. Development is done using VSCode on Windows and the SWI-Prolog toplevel (think REPL).
> You use curl to test the endpoints and to make sure that the manifest and spec files are returning
> You then open the dev tools to look at what is going on as you test out through the ChatGPT UI
Yes. This only becomes useful once the plugin server and ChatGPT start communicating. (Just before writing this, I was able to get ChatGPT to load the `openapi.yaml` without errors.) The dev tools validated the files and did not like the use of `-` in certain fields. The `-` was part of SWI-Prolog, so I changed it to `_` for now.
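For anyone hitting the same validator complaint, a sketch of the kind of fix involved: keep hyphen-free identifiers in fields such as `operationId` in the OpenAPI spec. The path and names below are illustrative placeholders, not the actual plugin's:

```yaml
openapi: 3.0.1
info:
  title: Todo Plugin
  version: "1.0"
paths:
  /todos:
    get:
      # A hyphenated id (e.g. one derived from "SWI-Prolog") tripped the
      # validator, so underscores are used instead.
      operationId: get_todos
      responses:
        "200":
          description: The current todo list.
```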
Developer tools in Chrome (F12) are also open the entire time, with caching disabled.
I don’t have any production versions at the moment. That may happen down the road, but I will need to shop the idea around and see if it floats. For a hint of an SWI-Prolog server running full-time online, see SWISH.
Cool, thanks for the reply. I’ve done Prolog programming in the not-too-distant past, if you need any testers let me know.
Thanks, I’ll keep that in mind.
If you want to hang out with others using SWI-Prolog, just join the SWI-Prolog Discourse forum; it’s free and there are never any ads. Imagine that.