ChatGPT plugins are here!

You’re right :sweat_smile: The morning of the release I was just finishing an app using embeddings; within a few hours it became completely obsolete with the announcement.

3 Likes

There certainly is a lot of confusion around the term “developer” in the announcement. To make it clearer, they should have said SaaS developers who are interested in exposing their software to an LLM front-end (in particular, OpenAI’s implementation of ChatGPT), as opposed to the gpt-3.5-turbo or gpt-4 models that the other “developers”, who aren’t necessarily building SaaS, can access.

But it makes sense, and is basically a fractal of what Zapier did … attract SaaS companies to a low/no-code platform in hopes of more SaaS revenue $$$$. So this business model works!

I’m interested to see where this goes. One of the big hurdles for fine-tunes was a lack of trustworthy seq-to-seq instructions (Words → API calls). So if they can nail this seq-to-seq problem with plugins, this is a huge leap forward.

But as a non-SaaS developer, if they could expose the core model that translates Words → API instructions, that would be a much bigger deal for me, since it is the core technology offering in the announcement.

5 Likes

@bill.french @curt.kennedy

Very good points, thank you for your responses.
This has been quite the ride. Looking forward to the future.

3 Likes

For sure. You see that already with Google and Microsoft incorporating GPT powers in their office suites, completely wrecking hundreds of early entry apps and extensions people came up with.

But it was as predictable as it is necessary.
Small startups and devs have to stop jumping on the obvious use cases, because the big players have both the access and the resources to cover those themselves much better than an individual dev can do at home after their day job.

Instead, they need to look for niche and proprietary applications of the technology. For example, don’t create a generic chatbot, for Christ’s sake! Instead, create a conversational interface to a specific company’s internal knowledge base, or its processes and documentation.
Or hook into a specific special-usecase business application and automate something in there with a plugin or a connection to a text model of some kind… something that MS or Google won’t railroad you on within a few weeks.

4 Likes

Absolutely!

  • Defensible moats are likely to be a process, perhaps some unique data, or both. But certainly, if you have either or both and it is not easily accessible by others, it will provide a differentiator.
  • Mixing your AI solution with domain experts who you cater to in a deeply custom way is also likely to make it more difficult for others to penetrate the moat.
  • Lastly, legacy data: if you can modernize these silos with AI, all the better, because customers will see bigger advantages in a system that can see into the past as well as work in the present.
1 Like

I’ve been using drinkwater.ai for some weeks and these API features have been there from the start.

And the knowledge base with semantic search too.

Not only that, but there’s also the issue of these gradual rollouts … I can’t incorporate even the newest features into my own development, such as the GPT-4 API, which I still haven’t received access to although I am on the waitlist.

So other insiders not only get knowledge of the roadmap you mentioned, but also get access before the release, and for a long time after it … and that is a serious disadvantage to the rest of the community.

I must admit the plugins release killed some of my ideas and some of the work I am doing… but it was an obvious market move for them to make.

As someone mentioned, commit to a use case, focus on solving it, and adapt; but at least a roadmap would be nice. Right now I want to know, for example, whether these plugins will be accessible over the API and not only through ChatGPT. Info like that would be helpful at least…

5 Likes

If it was obvious, maybe that’s not where you should have placed a bet.

Sometimes we do the obvious for different reasons:

  1. To learn
  2. Blind spots
  3. Delusional thinking

If you can think it up, 100 other people thought it up the week before you did. And at least one of the one hundred is the core infrastructure provider.

No business is obligated to expose its strategic plans.

They won’t. The entire purpose of plugins is to enrich GPT services, not the other way around. Creating plugins to serve as endpoints to your enrichments is not their objective. Rather, creating better solutions that utilize their API is precisely their objective.

2 Likes

Can y’all OpenAI folks give any indication how quickly people from the waitlist are being added? Is it going at a good clip or more of a slow roll?

2 Likes

Can you clarify that? I am not sure I understand the logic. If for example Zapier creates a plugin to extend the capability of ChatGPT, do you think the Zapier integration will be available only within ChatGPT or also in an API?

1 Like

Zapier will still live, and ChatGPT will use Zapier. But API developers will use the raw ingredients we’ve always been afforded and now SaaS companies can get in the mix directly to OpenAI without Zapier or other “SaaS Hubs”.

The more the merrier!

3 Likes

That’s already the case.

Nope. It’s already available in five thousand other apps and services.

Zapier is like an adhesive; it’s designed to glue stuff together. Lots of stuff. And they’ve been invited behind the velvet rope to run seamlessly inside ChatGPT. Every recipe in the Zapier ecosystem will be accessible through plugins that will blossom in abundance because they will require no code. As a result, Zapier will be instrumental in breaking the bounds of static LLMs by providing access to information that is not in the LLM.

Zapier has always been accessible through its API. I anticipate OpenAI’s API will provide plug-in methods, but not specifically anything related to Zapier itself.

2 Likes

I am blocked from ChatGPT because I purchased ChatGPT Plus, which blocks you from ChatGPT.

This is what ChatGPT Plugins are: a way for Software as a Service (SaaS) companies to integrate their services with OpenAI without Zapier or anything else. How did Zapier come about? The same way: the many SaaS companies developed their “plugins” for Zapier. It’s the same thing! The only difference is that the interface is ChatGPT, not generic HTTP web pages.

Here is the development pipeline for a SaaS developer to build their offering into Zapier.


My point is, OpenAI is offering a similar pipeline, but through ChatGPT. This is what Plugins are: a developer pipeline, similar to Zapier. But the cool part is that the interface is an LLM, which has the capacity to be much more user-friendly and “normal” for novices. The core tech of the interface is the seq-to-seq model converting Words → API instructions. Before, it was button mashing and clicking dropdowns with a mouse and keyboard (if you were a Zapier user).
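For concreteness, here’s a rough sketch of that SaaS-side pipeline as I understand it: at the time of writing, a plugin is registered by serving a small JSON manifest (from `/.well-known/ai-plugin.json`) plus an OpenAPI spec, and ChatGPT reads the descriptions to decide when to call your API. Everything below (the domain, the TODO service, the `validate` helper) is a hypothetical illustration, not an official sample.

```python
import json

# A minimal ChatGPT plugin manifest, served from
# https://example.com/.well-known/ai-plugin.json
# (example.com and the TODO API are hypothetical; field names
# follow the plugin docs at the time of writing)
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Manager",
    "name_for_model": "todo_manager",
    "description_for_human": "Manage your TODO list from chat.",
    "description_for_model": (
        "Plugin for creating, listing, and deleting items in the "
        "user's TODO list. Use it whenever the user asks about tasks."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

def validate(m: dict) -> list:
    """Return a sorted list of required fields missing from the manifest."""
    required = {"schema_version", "name_for_model", "description_for_model", "api"}
    return sorted(required - m.keys())

print(validate(manifest))  # -> [] (nothing missing)
print(json.dumps(manifest)[:40])  # serializes cleanly for serving
```

Note the `description_for_model` field: that free-text description is what the seq-to-seq layer reads to decide which API call your words should become.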

So AI is now “smart” enough to interpret your words, and create these button mashes and drop-downs for you.

Like I said, Words → API calls (ChatGPT Plugins), instead of Buttons/Dropdowns → API calls (in the case of Zapier). And in the case of API users/developers, it’s Lines_Of_Code → API calls.

So “software” is getting more and more abstracted and democratized (though, ironically, emanating from a central point). And now, with LLMs, it is becoming more natural and even more abstracted. I wonder if, in 50 years, folks will look at Zapier and think of it as Assembly Code. Software over the years has become so abstracted; this is the obvious trend.

And I know, I started coding with punch cards! (??? pre-Assembly ???)

4 Likes

Totally agree. On the LLM side, the abstractions will be far more productive. Zapier’s buttons and connectors often make convoluted Rube Goldberg machines look elegant.

But we need to accept the reality that there are now evolving (with the help of LLMs) two sides to every Zapier recipe: the one inside ChatGPT and the other end of the recipe. Zapier will still be the glue sandwich; on the LLM side, simple words will connect to the integration workflow, which must then connect to something that will still require discrete configurations with buttons, Zapier skills, etc.

Are you saying that GPT Plugins will essentially handle both ends of the integration recipe? If so, doesn’t that mean the days are numbered for Zapier altogether?

I’ve always felt uneasy about the middle layer that Zapier and Make provide. It increases attack surfaces, exposes your content and API keys to yet another SaaS provider, adds to the complexities of integration, and generally forces builders to push business logic into places it should probably not exist.

Elegant summation.

They will. And they will wonder how we got anything done.

1 Like

Since SaaS providers can go directly to the LLM “platform/plugin”, and if that proves to be a superior interface (I’m skeptical it would be, but who knows), then Zapier is cut out. So, yes, their days could be numbered.

I agree, but it does provide a place for non-coders to contribute. For example, my wife will build something in Zapier to do something for our business. If it proves successful, I will then code it in AWS and turn off the Zapier version. But this means I can “see” what it is doing. If the highly abstracted LLM version is so buried in abstraction that I cannot see or mimic what it is doing in code, then this is going to piss off a lot of devs who need to offload those processes to their cloud sans-Zapier.

And yeah, the attack surface just got bigger. So for the sake of security, the abstraction path we’re on has to stop somewhere. Or our security protocols will have to be revised to combat this.

1 Like

Funny - I wrote exactly that [positive] use case on the Airtable forum at least once yearly for the past five years. I really don’t have a problem with temporary adhesives in life or tech. These glue factories are often the engine of innovation, so I do agree they have a place.

I sometimes [maybe irrationally] worry about the additional machinery, security implications, complexities, and assigning control to all your data through a single point of access or failure. It all works wonderfully until it doesn’t. If you are pushing a shopping cart down the aisles of 7-Eleven, you probably have an eating disorder. If you need 175 Zapier recipes to make your 30-person SaaS system function, you have an integration disorder. :wink:

1 Like

It’s not irrational @bill.french, it’s a normal human reaction. The root of all anxiety is loss or lack of control in a given situation, and that is how LLMs can be perceived: a new unknown that we can’t control. But when you dig in, use the API, and start to realize you can influence and even harness this new technology, you regain control and have less, or no more, anxiety over the issue.

You sound like you might come from the old-school on-premises server-farm days (like me), before cloud-based computing. Back then, you had to patch, maintain, and fix those servers to keep them running, as a full-time job in some cases. Now, with cloud, especially serverless, it is all abstracted; you don’t care about the server environment or patches. But you don’t have insight beyond whether your code works or not. The good part is I can see my code, debug it, and change it if it’s broken. So in the case of LLM-generated “glueware”, am I going to see it? Or is it abstracted, something I can’t control, and therefore worry about?

There’s also the “make/buy” decision process. Is this mystery AI-generated code solving a critical business problem for me, and at a reasonable cost? If it is, great; spend time on something else of higher importance. But you are right, when it fails, it’s firefighting time. There is often no way around it, even in the best-architected software system (with internal or external software, and internal or external infrastructure).

Years ago we had to switch from the RingCentral API to the Twilio API for sending texts. Had to do it on a dime, within a few days. Could I develop my own cell phone network? Honestly, no; not my interest or objective. So I had to switch to another provider (an unavoidable external dependency). You will always be tied to external SaaS in some manner; otherwise your speed to market is so slow that your business will dissolve. Speed, acceleration, change, new tech: these can make or break everything. And this is why I think it’s smart for OpenAI to provide a way for SaaS to interface with its LLMs. If they don’t do it, someone else will.
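The RingCentral → Twilio story is the classic argument for keeping vendor SDKs behind a thin adapter, so a forced migration touches one class instead of every call site. A minimal sketch, with both providers mocked and all class names hypothetical (no real API is called):

```python
from abc import ABC, abstractmethod

class SmsProvider(ABC):
    """The only SMS surface the rest of the codebase may touch."""
    @abstractmethod
    def send(self, to: str, body: str) -> str: ...

class RingCentralSms(SmsProvider):
    def send(self, to: str, body: str) -> str:
        # Real code would call the RingCentral SMS API here.
        return f"ringcentral:{to}"

class TwilioSms(SmsProvider):
    def send(self, to: str, body: str) -> str:
        # Real code would call the Twilio Messages API here.
        return f"twilio:{to}"

def notify_customer(provider: SmsProvider, to: str) -> str:
    # Business logic never imports a vendor SDK directly, so switching
    # providers "on a dime" is a one-line change where the provider
    # is constructed, not a rewrite of every notification path.
    return provider.send(to, "Your order has shipped.")

print(notify_customer(RingCentralSms(), "+15551234567"))
print(notify_customer(TwilioSms(), "+15551234567"))
```

The same isolation argument applies to any external SaaS dependency you can’t avoid, including an LLM provider.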

1 Like

I am excited to see what your new ideas will be; part of this path is to exploit our creativity to the fullest. I think it is one of the most exciting challenges: reshaping our ideas like Ditto in Pokémon.

Many companies will need to integrate AI into their business; it could be nice to be part of those rollouts. Or to build a project for a specific solution, using abstract thinking beyond the generic implementation of AI in a product.

You don’t need to put all the food on your plate at once: look at the menu, keep an eye on new releases, but eat.

1 Like

Yep - 1975 on punch cards → TRS-80 → Apple II → IBM-PC → (then it’s a blur of 808?'s). LapLink, QuickSite, StarBase, MyST, into the cloud (more blur), Google Apps, then real-time data (sockets), video analytics, (more blur) → today. I’m exhausted. :wink:

Yep - did that too. PTSD.

Well said.

1 Like