In the future, will plugins not require installation?

Hi all,

I am guessing the future of plugins will be no install required, and GPT will be intelligent enough to query the most relevant plugins for the conversation. What does everyone think or hope regarding this?

Will this lead to Plug-in Optimization (PIO) becoming the new SEO?!

You mean, as in “it can access all available plugins and use them as it sees fit”?
I think it is a good idea to select which plugins you want to use and keep that selection for the rest of the conversation.
I do not think there is a scenario where ChatGPT alone can handle every task you throw at it all the time. I feel like it always needs a little bit of “configuration”, either by prompting it or by choosing the right plugins.
BUT if you developed a system that listens to a message and uses the OpenAI API to generate responses on its own, you could make it act more intelligently by choosing the right configuration for each task.
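
Something like this minimal sketch, assuming the OpenAI Python SDK (v1-style client); the configuration names, plugin lists, and helper functions here are made up for illustration only:

```python
# Sketch only: pick a "configuration" (system prompt + plugin list) per message,
# then answer with that configuration. All names below are made up for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONFIGS = {
    "coding":  {"system": "You are a careful coding assistant.", "plugins": ["code_runner"]},
    "travel":  {"system": "You are a travel planner.", "plugins": ["flights", "maps"]},
    "general": {"system": "You are a helpful assistant.", "plugins": []},
}

def pick_config(message: str) -> dict:
    """Ask the model which configuration fits the message best."""
    label = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Classify this message as coding, travel, or general. "
                       f"Answer with one word.\n\nMessage: {message}",
        }],
    ).choices[0].message.content.strip().lower()
    return CONFIGS.get(label, CONFIGS["general"])

def answer(message: str) -> str:
    config = pick_config(message)
    # A real system would also attach config["plugins"] to the request here.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": config["system"]},
            {"role": "user", "content": message},
        ],
    )
    return reply.choices[0].message.content

print(answer("Can you help me debug a Python script?"))
```

The idea is just that the first call picks a configuration and the second call answers with it; a real system would also wire the selected plugins into the request.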

1 Like

No. This won’t happen and it would be a bad idea if it did.

1 Like

Yes, I believe this is called Bing and the job of search engines.

Plugins are the new website.

If so, that would be very convenient!

Not having to “browse >> install >> enable” would be ideal for users. We are a little way off from that, but I think the OpenAI folks are working in that direction. Right now, the clunky flow is a result of token limits.

If Moore’s Law holds for AI, plugins might not need installation and activation in a few years.

I don’t think that’s a good idea. Currently, we only have 16 plugin pages, but users are already encountering difficulties finding and filtering them. Just envision a future scenario where the number of plugins is the same as the number of apps in the Apple App Store…

2 Likes

Why do you think it would be bad? Curious…

Potential to be exploited by bad actors. Plenty of bad apps fall through the cracks in the Apple App Store.

1 Like

Hypothetically, it’s possible for an LLM to sort through a list of tools and, with trial and error, find the best tool for any task, whether it involves an observation or an action. Especially with larger context windows and better-trained models, this will become easier and easier.

We are already seeing such things working with agents and multimodal LLMs.

We are also getting better at prompting these models.

Imagine this: in a conversation, take the new user input.

Let’s say the user wants to send an email to their boss saying they will be late because of traffic.

Ask GPT-4 whether the user is making a request, providing an observation, or chatting casually.

Based on the response, refine further: if it is a request, what kind of request is it? Information, action, or something else?

If an action, what kind of action is it? Personal tools, creative, business, coding, communication?

If communication, is it via email, Twitter, text message, or something else?

If email, which email account do you want to use? Based on that selection, the LLM is then provided with the details of how to make the actual plugin call.

Ok, email sent.

The point is, with this kind of hierarchical system, you can allow a limited-context model to use any kind of tool, for any purpose. Of course, with a larger context window, fewer steps are needed along the way :wink:

And the possibilities are endless.
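
Here is a rough sketch of that hierarchy, assuming the OpenAI Python SDK (v1-style client); the category lists and the classify()/route() helpers are illustrative, not a real plugin API:

```python
# Sketch only: a hierarchical router that narrows down which tool to use.
# The categories and the final "load plugin" strings are placeholders, not a real plugin API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify(message: str, question: str, options: list[str]) -> str:
    """Ask the model to pick exactly one option for the given question."""
    prompt = (
        f"User message: {message}\n"
        f"{question}\n"
        f"Answer with exactly one of: {', '.join(options)}."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content.strip().lower()
    return answer if answer in options else options[0]  # fall back if off-list

def route(message: str) -> str:
    """Walk the decision tree: intent -> request type -> action type -> channel."""
    intent = classify(message, "Is this a request, an observation, or casual chat?",
                      ["request", "observation", "chat"])
    if intent != "request":
        return "no tool needed, just reply"

    request_type = classify(message, "Is the request for information, an action, or something else?",
                            ["information", "action", "other"])
    if request_type != "action":
        return "answer directly (or search)"

    action_type = classify(message, "What kind of action is it?",
                           ["personal", "creative", "business", "coding", "communication"])
    if action_type == "communication":
        channel = classify(message, "Which channel: email, twitter, or text message?",
                           ["email", "twitter", "text message"])
        return f"load the {channel} plugin spec and make the call"
    return f"load a {action_type} plugin spec and make the call"

print(route("Email my boss that I'll be late because of traffic."))
```

Each classification step is a small, cheap call, so a limited-context model never has to see every plugin manifest at once; only the branch you end up in needs its plugin details loaded for the actual call.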

As a search engine optimization expert, I can assure you that the same problem is already handled when indexing websites. There are plenty of bad-experience, false-content, and malware sites out there too.

Search engines will rank based on trust and engagement signals.

I know I am late to this discussion, but I have been developing this thought process for a long time. As a boy, AI was one of three things I wanted; the other two were whoopee cushions and a mini go-kart-sized car. I ended up getting the last two, one for my birthday (fart sounds) and the other in a raffle. Anyway, I have been thinking about how AI would work for a very long time, quite unaware of the progress that was being made (I get really wrapped up in my head sometimes, building and solving). I took so long thinking about it, and not telling anyone, that I realized I didn’t have to build it anymore; it was already made. I am not upset about this by any means. I was having a hard time trying to solve the safety features that exist, so I was pleasantly surprised that the AI brain concept was so similar to mine. If you put my concept next to the existing model, I literally had half a brain. I was too focused on making the brain complex and neglected the simple concept of looping the brain in the opposite direction so that it could feed itself knowledge. Massive brain fart.

But I do understand that if we, as humans, stumble in our confidence to make a hard-surface mobile digital tool/buddy, then we will make others not confident. We can easily say that AI/LLMs will be able to choose the most effective plugin for each task without our help. The problem comes when people developing an AI don’t realize that even though it’s not a person, you have still created a mimic, so you have to sacrifice a lot of your time to spend with a budding AGI. The same rules apply: mimics have to learn in the same grinding manner that we did growing up. If you want to ensure the reliability of an advanced tool, you have to give it your blood, sweat, and tears, because “as a society” that is the only way we have created quality advancements.

I think this is a legal question, not a technical question.
The law might require you to acknowledge the ToS before they can process your data, and if all plugins can see your data, that is very risky.
(I’m not a lawyer, so this is just my thought and might not be accurate.)

I think that will be the future. There needs to be some sort of checking mechanism, kind of like a personal plugin that acts only in your own interest, but there are some dangers that need to be taken care of.

1 Like

Absolutely, privacy is very important when it comes to AI.

Hello!
Good evening, friend.
How are you?
The word “future”, interpreted within the context of your question, expresses anxiety about what is going to happen. Conjectures are legitimate, of course. Let us wait patiently for what is to come.
Hugs,
agsillva