Feedback - The Plugin model performs much better than GPT-3.5 generally

Hi everyone,

This post relates slightly to one I made a few days ago asking if we could get plugins on the API (speaking about the actual plugins themselves, not the model).

I’m unsure if anyone else has had this experience, but the actual plugin model (without any plugged-in apps) seems to perform much better on regular tasks than GPT-3.5-Turbo. Basically, every task I’ve thrown at it works much better with this model, even though it sounds like it was designed/fine-tuned for using tools.

If I could provide feedback/a feature request, I would love to see just the plugins model added to the API, regardless of whether any plugins are actually running with it. From what I have seen, it has boosted a lot of my app’s performance, and I cannot seem to replicate that performance with GPT-3.5.

Apologies if this is literally just GPT-3.5-Turbo with maybe a system prompt under the hood, but whatever it is, it works a lot better!

Would love it if this could happen soon!

All of a sudden this week the plug-in system’s performance greatly increased…

Anecdotally, it seems to be a slightly larger model than Turbo. Have you experienced this too, Ruv?


Major improvement in the plugin API today. I think it might be using GPT-4 even though it says otherwise… GPT-4 also says it’s GPT-3.


I agreed with you days ago on this and thought “dang, this is slick.” Now, over the weekend (at least on my account), it’s running like it’s on gpt-3.5-turbo. It’s great that it’s fast, but it can no longer handle large contexts.

I observed the same - at one point it was painfully slow but much better. Now it’s very fast, but nowhere near as intelligent.

I actually disagree with these points; I still think it’s much more capable than Turbo. I’ll acknowledge this is incredibly anecdotal and might be specific to my use case.