The browsing plugin/model appeared in my dropdown menu earlier today, and I was excited to give it a try. It worked great, although it did produce some empty responses. However, after just a few hours, it vanished as mysteriously as it appeared!
I’m not sure if it was a temporary feature, a bug, or just an odd glitch in the system.
@ruv is it still working for you? I had the same experience as @N2U. Had it for one evening then haven’t seen it since. I’m curious if others still have it. Thanks.
No, it’s still gone unfortunately, I’m hoping it will be back soon.
As far as I know they’re currently rolling out access very slowly. I’m guessing it’s due to the bandwidth needed to let GPT access websites on behalf of users.
You could have been part of Canary Testing, defined as:
Canary testing allows developers to test new software on a group of users before launching, helping to find and fix issues before they are deployed at a larger scale.
So you were in the new test group, used it, they got whatever data, then pulled it from you.
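The canary mechanism described above is often implemented by deterministically hashing a user ID into a bucket, so the same small fraction of users sees the feature until the rollout percentage changes. Here's a minimal sketch of that idea; the function and feature names are hypothetical, not anything known about OpenAI's actual system:

```python
import hashlib

def in_canary(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into a canary cohort.

    Hashing user_id + feature gives a stable pseudo-random value in
    [0, 1]; users falling below the rollout percentage see the feature.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map first 32 bits to [0, 1]
    return bucket < rollout_pct

# Rolling the feature back is just lowering rollout_pct to zero:
# exactly the "had it for an evening, then gone" experience.
cohort = [uid for uid in (f"user{i}" for i in range(1000))
          if in_canary(uid, "browsing-plugin", 0.05)]
print(len(cohort))  # roughly 5% of 1000 users
```

Because the bucketing is a pure function of the user ID, access can be granted and revoked server-side without the client ever knowing it was part of a test.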
Exactly. Also, plugin developers such as @ruv would likely be part of the permanent testing pool, so a plugin developer saying it still exists adds no new information.
But interesting that you think the access was based on dynamic interactions. That sounds advanced to me, and a bit surprising, but, I suppose, possible too.
I don’t think it has to be super advanced; it could be triggered by something as simple as words that don’t exist in the embedding matrix.
Going back through my conversations I can see the specific prompt was:
What is the brand name of Phosphoramidothioic acid
The word “Phosphoramidothioic” is 8 tokens, most of which mean nothing at all on their own, or something completely different.
I did nothing special and rarely use the chat interface, and I also had it for just one evening, so I don’t think any advanced usage metrics made it show up briefly. Either a small random pool or an accident is my guess.
I thought when it comes to giving testers new features, you are just assigned a role behind the scenes. Such as [“codex”, “labs”, “chatgpt-plugin-developers”]
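That kind of behind-the-scenes role assignment can be sketched as a simple lookup from features to the roles allowed to see them. The role names come from the post above; the feature names and mapping are purely hypothetical illustrations, not a real OpenAI API:

```python
# Hypothetical role-to-feature gating table (feature names are made up).
FEATURE_ROLES = {
    "browsing": {"chatgpt-plugin-developers", "labs"},
    "code-execution": {"codex"},
}

def visible_features(user_roles: set[str]) -> set[str]:
    """Return the features whose allowed roles intersect the user's roles."""
    return {feature for feature, roles in FEATURE_ROLES.items()
            if roles & user_roles}

print(visible_features({"labs"}))       # {'browsing'}
print(visible_features({"free-user"}))  # set()
```

Under this model, a feature silently vanishing just means a role was removed from the user's account, with no visible change in the UI.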
Right now I’m leaning towards either a random occurrence or just randomly selected testing. Anything that requires any sort of computation beyond what’s strictly necessary, like the stuff I mentioned before, is probably a bad idea given the kind of server load OpenAI is dealing with.
There’s no way to know for sure without word from someone at OpenAI, but it would be great if they had just slapped a “temporary feature” watermark on the UI or something. I assume they don’t want users to think that ChatGPT is buggy.