Disappearing Browsing Plugin? - A Mysterious Encounter

The browsing plugin/model appeared in my dropdown menu earlier today, and I was excited to give it a try. It worked great, although it did produce some empty responses. However, after just a few hours, it vanished as mysteriously as it appeared!

I’m not sure if it was a temporary feature, a bug, or just an odd glitch in the system.


And it worked.


Nope, still gone. I’m happy that it works for you, but that doesn’t really help me :laughing:

@ruv is it still working for you? I had the same experience as @N2U. Had it for one evening then haven’t seen it since. I’m curious if others still have it. Thanks.

Hey champ and welcome to the forum :hugs:

No, it’s still gone unfortunately, I’m hoping it will be back soon.

As far as I know, they’re currently rolling out access very slowly. I’m guessing it’s due to the bandwidth needed to let GPT access websites on behalf of users.

You could have been part of Canary Testing, defined as:

Canary testing allows developers to test new software on a group of users before launching, helping to find and fix issues before they are deployed at a larger scale.

So you were in the new test group, used it, they got whatever data, then pulled it from you.
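Canary rollouts like that are often implemented as deterministic hash-based bucketing, so a user’s flag doesn’t flicker between requests and the operator can widen or pull the rollout by changing one number. Here’s a minimal sketch; the `is_in_canary` helper, the feature name, and the percentage are all made up for illustration and say nothing about how OpenAI actually does it:

```python
import hashlib

def is_in_canary(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into one of 10,000 slots.

    The same user + feature always hashes to the same bucket, so the
    flag is stable across requests; raising rollout_pct adds users
    without evicting existing ones, and dropping it to 0 "pulls" the
    feature, much like what the posts above describe.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000          # 0 .. 9999
    return bucket < rollout_pct * 100          # rollout_pct is 0 .. 100

# A 1% canary: only ~1 in 100 users sees the browsing feature.
print(is_in_canary("user-42", "browsing-plugin", 1.0))
```

The nice property of hashing (versus a random per-request coin flip) is that membership is reproducible, which is exactly what you’d want when debugging reports like the ones in this thread.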


Now that you mention it, I’ve been thinking the same thing. I did some very specific things before the model appeared:

  1. I exported my conversations
  2. I cleared my conversations
  3. I then asked very specific questions about various research grade chemicals and corrected the model when wrong.
  4. Web model appeared
  5. I asked the same questions to the new model + used it to update some of my code using a link to the new documentation and change log

Sounds like the perfect recipe for a canary test subject if you ask me.

There are probably huge benefits associated with letting people pull outside context into ChatGPT conversations so they can use it for training.


Exactly. Also, plugin developers such as @ruv would likely be part of the permanent testing pool, so no new information is added if a plugin developer says it still exists.

But it’s interesting that you think the access was based on dynamic interactions. That sounds advanced to me, and a bit surprising, but I suppose it’s possible too.

I don’t think it has to be super advanced. I’m thinking it could be triggered by something as simple as words that don’t exist in the embedding matrix.

Going back through my conversations, I can see the specific prompt was:

What is the brand name of Phosphoramidothioic acid

The word “Phosphoramidothioic” is 8 tokens, most of which mean nothing at all or something completely different.
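For anyone unfamiliar with why a single word becomes many tokens: sub-word tokenizers greedily split unfamiliar words into known fragments. Here’s a toy greedy longest-match sketch; the vocabulary is invented for illustration and GPT’s real BPE tokenizer (and its actual token count for this word) will differ:

```python
# Toy sub-word tokenizer: greedy longest-match against a tiny, made-up
# vocabulary. Real BPE merges are learned from data and look different.
VOCAB = {"phos", "phor", "amido", "thioic", "acid",
         "a", "c", "d", "h", "i", "m", "o", "p", "r", "s", "t"}

def tokenize(word: str) -> list[str]:
    word = word.lower()
    tokens, i = [], 0
    while i < len(word):
        # Try the longest vocabulary entry that matches at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: keep it as-is
            i += 1
    return tokens

print(tokenize("Phosphoramidothioic"))  # → ['phos', 'phor', 'amido', 'thioic']
```

The point is just that a rare chemistry term gets shredded into fragments the model has seen elsewhere, which is why each individual piece can “mean nothing at all or something completely different.”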

I did nothing special and rarely use the chat interface, and I had it for an evening too, so I don’t think it was any advanced usage metric that made it show up briefly. My guess is either a small random pool or an accident.

I thought that when it comes to giving testers new features, you’re just assigned a role behind the scenes, such as [“codex”, “labs”, “chatgpt-plugin-developers”].
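Role-based gating like that is usually just a lookup: each feature maps to the set of roles allowed to see it. A minimal sketch, reusing the role names from the post above; the mapping and the `can_use` helper are assumptions for illustration, not anything OpenAI has documented:

```python
# Hypothetical feature -> allowed-roles mapping; the role strings come
# from the post above, but which roles unlock what is pure guesswork.
FEATURE_ROLES: dict[str, set[str]] = {
    "browsing": {"chatgpt-plugin-developers", "labs"},
    "code-interpreter": {"labs"},
}

def can_use(feature: str, user_roles: list[str]) -> bool:
    """A user sees the feature if any of their roles grants it."""
    return bool(FEATURE_ROLES.get(feature, set()) & set(user_roles))

print(can_use("browsing", ["codex", "labs"]))  # → True
```

Under this model, “pulling” the feature from someone is as simple as removing a role from their account, which would match the overnight disappearance people describe here.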


We might all be correct… or completely wrong :laughing:

Right now I’m leaning towards either a random occurrence or just randomly selected testing. Anything that requires any sort of computation beyond what’s strictly necessary, like the stuff I mentioned before, is probably a bad idea given the kind of server load OpenAI is dealing with.

There’s no way to know for sure without someone from OpenAI chiming in, but it would be great if they had just slapped a “temporary feature” watermark on the UI or something. I assume they don’t want users to think that ChatGPT is buggy :laughing: