I just experienced a new feature: "@" before a GPT name will send the output of the prompt to another GPT.

I don’t see any information about this; it seems to be part of a new update that's rolling out. Very interesting, and it seems to work, though I failed to get a DALL·E image generated through it, passed from one GPT to another. But I think this is awesome. If anyone has a link to documentation about this, I’d love to see it.

pls post screenshot

Hijacking keypresses and preventing you from sending characters to AI unless you go out of your way to cut and paste is an undesirable dark pattern…

Additionally: it seems the forward-slash shortcuts don’t work any more. So now you can at least input a code comment, if you’ve disabled the browser’s type-ahead find…

I’m intrigued. If you don’t mind: is it possible to simply call in multiple GPTs to add-on to the current conversation? So it’d be something like:

User: “Let’s bulldoze this tree”
ChatGPT: “Some lame stuff about being a language model”
Tree-hugging GPT (THGPT) invited
THGPT: “Trees are life man, that’s like a billion lifeforms you’re killing, imagine if they were your dog”
Yellow-Hat Joe GPT (YHJGPT) invited
YHJGPT: “Get a load of this guy, pfff, let’s bull-doze this sucker after I crush my third red-bull”

[screenshot]

I took a prior conversation with plain GPT-4 (with some custom instructions) and used the GPT search over recently used GPTs to bring in a “podcast” GPT (note: clicking on the scroll bar of the dialog is bugged):

The invoked GPT’s behavior takes over.

It would be interesting to see whether this unexplained role switch is preserved in the conversation history. Actually, we can find out, since pressing the “x” button cancels the GPT:

[screenshot]

So yes, you have the opportunity to really confuse the AI about what it does.


This nonsense is already messing with me.

Several lines into input, I get stuff popping up, threatening to replace my text.

That’s what appears after typing “@” twice.

That’s pretty stupid for them to hijack the input field. Seriously. WebDev 101.

I mean… My god. The damn FORUM SOFTWARE that they use has this almost EXACT functionality :sob: @RageQuit

Hopefully they’ll fix it.

I don’t know what you’re talking about. You can still put “@” into prompts. Type another character or a space after the “@” and the GPT picker goes away. The field isn’t “hijacked”; it works fine, and is intuitive.

It would be interesting to find plausible use cases for this feature. Any thoughts?

The current rush seems to be towards routing & chaining agents together.

Complete speculation, but GPT-4 is rumored to use MoE (Mixture of Experts), which is a really cool concept: instead of one mega-brain that knows everything (and is therefore very expensive to run), a routing layer directs each query to an “expert” in its field. It seems like the same concept could apply to GPTs.

So an easy use-case would be a medical bot:

User: “Hey medical bot, my patient is describing some serious pains in their tooth, it seems pretty black & nasty up in there”

(Not at all what they’d say, just an example)

Medical Bot: “Rip that shit out”

User: “Fuck yeah. Do I need to provide any forms for the patient to perform this operation?”

User invites Medical Lawyer GPT (MLGPT)

MLGPT: “You’ll have to get them to sign form DISJFIOSDJFOWERNJK:TNoi213213 and FIDJHFHDUJKSFHJKHR%423”

User: “Great. Thanks”
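The routing idea above can be sketched in a few lines. This is a toy illustration only — the expert names and the keyword-matching router are made up for this example and have nothing to do with how ChatGPT or MoE actually route internally:

```python
# Toy sketch of routing a query to an "expert" agent, in the spirit of
# the MoE / @-mention idea discussed above. All names here are hypothetical.

def medical_expert(query: str) -> str:
    return f"[MedicalGPT] advice for: {query}"

def legal_expert(query: str) -> str:
    return f"[MedicalLawyerGPT] forms required for: {query}"

def general_expert(query: str) -> str:
    return f"[GeneralGPT] answer to: {query}"

# Router table: keywords that trigger dispatch to a given expert.
EXPERTS = [
    ({"tooth", "pain", "patient"}, medical_expert),
    ({"form", "sign", "liability"}, legal_expert),
]

def route(query: str) -> str:
    """Dispatch to the first expert whose keywords overlap the query."""
    words = set(query.lower().split())
    for keywords, expert in EXPERTS:
        if words & keywords:
            return expert(query)
    return general_expert(query)  # fallback when no expert matches

print(route("my patient has serious tooth pain"))
print(route("which form do they sign"))
```

A real router would of course use the model itself (or embeddings) to classify the query rather than keywords, but the flow — classify, dispatch, fall back — is the same shape as inviting a specialist GPT mid-conversation.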

OK, just rolled out to me