BUG Image Generation: prompts and instructions ignored!

Tbh I don’t know if it’s illogical.

If you mention a concept, you will get some sort of activation - humans do that too:

"Don't think about breathing, don't think about breathing, do NOT think about how you have to inhale, and exhale."

While I think we could engineer a model that actively internalizes negation (we have the technology!), I find it super interesting that this phenomenon keeps emerging naturally in machine learning models.

I personally think this effect could teach us something about human-to-human (h2h?) communication: how we consciously or unconsciously transfer (mental) compute costs to our interlocutors through lazy or intentionally convoluted language.

What I mean is that a negation has to be resolved before it can be understood. Either you resolve it yourself (i.e. don't phrase it as a negation in the first place; reinforce your statement with positive examples instead), or your interlocutor has to do the mental gymnastics for you: they have to think up examples that are related to the negated one but aren't disqualified by the negation ("draw anything, but don't include fruits, like pears" → what's not a pear? An apple. An apple is a fruit. An orange? Nope. An orange tree? Still has oranges, which are fruit. A tree? Maybe!).
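To make the "resolve it yourself" idea concrete, here is a minimal sketch using the Hugging Face `diffusers` library with a Stable Diffusion checkpoint; that's a different system than the one this thread is about, so take it purely as an illustration. The first call buries the negation inside the prompt text (where the words "fruits" and "pears" still activate those concepts), while the second states only what is wanted and routes the unwanted concepts through the dedicated `negative_prompt` parameter.

```python
# Sketch only: assumes the `diffusers` library, a CUDA GPU, and the
# runwayml/stable-diffusion-v1-5 checkpoint. Not the model from this thread.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Negation embedded in the prompt text: mentioning "fruits" and "pears"
# still activates those concepts, so they tend to leak into the image.
leaky = pipe("a still life, but don't include fruits, like pears").images[0]

# Negation resolved by the caller: describe what you DO want, and pass the
# unwanted concepts through negative_prompt, which steers generation away
# from them instead of merely mentioning them.
resolved = pipe(
    "a still life of ceramic vases and dried flowers on a wooden table",
    negative_prompt="fruit, pears, apples, oranges",
).images[0]

leaky.save("leaky.png")
resolved.save("resolved.png")
```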
