Semantic depth of embeddings

A question about how much “semantic” depth embeddings capture.
Say I embed “A” and “NOT A”.
Will they be far apart in the vector space, or will they be close to each other since they are both about “A”?

Why don’t you test it?

It would only take a minute.
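A quick way to run such a test is to embed both texts and compare their cosine similarity. Here is a minimal sketch; the `cosine_similarity` helper is plain Python, and the commented-out lines showing how to obtain embeddings (the OpenAI client and the `text-embedding-3-small` model name) are just one possible setup, not a prescribed one:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Example setup (assumption -- any embedding model would do):
# from openai import OpenAI
# client = OpenAI()
# def embed(text):
#     resp = client.embeddings.create(model="text-embedding-3-small", input=text)
#     return resp.data[0].embedding
#
# sim = cosine_similarity(embed("The service is available."),
#                         embed("The service is NOT available."))
# print(sim)  # values near 1.0 mean the model placed them close together
```

In practice, many models place a sentence and its negation quite close together, since most of the tokens overlap; running this on a sample of your own texts is the most reliable check.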

I have many thousands of texts like this, and of course I can run some tests, but I can’t be sure the result holds for all of them. That’s why I thought somebody might have a fundamental answer that goes beyond the anecdotal tests I can run.
