Any other useful operations with text embedding vectors?

I have two text chunks, t1 and t2, with embedding vectors v1 and v2. I’ve already seen the distance/similarity uses. Are there other interesting operations I can do with v1, v2, t1, and t2? For example, might v1 - v2 yield a result with some intuitive meaning?
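For concreteness, here’s the kind of thing I mean, as a rough sketch using the openai Python client and numpy (the model name is just an example, not a recommendation):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Embed the two text chunks in one call (model name is a placeholder).
resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["first text chunk", "second text chunk"],
)
v1 = np.array(resp.data[0].embedding)
v2 = np.array(resp.data[1].embedding)

# The similarity use I already know about:
cosine = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))

# The kind of operation I'm asking about -- does this vector "mean" anything?
v3 = v1 - v2
```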

If I do some vector operation op(v1, v2) and get v3, is there any way to generate GPT’s best guess at approximate text corresponding to v3, i.e. text whose embedding would land close to v3?
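The only approximation I can come up with myself is brute-force nearest-neighbour lookup over texts I already have (continuing the sketch above, and again just my assumption about how this might be done), which only retrieves existing text rather than generating new text:

```python
# Continuing from the sketch above (client, np, and v3 already defined).
# Re-embed some candidate texts and keep the one whose embedding is closest
# to v3. This is retrieval, not true "decoding" of v3 back into text.
candidates = ["candidate text A", "candidate text B", "candidate text C"]
resp = client.embeddings.create(model="text-embedding-3-small", input=candidates)
C = np.array([d.embedding for d in resp.data])

# Cosine similarity of each candidate embedding against v3.
sims = (C @ v3) / (np.linalg.norm(C, axis=1) * np.linalg.norm(v3))
print("closest candidate:", candidates[int(np.argmax(sims))])
```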

Lastly, how does the vector v1 relate to the vectors flowing through the LLM architecture when it processes t1 (during inference or training)? [I’m not looking for trade-secret information, just a general idea in terms of the encoder, decoder, and attention heads in the standard architecture picture.]

Thanks!