The intuitive answer seems to be yes, buuuuuut… anyone have anything to say about this? Can I add embeddings for “red”, “bicycle”, “horn”, and have it compare favorably against “red bicycle with a horn”? Are there other considerations?
Thanks
I don’t think I quite understand your question, do you think you could add some more clarity?
Embeddings are numbers, right? If I take two embeddings and add them together, do I get a combination of the two vectors, or do I get something else entirely? I’m not sure if this is a practical thing to do or not at this point. I’m just trying to think of ways to keep my queries down. In the videos, the dude is getting amazing results, but he’s also querying the model like no one is going to ask him what he’s spending on it. I’m not sure I’m going to have that same luxury, and we could be talking about a lot of queries in my case, so I’m trying to be creative. If I can add three embeddings together and get the same thing as getting a new embedding, maybe that’s one way to go about it.
Ahh, I understand. I do not believe it works that way.
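For what it's worth, you can sanity-check the idea cheaply yourself. The sketch below uses made-up toy vectors (real embeddings come from a model and have hundreds or thousands of dimensions) just to show the mechanics: sum the word vectors, then score the composite against a phrase vector with cosine similarity. Whether the score is actually meaningful for your data is exactly the open question in this thread.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" -- invented for illustration only.
red     = np.array([0.9, 0.1, 0.0, 0.0])
bicycle = np.array([0.1, 0.8, 0.3, 0.0])
horn    = np.array([0.0, 0.2, 0.9, 0.1])

# The cheap composite: just add (or average) the word vectors.
composite = red + bicycle + horn

# Hypothetical vector for the full phrase "red bicycle with a horn".
phrase = np.array([0.5, 0.6, 0.5, 0.3])

print(cosine(composite, phrase))
```

Note that summing loses word order and relationships (“horn on the bicycle” vs. “bicycle on the horn” sum to the same vector), which is a big part of why a phrase embedding from the model usually isn't equivalent to a sum of word embeddings.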
You might look into Google’s Universal Sentence Encoder or Pinecone. Both handle embedding similarity for way cheaper than OpenAI.