If you are using a vector database for retrieval, its output is not vectors but the text of the top-matching chunks. What you automatically inject into the prompt is that text, so embeddings-based search results are plain language about your products that any AI model can understand, or they can feed any other kind of search product that involves no further AI at all, like finding the most similar questions on this forum.
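A minimal sketch of that idea, assuming a hypothetical `vector_store.search()` helper that returns the stored text of the top-k nearest chunks (not raw vectors):

```python
def build_prompt(question: str, vector_store) -> str:
    # The retrieval step hands back plain text, already readable by any model
    top_chunks = vector_store.search(query=question, top_k=3)
    context = "\n\n".join(chunk.text for chunk in top_chunks)
    # Inject that text into the prompt for whatever model answers the question
    return (
        "Answer using only the documentation below.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
```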
If the question is "can I embed all my document chunks with 1536-dimension text-embedding-3-small and then compare those vectors with the embeddings I get from 1536-dimension text-embedding-ada-002 (or any other embeddings model)?", the answer is a huge NO.
The reason for “No” is that every model is trained differently and learns different semantic features in its layers, which are expressed in the activations of individual dimensions. As a simplified depiction: dimension 5 of the original model might activate strongly for text about birds, while another model encodes “bird-ness” somewhere else entirely and its dimension 5 responds strongly to political topics.
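The practical rule is: embed the corpus and the query with the same model, then compare. Here is a minimal sketch, assuming the official `openai` Python SDK with an API key in the environment; swapping a different embeddings model in for the query would make the similarity scores meaningless.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
MODEL = "text-embedding-3-small"  # must match the model used for the stored chunks

def embed(texts: list[str], model: str = MODEL) -> np.ndarray:
    # Returns one embedding vector per input text, all from the same model
    resp = client.embeddings.create(model=model, input=texts)
    return np.array([d.embedding for d in resp.data])

docs = ["Parrots can mimic human speech.", "The senate passed the budget bill."]
doc_vecs = embed(docs)

query_vec = embed(["Which birds can talk?"])[0]

# Cosine similarity is only meaningful when both sides come from the same model
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(sorted(zip(scores, docs), reverse=True))
```

If you switch embeddings models, you have to re-embed the entire corpus with the new model; there is no conversion between the two vector spaces.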