I know the text-embedding-3 models are multilingual, but I couldn't find a clear answer on whether they map the same semantic meaning in different languages to nearby vectors, or whether each language ends up in its own region of the embedding space.
The use case is RAG: we want to ask a question in one language and match document chunks about the same subject written in another language.
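One way to answer this empirically is to embed a pair of sentences with the same meaning in two languages, plus an unrelated sentence in the same language as the query, and compare cosine similarities. This is a hedged sketch, not an authoritative answer: it assumes the official `openai` Python package and an `OPENAI_API_KEY` in the environment, and the example sentences are arbitrary choices of mine.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def cross_lingual_check():
    """Compare same-meaning/different-language vs. same-language/different-meaning.

    Requires `pip install openai` and a valid OPENAI_API_KEY. Call this
    function manually; it is not run on import.
    """
    from openai import OpenAI

    client = OpenAI()
    texts = [
        "The cat sat on the mat.",            # English
        "Le chat était assis sur le tapis.",  # French, same meaning
        "Quarterly revenue grew by 12%.",     # English, unrelated meaning
    ]
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = [d.embedding for d in resp.data]

    same_meaning = cosine_similarity(vecs[0], vecs[1])
    diff_meaning = cosine_similarity(vecs[0], vecs[2])
    print(f"same meaning, different language: {same_meaning:.3f}")
    print(f"same language, different meaning: {diff_meaning:.3f}")
    # If the space is genuinely cross-lingual, the first score should be
    # clearly higher than the second.
```

If the same-meaning pair scores well above the unrelated pair, cross-lingual retrieval should work for that language pair; it's worth repeating this with your actual target languages, since alignment quality can vary between language pairs.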