Token Limit For Embeddings vs. text-davinci-003

Thanks @nelson, I understand your point. In my NLP pipeline, I first identify the best n embeddings, and then the corresponding pieces of text go into my prompt. So being able to have more tokens per embedding is, I think, irrelevant to my prompt size, since only the retrieved chunk text (not the embeddings themselves) ends up in the completion call. Unless I am missing something…
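
For concreteness, here is a minimal sketch of the retrieve-then-prompt flow I mean, assuming the pre-1.0 `openai` Python client and `text-embedding-ada-002` for the embeddings; the `chunks`, `question`, and `n` values are placeholders, not my actual data:

```python
# Minimal sketch of the retrieve-then-prompt pipeline described above.
# Assumes the pre-1.0 `openai` Python client; `chunks`, `question`,
# and `n` are illustrative placeholders.
import numpy as np
import openai

def embed(texts):
    """Embed a list of strings, preserving input order."""
    resp = openai.Embedding.create(input=texts, model="text-embedding-ada-002")
    data = sorted(resp["data"], key=lambda d: d["index"])
    return np.array([d["embedding"] for d in data])

chunks = ["...chunk 1...", "...chunk 2...", "...chunk 3..."]  # corpus pieces
question = "...user question..."
n = 2  # number of best-matching chunks to put into the prompt

chunk_vecs = embed(chunks)
query_vec = embed([question])[0]

# ada-002 embeddings are unit-normalized, so a dot product is cosine similarity.
scores = chunk_vecs @ query_vec
top_idx = np.argsort(scores)[::-1][:n]

# Only the *text* of the top-n chunks enters the prompt, so it is the
# completion model's context window, not the embedding model's per-input
# token limit, that constrains the prompt.
context = "\n\n".join(chunks[i] for i in top_idx)
prompt = (
    f"Answer the question using the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

completion = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=256
)
print(completion["choices"][0]["text"].strip())
```

In this setup the embedding model's larger token limit only affects how long each individual chunk can be when it is embedded, not how much text I can fit into the davinci prompt.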