Will OpenAI's embedding API be able to capture this? I am building search functionality for my app. The input to the search field is text, and it can have severe typos. I am using a vector database to chunk, vectorise, and index all my docs first (the docs themselves are typo-free). In a real-world scenario, a user in my use case will most likely make at least 5 typos per query. I am seeing reasonably good results with this approach (around 75% accuracy). How can I improve this further? Any suggestions?
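To illustrate the kind of mitigation I am wondering about: one option would be to spell-correct the query against the vocabulary of the indexed docs before embedding it. A minimal sketch using only the Python standard library (the vocabulary and function names here are illustrative, not my actual pipeline):

```python
import difflib

# Illustrative vocabulary, in practice extracted from the (typo-free) indexed docs.
VOCAB = {"vector", "database", "embedding", "search", "index", "document"}

def correct_query(query: str, vocab: set) -> str:
    """Replace each out-of-vocabulary token with its closest vocabulary match, if any."""
    corrected = []
    for token in query.lower().split():
        if token in vocab:
            corrected.append(token)
        else:
            # get_close_matches uses difflib's similarity ratio; cutoff=0.7
            # keeps only reasonably close candidates.
            matches = difflib.get_close_matches(token, vocab, n=1, cutoff=0.7)
            corrected.append(matches[0] if matches else token)
    return " ".join(corrected)

print(correct_query("vectr databse serch", VOCAB))  # -> "vector database search"
```

The corrected query would then be sent to the embedding API as usual, so the vectors being compared are both built from clean text.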
Many thanks!