Using embeddings for search: poor results vs. fine-tuned GPT-3

Dear OpenAI Team,
I need some help with embeddings.
Use Case: Better search on technical documentation
What I am doing: I am getting decent results with fine-tuned GPT-3 curie models. I want to do even better, so I am experimenting with embeddings, using the "text-embedding-ada-002" model. Using cosine similarity and k-nearest neighbors, I am getting pretty poor results. I tried creating an embedding for a whole page vs. a single line, and I also include the title of the page plus the content in each embedding, but the results are still very subpar compared with curie. Any ideas to improve on this would help.
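For reference, this is roughly the retrieval step I am using, sketched with NumPy over placeholder vectors (in the real pipeline these would be vectors returned by "text-embedding-ada-002"; the function and variable names here are just for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_neighbors(query: np.ndarray, docs: np.ndarray, k: int = 3) -> list:
    """Return indices of the k document vectors most similar to the query."""
    sims = [cosine_similarity(query, d) for d in docs]
    return sorted(range(len(docs)), key=lambda i: sims[i], reverse=True)[:k]

# Placeholder 2-D vectors standing in for real ada-002 embeddings
# (which are 1536-dimensional).
doc_vectors = np.array([
    [1.0, 0.0],   # e.g. "installing the CLI" page
    [0.0, 1.0],   # e.g. "billing FAQ" page
    [0.7, 0.7],   # e.g. "CLI configuration" page
])
query_vector = np.array([0.9, 0.1])

nearest = top_k_neighbors(query_vector, doc_vectors, k=2)
print(nearest)
```

One thing I am unsure about is whether chunk size (whole page vs. line) or the title+content concatenation is hurting the similarity scores, which is why I am asking.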