Optimal chunk size and number of chunks for knowledge-base context chatbot

We are developing a conversational knowledge-base chatbot. I am experimenting with the chunk size of the context, the number of documents (embeddings) provided as context, and the relevancy threshold for those embeddings. My current settings are a 500-character chunk size, 80 characters of overlap, and 6 documents provided as context. I am also experimenting with limiting retrieval to only those embeddings with a similarity score above 0.8. From your experience, what was the sweet spot for these parameters? On the one hand, you don't want the context to be too small, but on the other hand you don't want it to be too big, so that the chatbot can extract or find the relevant info without being "distracted" by irrelevant context.
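For concreteness, here is a rough sketch of the setup I'm describing. The function names (`split_into_chunks`, `select_context`) are illustrative, not our actual code, and the similarity scores are assumed to come from whatever embedding model you use:

```python
def split_into_chunks(text, chunk_size=500, overlap=80):
    """Split text into fixed-size character chunks with the given overlap."""
    chunks = []
    step = chunk_size - overlap  # advance by 420 chars per chunk
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

def select_context(scored_chunks, top_k=6, min_score=0.8):
    """Keep up to top_k chunks whose similarity score exceeds min_score.

    scored_chunks: list of (score, chunk_text) pairs from the retriever.
    """
    relevant = [(s, c) for s, c in scored_chunks if s >= min_score]
    relevant.sort(key=lambda sc: sc[0], reverse=True)
    return [c for _, c in relevant[:top_k]]
```

With these defaults, a 1000-character document yields three chunks (500, 500, and 160 characters), and the 0.8 cutoff means fewer than 6 chunks may be passed to the model when retrieval confidence is low.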