Hi. We need to do semantic search and summarization on large documents. The documents are too large to fit into an OpenAI model's context window, so we need to split their content into chunks that each stay within the context size. Each chunk should also be meaningful on its own: it should not break in the middle of a word, sentence, or paragraph, and it should not be unreadable in isolation. What is the best chunking technique to satisfy these criteria?
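
To illustrate the kind of splitting I mean, here is a rough sketch (not a proposed solution) that packs whole paragraphs into chunks under a token budget. The token count is a crude chars-per-token estimate; a real version would use an actual tokenizer such as tiktoken, and the `max_tokens` value is just a placeholder:

```python
def chunk_by_paragraph(text: str, max_tokens: int = 512) -> list[str]:
    """Greedily pack paragraphs into chunks that stay under a token budget,
    so no chunk ever breaks mid-word, mid-sentence, or mid-paragraph."""

    def est_tokens(s: str) -> int:
        # Crude heuristic: ~4 characters per token. Replace with a real
        # tokenizer (e.g. tiktoken) for accurate context-window budgeting.
        return max(1, len(s) // 4)

    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for para in paragraphs:
        t = est_tokens(para)
        # Start a new chunk if adding this paragraph would exceed the budget.
        if current and current_tokens + t > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += t
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

This only splits on paragraph boundaries; I am asking whether there is a better-established technique (e.g. sentence-aware or semantic splitting) than a greedy packer like this.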