I might know who this is coming from… here is the update: I have a working algorithm that cuts text precisely "at thought" boundaries. Each chunk contains a single idea at a time, which makes it well suited for embedding, where the goal is to get a ready-to-use context item directly from the database without further processing by an additional model, which reduces operational costs.
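For readers wondering what "cutting at thought" might look like in practice, here is a minimal sketch of one common approach (not my actual algorithm): split the text into sentences and start a new chunk whenever the next sentence drifts too far from the running chunk's centroid. The `embed` function, the `threshold` value, and the sentence splitter are all illustrative placeholders.

```python
# Illustrative sketch only, not the algorithm described above.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Placeholder embedding: bag-of-words term counts.
    # In practice you would call a real embedding model here.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk_by_thought(text: str, threshold: float = 0.2) -> list[str]:
    # Naive sentence split; a production splitter would be more robust.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, centroid = [], [], Counter()
    for sentence in sentences:
        vec = embed(sentence)
        # Low similarity to the running chunk suggests a new idea starts here.
        if current and cosine(centroid, vec) < threshold:
            chunks.append(" ".join(current))
            current, centroid = [], Counter()
        current.append(sentence)
        centroid += vec
    if current:
        chunks.append(" ".join(current))
    return chunks
```

With a real embedding model and a tuned threshold, each resulting chunk approximates one self-contained idea, so it can be embedded and later returned as a ready-to-use context item without an extra summarization or rewriting pass.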