I’m trying to set up an automation that analyzes sales call transcripts. The issue is that most of the calls last over an hour, which exceeds GPT’s context window.
The most common solution is to split the call into chunks, but I find this unreliable: information given at the beginning of a call might land in chunk 1 while it’s necessary for understanding chunk 10, for instance.
I thought about converting the transcripts into Word or PDF documents, but it doesn’t seem possible to attach a document to GPT via the API (except through an Assistant, but I’m not trying to do RAG here; I just need a summary of the sales call).
Hey!
Seems like embeddings would be perfect for this!
A vectorstore would probably make your life a lot easier.
An easy way to try this is with LlamaIndex (more advanced) or Flowise (easier to use), both of which I can recommend!
Would embeddings work to summarize a very long text?
Sorry if my question seems silly, but I was under the impression that embeddings are only used in a RAG context, where the use case is asking questions of a knowledge base.
In my case, I don’t need to ask questions. I just want to create a prompt that would say: “Summarize this text by highlighting the key points: point 1, point 2, etc.”
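For what it’s worth, the chunking problem from the original question can be softened with an iterative “refine” loop: rather than summarizing each chunk in isolation, you feed the running summary back in alongside the next chunk, so information from the start of the call is still available when you reach the end. Here is a minimal sketch; the `summarize` function is a placeholder standing in for a real chat-completion call (e.g. the OpenAI API), and the chunk size and prompt wording are assumptions, not anything from the thread.

```python
def summarize(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. a chat-completion request).
    Here it just truncates the prompt so the sketch runs standalone."""
    return prompt[:200]

def chunk(text: str, size: int = 1000) -> list[str]:
    """Naive fixed-size character chunking of the transcript."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def refine_summary(transcript: str) -> str:
    """Carry a running summary forward through every chunk."""
    summary = ""
    for part in chunk(transcript):
        prompt = (
            "Summary of the call so far:\n" + summary +
            "\n\nNext part of the transcript:\n" + part +
            "\n\nUpdate the summary, highlighting the key points."
        )
        summary = summarize(prompt)
    return summary
```

The trade-off versus plain chunk-and-merge is more sequential API calls, but each call sees both the new chunk and the accumulated context, which is exactly the cross-chunk dependency the question describes.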