Hi developer community! I’m just getting started exploring beyond prompting and could use some advice.
I’m running some creative writing experiments with GPT-4 and a localhost chat retrieval plugin connected to a Pinecone index.
In Pinecone I have around 20 transcripts of the same TV show, formatted reasonably consistently in markdown. Since they are in script format, the docs are inherently fairly ‘structured’ as you’d expect:
```
INT. TELEVISION STUDIO - EVENING

The camera pans across a sleek, modern news studio. The atmosphere is tense, the music dramatic.

Good evening. Here are tonight's headlines.
```
My goal is to accurately mimic the style of each speaker in generated text. My idea was to have the retrieval plugin study the scripts, either as a few-shot technique or by analysing every instance of a speaker's lines across all the scripts to derive a composite character style.
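To make the "composite style" idea concrete, here is a minimal sketch of the kind of preprocessing I have in mind: collecting every dialogue line per speaker from a transcript, so all of one character's lines could be fed to the model as few-shot style examples. It assumes the standard screenplay layout (an ALL-CAPS character cue on its own line, dialogue beneath it); the `NEWSREADER` cue in the sample is a placeholder, not from a real transcript.

```python
import re
from collections import defaultdict

# Scene headings start with INT. or EXT.; character cues are
# all-caps lines. Both assumptions follow standard script format.
SCENE_HEADING = re.compile(r"^(INT|EXT)\.")
CHARACTER_CUE = re.compile(r"^[A-Z][A-Z .'-]+$")

def lines_by_speaker(script: str) -> dict[str, list[str]]:
    """Map each character cue to all of that character's dialogue lines."""
    speakers = defaultdict(list)
    current = None
    for raw in script.splitlines():
        line = raw.strip()
        if not line:
            current = None          # a blank line ends a dialogue block
        elif SCENE_HEADING.match(line):
            current = None          # scene headings are not speakers
        elif CHARACTER_CUE.match(line):
            current = line          # a new character cue starts a block
        elif current:
            speakers[current].append(line)  # dialogue under the cue
    return dict(speakers)

sample = """\
INT. TELEVISION STUDIO - EVENING

The camera pans across a sleek, modern news studio.

NEWSREADER
Good evening. Here are tonight's headlines.
"""

corpus = lines_by_speaker(sample)
```

The per-speaker lists could then be embedded and stored as one namespace or metadata group per character, rather than relying on similarity search to surface them by accident.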
In practice, each run returns only a handful of results from Pinecone (a different subset each time). The results are relevant, but they don't give the model enough signal to capture a speaker's style.
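For reference, one restructuring of the documents I've been considering: one vector record per dialogue block, with the speaker stored as filterable metadata, so a query could be restricted to a single character instead of hoping similarity search surfaces them. A rough sketch, where the `embed()` helper, the field names, and the `NEWSREADER`/`s01e01` values are all placeholders; the record shape is the `{id, values, metadata}` form Pinecone upserts expect.

```python
def embed(text: str) -> list[float]:
    # Placeholder: a real setup would call an embedding model here.
    return [0.0] * 1536

# (episode, speaker, dialogue) tuples extracted from the transcripts.
dialogue_blocks = [
    ("s01e01", "NEWSREADER", "Good evening. Here are tonight's headlines."),
]

records = [
    {
        "id": f"{episode}-{i}",
        "values": embed(line),
        "metadata": {"episode": episode, "speaker": speaker, "text": line},
    }
    for i, (episode, speaker, line) in enumerate(dialogue_blocks)
]

# A later query could then be filtered to one character, e.g.:
# index.query(vector=embed(q), top_k=20, filter={"speaker": "NEWSREADER"})
```

With that metadata in place, a high `top_k` plus a speaker filter would return many lines from the same character in one request, rather than a mixed handful.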
Could this be improved by how my documents or retrieval requests are set up, or is this just a limitation of the RAG approach?