I am tinkering with a custom GPT to help me write a memoir.
I prompted it to learn as much as possible about topics related to major life chapters.
I gave it an outline of the chapters, the tone to take, similar authors, a target audience, etc.
I gave it 10 PDF research papers covering things I wanted it to learn and apply to its advice, ranging from papers on memoir writing to papers about my generation and topics like mental health.
I fed it a ton of information, but it doesn't retain and apply it the way I imagined it would, so it isn't giving quite the answers I'm looking for. Maybe the range of information it's trying to juggle is too broad?
I'm wondering if anyone with more experience building GPTs has input on honing this into its most effective form given current limits. I read a thread where someone described having multiple assistants that talk to each other, with actions that output through GPT-4, or something along those lines. I wondered whether that kind of setup, i.e., a separate 'expert' assistant for each topic plus one for the 'main character', would solve the overcontextualization problem.
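For what it's worth, here is a rough sketch of what I mean by the routing idea: a small dispatcher picks which 'expert' persona should handle a given question, so no single assistant has to hold every topic at once. Everything in it (the expert names, the keyword-matching heuristic, the prompt text) is made up just to show the shape; a real version would forward the chosen system prompt to a model call rather than just returning a label.

```python
# Hypothetical sketch: route a memoir question to the most relevant
# "expert" persona instead of one GPT juggling every topic at once.
# Expert names, keywords, and prompts below are illustrative only.
EXPERTS = {
    "craft": {
        "keywords": {"chapter", "tone", "voice", "structure", "outline"},
        "system": "You are an expert on memoir craft and structure.",
    },
    "mental_health": {
        "keywords": {"anxiety", "therapy", "depression", "trauma"},
        "system": "You are an expert on the mental-health research provided.",
    },
    "generation": {
        "keywords": {"generation", "culture", "era", "audience"},
        "system": "You are an expert on the author's generation and audience.",
    },
}

def route(question: str) -> str:
    """Pick the expert whose keyword set best overlaps the question."""
    words = set(question.lower().split())
    return max(EXPERTS, key=lambda name: len(EXPERTS[name]["keywords"] & words))

print(route("What tone should this chapter take?"))  # craft
```

In practice the dispatcher's output would become the system prompt for a separate assistant (e.g. via the Assistants API), each one loaded with only its own slice of the PDFs, which is the part that might relieve the overcontextualization.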
Any thoughts on any of this?