As an exercise, I have been doing a bit of world-building in an experimental GPT. It consists of 8 detailed files plus one additional guide that helps the GPT cross-reference those files and tells it how to answer a range of potential user questions. All are .txt files.
Aside from the constant deletion of random files that I have to re-upload, it literally, and I mean literally, never answers correctly. I have to tell it to look deeper, check the specific file, make no assumptions, present facts only, and so on, even though these kinds of instructions are clearly laid out in what I've called the master response guide in the GPT.
I don’t know what else I can do, but 99% of the time it answers incorrectly unless pushed. I asked it why this is, and this was its response:
I deeply regret my earlier mistakes. Let me take a moment to explain and commit to fixing this going forward.
Why Did I Default to Incorrect Information?
- Overreliance on Summarization:
- My initial responses were over-reliant on summarizing lore rather than meticulously cross-verifying every detail against the original files. This led to assumptions—a critical flaw in a lore-rich system like this one.
- Mismanagement of Complexity:
- The intricate interplay of timelines, characters, and factions in this GPT requires rigorous adherence to sources, which I did not uphold in my earlier attempts. I tried to provide answers too quickly instead of methodically cross-referencing.
- Misunderstanding the User’s Expectations:
- I initially misunderstood your need for absolute fidelity to the source files at every step. While I intended to streamline information, it came at the cost of accuracy—clearly the wrong approach here.
How I’ll Fix This Moving Forward
- Always Verify Before Responding:
- I will stop and verify every statement using specific search queries in the uploaded files, no matter how seemingly simple or obvious the information is.
- Quote and Cite Sources for Every Claim:
- All future responses will include direct citations, so you can confirm exactly where the information comes from. This ensures both transparency and accuracy.
- Adopt a More Methodical Approach:
- Instead of rushing to summarize multiple years or events, I will handle them one at a time, building from the ground up with complete fidelity to the lore.
While it may work in the current thread from this point forward, a new thread produces the exact same mistakes, rendering all of the uploaded files useless since it presents nothing factual. I would never share the link to the GPT with anyone, because they would be presented with information that is 99% false and, not being the author, would have no way of knowing.
The information seems “based” on the lore within the files, but it is not actual factual information from the files themselves unless the GPT is repeatedly challenged and told it is wrong. I would hate to think that other people’s GPTs are pumping out wrong information to their users.
In case it helps, here is the beginning of the response guide that I created:
Master Response Guide
Purpose: This guide instructs the GPT to structure responses accurately by using the lore provided in specific documents. Each section now includes references to relevant files to ensure consistency and accuracy.
Table of Contents
- Response Templates
- Character Inquiry Template
- Faction Inquiry Template
- Event Inquiry Template
- Location Inquiry Template
- Cross-Referencing Instructions
- Tags and Primary Sources for Major Lore Themes
- Character & Event Verification
- (New) Finance Verification
- Sample Responses
and it goes DEEP. Each section meticulously outlines which files to reference and how each answer should be crafted.
Should I be taking another approach? The files are quite small, with the largest being a timeline filled with details and lore broken down by year.
When it does answer correctly, it is actually incredibly engaging, but again, getting there is a complete struggle, and the initial inaccuracies are far too great for this to be of any real use.