How do we stop GPT from summarizing everything?

We are experiencing a persistent and critical issue when using OpenAI models that is severely impacting our workflow. We generate detailed reports from text and data, and despite extensive effort, the models consistently summarize the input and omit large amounts of information. This is not limited to a specific model; it appears to be systemic. It's possible we're simply approaching this the wrong way, but either way it's extremely frustrating.

We have tried numerous prompt engineering techniques, including:

  • Explicitly stating “Do not summarize.”
  • Instructing the models to retain “every detail.”
  • Using negative constraints to prohibit summarization-related language.
  • Breaking the text into smaller chunks and processing them individually.
  • Sentence-by-sentence processing and reconstruction (a rough sketch of this approach follows the list).
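
For reference, here is a minimal sketch of the chunked / sentence-by-sentence approach we tried. The model name, prompt wording, and naive splitting logic are placeholders for whatever we happened to be running at the time, not a recommendation.

```python
# Minimal sketch of the chunked / sentence-by-sentence processing we tried.
# "gpt-4o", the prompt wording, and the naive splitter are placeholders.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Rewrite the user's text for a report. Do not summarize. "
    "Preserve every fact, figure, and detail exactly as given."
)

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; we also tried larger, paragraph-sized chunks.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def process(text: str, model: str = "gpt-4o") -> str:
    pieces = []
    for sentence in split_sentences(text):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,  # lowest randomness we could set
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": sentence},
            ],
        )
        pieces.append(resp.choices[0].message.content.strip())
    return " ".join(pieces)
```

Even when fed one sentence at a time, the output frequently comes back shortened or reworded.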

However, the models consistently condense or alter the input, rendering the output almost unusable. This is a significant problem, as the reports require absolute fidelity to the original data.
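
To make "absolute fidelity" concrete, a check along these lines is how we would verify a report: flag any numeric value that appears in the source but never appears in the output. The helper name, regex, and sample strings below are made up purely for illustration.

```python
# Hypothetical fidelity check: flag any numeric value present in the source
# text that does not survive into the model's output.
import re

def missing_numbers(source: str, output: str) -> list[str]:
    # Match integers and decimals, including thousands separators (e.g. "1,200", "3.14").
    pattern = r"\d[\d,]*(?:\.\d+)?"
    return sorted(set(re.findall(pattern, source)) - set(re.findall(pattern, output)))

source = "Q3 revenue was 1,204,000 USD, up 3.4% from Q2; headcount rose to 57."
output = "Revenue grew from Q2 to Q3 and headcount increased."
print(missing_numbers(source, output))  # -> ['1,204,000', '3.4', '57']
```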

We are seeking assistance and feedback from the community on potential solutions. We are particularly interested in:

  • Advanced prompt engineering techniques that have proven effective in preventing summarization across all models.
  • Any insights into the models’ inherent tendencies toward summarization and how to counteract them, regardless of the model used.
  • Alternative approaches or tools that might be better suited for detail retention, applicable to any OpenAI model.
  • Information regarding the use of vector databases in this context.
  • Any Python scripting techniques that may help, particularly ones that work with the OpenAI API.
  • Any model parameters that can be adjusted to address this issue across all models (a sketch of the kind of call and parameters we mean follows the list).
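
To make the parameter question concrete, this is roughly the shape of a single Chat Completions call and the sampling-related parameters we know can be set. The model name, values, prompt wording, and file name are placeholders rather than a working recipe; none of this is offered as a fix.

```python
# Sketch of a single call showing the request parameters that can be adjusted.
# "gpt-4o", the values, the prompt, and "report_input.txt" are placeholders.
from openai import OpenAI

client = OpenAI()
source_text = open("report_input.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,        # minimize sampling randomness
    top_p=1,              # leave nucleus sampling effectively off
    frequency_penalty=0,  # no penalty for repeating the source wording
    presence_penalty=0,   # no pressure toward introducing "new" topics
    max_tokens=4096,      # generous output budget so length is not the constraint
    messages=[
        {"role": "system", "content": "Reproduce the user's data in report form. Do not summarize or omit any detail."},
        {"role": "user", "content": source_text},
    ],
)
print(response.choices[0].message.content)
```

If there are other parameters or API features that genuinely influence this behaviour, that is exactly the kind of pointer we are looking for.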

Has anyone else encountered this issue, and if so, what strategies have you found successful? We need the model to output the full data without summarizing it, regardless of which model is used.

Any guidance or suggestions would be greatly appreciated.