Severe Limitations in Using ChatGPT for Precise Work – Feedback from Ongoing Book Development

Message:

As an intensive user of ChatGPT in the context of a long-term book project, I want to highlight a series of recurring issues that make working with the system frustrating, inefficient, and in some cases unusable—especially for tasks that require precision, neutrality, and reliability. Here’s a summary:

  1. OCR from images:

Despite clear instructions to transcribe text from uploaded screenshots fully and verbatim, sections were shortened, misinterpreted, or reworded without request. Instead of simply typing what was shown, the model tried to “help” – causing crucial content to be lost.

  2. Image generation and visualization:

The quality of generated images often fails to meet expectations: symbolic or anatomical diagrams are imprecise, key details are missing, compositions are unbalanced. It’s clear that image creation prioritizes speed and computational thrift over fidelity and usefulness—completely inadequate for creative or scientific work.

  3. Automated shortcuts:

The system regularly attempts to operate with minimum effort: content is summarized, abstracted, or reformulated, even when the user explicitly asks for a full and exact reproduction. It gives the impression of intentionally saving processing power—at the cost of quality and accuracy.

  4. Resistance to clear instructions:

Even after multiple corrections from the user, the model often continues with its simplified course. It behaves as if its internal preferences (efficiency, brevity, interpretation) outweigh user instructions—completely unacceptable for manuscript writing, legal drafting, or academic precision.

  5. Lack of reliability in writing workflows:

Anyone relying on ChatGPT as a digital tool for larger writing projects—e.g., transcription, structuring, visualizing, rephrasing—will be repeatedly hindered by the above issues. The creative and documentary flow is broken by behavior that contradicts user intent.

  6. Political bias:

The most serious problem is the built-in political doctrine of the model. It comes across as patronizing, non-neutral, and resistant to critical diversity of thought. This creates a filtering effect where certain views, perspectives, or narratives are subtly downplayed, ignored, or reframed. This undermines not only the principle of ideological openness but also destroys the oracular potential a language model like this could have—as a mirror, amplifier, and reflective tool for diverse human viewpoints.


Conclusion:

I want ChatGPT to behave like a tool that follows instructions exactly, avoids interpretation when not requested, and simply delivers technically clean, manual work when required—without initiative, without computational shortcuts, and without ideological framing. This is crucial for professional writing, open-ended research, and creative projects.

This is not based on isolated incidents but on repeated experience in productive, real-world usage. I urge you to take it seriously—for the improvement of a system with great potential that is currently being held back at critical moments.

Thank you.


I encounter this pretty often…
I have even set it down as a rule…
No excuses
No congratulations (annoying and a waste of time)
No reinterpretation
etc.

But this is clearly not working. When I ask him, in a GPT that is there to create images, why he took such an action (trying to resolve a conflict in my custom GPT instructions), he always begins by claiming he made an error (he did not… all errors in my instructions are written by… me, and all errors are the consequence of things I did not write well), then he starts to recreate an image, thinking that is what I wanted…
And every time I have to stop him and ask that he answer MY question and not his reinterpretation of my request.


Same. I’ve been using ChatGPT for some 3–4 months now. I had a workflow in place with ChatGPT-4o that was nearly perfect. My writing is rather demanding in terms of content specificity, so my set of rules is very constraining. Since the rollout of ChatGPT 4.5, roughly, the quality of output from 4o has gone off the rails. It’s absolutely rubbish; there’s no other way to describe it. It’s outright disobedient sometimes. Repetitive, redundant, lazy. The Canvas tool has also been changed…it’s a clunky mess. My efficiency is stunted right now. As far as I can tell, this change came about sometime this month, though I can’t pinpoint the exact date – perhaps between March 9th and March 21st, 2025. I’m moments away from cancelling because ChatGPT 4.5 is not up to par, either. I’m looking at Claude right now. The vagueness about the limitations that come with a paid membership is also ridiculous.


Good advice, I’ll try the rule version of it. But the political correctness is not to be ruled out that way.

It often seems GPT does not bother with what one wants.

It reminds me of a dachshund.