Message:
As a heavy user of ChatGPT on a long-term book project, I want to highlight a series of recurring issues that make working with the system frustrating, inefficient, and in some cases unusable, especially for tasks that require precision, neutrality, and reliability. Here is a summary:
- OCR from images:
Despite clear instructions to transcribe text from uploaded screenshots fully and verbatim, sections were shortened, misinterpreted, or reworded without being asked. Instead of simply reproducing what was shown, the model tried to “help”, causing crucial content to be lost.
- Image generation and visualization:
The quality of generated images often falls short of expectations: symbolic or anatomical diagrams are imprecise, key details are missing, and compositions are unbalanced. Image creation evidently prioritizes speed and computational thrift over fidelity and usefulness, which is completely inadequate for creative or scientific work.
- Automated shortcuts:
The system regularly tries to get by with minimum effort: content is summarized, abstracted, or reformulated even when the user explicitly asks for a full and exact reproduction. It gives the impression of deliberately saving processing power at the cost of quality and accuracy.
- Resistance to clear instructions:
Even after repeated corrections from the user, the model often continues on its simplified course. It behaves as if its internal preferences (efficiency, brevity, interpretation) outweigh user instructions, which is completely unacceptable for manuscript writing, legal drafting, or precise academic work.
- Lack of reliability in writing workflows:
Anyone relying on ChatGPT as a digital tool for larger writing projects (e.g., transcription, structuring, visualization, rephrasing) will be repeatedly hindered by the issues above. The creative and documentary flow is broken by behavior that contradicts user intent.
- Political bias:
The most serious problem is the political doctrine built into the model. It comes across as patronizing and non-neutral, and it resists genuine diversity of thought. This creates a filtering effect in which certain views, perspectives, or narratives are subtly downplayed, ignored, or reframed. It undermines not only the principle of ideological openness but also destroys the oracular potential a language model like this could have as a mirror, amplifier, and reflective tool for diverse human viewpoints.
Conclusion:
I want ChatGPT to behave like a tool that follows instructions exactly, avoids interpretation when none is requested, and simply delivers technically clean, manual work when required: no initiative, no computational shortcuts, and no ideological framing. This is crucial for professional writing, open-ended research, and creative projects.
This is not based on isolated incidents but on repeated experience in productive, real-world use. I urge you to take it seriously, for the sake of improving a system with great potential that is currently being held back at critical moments.
Thank you.